Tony Gilroy Says the World of Andor Could Expand, but It’s Up to Lucasfilm

We know Cassian's fate thanks to Rogue One, but the Star Wars series could launch other characters into new stories.
Apple Intelligence was announced with iOS 18 and has been available since last October, when Apple released iOS 18.1 to the public. Although most apps support Apple Intelligence features by default, developers can choose to opt out of them, and it appears that Meta has decided to do exactly that.
Gemini Live’s feature that lets it see and respond to what’s on your camera and your screen will now be free for all Android users via the Gemini app, Google announced today.
The AI-powered feature officially launched earlier this month for everyone on the Pixel 9 and Samsung Galaxy S25 using the Gemini app. At the time, Google said the feature would launch “soon” for all Android users, though it would only be available with a Gemini Advanced subscription. The company has since changed its mind and is making it available for free.
“We’ve been hearing great feedback on Gemini Live with camera and screen share, so we decided to bring it to more people,” Google said on X.
The feature will roll out to all Android users with the Gemini app starting today, and the rollout will take place “over the coming weeks.” If you want to get an idea of how the feature works, check out this video from Google. In it, a person holds their phone with the camera open at an aquarium so that Gemini can see the animals and share information.
Today, Microsoft announced that its similar AI tool, Copilot Vision, is now available for free in the Edge browser.
OpenAI is reportedly in discussions to acquire Windsurf, the AI coding startup formerly known as Codeium, for around $3 billion, Bloomberg reported, citing people familiar with the deal. If the deal goes through, it would be the company’s biggest acquisition […]
Following last week’s launch, the Pixel 9a is getting a small camera update that presumably addresses some bugs.
On Wednesday, OpenAI announced the release of two new models—o3 and o4-mini—that combine simulated reasoning capabilities with access to functions like web browsing and coding. These models mark the first time OpenAI's reasoning-focused models can use every ChatGPT tool simultaneously, including visual analysis and image generation.
OpenAI announced o3 in December, and until now only less capable derivative models named "o3-mini" and "o3-mini-high" have been available. The new models replace their predecessors, o1 and o3-mini.
OpenAI is rolling out access today for ChatGPT Plus, Pro, and Team users, with Enterprise and Edu customers gaining access next week. Free users can try o4-mini by selecting the "Think" option before submitting queries. OpenAI CEO Sam Altman tweeted that "we expect to release o3-pro to the pro tier in a few weeks."
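For developers, the most immediate change is the set of model identifiers. As a minimal sketch, a request targeting the new model might be assembled like this; the payload shape mirrors OpenAI's chat-style API convention and the helper function is a hypothetical illustration, not something described in this article:

```python
# Hypothetical sketch: assembling a chat-style request payload that
# targets the new o4-mini model. The model names come from OpenAI's
# announcement; the payload structure is an assumption based on the
# common chat-completions convention.

def build_request(prompt: str, model: str = "o4-mini") -> dict:
    """Return a minimal chat-style request payload for the given model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize today's AI news.")
print(payload["model"])  # o4-mini
```

Swapping in "o3" (or, once released, "o3-pro") would be a one-argument change, which is the point of the model names replacing o1 and o3-mini in place.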
Copilot Vision, Microsoft’s AI assistant feature that can interpret what’s on your screen and help you use apps, is now available for free within the Edge browser, Mustafa Suleyman, CEO of Microsoft AI, announced on Bluesky today. Vision is a “talk-based experience,” as Microsoft calls it, meaning you use it by speaking aloud and then waiting for Copilot to respond.
Suleyman says if you opt into the feature, Copilot Vision can “literally see what you see on screen.” Suleyman suggests having Copilot Vision guide you through a recipe while you cook or having it “decode” job descriptions “and jump right into customized interview prep or cover letter brainstorming.” (Although it might not be the best idea to use AI for your resume.) According to a Microsoft support page, “Copilot Vision may highlight portions of the screen to help you find relevant information,” but it doesn’t actually click links or do anything on your behalf.
Broader, system-wide Copilot Vision features are still limited to Copilot Pro subscribers. With a subscription, Vision expands beyond Edge, letting you ask it to help you use features in Photoshop or video editing software, or guide you through a game like Minecraft, as it did for The Verge’s Tom Warren earlier this month.
To try out Copilot Vision, open this link to Microsoft’s website in the Edge browser. That should give you a prompt to opt into the feature, and once you’ve given permission, you can open the Copilot sidebar while on a website, click the microphone icon, and your Vision session begins, signified by a chime and your browser changing its hue.
Or that’s how it should go. In my case, it took a couple of tries before Edge asked if I wanted to opt in. And once I could opt in and initiate a Vision session, the controls never appeared — as of this writing, I simply have a message floating over the bottom of my browser that says “One moment…” But I’m using a fairly old, underpowered laptop, so your mileage may vary.
According to Microsoft, the company logs Copilot’s responses to you but doesn’t collect your inputs, images, or page content while in a Copilot Vision session. When you’re ready to stop sharing your screen with Copilot, you can either end the session or close the browser window.