โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

OpenAI announces full โ€œo1โ€ reasoning model, $200 ChatGPT Pro tier

On Thursday, during a live demo as part of its "12 days of OpenAI" event, OpenAI announced a new tier of ChatGPT with higher usage limits for $200 a month, along with the full version of "o1," the so-called reasoning model the company debuted in September.

Unlike o1-preview, o1 can process images as well as text (similar to GPT-4o), and it is reportedly much faster than o1-preview. In a demo question about a Roman emperor, o1 took 14 seconds to answer, while o1-preview took 33 seconds. According to OpenAI, o1 makes major mistakes 34 percent less often than o1-preview while "thinking" 50 percent faster. The model will also reportedly become even faster once OpenAI finishes transitioning its GPU deployment to the new model.

Whether the new ChatGPT Pro subscription will be worth the $200-a-month fee isn't yet clear, but the company specified that subscribers will have access to an even more capable version of o1 called "o1 Pro Mode," which performs deeper reasoning searches and provides "more thinking power for more difficult problems" before answering.


ยฉ OpenAI / Benj Edwards

Join us today for Ars Live: Our first encounter with manipulative AI

19 November 2024 at 07:40

In the short term, the most dangerous thing about AI language models may be their ability to emotionally manipulate humans if not carefully conditioned. The world got its first taste of that potential danger in February 2023 with the launch of Bing Chat, now called Microsoft Copilot.

During its early testing period, the temperamental chatbot, which revealed its internal codename "Sydney," gave the world a preview of an "unhinged" version of OpenAI's GPT-4 prior to its official release. Sydney's sometimes uncensored and "emotional" nature (including its use of emojis) arguably gave the world its first large-scale encounter with a truly manipulative AI system. The launch set off alarm bells in the AI alignment community and served as fuel for prominent warning letters about AI dangers.

On November 19 at 4 pm Eastern (1 pm Pacific), Ars Technica Senior AI Reporter Benj Edwards will host a livestream conversation on YouTube with independent AI researcher Simon Willison that will explore the impact and fallout of the 2023 fiasco. We're calling it "Bing Chat: Our First Encounter with Manipulative AI."


ยฉ Aurich Lawson | Getty Images

โŒ
โŒ