Welcome back to The Prompt.
Amazon is secretly working on a ChatGPT competitor, code-named Metis (likely after the Greek goddess of wisdom), powered by an in-house AI model called Olympus, Business Insider reported. The AI assistant can provide up-to-date answers from real-time sources and is being developed by the retail giant’s own AGI team, which is working on building artificial intelligence that can think and learn like humans.
Now, let’s get into the headlines.
BIG PLAYS
For months, the AI community wondered about the whereabouts of OpenAI cofounder Ilya Sutskever, especially after he participated in Sam Altman’s brief ouster from the company in late 2023. The question “Where’s Ilya?,” which at one point trended on X, was answered on Wednesday when Sutskever announced that he is starting a new venture called Safe Superintelligence, a research-oriented company with “one focus, one goal and one product” — to build safe AI systems. The company will “not do anything else” or get distracted by selling AI products until it has achieved safe superintelligence, Sutskever said in an interview with Bloomberg.
Apple has been talking to rival Meta about integrating Meta’s generative AI models into Apple Intelligence, the AI system that will be available on iPhones and other Apple devices, according to the Wall Street Journal. Apple, which is already partnering with OpenAI to bring ChatGPT to its devices, has also held similar discussions with AI startups Anthropic and Perplexity. Apple could help distribute these companies’ AI models to its more than a billion iPhone users and, in turn, drive up their subscription revenue.
ETHICS + LAW
A group of the biggest record labels, including Universal Music Group and Sony Music Entertainment, is suing two music-generation AI companies, Suno and Udio, for alleged copyright infringement on a “massive scale,” claiming the startups used their content for AI training without consent.
Plus, AI search startup Perplexity appears to be accessing and scraping content from Wired and other Condé Nast-owned publications through a secret IP address after developers blocked its web crawler, PerplexityBot, from accessing their content, a Wired investigation found. In multiple instances, Perplexity’s AI search engine conjured false quotes and attributed them to real people. Leading startups like OpenAI and Anthropic have also been disregarding an established web standard, the robots.txt protocol, that allows publishers to exclude their content from being used for AI training. Earlier this month, Forbes found that Perplexity had plagiarized journalistic work from Forbes, Bloomberg and CNBC through a feature called Perplexity Pages.
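For context, robots.txt is a plain-text file at the root of a website that tells crawlers which pages they may fetch, and compliance is entirely voluntary. Here is a minimal sketch of how a well-behaved crawler would check it, using Python’s standard library; the site URL and the list of bot names are illustrative, not a definitive registry.

```python
# Minimal sketch: how a compliant crawler checks robots.txt before fetching
# a page. The site URL and user-agent names below are illustrative.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical publisher site
rp.read()  # download and parse the file

# A well-behaved crawler identifies itself and respects the answer.
for bot in ["GPTBot", "PerplexityBot", "CCBot"]:
    allowed = rp.can_fetch(bot, "https://example.com/some-article")
    print(f"{bot}: {'allowed' if allowed else 'disallowed'}")
```

Nothing technically stops a crawler from ignoring the answer or fetching pages under a different identity, which is why the reported workarounds have drawn so much criticism.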
REGULATION
Y Combinator CEO Garry Tan and the founders of 140 AI startups signed an open letter opposing California’s proposed AI regulation, which would require makers of frontier AI models to conduct risk assessments of their systems and would hold developers legally liable for misuse of the technology. While the bill applies only to AI model builders that have spent over $100 million to train their systems, nascent startups that have dubbed themselves “little tech” said the bill could hinder the state’s ability to retain AI talent.
AI DEAL OF THE WEEK
Chip startup Etched AI has raised $120 million in Series A funding at a $500 million valuation. Founded by three Harvard dropouts, the startup is building a custom chip and software designed specifically for transformer models, the technology behind ChatGPT and other generative AI. CEO Gavin Uberti, a former mathematics world champion, said existing hardware like Nvidia’s GPUs is slower and less efficient because most of the power is used to move memory around the chip rather than to carry out AI tasks. “If you’re willing to run one model architecture onto the chip, you can take the vast majority of that memory movement out and fit way more compute onto the chip,” Uberti told Forbes.
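To make Uberti’s argument concrete, here is a rough back-of-the-envelope sketch, with made-up hardware numbers rather than Etched’s or Nvidia’s actual specs, of why generating tokens one at a time tends to be limited by memory movement rather than by raw compute.

```python
# Back-of-the-envelope check: is generating one token through a single
# weight matrix limited by compute or by memory movement? All hardware
# numbers here are illustrative assumptions, not real chip specs.

PEAK_FLOPS = 1e15       # assumed peak compute: 1 PFLOP/s
PEAK_BANDWIDTH = 3e12   # assumed memory bandwidth: 3 TB/s

d = 8192                 # hidden dimension of the layer
flops = 2 * d * d        # one matrix-vector multiply
bytes_moved = 2 * d * d  # every fp16 weight (2 bytes) must be read once

arithmetic_intensity = flops / bytes_moved     # FLOPs performed per byte moved
machine_balance = PEAK_FLOPS / PEAK_BANDWIDTH  # FLOPs the chip can do per byte it can move

print(f"arithmetic intensity: {arithmetic_intensity:.1f} FLOPs/byte")
print(f"machine balance:      {machine_balance:.1f} FLOPs/byte")
print("memory-bound" if arithmetic_intensity < machine_balance else "compute-bound")
```

With these assumed numbers, the chip performs about one floating-point operation per byte of weights it reads, while the hardware could sustain hundreds, so most of the time and power goes to streaming weights rather than computing. That is the gap a chip specialized for a single architecture is betting it can close.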
Also notable: AI video generation startup HeyGen has raised $60 million in a round led by Benchmark at a $500 million valuation. The company, which allows users to create AI avatars, said it has crossed $35 million in annual recurring revenue and has 40,000 enterprise clients.
DEEP DIVE
On Thursday, Anthropic unveiled its most intelligent AI model yet, Claude 3.5 Sonnet, which the company claims is cheaper and faster than rival AI systems like OpenAI’s GPT-4 and outperforms them at coding, writing and interpreting complex instructions. The model is the first of a new family that will also include a smaller model (Haiku) and a larger one (Opus), neither of which has been released yet. Claude 3.5 Sonnet, which generates responses at twice the speed of Anthropic’s previous model, Claude 3 Opus, can also better understand humor and nuance. The model has already been integrated into a number of applications, including Quora’s chatbot Poe and clinical documentation company DeepScribe’s medical scribe.
I spoke to Anthropic CEO and cofounder Dario Amodei about his ambitions for the next iteration of Claude models. (This interview has been edited for clarity and conciseness.)
Rashi Shrivastava: What prompted you to create a new family of AI models, beginning with Claude 3.5 Sonnet?
Dario Amodei: Three months ago we released the Claude 3 family of models. There’s a technological trade-off curve between the intelligence of a model and its speed and cost. Each of those models sits at a different point on that trade-off curve and serves a different set of business needs. What Anthropic generally wants to do is push that trade-off curve outward, so that the high-end models of today can be served at much lower cost while being much faster.
Shrivastava: You say that this model is significantly better in terms of understanding nuance and humor. Can you give an example of what that looks like?
Amodei: I think just recently there was a lawyer who assigned Claude 3 Opus to make Supreme Court decisions, and he said it was surprisingly thoughtful, moderate and balanced, and sometimes made better decisions than the Justices. And for everything that Claude 3 is good at, Claude 3.5 Sonnet pushes that a little bit further.
Shrivastava: How were you able to improve the model’s performance? Would you say it was better training data or scaling compute, or a mixture of both?
Amodei: The name of the game here is that we’re always trying to improve our models by improving every ingredient that goes into them: the architecture and algorithms, the quantity and quality of data, and the amount of compute used to train them. I think the general scaling hypothesis continues to hold, although we’re also getting better at getting more out of a given amount of training compute.
Shrivastava: Do you anticipate that these models could be used differently from previous versions? How so?
Amodei: As the models get smarter and also as they get more affordable and faster for any given level of intelligence, that makes a much wider set of applications more economical. So for example in biomedical areas, today we’re using them for better clinical documentation, but as the models get smarter, they’re going to increasingly be used for the core of the field.
YOUR WEEKLY DEMO
As companies continue to train their large AI systems on public data, you may want to keep your Instagram posts and other personal data from seeping into training datasets. Experts suggest using tools like ChatGPT in incognito mode, or interacting with models through the companies’ APIs rather than their consumer apps. If you’re based in the U.K. or Europe, you can also opt out of Meta using your content for training purposes.
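If you go the API route, the sketch below shows what that looks like, using OpenAI’s Python SDK as one example; OpenAI and Anthropic have both said API traffic isn’t used for model training by default, though it’s worth checking each provider’s current data-usage policy. The model name is a placeholder.

```python
# Minimal sketch of querying a model through its API instead of the consumer
# chat app. Assumes the openai package is installed and OPENAI_API_KEY is set
# in the environment; the model name below is just an example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever you have access to
    messages=[{"role": "user", "content": "Summarize this week's AI news in one sentence."}],
)

print(response.choices[0].message.content)
```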
QUIZ
The CEO of this company told shareholders he is on a mission to usher in artificial superintelligence.
- SoftBank
- Tesla
- Amazon
- Meta
Check if you got it right here.
MODEL BEHAVIOR
A tiny island in the eastern Caribbean is making a fortune from the AI boom in the most unlikely way possible. In the late 1990s, Anguilla was awarded the .ai country-code domain. Now, spurred by the AI frenzy, sales of .ai domains have skyrocketed, making up a third of the government’s revenue.
Read the full article here.