
Apple’s AI moment, Enter the 1-bit LLM era, Llama 3 is coming

The big Apple is finally catching up.

Hello, Starters!

We constantly emphasise that AI development is a wild race: if you don't keep pace, you'll quickly be left behind. That seemed to be Apple's case, but not anymore.

Here’s what you’ll find today:

  • Apple is “investing significantly” in GenAI

  • The era of 1-bit LLMs

  • Meta’s Llama 3 is slated for July

  • Adobe’s upcoming GenAI music tool 

  • Alibaba unveils EMO

  • And more.

🍎 Apple is “investing significantly” in GenAI

During Apple's annual shareholder meeting, CEO Tim Cook said the company sees generative AI's potential to transform users' lives and is "investing significantly" in it. The remark puts to rest long-standing rumours that Apple is falling behind in the AI race.

It was recently reported that Apple has wound down its "Apple Car" project, with part of that team shifting to generative AI work, although Cook didn't comment on the matter.

Apple is no stranger to AI; the company has long said the technology powers most of its products, though it shies away from the term itself, preferring "machine learning" instead. A significant AI announcement is expected at this year's WWDC.

🤖 The era of 1-bit LLMs (1 min)

Researchers have recently introduced BitNet b1.58, a ternary ("1-bit") variant of LLMs in which every weight is -1, 0 or +1, i.e. roughly 1.58 bits per parameter. Despite the extreme compression, it matches the perplexity and end-task performance of full-precision (FP16) Transformer LLMs of the same size.

This opens a path to high-performance models that are also far cheaper to run: ternary weights cut memory and energy use and replace most multiplications in matrix products with additions, potentially transforming how future LLM applications are deployed.
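For the curious, here is a minimal sketch (our illustration, not the authors' code) of the absmean ternary quantisation the paper describes: each weight matrix is scaled by its mean absolute value and then rounded to -1, 0 or +1.

```python
import numpy as np

def ternary_quantise(w: np.ndarray, eps: float = 1e-5):
    """Absmean quantisation: map full-precision weights to {-1, 0, +1}."""
    gamma = np.mean(np.abs(w)) + eps            # per-tensor scale
    w_q = np.clip(np.round(w / gamma), -1, 1)   # ternary weights
    return w_q, gamma                           # dequantise as w_q * gamma

w = np.random.randn(4, 4).astype(np.float32)
w_q, gamma = ternary_quantise(w)
print(w_q)  # every entry is -1, 0 or +1 → log2(3) ≈ 1.58 bits per weight
```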

🦙 Meta’s Llama 3 is slated for July

According to sources, Meta will release the next version of its popular open-source model "Llama" in July. The upcoming Llama 3 reportedly aims to improve user interaction by giving context on complex or contentious questions rather than simply refusing to answer them.

The model is intended to rival GPT-4 and is expected to be roughly double the size of the current Llama 2. Little else has been disclosed, so we have yet to find out whether it will be text-only or multimodal.

🎶A research team from Adobe, the University of California, and Carnegie Mellon is developing “Project Music GenAI Control,” a platform that can generate audio from text prompts or a reference sound, enabling users to customise the outcome to their liking.

🎬Alibaba's Institute for Intelligent Computing has unveiled an AI system dubbed “EMO,” or Emote Portrait Alive, which can create fluid and expressive movements to animate a portrait photo. Additionally, it can generate videos of the person talking or singing to a provided audio track.

📰 In other news: The Intercept, Raw Story, and AlterNet sue OpenAI and Microsoft.


Thank you for reading!