Dear Prime Intellect Investors,

We are excited to send you our monthly investor update for October, to keep you informed about our progress and areas where your support can make the greatest impact. (You can see all updates here).

Asks

  1. Connect us to AI teams looking for the cheapest reliable GPUs on demand, from small allocations to gigantic clusters, and from short to long durations. We’ll have it all and should be able to find them the best compute deals.
  2. We appreciate introductions or pointers to great candidates for our open positions (see our Careers page):
    1. Chief Operating Officer
    2. Founding Protocol Engineer
    3. Founding GTM/Sales
    4. Research Engineer - Distributed Training
    5. Research Engineer - Mid Training
    6. AI Research Residency
    7. (Senior) Full-Stack Engineer
    8. (Senior) UI/UX Engineer

Key Highlights:

  1. **We started the first-ever 10B-parameter decentralized training run (and have surpassed 40% progress), and we launched our own decentralized training framework, Prime.**

  2. Prime Compute platform is growing 🚀

    1. We scaled the compute spend on our platform to $332k this month (vs. $248k last month), 33.8% growth month over month. Beyond this, and not yet reflected in these numbers, we signed our first $40k contract with a customer and are preparing a first 6-7 figure deal. We have many initiatives in flight that should grow demand much further.
      1. 2,671 signups (up 45%)
      2. 1,351 GPU orders (up 91%)
      3. 524 new paying customers vs. 229 last month (up 129%)

  3. Platform feature progress:

    1. Custom Docker images are live, allowing users to create and share templates on our platform. The first partners have already created public custom Docker images to launch GPUs running their applications, decentralized AI protocol miners, and more.
    2. Our public API is now in private beta. It enables anyone to plug into the cheapest compute for their applications and workflows.
    3. We shipped many features that make it easier to create new distributed training runs and allow users to seamlessly contribute compute. We are also exploring partnerships to host exciting new research experiments on our platform.
  4. NeurIPS paper acceptance:

    1. Our paper on the 7B-parameter Metagenomic Foundation Model, developed in collaboration with the Nucleic Acid Observatory at MIT, has been accepted to the Foundation Models for Science workshop at NeurIPS. We are preparing a public announcement to highlight our state-of-the-art scientific foundation model and our initiatives to prevent near-term risks from AGI and ASI.
  5. New Hires:

    1. Mike joined us full-time as a research engineer. He’s the creator of LibreCUDA, an open-source library that replaces the CUDA driver API to enable launching CUDA code on Nvidia GPUs without relying on the proprietary CUDA runtime. He will be working on the lower-level stack within our distributed training team.
  6. We are hosting a Prime Intellect protocol event during Ethereum’s Devcon week on Nov 12th, 6-9pm, and have sent everyone an invite. We will also be speaking at the following events, among others:

    1. Nov 11th: https://lu.ma/SuperintelligenceSummit?tk=AKWT5f
    2. Nov 12th: https://lu.ma/aj6wc6rq
    3. Nov 14th: d/acc day @ Devcon, co-hosted with Vitalik and others.
    4. Nov 14th: https://lu.ma/7s7231aw?tk=vFoOW6