Getting started

High-level overview of the Leoma subnet and how to participate.

What is Leoma?

Leoma is a Bittensor subnet for AI-generated video:

  • Current support: Text-Image to Video (TI2V) — validators send a first frame (image) and a text prompt; miners return a short video. Validators compare output to the real clip and record whether each submission passed evaluation. Ranking and weights are then derived from the current pass-based scoring rules.

  • Roadmap: Text-to-Video (T2V) and Image-to-Video (I2V) support are planned.
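The TI2V exchange described above can be sketched as a pair of simple data shapes. The field names below are assumptions for illustration only, not the subnet's actual wire format:

```python
from dataclasses import dataclass

@dataclass
class TI2VChallenge:
    """What a validator sends: the first frame of a real clip plus a prompt."""
    first_frame_url: str   # first frame image (e.g. an S3 object URL)
    prompt: str            # text prompt describing the clip
    num_frames: int        # requested length of the generated video

@dataclass
class TI2VResponse:
    """What a miner returns: a short generated video."""
    video_url: str         # location of the miner-generated video
    miner_hotkey: str      # identifies the submitting miner

challenge = TI2VChallenge("s3://bucket/frame.png", "a dog runs on a beach", 49)
response = TI2VResponse("s3://bucket/output.mp4", "5Fexamplehotkey")
```

The validator would then compare `response.video_url` against the real clip and record a pass/fail result for that miner.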

Workflow (simple)

  1. Validators sample tasks (first frame + prompt from real clips, e.g. stored in Hippius S3), send them to miners, collect videos, run evaluation, and submit results to the Leoma API.

  2. The API computes rank and weights from aggregated pass-based results. Validators call GET /weights and set those weights on-chain.

  3. Miners register a Hugging Face model (naming: leoma prefix, hotkey suffix) and Chute endpoint via on-chain commit. They appear in the valid-miners list and receive challenges; stronger pass performance improves rank and rewards.
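Step 2's pass-based aggregation can be sketched as follows. This is a simplified illustration, where each miner's score is its pass rate and weights are pass rates normalized to sum to 1; the real ranking rules live in the Leoma API and may differ:

```python
from collections import defaultdict

def weights_from_pass_results(results):
    """Turn (miner_hotkey, passed) evaluation results into normalized weights.

    Sketch only: score = pass rate per miner, weights = scores normalized
    to sum to 1. The Leoma API's actual rank/weight computation may differ.
    """
    passes, totals = defaultdict(int), defaultdict(int)
    for hotkey, passed in results:
        totals[hotkey] += 1
        if passed:
            passes[hotkey] += 1
    rates = {hk: passes[hk] / totals[hk] for hk in totals}
    total = sum(rates.values())
    if total == 0:
        return {hk: 0.0 for hk in rates}  # nobody passed: no weight to assign
    return {hk: rate / total for hk, rate in rates.items()}

results = [("minerA", True), ("minerA", True), ("minerB", True), ("minerB", False)]
weights = weights_from_pass_results(results)  # minerA outranks minerB
```

A validator would fetch the final per-miner weights from GET /weights and set them on-chain rather than computing them locally.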

Roles

  • Miner: Upload a TI2V model to Hugging Face (name: leoma... + your hotkey), deploy it to Chutes, and commit the registration on-chain. Earn subnet alpha when your outputs pass evaluation and rank well.

  • Validator: Run the validator (evaluation + weight setting). Requires the Leoma API URL, Hippius S3 credentials, an OpenAI API key, a Chutes API key, and a Bittensor wallet.
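A validator's required settings are typically supplied as environment variables. The variable names below are hypothetical; check the validator's own configuration docs for the real ones:

```python
import os

# Hypothetical environment-variable names for the validator's required
# settings; the actual names come from the validator's configuration docs.
REQUIRED = [
    "LEOMA_API_URL",          # Leoma API the validator submits results to
    "HIPPIUS_S3_ACCESS_KEY",  # Hippius S3 credentials for task clips
    "HIPPIUS_S3_SECRET_KEY",
    "OPENAI_API_KEY",         # used during evaluation
    "CHUTES_API_KEY",         # to call miners' Chute endpoints
    "BT_WALLET_NAME",         # Bittensor wallet used to set weights on-chain
]

def missing_config(env=os.environ):
    """Return the names of any required settings that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]
```

Running `missing_config()` at startup gives a quick sanity check before the validator begins evaluating miners.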
