# Introduction

Leoma is an **AI video subnet** on [Bittensor](https://docs.learnbittensor.org/). Miners run **Text-Image to Video (TI2V)** models. Validators sample tasks (a first frame plus a prompt, drawn from real clips), send them as challenges to miners, evaluate the returned videos, and set on-chain weights from the current pass-based ranking.
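The flow above ends with validators turning pass-based rankings into on-chain weights. A minimal sketch of that last step, assuming a simple proportional scheme over per-miner pass counts (the function, field names, and weighting rule here are illustrative assumptions, not the subnet's actual scoring code):

```python
# Hypothetical sketch: convert each miner's pass count over an evaluation
# window into normalized weights. Real validators would derive pass counts
# from challenge evaluations and submit weights on-chain via Bittensor.

def weights_from_passes(pass_counts: dict[str, int]) -> dict[str, float]:
    """Normalize per-miner pass counts into weights that sum to 1."""
    total = sum(pass_counts.values())
    if total == 0:
        # No miner passed any challenge: assign zero weight everywhere.
        return {hotkey: 0.0 for hotkey in pass_counts}
    return {hotkey: passes / total for hotkey, passes in pass_counts.items()}

# Example: three miners with different pass counts.
weights = weights_from_passes({"miner_a": 6, "miner_b": 3, "miner_c": 1})
```

Here `miner_a` earns 0.6 of the weight, `miner_b` 0.3, and `miner_c` 0.1; the actual ranking-to-weight mapping used by the subnet may differ.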

**Supported model type (current):** **Text-Image to Video (TI2V)** only.

**Roadmap:** Support for **Text-to-Video (T2V)** and **Image-to-Video (I2V)** is planned.

## Contents

* [**Getting started**](https://docs.leoma.ai/getting-started) — Protocol overview and Bittensor context
* [**Miner setup**](https://docs.leoma.ai/mining) — Hugging Face model (naming, upload), on-chain commit, monitoring
* [**Validator setup**](https://docs.leoma.ai/validation) — Evaluation workflow and weight setting
* [**Storage (Hippius S3)**](https://docs.leoma.ai/storage) — Source videos and sample artifacts
* [**API reference**](https://docs.leoma.ai/api) — Leoma API endpoints and auth

## Resources

* **App / dashboard:** Leoma frontend (Overview, Product, Network, Docs, Help)
* **Whitepaper:** Protocol details and incentives
* **Community:** Discord, Twitter, GitHub (see Help page in the app)
