# `HuggingfaceClient.Hub.Leaderboards`
[🔗](https://github.com/huggingface/huggingface_client/blob/v0.1.0/lib/huggingface_client/hub/evaluation/leaderboards.ex#L1)

HuggingFace Hub Leaderboards API.

Leaderboards track and compare model performance on benchmarks.
The Open LLM Leaderboard is the most prominent example.

See: https://huggingface.co/docs/leaderboards

## Example

    # List all public leaderboards
    {:ok, boards} = HuggingfaceClient.list_leaderboards()

    # Get the Open LLM Leaderboard results
    {:ok, results} = HuggingfaceClient.get_leaderboard("open-llm-leaderboard/results")
    results["entries"]
    |> Enum.sort_by(& &1["average_score"], :desc)
    |> Enum.take(10)
    |> Enum.each(fn m ->
      IO.puts("#{m["model_name"]}: #{m["average_score"]}")
    end)

# `get`

```elixir
@spec get(
  String.t(),
  keyword()
) :: {:ok, map()} | {:error, Exception.t()}
```

Gets a specific leaderboard by its Space ID.

Leaderboard Spaces use IDs of the form `"owner/space-name"`.

## Example

    {:ok, lb} = HuggingfaceClient.get_leaderboard("open-llm-leaderboard/results")
    IO.puts("Models evaluated: #{length(lb["entries"])}")

    # Sort by average score
    top10 =
      lb["entries"]
      |> Enum.sort_by(& &1["average_score"], :desc)
      |> Enum.take(10)
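Since `get` returns a result tuple, callers typically branch on both outcomes. A minimal sketch, following the `{:ok, map()} | {:error, Exception.t()}` shape from the spec (the summary strings are illustrative, not library output):

```elixir
# Branch on the two result shapes declared in the spec.
summarize = fn
  {:ok, lb} -> "#{length(lb["entries"])} entries"
  {:error, err} -> "failed: #{Exception.message(err)}"
end

summarize.({:ok, %{"entries" => [%{}, %{}]}})
# => "2 entries"
```

The same pattern applies to every function in this module, since they all return the same tuple shapes.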

# `get_model_results`

```elixir
@spec get_model_results(
  String.t(),
  keyword()
) :: {:ok, [map()]} | {:error, Exception.t()}
```

Gets leaderboard results for a specific model.

## Example

    {:ok, results} = HuggingfaceClient.get_model_leaderboard_results(
      "meta-llama/Llama-3.1-8B-Instruct",
      access_token: "hf_..."
    )
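Each entry in the returned list is a map; collapsing the list into a benchmark-to-score lookup can be sketched as below. Note the `"benchmark"` and `"score"` keys here are assumed field names for illustration, not confirmed by this API:

```elixir
# Hypothetical per-benchmark result rows; the key names are assumptions.
results = [
  %{"benchmark" => "MMLU", "score" => 68.2},
  %{"benchmark" => "HellaSwag", "score" => 81.5}
]

# Build a %{benchmark => score} lookup from the row list.
by_benchmark = Map.new(results, fn r -> {r["benchmark"], r["score"]} end)
# by_benchmark["MMLU"] is then the model's MMLU score
```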

# `list`

```elixir
@spec list(keyword()) :: {:ok, [map()]} | {:error, Exception.t()}
```

Lists public leaderboards on the Hub.

## Options

- `:search` — filter by name
- `:limit` — maximum results (default: 50)
- `:access_token` — Hugging Face API token (`"hf_..."`) for authenticated requests

## Example

    {:ok, leaderboards} = HuggingfaceClient.list_leaderboards()
    Enum.each(leaderboards, fn lb -> IO.puts(lb["name"]) end)
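The `:search` option filters on the server; the same effect can be approximated client-side with a case-insensitive `Enum.filter/2` over the returned maps (the sample board data below is illustrative):

```elixir
# Illustrative leaderboard maps; only the "name" field is used here.
boards = [
  %{"name" => "Open LLM Leaderboard"},
  %{"name" => "MTEB Leaderboard"}
]

# Keep boards whose name contains "llm", ignoring case.
llm_boards =
  Enum.filter(boards, fn b ->
    String.contains?(String.downcase(b["name"]), "llm")
  end)
```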

# `open_llm_leaderboard`

```elixir
@spec open_llm_leaderboard(keyword()) :: {:ok, [map()]} | {:error, Exception.t()}
```

Gets results from the Open LLM Leaderboard, the Hub's most prominent benchmark leaderboard.

Returns the latest evaluation results sorted by average performance.

## Example

    {:ok, entries} = HuggingfaceClient.open_llm_leaderboard()
    entries
    |> Enum.take(5)
    |> Enum.each(fn m ->
      IO.puts("#{m["model_name"]}: avg=#{m["average_score"]}")
    end)

# `submit`

```elixir
@spec submit(keyword()) :: {:ok, map()} | {:error, Exception.t()}
```

Submits a model to a leaderboard for evaluation.

## Options

- `:leaderboard_id` — leaderboard Space ID (required)
- `:model_id` — HF model ID to submit (required)
- `:revision` — model revision/branch (default: `"main"`)
- `:precision` — `"float16"`, `"bfloat16"`, `"float32"`, `"8bit"`, `"4bit"`
- `:weight_type` — `"Original"`, `"Adapter"`, `"Delta"`
- `:access_token` — Hugging Face API token (`"hf_..."`) for authenticated requests

## Example

    {:ok, submission} = HuggingfaceClient.submit_to_leaderboard(
      leaderboard_id: "open-llm-leaderboard",
      model_id: "my-org/my-fine-tuned-llm",
      revision: "main",
      precision: "bfloat16",
      access_token: "hf_..."
    )
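Because `:leaderboard_id` and `:model_id` are required, a caller-side pre-flight check can catch missing options before making the request. A sketch (this helper is not part of the library):

```elixir
# Caller-side sketch: verify required keys are present before calling submit.
validate_submission = fn opts ->
  case Enum.reject([:leaderboard_id, :model_id], &Keyword.has_key?(opts, &1)) do
    [] -> :ok
    missing -> {:error, {:missing_options, missing}}
  end
end

validate_submission.(leaderboard_id: "open-llm-leaderboard", model_id: "my-org/my-llm")
# => :ok
```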

---

*Consult [api-reference.md](api-reference.md) for the complete listing*
