6 changes: 3 additions & 3 deletions README.md
@@ -6,7 +6,7 @@ foundation models from Google.
For additional information about Gemma, see
[ai.google.dev/gemma](https://ai.google.dev/gemma). Model weights, including
gemma.cpp specific artifacts, are
-[available on kaggle](https://www.kaggle.com/models/google/gemma-2).
+[available on kaggle](https://www.kaggle.com/models/google/gemma-3).

## Who is this project for?

@@ -34,7 +34,7 @@ portable SIMD for CPU inference.

For production-oriented edge deployments we recommend standard deployment
pathways using Python frameworks like JAX, Keras, PyTorch, and Transformers
-([all model variations here](https://www.kaggle.com/models/google/gemma)).
+([all model variations here](https://www.kaggle.com/models/google/gemma-3)).

## Contributing

@@ -104,7 +104,7 @@ winget install --id Microsoft.VisualStudio.2022.BuildTools --force --override "-
### Step 1: Obtain model weights and tokenizer from Kaggle or Hugging Face Hub

Visit the
-[Kaggle page for Gemma-2](https://www.kaggle.com/models/google/gemma-2/gemmaCpp)
+[Kaggle page for Gemma-2](https://www.kaggle.com/models/google/gemma-3/gemmaCpp)
and select `Model Variations |> Gemma C++`.

On this tab, the `Variation` dropdown includes the options below. Note bfloat16
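The download step described in this hunk can be sketched with the Kaggle CLI. The `kaggle models instances versions download` subcommand exists in the Kaggle CLI, but the model handle `google/gemma-3/gemmaCpp/2b-it-sfp/1` and the variation name are assumptions to verify against the Kaggle page. The snippet prints the command rather than running it, since a real download requires Kaggle API credentials.

```shell
# Hypothetical sketch: fetch a gemma.cpp weight variation via the Kaggle CLI
# (pip install kaggle; needs API credentials in ~/.kaggle/kaggle.json).
# The model handle and variation name below are assumptions -- check the
# `Variation` dropdown on the Kaggle page for the exact names.
VARIATION="2b-it-sfp"
CMD="kaggle models instances versions download google/gemma-3/gemmaCpp/${VARIATION}/1"
echo "Would run: ${CMD}"
```

The downloaded archive would then be extracted (e.g. with `tar -xf`) to obtain the weights file and tokenizer referenced later in the README.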