From 83bdfb10388438aa416850f38ba645af852fa2c2 Mon Sep 17 00:00:00 2001
From: Jan Wassenberg
Date: Mon, 11 May 2026 08:34:33 -0700
Subject: [PATCH] Update Kaggle links to Gemma v3.

Refs #911

PiperOrigin-RevId: 913714909
---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index b740f66a..4c957f0e 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ foundation models from Google. For additional information about Gemma, see
 [ai.google.dev/gemma](https://ai.google.dev/gemma).
 
 Model weights, including gemma.cpp specific artifacts, are
-[available on kaggle](https://www.kaggle.com/models/google/gemma-2).
+[available on kaggle](https://www.kaggle.com/models/google/gemma-3).
 
 ## Who is this project for?
 
@@ -34,7 +34,7 @@ portable SIMD for CPU inference.
 
 For production-oriented edge deployments we recommend standard deployment
 pathways using Python frameworks like JAX, Keras, PyTorch, and Transformers
-([all model variations here](https://www.kaggle.com/models/google/gemma)).
+([all model variations here](https://www.kaggle.com/models/google/gemma-3)).
 
 ## Contributing
 
@@ -104,7 +104,7 @@ winget install --id Microsoft.VisualStudio.2022.BuildTools --force --override "-
 ### Step 1: Obtain model weights and tokenizer from Kaggle or Hugging Face Hub
 
 Visit the
-[Kaggle page for Gemma-2](https://www.kaggle.com/models/google/gemma-2/gemmaCpp)
+[Kaggle page for Gemma-3](https://www.kaggle.com/models/google/gemma-3/gemmaCpp)
 and select `Model Variations |> Gemma C++`. On this tab, the `Variation`
 dropdown includes the options below. Note bfloat16