Commit 3f8f0f8: Update README.md
1 parent 8e8c520 commit 3f8f0f8

1 file changed: README.md
Lines changed: 24 additions & 1 deletion
@@ -28,6 +28,28 @@ This package provides:
 
 Documentation is available at [https://llama-cpp-python.readthedocs.io/en/latest](https://llama-cpp-python.readthedocs.io/en/latest).
 
+## Discussions
+
+Starting March 2026, I am excited to announce that we have officially enabled the **Discussions** tab for `llama-cpp-python`!
+
+You can access it here: [GitHub Discussions](https://github.com/JamePeng/llama-cpp-python/discussions).
+
+**Why Discussions? Documentation updates**
+As the project has evolved, the existing documentation (`docs`) has become bloated and outdated. To provide more timely and clearer information:
+
+* **New feature releases:** Whenever a new feature is rolled out, I will publish a dedicated standalone article in the Discussions section, with a detailed explanation, a usage guide, and any important caveats.
+    * This approach serves as more agile, interactive "live documentation" while we work out the best way to refactor the old docs.
+
+**Join the community**
+I warmly welcome everyone to use this new space. Let's build together:
+
+* 💬 **Discuss & share:** Have a question, an idea, or a cool use case? Share it with the community!
+* 🛠️ **Maintain & test:** Help test new features, troubleshoot issues, and collaboratively maintain the repository.
+* 📚 **Learn & grow:** I hope everyone can benefit from this project, learn from each other, and gain valuable insights.
+
+Thank you for your continued support!
+
 ## Installation
 
 Requirements:
@@ -522,6 +544,7 @@ Below are the supported multi-modal models and their respective chat handlers (P
 | [paddleocr-vl-1.5](https://huggingface.co/JamePeng2023/PaddleOCR-VL-1.5-GGUF) | `PaddleOCRChatHandler` | `paddleocr` |
 | [qwen2.5-vl](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct-GGUF) | `Qwen25VLChatHandler` | `qwen2.5-vl` |
 | [qwen3-vl](https://huggingface.co/unsloth/Qwen3-VL-8B-Thinking-GGUF) | `Qwen3VLChatHandler` | `qwen3-vl` |
+| [qwen3.5](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF) | `Qwen35ChatHandler` | `qwen3.5` |
 
 Then you'll need to use a custom chat handler to load the clip model and process the chat messages and images.
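As a reference for reviewers, the custom-chat-handler flow described in this hunk can be sketched as follows. This is a minimal sketch, not the project's documented example: the local model/mmproj paths are hypothetical placeholders, `n_ctx` is an assumed value, and the handler class name (`Qwen25VLChatHandler`) is taken from the table above; the message format follows the OpenAI-style multi-modal schema that `create_chat_completion` accepts.

```python
# Sketch of loading a multi-modal model with a chat handler from the table
# above. MODEL_PATH / MMPROJ_PATH are hypothetical local files.
import os

def build_image_message(image_url: str, prompt: str) -> dict:
    """Build an OpenAI-style multi-modal user message (image + text)."""
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": prompt},
        ],
    }

MODEL_PATH = "qwen2.5-vl-3b-instruct-q4_k_m.gguf"  # hypothetical path
MMPROJ_PATH = "mmproj-qwen2.5-vl-3b-f16.gguf"      # hypothetical path

if os.path.exists(MODEL_PATH) and os.path.exists(MMPROJ_PATH):
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Qwen25VLChatHandler

    # The chat handler loads the clip/mmproj model and formats image tokens.
    chat_handler = Qwen25VLChatHandler(clip_model_path=MMPROJ_PATH)
    llm = Llama(
        model_path=MODEL_PATH,
        chat_handler=chat_handler,
        n_ctx=4096,  # image embeddings consume extra context
    )
    result = llm.create_chat_completion(
        messages=[build_image_message("https://example.com/cat.png",
                                      "Describe this image.")]
    )
    print(result["choices"][0]["message"]["content"])
```

The guard on file existence keeps the sketch safe to paste: nothing is downloaded or loaded unless both GGUF files are actually present.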
@@ -1039,7 +1062,7 @@ This package is under active development and I welcome any contributions.
 To get started, clone the repository and install the package in editable / development mode:
 
 ```bash
-git clone --recurse-submodules https://github.com/abetlen/llama-cpp-python.git
+git clone https://github.com/JamePeng/llama-cpp-python --recursive
 cd llama-cpp-python
 
 # Upgrade pip (required for editable mode)

0 commit comments

Comments
 (0)