When switching embedding models (e.g., local nomic-embed-text 768d → OpenAI text-embedding-3-large 3072d), the extracted facts and entities are still valid — only the vectors need recomputing.
Currently the only option is to clear the DB and re-retain everything, which re-runs LLM extraction unnecessarily.
A reindex-embeddings command (CLI or API) that recomputes vectors from stored memory text using the current embedding config would make model upgrades cheap.
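To illustrate the idea, here is a minimal sketch of what such a reindex pass could look like. All names here (`Memory`, `embed`, `reindex`) are hypothetical stand-ins, not the project's actual API; the point is that only vectors are recomputed while the extracted text is left untouched.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    id: int
    text: str
    vector: list[float] = field(default_factory=list)

def embed(text: str, dim: int) -> list[float]:
    # Stand-in for the currently configured embedding model
    # (e.g. text-embedding-3-large at 3072 dims); returns a dummy vector.
    return [float(len(text) % dim)] * dim

def reindex(memories: list[Memory], dim: int) -> int:
    """Recompute every vector from stored memory text.

    LLM extraction results (facts/entities) are never re-run here;
    only the embedding column is rewritten.
    """
    for m in memories:
        m.vector = embed(m.text, dim)
    return len(memories)

store = [Memory(1, "Alice prefers dark mode"), Memory(2, "Bob lives in Oslo")]
count = reindex(store, dim=4)
print(count, len(store[0].vector))  # 2 4
```

In a real implementation this would stream rows in batches from the DB and write the new vectors (and the new dimensionality) back in place, so the command stays cheap even for large banks.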
Related: #738 (vector extension per-bank), #385 (embedding quantization)