Ollama Memory Embeddings

by vidarbrekke v1.0.4

Configure OpenClaw memory search to use Ollama as the embeddings server (via its OpenAI-compatible /v1/embeddings endpoint) instead of loading a local GGUF through the built-in node-llama-cpp runtime. Includes interactive model selection and optional import of an existing local embedding GGUF into Ollama.
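Because the endpoint is OpenAI-compatible, any standard embeddings client can talk to it. A minimal sketch of such a request, assuming Ollama's default port (11434) and a hypothetical embedding model name (`nomic-embed-text`):

```python
import json
import urllib.request

# Ollama's default local endpoint; the /v1/* routes mirror the OpenAI API shape.
OLLAMA_URL = "http://localhost:11434/v1/embeddings"

def build_request(texts, model="nomic-embed-text"):
    """Build an OpenAI-style embeddings request body.

    The model name here is an assumption for illustration; use whichever
    embedding model you selected or imported into Ollama.
    """
    return {"model": model, "input": texts}

def embed(texts, model="nomic-embed-text"):
    """POST texts to Ollama's OpenAI-compatible embeddings endpoint."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(texts, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style response: body["data"][i]["embedding"] is a float vector.
    return [item["embedding"] for item in body["data"]]
```

Importing an existing local GGUF into Ollama (the optional step the skill offers) is typically done with a Modelfile containing a `FROM ./model.gguf` line, followed by `ollama create <name> -f Modelfile`.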

1,036 downloads · 2 stars · 2 installs · 5 versions
