Langcache Semantic Caching for OpenClaw

by manvinder01 v1.0.0

This skill should be used when the user asks to "enable semantic caching", "cache LLM responses", "reduce API costs", "speed up AI responses", "configure LangCache", "search the semantic cache", or "store responses in cache", or mentions Redis LangCache, semantic similarity caching, or LLM response caching. It provides integration with the Redis LangCache managed service for semantic caching of prompts and responses.
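Semantic caching returns a stored response when a new prompt is similar in meaning (not just identical text) to a previously cached one. LangCache handles embedding and similarity matching on the server side; the core idea can be illustrated with a minimal in-memory sketch. Everything below is illustrative pseudo-infrastructure, not the LangCache API: `SemanticCache`, its `store`/`search` methods, and the embeddings are all hypothetical stand-ins.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    """Toy in-memory semantic cache.

    Stores (embedding, response) pairs and returns a cached response
    when a query embedding is similar enough to a stored one.
    """

    def __init__(self, threshold=0.9):
        self.threshold = threshold  # minimum similarity for a cache hit
        self.entries = []           # list of (embedding, response) pairs

    def store(self, embedding, response):
        self.entries.append((embedding, response))

    def search(self, embedding):
        # Return the best-matching cached response above the threshold,
        # or None on a cache miss.
        best_score, best_response = 0.0, None
        for emb, resp in self.entries:
            score = cosine_similarity(embedding, emb)
            if score > best_score:
                best_score, best_response = score, resp
        return best_response if best_score >= self.threshold else None
```

Usage: after `store`-ing a prompt's embedding with its LLM response, a paraphrased prompt whose embedding lands close to the stored one hits the cache, so the LLM call is skipped; a dissimilar prompt misses and falls through to the model.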

1,318 downloads · 1 version
