Peer Review

by staybased v1.0.0

Multi-model peer review layer using local LLMs via Ollama to catch errors in cloud model output. Fan out critiques to 2-3 local models, aggregate flags, and synthesize a consensus.

Use when:
- Validating trade analyses
- Reviewing agent output quality
- Testing local model accuracy
- Checking any high-stakes Claude output before publishing or acting on it

Don't use when:
- Simple fact-checking (just search the web)
- Tasks that don't benefit from multi-model consensus
- Time-critical decisions where 60s of latency is unacceptable
- Reviewing trivial or low-stakes content

Negative examples:
- "Check if this date is correct" → No. Just web search it.
- "Review my grocery list" → No. Not worth multi-model inference.
- "I need this answer in 5 seconds" → No. Peer review adds 30-60s of latency.

Edge cases:
- Short text (<50 words) → Models may not find meaningful issues. Consider skipping.
- Highly technical domains → Local models may lack domain knowledge. Weight flags lower.
- Creative writing → Factual review doesn't apply well. Use only for logical consistency.
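The fan-out/aggregate/synthesize flow can be sketched as below. This is a minimal illustration, not the skill's actual implementation: the model names in `REVIEWERS` are assumptions (any locally pulled Ollama models work), and `consensus` uses a simple majority-quorum rule over normalized flag strings as one plausible aggregation strategy. The endpoint shown is Ollama's standard `/api/generate` route.

```python
# Hedged sketch of the fan-out -> aggregate -> consensus loop.
import json
import urllib.request
from collections import Counter

# Assumed reviewer models; substitute whatever is pulled locally.
REVIEWERS = ["llama3.1", "qwen2.5", "mistral"]

def critique(model: str, draft: str) -> list[str]:
    """Ask one local model (via Ollama's /api/generate) for a JSON list of issues."""
    prompt = (
        "Review the text below. Respond ONLY with a JSON array of short "
        "issue strings (empty array if none).\n\n" + draft
    )
    body = json.dumps(
        {"model": model, "prompt": prompt, "stream": False, "format": "json"}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        out = json.load(resp)["response"]
    try:
        flags = json.loads(out)
        return [f for f in flags if isinstance(f, str)]
    except json.JSONDecodeError:
        return []  # model ignored the JSON instruction; drop its vote

def consensus(all_flags: list[list[str]], quorum: int = 2) -> list[str]:
    """Keep flags raised by at least `quorum` reviewers (case-insensitive match)."""
    counts = Counter(
        f.strip().lower() for flags in all_flags for f in set(flags)
    )
    return [flag for flag, n in counts.items() if n >= quorum]
```

A caller would run `critique(m, draft)` for each reviewer (sequentially or in a thread pool, which is where the 30-60s latency comes from) and pass the collected lists to `consensus`; flags surviving the quorum are what gets surfaced back to the cloud model.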

414 Downloads · 2 Installs · 1 Version

Latest Changes

Install Peer Review with One Click

Get a managed OpenClaw server and install this skill from your dashboard. No SSH, no Docker, no configuration needed.

Deploy with ClawHost