Introducing MARL: Runtime Middleware That Reduces LLM Hallucination Without Fine-Tuning
Now available on PyPI · GitHub · ClawHub · HuggingFace
AI models sense they could be wrong, but they can't actually fix what's broken.
We evaluated 9 SOTA models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, etc.) across 1,800 assessments on FINAL Bench and found a 39.2-percentage-point gap between recognizing potential errors (MA = 0.694) and actually finding and fixing them (ER = 0.302).
MARL (Model-Agnostic Runtime Middleware for LLMs) was built to close this metacognitive gap. It decomposes a single LLM call into a 5-stage expert pipeline (Hypothesis → Solver → Auditor → Adversarial Verifier → Synthesizer), transforming "answer in one shot" into "think, doubt, correct, and rewrite."
No weight modification required: MARL works instantly with GPT-5.4, Claude, Gemini, Llama, or any OpenAI API-compatible LLM by changing a single line, the base_url. It ships with 9 domain-specific emergence engines (invention, pharma, genomics, chemistry, ecology, law, and more; 5,538 expert data items), activated by a simple tag like model="gpt-5.4::pharma".
pip install marl-middleware
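To make the one-line base_url swap concrete, here is a minimal sketch using the official openai Python SDK. The localhost address and API-key handling are assumptions about a typical local deployment; only the ::pharma tag syntax comes from the description above.

from openai import OpenAI

# Point the stock OpenAI client at a running MARL instance instead of
# api.openai.com. The port is an assumption; use your deployment's address.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="YOUR_KEY")

# The ::pharma suffix activates one of the nine domain engines,
# per the tag syntax described above.
response = client.chat.completions.create(
    model="gpt-5.4::pharma",
    messages=[{"role": "user", "content": "Verify this dosing calculation step by step."}],
)
print(response.choices[0].message.content)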
MARL is also officially registered on ClawHub, the skill marketplace of OpenClaw, an AI agent platform with 260K+ developers and 3,200+ skills. It's the first middleware in the Reasoning Enhancement category. One command, clawhub install marl-middleware, gives your AI agent a metacognition upgrade.
The AI benchmark ecosystem has three structural problems. Major benchmarks like MMLU have surpassed 90%, losing discriminative power. Most leaderboards publish unverified self-reported scores; our cross-verification found Claude Opus 4.6's ARC-AGI-2 listed as 37.6% (actual: 68.8%) and Gemini 3.1 Pro's as 88.1% (actual: 77.1%). OpenAI's own audit confirmed 59.4% of SWE-bench Verified tasks are defective, yet it remains widely used.
ALL Bench addresses this by comparing 91 models across 6 modalities (LLM · VLM · Agent · Image · Video · Music) with 3-tier confidence badges (✓✓ cross-verified · ✓ single-source · ~ self-reported). Composite scoring uses a 5-Axis Framework and replaces SWE-Verified with the contamination-resistant LiveCodeBench.
Key finding: metacognition is the largest blind spot. FINAL Bench shows Error Recovery explains 94.8% of self-correction variance, yet only 9 of 42 models are even measured. The 9.2-point spread (Kimi K2.5 at 68.71 vs. rank 9 at 59.5) is 3× the GPQA top-model spread, suggesting metacognition may be the single biggest differentiator among frontier models today.
VLM cross-verification revealed rank reversals: Claude Opus 4.6 leads MMMU-Pro (85.1%) while Gemini 3 Flash leads MMMU (87.6%), producing contradictory rankings between the two benchmarks.
If you've ever tried to compare GPT-5.2 and Claude Opus 4.6 side by side, you've probably hit the same wall: the official Hugging Face leaderboard only tracks open-source models, so the most widely used AI systems simply aren't there. ALL Bench fixes that by bringing closed-source models, open-weight models, and, uniquely, all four teams under South Korea's national sovereign AI program into a single leaderboard. Thirty-one frontier models, one consistent scoring scale.

Scoring works differently here too. Most leaderboards skip benchmarks a model hasn't submitted, which lets models game their ranking by withholding results. ALL Bench treats every missing entry as zero and divides by ten, so there's no advantage in hiding your weak spots. The ten core benchmarks span reasoning (GPQA Diamond, AIME 2025, HLE, ARC-AGI-2), coding (SWE-bench Verified, LiveCodeBench), and instruction-following (IFEval, BFCL). The standout is FINAL Bench, the world's only benchmark measuring whether a model can catch and correct its own mistakes. It reached rank five in global dataset popularity on Hugging Face in February 2026 and has been covered by Seoul Shinmun, Asia Economy, IT Chosun, and Behind.

Nine interactive charts let you explore everything from composite score rankings and a full heatmap to an open-vs-closed scatter plot. Operational metrics like context window, output speed, and pricing are included alongside benchmark scores. All data is sourced from Artificial Analysis Intelligence Index v4.0, arXiv technical reports, Chatbot Arena ELO ratings, and the Korean Ministry of Science and ICT's official evaluation results. Updates monthly.
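The missing-equals-zero rule is simple enough to show as a toy sketch. The function below is illustrative, not the actual scoring code, and the scores in the example are made up; the post names nine of the ten core benchmarks explicitly.

CORE_BENCHMARKS = [
    "GPQA Diamond", "AIME 2025", "HLE", "ARC-AGI-2", "SWE-bench Verified",
    "LiveCodeBench", "IFEval", "BFCL", "FINAL Bench",
]  # nine of the ten core benchmarks named above

def composite(scores: dict[str, float]) -> float:
    # Every missing entry counts as 0 and the divisor stays fixed at 10,
    # so withholding a weak result can only lower the composite.
    return sum(scores.get(b, 0.0) for b in CORE_BENCHMARKS) / 10

# A model that submits only two strong results is penalized, not protected:
print(composite({"GPQA Diamond": 82.0, "AIME 2025": 91.0}))  # 17.3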
The architecture is the key part. Instead of using Gradio as the UI, I use it purely as an API engine. FastAPI serves a fully custom HTML/JS frontend that calls /gradio_api/call/chat via SSE streaming. No DOM conflicts, no layout constraints.
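A minimal sketch of that layout, assuming Gradio's standard mount_gradio_app helper; the chat function and file paths are placeholders. The custom page is registered first so it keeps "/", while everything else, including /gradio_api/call/chat, falls through to the mounted Gradio app.

import gradio as gr
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

def chat(message: str) -> str:
    # Placeholder for the actual model call
    return f"echo: {message}"

# Gradio is used purely as an API engine; its UI is never shown.
engine = gr.Interface(fn=chat, inputs="text", outputs="text", api_name="chat")

app = FastAPI()

@app.get("/")
def index() -> HTMLResponse:
    # Fully custom HTML/JS frontend; it streams from /gradio_api/call/chat via SSE
    return HTMLResponse(open("frontend/index.html").read())

# Mounted after the custom routes, so "/" stays ours while the
# /gradio_api/* endpoints are handled by Gradio.
app = gr.mount_gradio_app(app, engine, path="/")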
Four main features: instant model switching with automatic spec adjustment (max tokens, temperature ceiling, Vision availability all update per model), Thinking Mode via /think prefix with collapsible reasoning chain, Vision image upload via base64 conversion, and HF OAuth implemented directly at the FastAPI level.
For model selection: 122B-A10B with Thinking Mode for math, logic, and agents. 27B for writing, translation, and instruction following. 35B-A3B for fast everyday questions.
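Roughly, the per-model spec adjustment and the /think prefix fit together like this. The numeric limits and vision flags below are illustrative placeholders, not the real per-model values.

MODEL_SPECS = {
    # Limits are placeholders; each checkpoint's real ceiling differs.
    "122B-A10B": {"max_tokens": 32768, "temp_cap": 1.0, "vision": False},
    "27B":       {"max_tokens": 16384, "temp_cap": 0.9, "vision": False},
    "35B-A3B":   {"max_tokens": 8192,  "temp_cap": 0.7, "vision": True},
}

def build_request(model: str, prompt: str, temperature: float) -> dict:
    spec = MODEL_SPECS[model]
    thinking = prompt.startswith("/think")  # Thinking Mode toggle
    return {
        "model": model,
        "prompt": prompt.removeprefix("/think").lstrip(),
        "max_tokens": spec["max_tokens"],
        # Clamp to the per-model ceiling when the user switches models
        "temperature": min(temperature, spec["temp_cap"]),
        "thinking": thinking,
    }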
A few surprises during development: Gradio 6.x removed several parameters quietly, base64 image strings broke gr.Image(type="pil") so I switched to gr.Textbox with backend PIL conversion, and Thinking Mode parsing needed a full rewrite with indexOf instead of regex.
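The base64 workaround is roughly this: the frontend sends the image as a plain string through gr.Textbox, and the backend converts it to a PIL image. A minimal sketch:

import base64, io
from PIL import Image

def decode_image(b64: str) -> Image.Image:
    # Strip an optional data-URL prefix like "data:image/png;base64,"
    if "," in b64:
        b64 = b64.split(",", 1)[1]
    return Image.open(io.BytesIO(base64.b64decode(b64)))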
Thanks to the Qwen team for making this possible. Try it out and let me know what you think.
Most generative AI training data is crawled without consent. Your text gets summarized, images reprocessed, videos clipped, with no way to prove you're the original creator. Existing watermarks are either visible or wiped out by a single AI preprocessing pass.
Detect Before, Track After
Pre-embed: Detect theft without any watermark. Text plagiarism check, image similarity analysis (perceptual hash, SSIM, color histogram, feature matching; sketched below), and video temporal matching catch copies, edits, and excerpts.
Post-embed: Embed invisible multi-layer watermarks. If one layer is destroyed, the others survive independently. Even full removal leaves forensic traces as evidence.
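As an example of the pre-embed side, here is a minimal perceptual-hash comparison using the imagehash library. The hash size and threshold are illustrative; the real tool combines this with SSIM, histograms, and feature matching.

import imagehash
from PIL import Image

def phash_distance(path_a: str, path_b: str, hash_size: int = 16) -> float:
    # Perceptual hashes survive resizing and mild re-encoding, so a small
    # normalized Hamming distance flags a likely copy or light edit.
    h_a = imagehash.phash(Image.open(path_a), hash_size=hash_size)
    h_b = imagehash.phash(Image.open(path_b), hash_size=hash_size)
    return (h_a - h_b) / (hash_size * hash_size)

# e.g. a distance below ~0.15 suggests a match (threshold is illustrative)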
Text: 4 Independent Layers
Four mechanisms work simultaneously: zero-width Unicode characters at morpheme/word boundaries (Korean Kiwi + English NLP), style fingerprinting via synonym, ending, and connective substitution, SHA-256 timestamped evidence packages, and punctuation-anchored micro-marks. Each layer uses a different Unicode category, so attacks on one cannot eliminate the others. Full bilingual support, zero readability impact.
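For intuition, a stripped-down sketch of just the zero-width layer, encoding one bit per word boundary with two invisible characters. The real system uses morpheme-aware placement and four distinct Unicode categories rather than this two-character alphabet.

ZW = {"0": "\u200b", "1": "\u200c"}  # zero-width space / zero-width non-joiner
REV = {v: k for k, v in ZW.items()}

def embed(text: str, payload_bits: str) -> str:
    words = text.split(" ")
    for i, bit in enumerate(payload_bits[: len(words)]):
        words[i] += ZW[bit]  # append an invisible bit after each word
    return " ".join(words)

def extract(text: str) -> str:
    return "".join(REV[c] for c in text if c in REV)

marked = embed("the quick brown fox", "1011")
assert extract(marked) == "1011"  # invisible to readers, recoverable by code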
34-Attack Defense
7 categories, 34 attacks simulated: Unicode normalization, invisible character removal, homoglyph substitution (9,619 confusables), and AI rewriting. Each is scored on Signal (watermark survival) plus Trace (forensic evidence of attack), proving deliberate removal even when watermarks are destroyed.
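The Signal + Trace scoring can be pictured like this. A hypothetical sketch where the simulated attack is invisible-character stripping and extract is any layer decoder, such as the one sketched above; the real scoring is richer than these two booleans.

INVISIBLES = {"\u200b", "\u200c", "\u200d", "\ufeff"}

def strip_attack(text: str) -> str:
    # Simulated attack: remove every known invisible character
    return "".join(c for c in text if c not in INVISIBLES)

def score(marked: str, attacked: str, extract) -> dict:
    return {
        "signal": extract(attacked) == extract(marked),  # did the mark survive?
        "trace": attacked != marked,  # the alteration itself is forensic evidence
    }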
Image & Video
Images: DCT frequency-domain watermarks surviving JPEG compression and resize. Videos: keyframe watermarking with temporal propagation and majority-vote extraction. Both support pre-embed similarity detection.
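A textbook-style sketch of DCT-domain embedding on a single 8x8 luminance block; the coefficient position and strength are illustrative, not the tool's actual parameters. JPEG compression and resizing mostly perturb high frequencies, which is why a mid-frequency coefficient tends to survive.

import numpy as np
from scipy.fftpack import dct, idct

def embed_bit(block: np.ndarray, bit: int, strength: float = 8.0) -> np.ndarray:
    # 2-D DCT of an 8x8 luminance block
    c = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
    # Force a mid-frequency coefficient's sign to carry the bit
    c[3, 4] = strength if bit else -strength
    return idct(idct(c, axis=1, norm="ortho"), axis=0, norm="ortho")

def extract_bit(block: np.ndarray) -> int:
    c = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
    return int(c[3, 4] > 0)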
Who Is This For
Creators, rights holders needing legal evidence, media companies, and organizations tracking document leaks. Korean/English bilingual, open source, Gradio-based.