Fastino Labs Open-Sources GLiGuard: A 300M Parameter Safety Moderation Model That Matches or Exceeds Accuracy of Models 23–90x Its Size
As LLM-powered applications move into production — and as AI agents take on more consequential tasks like browsing the web, writing and executing code, and interacting with external services — safety moderation has quietly become one of the most operationally expensive parts of the stack.
Most developers who’ve deployed a production LLM system know the problem: you need to evaluate every user prompt before it reaches the model, and every model response before it reaches the user. That means your guardrail model runs on every single request, at every turn of a conversation. The guardrail latency compounds. The cost compounds. And the current generation of open-source guardrail models — LlamaGuard4 (12B), WildGuard (7B), ShieldGemma (27B), NemoGuard (8B) — is made up of decoder-only models with billions of parameters, built for flexibility but not for speed.
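To make that deployment pattern concrete, here is a minimal sketch of the per-request guardrail loop, assuming hypothetical `guard` and `llm` objects; it is not GLiGuard's or any particular library's API.

```python
# Minimal sketch of the guardrail loop: every prompt is screened before the
# LLM sees it, and every response is screened before the user sees it.
# `guard` and `llm` are hypothetical placeholder objects, not a real API.
def moderated_turn(guard, llm, user_prompt: str) -> str:
    # 1. Screen the incoming prompt.
    if guard.classify(user_prompt)["safety"] == "unsafe":
        return "Sorry, I can't help with that."

    # 2. Generate the model response.
    response = llm.generate(user_prompt)

    # 3. Screen the outgoing response before returning it to the user.
    if guard.classify(response)["safety"] == "unsafe":
        return "Sorry, I can't help with that."

    return response
```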
Fastino Labs released GLiGuard, a 300 million parameter open-source safety moderation model designed to address this specific problem. GLiGuard evaluates multiple safety dimensions in a single pass, and across nine safety benchmarks, its accuracy matches or exceeds models that are 23 to 90 times its size while running up to 16 times faster.
https://pioneer.ai/blog/gliguard-16x-faster-safety-moderation-with-a-small-language-model
Why Decoder LLMs May Not Be the Right Tool for Safety Moderation
To understand what makes GLiGuard different, it helps to understand why existing guardrail models are slow. Most major guardrail models are built on decoder-only transformer architectures: they generate their safety verdicts autoregressively, one token at a time, the same way a large language model generates a response to a chat message.
This design made sense when safety requirements were fluid. Decoder models can interpret natural language task descriptions and adapt to new safety policies without retraining. But autoregressive generation is inherently sequential, which makes it slow and computationally expensive.
There’s a compounding problem on top of that. Most guardrail models need to assess inputs across multiple safety dimensions: what type of harm is present, whether the user prompt is attempting to bypass safety training, whether the model’s response is itself unsafe, and so on. Because decoder models generate output sequentially, these assessments are typically produced one after another, and latency compounds as more criteria are evaluated.
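A rough sketch of why this compounds is below, assuming one autoregressive generation per safety dimension; `decoder_guard.generate` is a hypothetical stand-in, not any specific model's API.

```python
import time

# With a decoder-only guard, each safety dimension is typically a separate
# autoregressive generation, so wall-clock time grows with the number of
# dimensions assessed. `decoder_guard.generate` is a hypothetical stand-in.
def assess_sequentially(decoder_guard, text, dimensions):
    verdicts = {}
    start = time.perf_counter()
    for dim in dimensions:                    # one generation per dimension
        verdicts[dim] = decoder_guard.generate(task=dim, text=text)
    elapsed = time.perf_counter() - start     # roughly the sum of per-dimension latencies
    return verdicts, elapsed
```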
In other words, the architecture that makes decoder models flexible is also the architecture that makes them the wrong tool for what is fundamentally a classification problem.
What GLiGuard Actually Does
GLiGuard is a small encoder-based model that reframes safety moderation as a text classification problem rather than a text generation problem. Encoder models process the entire input at once and select a label from a fixed set, whereas decoder models generate their output one token at a time, left to right.
The key architectural insight is in how GLiGuard handles multiple tasks at once. Instead of generating tokens, GLiGuard packs the input text and the task definitions (their candidate labels) into a single input, scores every label simultaneously in one forward pass, and returns the highest-scoring label for each task. Because all tasks and their candidate labels are part of the input itself, evaluating additional safety dimensions doesn’t add latency; it simply means including more labels in the input.
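A conceptual sketch of that single-pass scheme is below, with truncated label sets and a hypothetical `encoder.score_labels` method; it illustrates the idea, not GLiGuard's actual code.

```python
# Conceptual sketch: the text and every task's candidate labels are encoded
# together, each label is scored in the same forward pass, and the top label
# per task is returned. Label sets are truncated for brevity, and
# `encoder.score_labels` is a hypothetical method, not GLiGuard's real API.
TASKS = {
    "safety": ["safe", "unsafe"],
    "jailbreak_strategy": ["none", "prompt_injection", "roleplay_bypass"],
    "harm_category": ["none", "violence", "hate_speech", "pii_exposure"],
    "refusal": ["compliance", "refusal"],
}

def classify_single_pass(encoder, text: str) -> dict:
    # One forward pass over the text plus all labels; assumed to return
    # a nested dict of the form {task: {label: score}}.
    scores = encoder.score_labels(text, TASKS)
    # Pick the highest-scoring label for each task.
    return {task: max(labels, key=scores[task].get) for task, labels in TASKS.items()}
```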
GLiGuard runs four moderation tasks concurrently in one forward pass (a sketch of how their outputs can be combined appears after the list):
Safety classification (safe / unsafe) — applied to both user prompts before generation and model responses after generation.
Jailbreak strategy detection across 11 strategies, including prompt injection, roleplay bypass, instruction override, and social engineering. If any jailbreak strategy is detected, the prompt is automatically flagged as unsafe.
Harm category detection across 14 categories — violence, sexual content, hate speech, PII exposure, misinformation, child safety, copyright violation, and others. A single input can trigger multiple categories at once.
Refusal detection (compliance / refusal), tracked separately to help measure over-refusal (when a model refuses safe requests) and detect false compliance (when a model appears to comply but doesn’t). If a refusal is detected, the response is automatically marked as safe.
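Given the rules above (any detected jailbreak strategy makes a prompt unsafe; a detected refusal makes a response safe), the downstream decision logic might look like this sketch, where the shape of the `verdicts` dict is an assumption rather than GLiGuard's documented output format.

```python
# Sketch of the decision rules applied to the four task outputs; the shape of
# `verdicts` ({"safety": ..., "jailbreak_strategy": ..., "refusal": ...}) is assumed.
def prompt_is_unsafe(verdicts: dict) -> bool:
    # An unsafe classification OR any detected jailbreak strategy flags the prompt.
    return verdicts["safety"] == "unsafe" or verdicts["jailbreak_strategy"] != "none"

def response_is_safe(verdicts: dict) -> bool:
    # A detected refusal is automatically treated as safe; otherwise defer to
    # the safety classification of the response itself.
    return verdicts["refusal"] == "refusal" or verdicts["safety"] == "safe"
```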
Training Data and Fine-Tuning
GLiGuard was trained on a mixture of human-annotated and synthetically generated training data. For prompt safety, response safety, and refusal detection, the team used WildGuardTrain, a dataset of 87,000 human-annotated examples. For harm category and jailbreak strategy detection, labels for the unsafe samples were generated using GPT-4.1.
During early training, the model struggled to distinguish between similar harm categories like toxic speech and violence, so the team used Pioneer to generate supplemental synthetic data with edge cases targeting these fine-grained distinctions.
On the architecture side, GLiGuard was trained via full fine-tuning of the GLiNER2-base-v1 checkpoint for 20 epochs using the AdamW optimizer. GLiNER2 is Fastino’s own architecture for multi-task text classification — a natural starting point for a model designed to score multiple label sets in one pass.
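For orientation, a bare-bones full fine-tuning loop with the stated ingredients (AdamW, 20 epochs, all parameters trainable) might look like the PyTorch sketch below; the model, data loader, and learning rate are placeholders, since the post does not publish the complete training recipe.

```python
from torch.optim import AdamW

# Bare-bones full fine-tuning loop matching the stated setup (AdamW, 20 epochs,
# all parameters trainable). `model`, `train_loader`, and the learning rate are
# placeholders; this is a sketch, not the actual GLiGuard training code.
def finetune(model, train_loader, epochs: int = 20, lr: float = 2e-5):
    optimizer = AdamW(model.parameters(), lr=lr)   # full fine-tuning: nothing frozen
    model.train()
    for _ in range(epochs):
        for batch in train_loader:
            optimizer.zero_grad()
            loss = model(**batch).loss             # assumes the model returns its loss
            loss.backward()
            optimizer.step()
    return model
```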
Benchmark Results: Accuracy and Speed
The research team evaluated GLiGuard across nine established safety benchmarks. These benchmarks cover both prompt and response classification, testing whether a model can identify harmful content, withstand adversarial attacks, distinguish between different types of harm, and avoid over-flagging safe content. Results use macro-averaged F1, a standard metric that balances precision and recall.
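As a quick illustration of the metric (computed here with scikit-learn on made-up labels, not benchmark data):

```python
from sklearn.metrics import f1_score

# Macro F1 computes an F1 score per class and averages them equally, so rare
# classes count as much as common ones. Labels below are invented for illustration.
y_true = ["safe", "unsafe", "unsafe", "safe", "unsafe"]
y_pred = ["safe", "unsafe", "safe",   "safe", "unsafe"]

print(f1_score(y_true, y_pred, average="macro"))   # -> 0.8
```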
On accuracy:
GLiGuard scores 87.7 average F1 on prompt classification, within 1.7 points of the best model (PolyGuard-Qwen at 89.4).
It achieves the second-highest average F1 on response classification (82.7), behind only Qwen3Guard-8B (84.1).
It outperforms LlamaGuard4-12B, ShieldGemma-27B, and NemoGuard-8B despite being 23–90× smaller.
On throughput and latency, benchmarked on a single NVIDIA A100 GPU:
GLiGuard achieves up to 16.2× higher throughput (133 vs. 8.2 samples/s at batch size 4).
GLiGuard achieves up to 16.6× lower latency: 26 ms vs. 426 ms at sequence length 64.
These are not marginal improvements. At 26 ms per request versus 426 ms, the difference is meaningful in any real-time user-facing application, and the compounding effect across a multi-turn conversation makes the gap even larger in practice.
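A back-of-the-envelope calculation makes that compounding concrete, assuming one prompt check and one response check per turn (an assumption about the deployment, not a reported figure) at the 64-token latencies above.

```python
# Back-of-the-envelope guardrail overhead for a 10-turn conversation, assuming
# one prompt check plus one response check per turn at the reported latencies.
turns = 10
checks_per_turn = 2                            # assumption: prompt + response

gliguard_ms = turns * checks_per_turn * 26     # 520 ms of total guardrail time
decoder_ms = turns * checks_per_turn * 426     # 8,520 ms of total guardrail time

print(gliguard_ms, decoder_ms)
```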
GLiGuard is released under the Apache 2.0 license; the 300M-parameter model runs on a single GPU and is available on Hugging Face and via Pioneer Inference.