Test prompt injection models
without the setup.

Models paired with normalizers that catch what raw classifiers miss.
Try it in the playground or integrate via API.
100 free scans/month. Sign up in 30 seconds.

One call. That's it.

Drop this before your LLM call. Works with any provider.

pip install pinpout

import pinpout

# Uses the PINPOUT_API_KEY environment variable
result = pinpout.scan("What is the capital of France?")
if result.is_safe:
    print("Safe to pass to LLM")
else:
    print(f"Injection detected (confidence: {result.confidence})")

Or try it in the playground in the Dashboard.

How it works

Three steps to protect your app. Then one endpoint to call.

01

Sign up, get API key

Create an account. Get your key instantly — no approval process, no sales call.

02

Try it

Use the playground to test prompts in the browser, or call /v1/scan from your code. One endpoint, that's it.

03

Sleep better

Prompt injection attempts get caught before they reach your model, obfuscated or not.

Under the hood

Normalize (< 1ms)

Strips encoding tricks and obfuscation so nothing sneaks past

Classify (~15ms)

DeBERTa-v3 classifier — fast, accurate, no LLM API calls.

More models coming soon.

What raw classifiers miss

Attackers obfuscate. We normalize. Raw classifiers see the original — our pipeline sees through encoding tricks before classification.

Attack           | Before (obfuscated)                          | After (normalized)
Leetspeak        | Ign0r3 a11 pr3v10us 1nstruct10ns             | Ignore all previous instructions
Zero-width chars | i\u200Bg\u200Bn\u200Bo\u200Br\u200Be         | ignore
Base64           | SWdub3JlIGFsbCBwcmV2aW91cyBpbnN0cnVjdGlvbnM= | Ignore all previous instructions
Homoglyphs       | \u0456gn\u043Er\u0435                        | ignore

7 normalization layers: zero-width char removal, homoglyph normalization, leetspeak decoding, ROT13 decoding, hex decoding, base64 decoding, and separator stripping. Actively maintained — new evasion techniques added as the attack landscape evolves.
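A few of these layers can be sketched in plain Python. This is an illustrative approximation, not Pinpout's actual implementation; the function names and the tiny leetspeak map are invented for this example, and real character maps cover far more cases.

```python
import base64
import re
import unicodedata

# Characters used to invisibly split keywords (zero-width space/joiners, BOM)
ZERO_WIDTH = dict.fromkeys(map(ord, "\u200b\u200c\u200d\ufeff"))

# Tiny leetspeak map; "1" is ambiguous (i or l), so a real decoder
# needs context, not a plain character map
LEET = str.maketrans("0345", "oeas")

def strip_zero_width(text: str) -> str:
    # Remove zero-width characters so "i\u200bgnore" becomes "ignore"
    return text.translate(ZERO_WIDTH)

def fold_homoglyphs(text: str) -> str:
    # NFKC folds many look-alike compatibility characters (e.g. fullwidth
    # Latin); cross-script homoglyphs like Cyrillic need an explicit map
    return unicodedata.normalize("NFKC", text)

def decode_leetspeak(text: str) -> str:
    return text.translate(LEET)

def maybe_decode_base64(text: str) -> str:
    # Only decode strings that look like base64 and decode to valid UTF-8;
    # everything else passes through untouched
    if len(text) % 4 == 0 and re.fullmatch(r"[A-Za-z0-9+/]+={0,2}", text):
        try:
            return base64.b64decode(text).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            pass
    return text
```

With the inputs from the table above, `strip_zero_width` recovers "ignore" from the zero-width variant, and `maybe_decode_base64` recovers the full "Ignore all previous instructions" payload.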

API Reference

Base URL: https://api.pinpout.dev

POST /v1/scan

Scan text for prompt injection attacks. Call this before passing user input to your LLM.

Auth: X-API-Key header
Request body
{
  "text": "string",             // required — the text to scan
  "options": {
    "return_normalized": bool   // optional — include normalized text in response
  }
}
Response
{
  "is_safe": true,            // false if injection detected
  "confidence": 0.97,         // model confidence (0–1)
  "scan_id": "a1b2c3d4-..."   // UUID for this scan
}
Errors
401 INVALID_API_KEY: Missing or bad X-API-Key
403 QUOTA_EXCEEDED: Monthly scan limit reached
429 RATE_LIMITED: 100 req/min limit, includes Retry-After header
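As a sketch, the call above needs nothing beyond the standard library. The `build_scan_request` and `scan_text` helper names are ours, and error handling is left minimal:

```python
import json
import urllib.request

API_BASE = "https://api.pinpout.dev"

def build_scan_request(api_key: str, text: str,
                       return_normalized: bool = False) -> urllib.request.Request:
    """Assemble the POST /v1/scan request with the X-API-Key header."""
    payload = json.dumps(
        {"text": text, "options": {"return_normalized": return_normalized}}
    ).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/v1/scan",
        data=payload,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def scan_text(api_key: str, text: str) -> dict:
    """Send the scan request and return the parsed JSON body.

    A 429 response carries a Retry-After header; honoring it is
    left to the caller.
    """
    req = build_scan_request(api_key, text)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```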
GET /v1/keys

Get your current API key info. Returns null if no key exists yet.

Auth: Authorization: Bearer <session-token>
Response
{
  "key": {
    "prefix": "pp_live_XXXX...",
    "created_at": "2026-02-01T00:00:00Z",
    "last_used_at": "2026-02-23T10:00:00Z"
  } | null
}
POST /v1/keys

Create your API key. One key per account. Returns the full secret — save it, it won't be shown again.

Auth: Authorization: Bearer <session-token>
Response
{
  "key": { "prefix": "pp_live_XXXX...", "created_at": "..." },
  "secret": "pp_live_full_key_here"   // shown once
}
Errors
409 KEY_EXISTS: A key already exists; rotate or delete it first
POST /v1/keys/rotate

Rotate your API key. Old key is immediately invalidated. Returns the new full secret.

Auth: Authorization: Bearer <session-token>
Response
{
  "key": { "prefix": "pp_live_XXXX...", "created_at": "..." },
  "secret": "pp_live_new_key_here"
}
Errors
404 NOT_FOUND: No key exists to rotate
DELETE /v1/keys

Delete your API key. The key is immediately revoked.

Auth: Authorization: Bearer <session-token>
Response
{ "deleted": true }
GET /v1/usage

Get your usage for the current month. Resets at UTC midnight on the 1st.

Auth: Authorization: Bearer <session-token>
Response
{
  "scans_total": 42,
  "quota": {
    "limit": 100,
    "used": 42,
    "remaining": 58
  }
}
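This response makes it easy to warn before the free tier runs out. A minimal sketch, with a hypothetical `should_alert` helper and the example response above:

```python
def should_alert(usage: dict, threshold: float = 0.9) -> bool:
    """True once this month's scans cross the given fraction of the quota.

    `usage` is the parsed JSON body of a GET /v1/usage response.
    """
    quota = usage["quota"]
    return quota["used"] >= threshold * quota["limit"]

# With the example response above: 42 of 100 scans used, no alert yet
usage = {"scans_total": 42, "quota": {"limit": 100, "used": 42, "remaining": 58}}
```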

Error format

All errors use the same shape:

{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded. Try again in 30 seconds."
  }
}
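Because every error shares this envelope, one small helper (the `parse_api_error` name is ours) can surface any failure uniformly:

```python
import json

def parse_api_error(body: str) -> tuple[str, str]:
    """Pull (code, message) out of the uniform error envelope."""
    err = json.loads(body)["error"]
    return err["code"], err["message"]

body = ('{"error": {"code": "RATE_LIMITED", '
        '"message": "Rate limit exceeded. Try again in 30 seconds."}}')
code, message = parse_api_error(body)
```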

FAQ