This identity allows the paywall to track usage, apply charges, and enforce authorization/balance rules.
Why it matters
Without knowing which user made a request, Paywalls.ai cannot:
- Verify whether they have authorized your paywall to charge them
- Check if they have enough balance
- Associate charges and usage records with them
Providing a user ID is mandatory for all billable endpoints (e.g., `/chat/completions`, `/user/charge`, etc.).
What is a user ID
A user ID can be any arbitrary string identifier, for example an internal ID from your database. Paywalls.ai does not care about the content of the ID, as long as it is unique per user and consistent across sessions.
We don't recommend using IDs that can identify a real person, such as emails or phone numbers, though it is not forbidden. Prefer a randomized ID string to reduce security and compliance risks.
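One way to get a randomized-yet-consistent ID is to derive it from your internal user ID with an HMAC, so the same user always maps to the same opaque string without exposing the internal value. A minimal sketch (the helper name, secret, and `user_` prefix are assumptions, not part of the API):

```typescript
import { createHmac } from "node:crypto";

// Hypothetical helper (not part of any Paywalls.ai SDK): derive a stable,
// opaque user ID from an internal identifier. The secret keeps the
// internal value from being reversible; the "user_" prefix is arbitrary.
function paywallUserId(internalId: string, secret: string): string {
  // HMAC is deterministic, so the same internal ID always maps to the
  // same paywall user ID across sessions.
  const digest = createHmac("sha256", secret).update(internalId).digest("hex");
  return "user_" + digest.slice(0, 16);
}
```

The output contains only hex characters, which also satisfies the "no spaces or special characters" guidance below.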
Identification Methods
Paywalls.ai supports three ways to specify the user for maximum flexibility across clients, SDKs, and middleware.
Avoid using spaces and special characters in the user ID.
Option 1: `user` field in the request body
✅ Recommended for OpenAI-compatible clients
Most OpenAI SDKs (Node, Python, etc.) already support a `user` field in the request body for `/chat/completions` and similar endpoints.
Example:
```http
POST /v1/chat/completions
Authorization: Bearer sk-paywalls-...
Content-Type: application/json

{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "user", "content": "Write me a haiku about paywalls."}
  ],
  "user": "user_12345" // 👈 User ID here
}
```
This is the cleanest and most portable method: the `user` travels with the request payload and works in any OpenAI-compatible client.
Option 2: `X-Paywall-User` header
✅ Best for middleware injection
When modifying the request body isn't convenient (e.g., for streaming requests or proxy middleware), send the user ID in a custom header:
```http
POST /v1/chat/completions
Authorization: Bearer sk-paywalls-...
X-Paywall-User: user_12345
Content-Type: application/json
```
This is ideal if you want to inject user IDs at the network or middleware level without touching the original request body.
Option 3: URL Path Parameter
⚠ Fallback when nothing else fits your system
If neither body nor headers can be modified (e.g., legacy clients, hard-coded SDKs),
you can include the user ID directly in the URL path:
```http
POST https://api.paywalls.ai/v1/user_12345/chat/completions
Authorization: Bearer sk-paywalls-...
Content-Type: application/json
```
Note: This format is less standard and may require additional routing logic. Use it only when the first two options are not possible.
Behavior on Requests
Once a request is received:
- **User ID extraction** (priority order):
  - `user` field in the body → highest priority
  - `X-Paywall-User` header → fallback
  - URL path parameter → last resort
- **Authorization check:**
  - If the user is not authorized → response includes `{ connected: false, connect_link: "..." }`
  - No charge is applied.
- **Balance check:**
  - If authorized but no balance → response includes `{ connected: true, balance: 0, topup_link: "..." }`
  - No charge is applied.
- **Billing & forwarding:**
  - If authorized and funded:
    a. Cost is calculated
    b. Balance is deducted
    c. Request is forwarded to the provider
    d. Model response is streamed back to the client
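On the client side, the three outcomes above can be distinguished by inspecting those fields. A minimal sketch, assuming the fields appear as-is in the parsed response:

```typescript
// The field names below come from this page; the surrounding response
// shape is an assumption for illustration.
interface PaywallStatus {
  connected?: boolean;
  connect_link?: string;
  balance?: number;
  topup_link?: string;
}

// Map the paywall fields to the next step to surface to the end user.
function nextAction(status: PaywallStatus): string {
  if (status.connected === false) {
    return `Authorize the paywall first: ${status.connect_link}`;
  }
  if (status.connected && status.balance === 0) {
    return `Top up your balance: ${status.topup_link}`;
  }
  return "proceed"; // authorized and funded: the request was billed and forwarded
}
```

Surfacing `connect_link` and `topup_link` directly to the end user lets them resolve authorization or funding issues without developer intervention.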
Best Practices
- **Always use Option 1 (body `user` field) if possible**: it's OpenAI-spec compatible and future-proof.
- Ensure user IDs are unique per end-user but consistent across requests for accurate billing.
- Avoid exposing sensitive personal information as `user` values; use internal IDs instead.
- If using Option 2 or 3, confirm your client or middleware passes the user ID on every request.
More Examples
OpenAI SDK (Node.js)
Basic (body `user`, non-streaming)
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.PAYWALLS_API_KEY!, // your Paywalls.ai key
  baseURL: "https://api.paywalls.ai/v1",
});

const resp = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  user: "user_123", // 👈 user identity here
  messages: [{ role: "user", content: "Write a haiku about paywalls." }],
});

console.log(resp.choices[0]?.message?.content);
```
Streaming (body `user`)
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.PAYWALLS_API_KEY!,
  baseURL: "https://api.paywalls.ai/v1",
});

const stream = await client.chat.completions.create({
  model: "gpt-4o-mini",
  user: "user_123",
  stream: true,
  messages: [
    { role: "user", content: "Explain usage-based pricing in 2 lines." },
  ],
});

for await (const chunk of stream) {
  const delta = chunk.choices?.[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
```
With header `X-Paywall-User` (middleware-friendly)
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.PAYWALLS_API_KEY!,
  baseURL: "https://api.paywalls.ai/v1",
  defaultHeaders: { "X-Paywall-User": "user_123" }, // 👈 header-based identity
});

const resp = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hi!" }],
});
```
OpenAI SDK (Python)
```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PAYWALLS_API_KEY"],
    base_url="https://api.paywalls.ai/v1",
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    user="user_123",  # 👈 body user field
    messages=[{"role": "user", "content": "Summarize usage-based billing."}],
)
print(resp.choices[0].message.content)
```
Header variant (inject `X-Paywall-User`) if your SDK layer allows custom headers:
```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PAYWALLS_API_KEY"],
    base_url="https://api.paywalls.ai/v1",
    default_headers={"X-Paywall-User": "user_123"},
)
```
Vercel AI SDK (Edge / Next.js)
Server Action / Route Handler (body `user`)
```typescript
import { OpenAI } from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";

export const runtime = "edge";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const openai = new OpenAI({
    apiKey: process.env.PAYWALLS_API_KEY!,
    baseURL: "https://api.paywalls.ai/v1",
  });

  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    user: "user_123", // 👈 include user here
    stream: true,
    messages,
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```
With header injection (no body changes)
```typescript
import { OpenAI } from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";

export const runtime = "edge";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const openai = new OpenAI({
    apiKey: process.env.PAYWALLS_API_KEY!,
    baseURL: "https://api.paywalls.ai/v1",
    defaultHeaders: { "X-Paywall-User": "user_123" }, // 👈 header-based
  });

  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    stream: true,
    messages,
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```
Fetch / cURL
fetch (header)
```typescript
const r = await fetch("https://api.paywalls.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.PAYWALLS_API_KEY}`,
    "Content-Type": "application/json",
    "X-Paywall-User": "user_123", // 👈 header identity
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
const json = await r.json();
```
cURL (body `user`)
```bash
curl https://api.paywalls.ai/v1/chat/completions \
  -H "Authorization: Bearer $PAYWALLS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model":"gpt-3.5-turbo",
    "user":"user_123",
    "messages":[{"role":"user","content":"Hello!"}]
  }'
```
cURL (header)
```bash
curl https://api.paywalls.ai/v1/chat/completions \
  -H "Authorization: Bearer $PAYWALLS_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Paywall-User: user_123" \
  -d '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Hello!"}]}'
```