1. Overview
The CheatSentry API allows server owners and community tools to check Discord IDs
against admin-reviewed risk entries. The API returns risk signals and should not
be used as the sole basis for automatic punishment. Internal evidence is never
exposed via the API — only neutral, structured metadata.
2. Authentication
Every request must be authenticated with an API key from your dashboard. Pass the key in the Authorization header:
Authorization: Bearer YOUR_API_KEY
For backwards compatibility, x-api-key: YOUR_API_KEY is also accepted.
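Both header forms can be produced with a small helper. This is a sketch in JavaScript; the function name `buildAuthHeaders` is our own, not part of any official SDK:

```javascript
// Build request headers for CheatSentry.
// Pass legacy = true to use the older x-api-key header instead of Bearer auth.
function buildAuthHeaders(apiKey, legacy = false) {
  if (legacy) {
    return { 'x-api-key': apiKey, Accept: 'application/json' };
  }
  return { Authorization: `Bearer ${apiKey}`, Accept: 'application/json' };
}
```

Prefer the Bearer form in new integrations; `x-api-key` exists only for older clients.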
3. Endpoint
GET /api/v1/check/:discordId
Checks a Discord ID against the risk database. Returns risk level, status, source count and review metadata. :discordId must be a 16–21 digit numeric Discord snowflake.
A health-check endpoint is also available at GET /api/status.
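Validating the snowflake client-side avoids spending a request on a guaranteed 400. A minimal sketch (the helper name is our own):

```javascript
// Check that an ID matches the 16-21 digit numeric format the
// /api/v1/check/:discordId endpoint expects.
function isValidSnowflake(id) {
  return /^\d{16,21}$/.test(String(id));
}
```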
4. Example request
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://cheatsentry.com/api/v1/check/123456789012345678
5. Example response
{
"discordId": "123456789012345678",
"riskLevel": "medium",
"status": "reviewed",
"summary": "Risk signal found. Manual review recommended.",
"lastReviewedAt": "2026-04-30T12:00:00.000Z"
}
The legacy field set (found, sourcesCount, publicNote, checkedAt) is still returned for compatibility.
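If your code may receive either field set, normalizing once at the boundary keeps the rest of the integration simple. This sketch assumes `found` maps to a reviewed entry and `publicNote`/`checkedAt` mirror `summary`/`lastReviewedAt`; verify those mappings against your own responses before relying on them:

```javascript
// Normalize a CheatSentry response so callers can use the current field
// names even when only the legacy set is present.
// ASSUMPTION: legacy `found: true` is treated as at least a "low" signal.
function normalizeResponse(raw) {
  return {
    discordId: raw.discordId,
    riskLevel: raw.riskLevel ?? (raw.found ? 'low' : 'clean'),
    status: raw.status ?? (raw.found ? 'reviewed' : 'none'),
    summary: raw.summary ?? raw.publicNote ?? 'No reviewed risk entry found.',
    lastReviewedAt: raw.lastReviewedAt ?? raw.checkedAt ?? null,
  };
}
```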
6. Risk levels
- clean: No reviewed risk entry found.
- low: Minor or weak signal. Manual context review recommended.
- medium: Relevant risk signal. Review evidence and server policy before action.
- high: Strong reviewed signal. Manual verification recommended before restrictive action.
- confirmed: Confirmed high-confidence entry reviewed by administrators. We still recommend following your own policy and appeal process.
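Because the levels form an ordered scale, policies are easiest to express as threshold comparisons. A small sketch (names `RISK_ORDER` and `atLeast` are ours):

```javascript
// The documented risk levels, from weakest to strongest signal.
const RISK_ORDER = ['clean', 'low', 'medium', 'high', 'confirmed'];

// True when `level` is at or above `threshold` on the scale,
// e.g. "notify staff at high or above".
function atLeast(level, threshold) {
  return RISK_ORDER.indexOf(level) >= RISK_ORDER.indexOf(threshold);
}
```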
7. Responsible use
Important:
CheatSentry is designed to support moderation decisions, not replace them.
We strongly recommend against fully automated bans based only on a single API
response. Server owners should consider context, their own rules and any
available appeal information before taking action.
Suggested patterns:
- Use low / medium as a flag for manual staff review.
- Use high / confirmed to notify staff or restrict sensitive actions, not to issue silent permanent bans.
- Always log the API response alongside your moderation decision.
- Provide affected users with a clear path to appeal.
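The patterns above can be sketched as a single routing function. The action labels here are illustrative workflow steps, not API values; note that nothing maps to an automatic ban:

```javascript
// Map a risk level to a suggested moderation workflow step, following
// the recommended patterns. Deliberately never returns "ban":
// final decisions stay with staff.
function suggestedAction(riskLevel) {
  switch (riskLevel) {
    case 'low':
    case 'medium':
      return 'queue-for-manual-review';
    case 'high':
    case 'confirmed':
      return 'notify-staff-and-restrict-sensitive-actions';
    default:
      return 'no-action';
  }
}
```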
8. FiveM example (Lua)
This example queries CheatSentry on player connect and notifies staff. It does not ban automatically — it only flags and logs.
-- server.lua (FiveM)
local API_KEY = GetConvar("cheatsentry_key", "")
local API_URL = "https://cheatsentry.com/api/v1/check/"

AddEventHandler("playerConnecting", function(name, setKickReason, deferrals)
    local src = source
    local discordId

    for _, id in ipairs(GetPlayerIdentifiers(src)) do
        if string.sub(id, 1, 8) == "discord:" then
            discordId = string.sub(id, 9)
            break
        end
    end

    if not discordId then return end

    deferrals.defer()
    Wait(0) -- deferral methods must not run in the same tick as defer()
    deferrals.update("Running CheatSentry risk check...")

    PerformHttpRequest(API_URL .. discordId, function(status, body)
        deferrals.done()
        if status ~= 200 or not body then return end

        local ok, data = pcall(json.decode, body)
        if not ok or not data then return end

        local level = data.riskLevel or "clean"

        -- Log to admin console for review
        print(("[CheatSentry] %s -> %s (%s)"):format(discordId, level, data.status or "-"))

        -- Notify online staff for high / confirmed signals
        if level == "high" or level == "confirmed" then
            TriggerEvent("cheatsentry:flagPlayer", src, data)
        end
    end, "GET", "", {
        ["Authorization"] = "Bearer " .. API_KEY,
        ["Accept"] = "application/json",
    })
end)
Decision-making (warn, restrict, kick, ban) should always be implemented in your own admin tooling.
9. RedM example (Lua)
RedM uses the same Lua runtime as FiveM. The integration pattern is intentionally identical: check, log, flag — never auto-ban.
-- server.lua (RedM)
local API_KEY = GetConvar("cheatsentry_key", "")
local API_URL = "https://cheatsentry.com/api/v1/check/"

AddEventHandler("playerConnecting", function(name, setKickReason, deferrals)
    local src = source
    local discordId

    for _, id in ipairs(GetPlayerIdentifiers(src)) do
        if string.sub(id, 1, 8) == "discord:" then
            discordId = string.sub(id, 9)
            break
        end
    end

    if not discordId then return end

    deferrals.defer()
    Wait(0) -- deferral methods must not run in the same tick as defer()
    deferrals.update("Risk check in progress...")

    PerformHttpRequest(API_URL .. discordId, function(status, body)
        deferrals.done()
        if status ~= 200 or not body then return end

        local ok, data = pcall(json.decode, body)
        if not ok or not data then return end

        -- Only flag for manual review; let staff decide.
        if data.riskLevel == "high" or data.riskLevel == "confirmed" then
            -- Note: TriggerClientEvent(..., -1, ...) would broadcast to every
            -- player, not just staff. Route the flag into your own staff
            -- tooling instead.
            TriggerEvent("cheatsentry:flagPlayer", src, data)
            print(("[CheatSentry] flag %s level=%s"):format(discordId, data.riskLevel))
        end
    end, "GET", "", {
        ["Authorization"] = "Bearer " .. API_KEY,
        ["Accept"] = "application/json",
    })
end)
10. Node.js example
Using the global fetch API available in Node.js 18+.
// check.js
const API_KEY = process.env.CHEATSENTRY_KEY;

async function checkDiscordId(discordId) {
  const res = await fetch(
    `https://cheatsentry.com/api/v1/check/${encodeURIComponent(discordId)}`,
    {
      headers: {
        Authorization: `Bearer ${API_KEY}`,
        Accept: 'application/json',
      },
    }
  );
  if (!res.ok) {
    throw new Error(`CheatSentry API error: ${res.status}`);
  }
  return res.json();
}

// Usage — log + decide manually.
// Note: top-level await requires an ES module (e.g. check.mjs or "type": "module").
const result = await checkDiscordId('123456789012345678');
console.log(result.riskLevel, result.status, result.summary);
11. Discord bot example (discord.js)
A simple slash command that checks a Discord ID and returns the result only to staff (ephemeral reply). It never bans automatically.
// /check command — staff-only response
import { SlashCommandBuilder, PermissionFlagsBits } from 'discord.js';

export const data = new SlashCommandBuilder()
  .setName('check')
  .setDescription('Run a CheatSentry risk check')
  .addStringOption(o =>
    o.setName('id').setDescription('Discord ID').setRequired(true)
  )
  .setDefaultMemberPermissions(PermissionFlagsBits.ModerateMembers);

export async function execute(interaction) {
  const id = interaction.options.getString('id', true);
  if (!/^\d{16,21}$/.test(id)) {
    return interaction.reply({ content: 'Invalid Discord ID.', ephemeral: true });
  }

  await interaction.deferReply({ ephemeral: true });

  const res = await fetch(`https://cheatsentry.com/api/v1/check/${id}`, {
    headers: { Authorization: `Bearer ${process.env.CHEATSENTRY_KEY}` },
  });
  if (!res.ok) {
    return interaction.editReply('Risk check failed. Please try again later.');
  }

  const data = await res.json();
  await interaction.editReply({
    embeds: [{
      title: `Risk signal · ${data.riskLevel ?? 'clean'}`,
      description: data.summary ?? 'No reviewed risk entry found.',
      fields: [
        { name: 'Status', value: String(data.status ?? '-'), inline: true },
        { name: 'Last reviewed', value: data.lastReviewedAt ?? '—', inline: true },
      ],
      footer: { text: 'CheatSentry — manual review recommended before action.' },
    }],
  });
}
Result is shown only to the moderator who ran the command. The bot never publishes the response in a public channel.
12. Rate limits
| Plan       | Requests / day | Burst (60s) |
|------------|----------------|-------------|
| Free       | 50             | 60          |
| Basic      | 1,000          | 60          |
| Pro        | 10,000         | 60          |
| Enterprise | Custom         | Custom      |
When exceeded, the API responds with HTTP 429.
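Clients should treat 429 as a signal to back off rather than retry immediately. A sketch of exponential backoff with jitter; the delay schedule is an illustrative choice, not part of the API contract:

```javascript
// Compute a retry delay for HTTP 429 responses: exponential growth,
// capped at maxMs, with jitter so retries do not synchronize.
function backoffDelayMs(attempt, baseMs = 1000, maxMs = 30000) {
  const exp = Math.min(baseMs * 2 ** attempt, maxMs);
  return exp / 2 + Math.random() * (exp / 2); // result in [exp/2, exp)
}
```

For a daily-limit 429, waiting until the next day (or upgrading the plan) is the only remedy; backoff only helps with burst limits.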
13. Error codes
400 invalid_discord_id — invalid format.
401 missing_api_key / invalid_api_key — header missing or wrong.
429 daily_limit_reached / rate_limit — limit reached.
500 internal_error — server error. Please contact support.
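The documented codes can be mapped to operator-facing messages in one place. The wording below is illustrative; the code values come from the list above:

```javascript
// Translate documented CheatSentry error codes into readable messages.
function describeApiError(status, code) {
  const known = {
    invalid_discord_id: 'The Discord ID format is invalid.',
    missing_api_key: 'No API key was provided.',
    invalid_api_key: 'The API key is not valid.',
    daily_limit_reached: 'Daily request limit reached.',
    rate_limit: 'Too many requests; slow down and retry later.',
    internal_error: 'CheatSentry server error; contact support.',
  };
  return known[code] ?? `Unexpected API error (HTTP ${status}).`;
}
```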