For RedM communities

Discord ID risk checks for RedM roleplay servers.

RedM roleplay lives or dies on community trust. CheatSentry gives your staff a privacy-first risk-intelligence layer for Discord IDs — admin-reviewed entries, fair appeals, no public shaming.

Roleplay-aware: Context over verdict
Lua-ready: Same script style as FiveM
Fair appeals: Manual review every time
Why RedM owners use CheatSentry

Built for serious roleplay communities

Your players invested hundreds of hours into characters, factions and stories. A single rushed ban can break that trust. CheatSentry is designed to keep humans in the loop.

Signals, not verdicts

Risk levels are graded — clean, low, medium, high, confirmed. Your staff weighs context and roleplay history before acting.
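As an illustration of how graded signals keep staff in the loop, the five levels above might map to suggested next steps like this. A minimal sketch: the level names come from the API, but the mapping itself is an example policy your community would define, not something CheatSentry prescribes.

```python
# Hypothetical triage helper: maps CheatSentry's graded risk levels
# (clean, low, medium, high, confirmed) to a suggested next step.
# The suggested actions are an example policy, not part of the API.
SUGGESTED_ACTION = {
    "clean": "approve",
    "low": "approve",
    "medium": "ask for context during review",
    "high": "hold for senior staff review",
    "confirmed": "hold for senior staff review",
}

def triage(risk_level: str) -> str:
    """Return a suggested staff action; unknown levels fall back to manual review."""
    return SUGGESTED_ACTION.get(risk_level, "manual review")
```

Note that nothing here bans anyone automatically: even "confirmed" only routes the case to a human.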

No public shaming

We never publish private evidence and never expose IPs or real names. Drama-free moderation by design.

Whitelist application support

Use the API as part of your application review process — alongside your interview, character backstory and references.
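For a review workflow outside the game server, the same check endpoint can be queried from any HTTP client. A minimal sketch, reusing the URL and Bearer-token auth from the Lua snippet below; the `riskLevel` field also appears there, while the `summary` field name is an assumption based on the "risk level + summary" response described in the workflow section. The request is only built here, not sent.

```python
import json
import urllib.request

# Same endpoint and auth scheme as the Lua snippet in this page
API_URL = "https://cheatsentry.com/api/v1/check/"

def build_check_request(discord_id: str, api_key: str) -> urllib.request.Request:
    """Prepare the GET request used during whitelist review (not sent here)."""
    return urllib.request.Request(
        API_URL + discord_id,
        headers={"Authorization": "Bearer " + api_key},
        method="GET",
    )

def summarize(body: str) -> str:
    """Condense a response body into a one-line note for the application thread.

    `riskLevel` matches the Lua snippet; `summary` is an assumed field name.
    """
    data = json.loads(body)
    return f"risk={data.get('riskLevel', 'unknown')}: {data.get('summary', '')}"
```

During review you would send the request (e.g. with `urllib.request.urlopen`) and paste the one-line note next to the interview and backstory notes; treat a failed request as "no signal", not as a rejection.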

Fair appeals for roleplayers

If a flagged player can give context — character account, mistaken identity, outdated entry — they can request a manual review.

Same Lua patterns as FiveM

If you already run a FiveM server, your existing scripts port over almost unchanged. One API key, one set of patterns.

Privacy-first by default

Discord IDs only. We don't request, store or process character names or in-game positions.

Lua snippet

Drop-in RedM example

Same shape as FiveM, just tuned for typical RedM server flows. Full version in the API docs.

-- server.lua  (RedM)
local API_KEY = GetConvar("cheatsentry_key", "")
local API_URL = "https://cheatsentry.com/api/v1/check/"

AddEventHandler("playerConnecting", function(name, _, deferrals)
    local src = source

    -- Find the player's linked Discord identifier, if any
    local discordId
    for _, id in ipairs(GetPlayerIdentifiers(src)) do
        if string.sub(id, 1, 8) == "discord:" then
            discordId = string.sub(id, 9)
            break
        end
    end
    if not discordId then return end

    deferrals.defer()
    Wait(0) -- deferral methods must not run in the same tick as defer()
    deferrals.update("Risk check in progress...")

    PerformHttpRequest(API_URL .. discordId, function(status, body)
        -- Fail open: never block a connection on an API error
        deferrals.done()
        if status ~= 200 or not body then return end

        local ok, data = pcall(json.decode, body)
        if not ok then return end

        -- Log high-risk signals for staff review; no automatic action
        if data.riskLevel == "high" or data.riskLevel == "confirmed" then
            print(("[CheatSentry] flag %s level=%s"):format(discordId, data.riskLevel))
        end
    end, "GET", "", { ["Authorization"] = "Bearer " .. API_KEY })
end)
Workflow

From check to fair decision

  1. Application or connect

     You query CheatSentry as part of your whitelist review or at connect time.

  2. Risk signal returned

     Risk level + summary in under ~100 ms. Internal evidence is never exposed via the API.

  3. Staff weighs context

     Your moderators consider character history, references and rules — not just the API output.

  4. Appeal channel stays open

     Affected players can submit a manual-review request through our appeal form.

Plans

Simple plans. No surprises.

All paid plans include API access, the dashboard and abuse-prevention features. Limits scale with your daily volume.

Pricing is visible to signed-in users. We share plan details only inside the dashboard. Sign in or create a free account to view pricing.

Free

Sign in to view

  • 50 API requests / day
  • 10 web searches / day
  • Basic dashboard access
  • Community support

Basic

Sign in to view

  • 1,000 API requests / day
  • Higher web search limits
  • API key access
  • Email support

Enterprise

Sign in to view

  • Custom request volume
  • Dedicated support
  • Custom integration options
  • Abuse-prevention consulting

CheatSentry should be used as an additional risk signal. We recommend manual review before taking action against a user.

Protect your RedM roleplay community

Free tier, no credit card. Same Lua patterns you already know from FiveM.