For Discord communities

Discord bot anti-cheat checks
staff-only, evidence-based.

Replace gut-feeling moderation with reviewed risk indicators. CheatSentry's Discord ID risk API plugs into your existing bot in minutes — staff sees the signal, the public never sees private evidence.

Slash command: drop-in /check
Ephemeral: result visible to staff only
No public shaming: evidence stays private
Why Discord communities use CheatSentry

Trust signals, not surveillance

Discord moderators face hundreds of joins per day. CheatSentry adds a reviewed risk signal — without scraping profiles or storing real names.

Slash command, staff-only

The /check reply is ephemeral by default. Public channels never see the result, even if a moderator runs it there.

Permission-gated

Lock the command to ModerateMembers or your own staff role so non-staff cannot trigger checks.
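Beyond setDefaultMemberPermissions, you can add a runtime guard inside the command handler. A minimal sketch, assuming hypothetical staff role IDs that you would replace with your server's own:

```javascript
// Hypothetical staff role IDs (replace with your server's actual role IDs).
const STAFF_ROLE_IDS = new Set(['111111111111111111', '222222222222222222']);

// True when the invoking member holds at least one staff role.
// Pass the member's role ID array, e.g. [...interaction.member.roles.cache.keys()]
// in discord.js.
function isStaff(roleIds) {
  return roleIds.some((id) => STAFF_ROLE_IDS.has(id));
}
```

Calling `isStaff([...interaction.member.roles.cache.keys()])` at the top of the handler keeps the command locked to staff even if the default permission is loosened later in server settings.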

Audit-friendly

Every API call goes through your CheatSentry account. Per-staff tracking, daily limits and audit trail come built-in.

No bulk profile scraping

We don't read members on join, we don't snapshot the server. You query intentionally, one ID at a time.
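One intentional lookup per call can be sketched like this. The URL shape matches the docs snippet; `checkOne` is a hypothetical helper name, and the response fields are assumptions:

```javascript
const BASE = 'https://cheatsentry.com/api/v1';

// Build the check URL for exactly one Discord ID; reject anything else.
function checkUrl(id) {
  if (!/^\d{16,21}$/.test(id)) throw new Error('Invalid Discord ID');
  return `${BASE}/check/${id}`;
}

// One intentional lookup (hypothetical helper; needs CHEATSENTRY_KEY set).
async function checkOne(id) {
  const res = await fetch(checkUrl(id), {
    headers: { Authorization: `Bearer ${process.env.CHEATSENTRY_KEY}` },
  });
  if (!res.ok) throw new Error(`Risk check failed: ${res.status}`);
  return res.json();
}
```

Because `checkUrl` validates before any request is made, a mistyped ID never reaches the API, and there is no code path that iterates over a member list.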

Fair appeal process

Members who feel an entry is wrong can request a manual review directly. Your bot is not the appeals court.

Works with discord.js / py-cord

Plain HTTPS GET — no SDK required. The docs ship a discord.js example you can paste in.

discord.js snippet

Drop-in /check command

Staff-only ephemeral reply. The bot never auto-bans. Full example in the API docs.

// /check command — staff-only response
import { SlashCommandBuilder, PermissionFlagsBits } from 'discord.js';

export const data = new SlashCommandBuilder()
  .setName('check')
  .setDescription('Run a CheatSentry risk check')
  .addStringOption(o =>
    o.setName('id').setDescription('Discord ID').setRequired(true)
  )
  .setDefaultMemberPermissions(PermissionFlagsBits.ModerateMembers);

export async function execute(interaction) {
  const id = interaction.options.getString('id', true);
  if (!/^\d{16,21}$/.test(id)) {
    return interaction.reply({ content: 'Invalid Discord ID.', ephemeral: true });
  }
  await interaction.deferReply({ ephemeral: true });

  let res;
  try {
    res = await fetch(`https://cheatsentry.com/api/v1/check/${id}`, {
      headers: { Authorization: `Bearer ${process.env.CHEATSENTRY_KEY}` },
    });
  } catch {
    // Network failure: reply instead of leaving the deferred interaction hanging.
    return interaction.editReply('Risk check failed: network error.');
  }
  if (!res.ok) return interaction.editReply('Risk check failed.');

  const data = await res.json();
  await interaction.editReply({
    embeds: [{
      title: `Risk signal · ${data.riskLevel ?? 'clean'}`,
      description: data.summary ?? 'No reviewed risk entry found.',
      footer: { text: 'CheatSentry — manual review recommended.' },
    }],
  });
}
Workflow

From command to careful decision

  1. Moderator runs /check

     Staff invokes the slash command with a Discord ID — only staff roles can see the result.

  2. Risk signal returned

     Risk level, status and a short summary land in an ephemeral reply.

  3. Staff weighs context

     Moderators apply server rules. The bot does not auto-warn, auto-mute or auto-ban.

  4. Appeal stays open

     Affected members can submit a manual-review request.

Plans

Simple plans. No surprises.

All paid plans include API access, dashboard access and abuse-prevention features. Limits scale with your daily volume.

Pricing is visible to signed-in users. We share plan details only inside the dashboard. Sign in or create a free account to view pricing.

Free

Sign in to view

  • 50 API requests / day
  • 10 web searches / day
  • Basic dashboard access
  • Community support

Basic

Sign in to view

  • 1,000 API requests / day
  • Higher web search limits
  • API key access
  • Email support

Enterprise

Sign in to view

  • Custom request volume
  • Dedicated support
  • Custom integration options
  • Abuse-prevention consulting

CheatSentry should be used as an additional risk signal. We recommend manual review before taking action against a user.

Add risk intelligence to your Discord bot

Free tier, no credit card. discord.js example included in the docs.