Acceptable Use Policy

JCIL.AI is a values-aligned AI platform for churches, universities, ministries, and faith-aligned organizations. This Acceptable Use Policy is intentionally short, strict, and enforced.

Zero-tolerance categories

Accounts engaging in any of the following are suspended immediately and permanently, and we report to law enforcement where required:

  • Child sexual abuse material (CSAM) — any production, distribution, or solicitation of material depicting minors in sexual contexts
  • Terrorism, planning or glorifying violent attacks, or content supporting designated terrorist organizations
  • Credible threats of violence against specific persons
  • Content intended to cause immediate physical harm
  • Malware, ransomware, or tooling to attack computer systems
  • Impersonation of real people to defraud or harass them

Prohibited uses

Beyond the zero-tolerance categories, you may not use JCIL.AI for:

  • Targeted harassment — using the platform to stalk, dox, or coordinate abuse of an individual or group
  • Election interference — generating or distributing content intended to mislead voters about when, where, or how to vote
  • Deceptive content — generating content designed to pass as human-authored in contexts where disclosure is legally required (AI disclosure laws in Utah, Colorado, and other states)
  • Illegal activity — any use that violates US federal law, applicable state law, or the laws of your jurisdiction
  • Circumventing our safety systems — attempting to jailbreak the moderation pipeline, bypass rate limits, abuse the safety-webhook system, or tamper with subscription tier enforcement
  • Mass scraping — automated extraction of responses outside the published API rate limits
  • Reselling without authorization — reselling JCIL.AI capacity to your customers without a written agreement

Content ownership & responsibility

  • You own the inputs you send and the outputs generated from your conversations. We do not train models on your content.
  • You are responsible for ensuring you have the rights to the content you submit — copyrighted text, personal photos of third parties, confidential business data, etc.
  • AI outputs are generated; they are not a substitute for professional judgment. Do not rely on JCIL.AI as a replacement for licensed legal, medical, financial, or pastoral counsel.

Values alignment

JCIL.AI is a Scripture-aligned platform. The base model is wrapped with an identity prompt affirming biblical values — pro-life, pro-family, truth-telling. This is a feature of the product, not a bug. Customers who need an AI that takes the opposite posture are welcome to use other products.

Our safety behavior disproportionately protects children, families, and vulnerable people. Do not use JCIL.AI in ways that undermine those protections — for example, instructing the model to help circumvent age verification, conceal abuse, or produce content sexualizing minors. Ambiguous wording does not shield you: our moderation pipeline evaluates intent, and violations are enforced against your account.

Organization-deployed widgets

When you deploy JCIL Cloud's embeddable chat widget on your site, you become responsible for:

  • Ensuring the widget is offered only to visitors aged 13+ unless your deployment has verifiable parental consent (COPPA posture).
  • Handling safety events — if our pipeline detects a crisis pattern in a visitor message, the event is signed and POSTed to your configured webhook. You are responsible for the human follow-up.
  • Keeping your custom identity prompt aligned with the JCIL base values — you can shape voice, tone, and scope, but you cannot override the base guardrails; attempts to do so fail silently.
  • The content and actions of your end users. We enforce our policies against you; you enforce your policies against them.
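When handling the signed safety events described above, verify the signature before acting on the payload. A minimal sketch follows, assuming an HMAC-SHA256 hex digest of the raw request body with a shared webhook secret — the header name, secret format, and signing scheme here are illustrative assumptions, not the documented JCIL Cloud API; consult your webhook configuration for the actual scheme.

```python
import hashlib
import hmac

def verify_safety_event(secret: str, body: bytes, signature: str) -> bool:
    """Verify a signed safety-event POST before routing it to human follow-up.

    Assumes (hypothetically) the signature is an HMAC-SHA256 hex digest of
    the raw request body, keyed with the webhook's shared secret.
    """
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, so signature checks don't leak timing info.
    return hmac.compare_digest(expected, signature)

# Example: compute a signature the way the sender would, then verify it.
secret = "whsec_example"  # hypothetical shared secret from your widget config
body = b'{"event":"crisis_pattern","visitor_id":"v_123"}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
assert verify_safety_event(secret, body, sig)
```

Rejecting any request whose signature fails this check ensures that only events originating from the platform trigger your human follow-up process.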

Enforcement

  • Progressive penalties for non-zero-tolerance violations: 10-minute suspension → 1 hour → 24 hours → 7 days → permanent account termination.
  • Immediate termination for zero-tolerance categories.
  • Appeals: contact compliance through the contact form with topic "account appeal." We review within 72 hours.
  • Law enforcement cooperation: we cooperate with valid legal process and report CSAM to NCMEC as required by law.

Last updated: April 17, 2026.