- AI Persona
- A defined character identity — including name, tone, behavioral rules, and scope — assigned to an AI system to shape how it presents and responds to users.
- System Prompt
- The hidden instruction set given to a large language model before any user interaction, which establishes the persona's role, constraints, and behavioral guidelines.
- Avatar
- A visual or textual representation of an AI persona used in interfaces, often paired with a name and personality to create a consistent user-facing identity.
- Persona Scope
- The defined boundaries of subject matter, tone, tasks, and user interactions the AI persona is authorized to perform under the agreement.
- Permitted Use Case
- Specific applications or contexts in which the persona may be deployed, such as customer support, content generation, or educational tutoring.
- Prohibited Conduct
- Explicit categories of behavior the persona must never perform, such as providing medical advice, impersonating a real individual, or generating regulated financial guidance.
- Prompt Injection
- An attack where a user crafts input designed to override or subvert the system prompt and cause the AI to behave outside its defined persona and rules.
- Hallucination
- When an AI model generates confident-sounding but factually incorrect or fabricated output — a key liability risk for deployed AI personas.
- Derivative Works
- Content, adaptations, or new creative outputs generated by or based on the AI persona, raising questions about who owns them under copyright law.
- Model Terms of Service
- The usage policies published by the underlying AI platform provider (e.g., OpenAI) that govern what personas and outputs are permissible, and which take precedence over any downstream agreement.
- Fine-Tuning
- The process of further training a base AI model on domain-specific data to adjust its outputs, personality, or knowledge — relevant when a persona is built on a customized model.