- Prompt
- The text instruction or question submitted to ChatGPT that determines the nature and quality of the response it generates.
- Prompt Engineering
- The practice of crafting, structuring, and iterating on prompts to produce more accurate, relevant, and useful outputs from an AI language model.
- Large Language Model (LLM)
- An AI system trained on large volumes of text data to generate, summarize, translate, and reason about language; ChatGPT is built on OpenAI's GPT-4 family of LLMs.
- System Prompt
- A set of instructions provided to ChatGPT before a conversation begins, used to define its persona, constraints, or output format for a session.
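A system prompt can be sketched using the OpenAI Chat Completions message format, where a `system` message precedes the user's turns (the persona text below is illustrative):

```python
# A minimal sketch of a system prompt in the Chat Completions
# message format: the system message is supplied before the
# conversation and shapes every response in the session.
messages = [
    {"role": "system",
     "content": "You are a concise technical editor. Reply in bullet points."},
    {"role": "user",
     "content": "Summarize the benefits of prompt engineering."},
]

print(messages[0]["role"])  # → system
```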
- Temperature
- A setting that controls how predictable or creative ChatGPT's responses are; lower values produce consistent outputs, higher values produce more varied ones.
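The effect of temperature can be illustrated with the softmax scaling used inside language models; this is a toy sketch of the principle, not ChatGPT's actual sampler:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities. Dividing by a low
    temperature sharpens the distribution (predictable picks);
    a high temperature flattens it (more varied picks)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cool = softmax_with_temperature(logits, 0.2)  # top option dominates
warm = softmax_with_temperature(logits, 2.0)  # probability spreads out
```

With `temperature=0.2` the highest-scoring option gets nearly all the probability mass; at `2.0` the options become much closer to equally likely.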
- Hallucination
- When ChatGPT generates a confident-sounding response that contains factually incorrect or fabricated information; a known limitation requiring human review.
- Context Window
- The maximum amount of text (measured in tokens) that ChatGPT can process in a single conversation before earlier content is dropped from its memory.
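The "earlier content is dropped" behavior can be sketched as a simple truncation loop; the word-based token estimate here is a stand-in for a real tokenizer:

```python
def trim_to_window(messages, max_tokens, count_tokens):
    """Drop the oldest messages until the conversation fits
    within the context window's token budget."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # earliest content is forgotten first
    return kept

# Crude estimate: one token per word (real tokenizers differ).
approx = lambda m: len(m.split())

history = ["first turn here", "second turn", "latest question"]
print(trim_to_window(history, 5, approx))  # → ['second turn', 'latest question']
```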
- Token
- The unit ChatGPT uses to measure text; one token is roughly 0.75 words. Token limits affect both what you can send in a prompt and how long the response can be.
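The 0.75-words-per-token rule of thumb gives a quick budget estimate; exact counts require the model's actual tokenizer (e.g. OpenAI's `tiktoken` library):

```python
def estimate_tokens(text):
    """Rough token estimate from the ~0.75 words-per-token
    rule of thumb; a real tokenizer gives exact counts."""
    words = len(text.split())
    return round(words / 0.75)

print(estimate_tokens("Prompt engineering improves output quality"))  # 5 words → 7
```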
- RAG (Retrieval-Augmented Generation)
- A technique that supplements an LLM's response with content retrieved from a specific external document or database, improving accuracy on proprietary topics.
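The retrieval step can be sketched with simple keyword overlap; production RAG systems use embedding-based similarity search, but the shape of the pipeline (retrieve, then prepend as context) is the same. The documents and query below are illustrative:

```python
import re

def words(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    q = words(query)
    return max(documents, key=lambda d: len(q & words(d)))

docs = [
    "Our refund policy allows returns within 30 days.",
    "Shipping takes 5-7 business days within the US.",
]
query = "What is the refund policy?"
context = retrieve(query, docs)

# The retrieved passage is injected into the prompt so the model
# answers from it rather than from its pre-trained knowledge alone.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```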
- Zero-Shot Prompt
- A prompt that asks ChatGPT to perform a task with no examples provided, relying entirely on its pre-trained knowledge.
- Few-Shot Prompt
- A prompt that includes two to five examples of the desired input-output pattern, steering ChatGPT toward a specific format or style.
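A few-shot prompt can be sketched as a single text prompt: the examples establish the input-output pattern before the real task (the slug-conversion task below is illustrative):

```python
# Two worked examples steer the model toward the desired format;
# the final "Output:" is left blank for the model to complete.
few_shot_prompt = """Convert each product name to a URL slug.

Input: Wireless Mouse Pro
Output: wireless-mouse-pro

Input: USB-C Hub (7-Port)
Output: usb-c-hub-7-port

Input: Ergonomic Keyboard
Output:"""
```

Removing the two worked examples would turn this into a zero-shot prompt, leaving the format entirely up to the model.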