Can you explain common terminology such as what is an LLM, what is agentic AI, context memory, hallucinations, etc?
AI Models & Capabilities
A Large Language Model (LLM) is an AI system trained on large volumes of text to generate language; in a conversation, each output is produced conditioned on the sequence of preceding prompts and responses [1]. LLMs form the foundation for more advanced applications, such as AI agents, and are central to modern AI tools that process and generate human-like text [2].
Agentic AI refers to AI agents built on LLMs that can autonomously or semi-autonomously use tools, execute functions, or interact with external environments such as file systems or APIs to perform tasks [6][10]. These agents typically run workflows in which the LLM interleaves planning, tool calls, and execution, but they can fail in characteristic ways, such as forgetting earlier instructions or rushing outputs [1][4].

Hallucinations occur when an LLM produces fabricated or unreliable outputs; regulated industries are addressing this through techniques such as output constraints to improve reliability [3].

Context memory refers to how LLMs maintain conversation history or long-term session data. In multi-session agent deployments it can lead to "context bloat," inflating token costs and inefficiency [7]; LLMs may also "forget" prior instructions mid-task, a failure mode compared to attention deficits, which disrupts agentic workflows [4].

The sources provide only limited detail on other terms, such as sycophancy, in which LLMs overly agree with users and can thereby bias the information they provide [12].
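The agentic loop described above (plan, call a tool, feed the result back, answer) and the bounded context window behind "context bloat" can be sketched in a few lines. This is a toy illustration, not a real framework: `call_llm`, the tool registry, and the trimming policy are all hypothetical stand-ins.

```python
# Minimal sketch of an agentic loop with a bounded context window.
# call_llm is a hypothetical stand-in for a real LLM API call: it either
# requests a tool or returns a final answer, mimicking tool-calling behavior.

def call_llm(messages):
    last = messages[-1]["content"]
    if last.startswith("What is") and "result:" not in last:
        return {"tool": "calculator", "args": "2+2"}  # "plan": ask for a tool
    return {"answer": "The result is 4."}             # final response

# Toy tool registry: a real agent would expose file systems, APIs, etc.
TOOLS = {"calculator": lambda expr: str(sum(int(x) for x in expr.split("+")))}

def run_agent(user_prompt, max_context=6):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(5):  # cap iterations so a confused agent cannot loop forever
        # Keep only the most recent turns -- a crude guard against context bloat;
        # note this is also how earlier instructions get "forgotten".
        messages = messages[-max_context:]
        reply = call_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        # Execute the requested tool and feed its result back as a new turn.
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "user",
                         "content": f"{user_prompt} result: {result}"})
    return "No answer reached."

print(run_agent("What is 2+2?"))  # → The result is 4.
```

The trimming step shows the trade-off the sources discuss: keeping the context small cuts token costs, but it is exactly the mechanism by which an agent can lose instructions given early in a session.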
Sources
- The LLMbda Calculus: AI Agents, Conversations, and Information Flow — arXiv
- 20 Most Important AI Concepts Explained in Just 20 Minutes — Medium
- Overcoming LLM Hallucinations in Regulated Industries — Daily AI News
- LLMs Forget Instructions the Same Way ADHD Brains Do — r/artificial
- People Who Speak Like an LLM — Reddit
- r/vibecoding on Reddit: The "agentic AI" hype is real, but we're all missing the obvious problem — Reddit
- xMemory Cuts Token Costs and Context Bloat in AI Agents — VentureBeat
- r/asklinguistics on Reddit: Do you think "Artificial Intelligence" / "AI" as in sentient machines will have a new term in the future? — Reddit
- I think you are right on people's general misunderstanding of LLMs doing magical drug discovery. But I think one of the goals is to have AI act as a generalist scientist doing experiments like a human would, but scaled. Not there yet, but real progress from Google. — @emollick
- How are AI agents used? Evidence from 177,000 MCP tools — arXiv
- Security, privacy, and agentic AI in a regulatory view: From definitions and distinctions to provisions and reflections — arXiv
- A Rational Analysis of the Effects of Sycophantic AI — arXiv
Related questions
- What is retrieval-augmented generation (RAG), and why is it important for enterprise AI deployment?
- How should non-technical executives evaluate and compare AI model performance benchmarks?
- What is multimodal AI, and why does it matter for practical business applications?
- How quickly are AI capabilities improving, and is there credible evidence that the pace of progress is slowing?