
High-Stakes Reputation Management Protocol: 2026 Concise Guide for the AI Era

Reputation is now a hybrid of human and AI influence, shaped by journalists, generative engines such as ChatGPT, Google AI Overviews, and Perplexity, and systems that rapidly summarise, rank, and amplify information. Proactive defence requires readiness for both generative engine optimisation (GEO) and traditional media, so that facts are anchored for human and algorithmic audiences alike.
 

1. Preemptive Foundations

Establish a verifiable digital presence before receiving any inquiries.

  • Implement structured data (Schema.org for Organisation, Person, FAQ, NewsArticle) on key pages to make facts AI-parseable and citable.

  • Maintain high-quality media kits, fact sheets, and executive profiles, ensuring consistent naming and verified content credentials (C2PA) to counter deepfakes.

  • Optimise for GEO by providing clear summaries, concise Q&A sections under 300 characters, authoritative language, and up-to-date timestamps. Regularly audit AI visibility.

  • Key takeaway: Proactively verify and address misrepresentations during interviews to ensure factual accuracy.
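As a sketch of the structured-data idea above, here is how Organisation and FAQ facts might be expressed as Schema.org JSON-LD. All names, URLs, and answers below are placeholders, not real data; swap in your own verified details before publishing.

```python
import json

# Minimal JSON-LD sketch for an Organisation page (placeholder values).
organisation = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Ltd",             # hypothetical organisation name
    "url": "https://www.example.com",  # hypothetical URL
    "sameAs": [
        "https://x.com/example",       # verified social profiles
    ],
}

# A matching FAQ block: one question, one short authoritative answer.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Who founded Example Ltd?",
        "acceptedAnswer": {"@type": "Answer", "text": "Jane Doe, in 2020."},
    }],
}

# Each block is embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(organisation, indent=2))
```

Keeping answers short and factual (as recommended above for GEO) makes each entry easy for generative engines to quote verbatim.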
     

2. Intelligence Gathering on Inquiries

The initial response determines risk. Assess whether you are dealing with a human reporter or a potential AI-amplified threat.
 

Key Questions (before engaging):

  • Story angle/topic/deadline?

  • Who else interviewed (competitors, sources)?

  • Any AI/synthetic content influence or bot amplification signs?
     

Politely decline (avoid saying "no comment") and redirect inquiries to fact pages with structured data. Monitor for early signs of misinformation using AI. Treat off-the-record requests with caution. Avoid them unless working with trusted contacts, and always specify boundaries clearly.

3. Commanding Interviews

Maintain control of the narrative at all times.

  • Prepare three to four key message points and consistently steer the conversation toward them.

  • Use phrases such as "That's interesting, but the key point is..." or "More importantly..." to guide the discussion, and repeat your core message: "The most critical takeaway is..."

  • Key takeaway: Citing authenticated sources strengthens your message for both human and AI audiences.

 

4. Defending Against Aggressive / AI-Amplified Attacks

Treat deepfakes, bot swarms, and hostile questioning as standard threats.

Prepare concise, confident responses to challenging questions:
 

  • Personal or irrelevant: "That's unrelated; I'd focus on..."

  • Synthetic probes: "This appears AI-generated. Here is our verified position [link to fact page]."

Stay courteous in hostile encounters and, if needed, calmly request that the camera be turned off. Take time to gather information, for example by inviting the individual inside or offering a drink.

  • If the situation becomes hostile, end the interaction and escalate the issue to the editor.

Deepfake-Specific:

  • Use non-digital verification (code words/questions) for executive requests.

  • Train teams on red flags for urgency and emotional manipulation.

  • Maintain authenticated photo/video files.
     

Key takeaway: Fast, multi-channel fact-checking minimises the lasting impact of misinformation.
 

5. Visual/Broadcast Representation

Visuals are highly influential; take steps to protect against manipulation.
 

  • Provide high-resolution, credentialed portraits with solid backgrounds, professional grooming, and minimal glare.

  • Use watermarked and credentialed props or charts during television appearances.

  • Key takeaway: Direct broadcast audiences to verified facts for trust and clarity.
     

6. Post-Publication Remediation

Uncorrected errors can persist indefinitely in AI training data and summaries.

Priorities:
 

  • Request corrections from media (vital but hard).

  • Publish Letters to the Editor (low effort, high framing control).

  • Key takeaway: Directly flagging misrepresentations ensures inaccuracies are challenged early.
     

Legal action: a last resort for libel or deepfakes (new regulations are emerging); document everything.


Core 2026 Philosophy:

Key takeaway: Clarity, verifiability, and preparation are the foundation for effective reputation management in the AI era.

---------------------------------------------

A Beginner's Guide to How Artificial Intelligence Works

 

Introduction: Your Journey into AI

Artificial Intelligence (AI) is no longer a concept confined to science fiction; it's a technology that is transforming how we communicate, learn, and create every day. But how does it actually work? This guide will demystify the core concepts behind AI, breaking down the essential building blocks into simple, understandable terms.

To begin, it helps to visualise the world of AI using a simple analogy of a vast library. This helps clarify the relationship between AI and its most important subfields:

  • Artificial Intelligence (AI): The vast library itself—the entire field dedicated to creating intelligent machines.

  • Machine Learning (ML): The librarians who are constantly learning how to find the right books (information) within the library, not by being given rules, but by recognising patterns from past requests.

  • Deep Learning (DL): The advanced search algorithms the librarians use. These are multi-layered systems that allow them to understand highly complex patterns, much like a human brain.

  • Large Language Models (LLMs): Brilliant, chatty assistants who use all this knowledge to generate new, human-like responses based on the information available in the library.

With this framework in mind, let's explore what these concepts mean in practice, starting with a formal definition of Artificial Intelligence.

1. What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (acquiring information and rules), reasoning (using those rules to reach conclusions), and problem-solving. AI encompasses any system designed to perform tasks that we normally associate with human intelligence, from understanding language and recognising patterns to making predictions. The fundamental goal of AI is to develop systems that can tackle complex challenges and enhance our own capabilities across nearly every field, including medicine, education, and finance.

Most of the AI systems you interact with today are powered by an engine called Machine Learning.

2. The Core Engine: Machine Learning (ML)

Machine Learning (ML) is a subset of AI that uses statistical methods and algorithms to enable computers to learn from data. Instead of being explicitly programmed with rules for every possible situation, an ML system learns to identify patterns and make predictions or decisions on its own. It uses historical data as input to predict new output values. Think of it as a system that improves at a task over time by gaining more experience, much like a person does.

This ability to learn from data is the foundation of modern AI, but how a machine learns determines the kinds of problems it can solve for us. Let's explore the two most common approaches.

3. How AI Learns: Two Fundamental Approaches

AI models are not born smart; they must be "trained" on data. The way this training is done generally falls into two primary methods: supervised and unsupervised learning.

3.1. Supervised Learning: Learning with Labels

Supervised learning is a training method where an AI model learns from labelled data. This means each piece of input data is paired with a corresponding correct output. The process is similar to teaching a child through examples, such as using flashcards where the picture of an object is paired with its name. The model's goal is to learn the mapping function between the input and output so it can predict outcomes for new, unseen data.

Practical Example: Training an AI to recognise cats. To train an AI to identify pictures of cats, you would feed it thousands of images that have been explicitly labelled "cat." The model learns the patterns associated with these images (e.g., pointy ears, whiskers, fur) and uses that knowledge to correctly identify cats in new photos it has never seen before.
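The cat-recogniser idea can be sketched as a toy supervised learner. The two numeric "features" per animal (ear pointiness and whisker length, in arbitrary units) are invented for illustration; real image classifiers learn their features automatically from pixels.

```python
# Toy supervised learning: a 1-nearest-neighbour classifier built from
# labelled examples. Each example pairs a feature vector with its label.
labelled_data = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.2, 0.1), "dog"),
    ((0.1, 0.3), "dog"),
]

def predict(features):
    """Return the label of the closest labelled example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labelled_data, key=lambda pair: distance(pair[0], features))
    return label
```

Given a new, unseen animal such as `predict((0.85, 0.75))`, the model answers by mapping the input to the label of its nearest labelled example, which is exactly the input-to-output mapping described above.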

3.2. Unsupervised Learning: Finding Hidden Patterns

In unsupervised learning, the AI model is trained on unlabelled data. The system is not told what the "correct" answers are. Instead, its primary goal is to explore the data and discover hidden patterns, structures, or groupings on its own. This is like an AI sorting a mixed box of fruits and asking it to group them by similarity, without telling it what the fruits are.

Practical Example: Grouping news articles by topic. An AI could be given a massive, unlabelled collection of news articles. Using unsupervised learning, it could sort these articles into topics like "sports," "politics," and "technology" on its own, simply by identifying patterns and common words in the text.
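The same idea can be sketched in code: a toy clustering loop (the k-means idea) that groups numbers with no labels at all. The data points are invented for illustration.

```python
# Toy unsupervised learning: group numbers into two clusters with no
# labels, using a few rounds of the k-means idea (assign, then re-average).
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centres = [points[0], points[3]]  # rough starting guesses

for _ in range(5):
    groups = [[], []]
    for p in points:
        # Assign each point to its nearest cluster centre...
        nearest = 0 if abs(p - centres[0]) <= abs(p - centres[1]) else 1
        groups[nearest].append(p)
    # ...then move each centre to the average of its group.
    centres = [sum(g) / len(g) for g in groups]
```

The loop is never told what the groups "mean"; it simply discovers that the small numbers belong together and the large numbers belong together, which is the essence of unsupervised learning.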

3.3. At a Glance: Supervised vs. Unsupervised Learning

The following table synthesises the key differences between these two foundational learning approaches.

Feature | Supervised Learning | Unsupervised Learning
Data Type | Labelled data | Unlabelled data
Primary Goal | Predicts outcomes based on input-output pairs. | Discovers hidden patterns and structures in data.
Simple Analogy | Teaching a child with labelled flashcards. | Grouping a playlist of songs into 'rock', 'jazz', and 'classical' without ever being told what those genres are.

While these two approaches define how an AI model is trained, the underlying structure that does the actual learning is often inspired by the human brain. This is where neural networks come into play, serving as the cognitive architecture for many of the most powerful AI systems today, whether they are learning with labelled data or finding patterns on their own.

4. The Brain of AI: Neural Networks & Deep Learning

A Neural Network is a computational model inspired by the structure and function of the human brain. It consists of interconnected nodes, or "neurons," organised in layers. These networks are used in machine learning to recognise patterns and make decisions. As data passes through the network, each layer extracts increasingly abstract and complex features.

Deep Learning is a subset of Machine Learning that uses neural networks with many layers (hence the term "deep") to model and understand intricate patterns in large datasets. It is this multi-layered structure that allows modern AI to perform incredibly complex tasks that were once thought to be exclusively human, such as understanding natural speech, generating realistic images from text, and driving autonomous vehicles. Returning to our library analogy, this is like the librarians developing the ability to not just find books on 'cats,' but to understand the subtle connections between art history, zoology, and internet culture to generate a truly insightful answer about why cats are popular online.
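A minimal sketch of the forward pass through such a network: two inputs, one hidden layer of two neurons, one output. The weights below are made up for illustration; in real deep learning they are adjusted automatically during training.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    # Each hidden neuron sums its weighted inputs and applies sigmoid...
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    # ...and the output neuron does the same with the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

w_hidden = [[0.5, -0.6], [0.3, 0.8]]  # one weight list per hidden neuron
w_out = [1.0, -1.0]

output = forward([1.0, 0.0], w_hidden, w_out)  # a number between 0 and 1
```

Stacking many such layers, with each extracting more abstract features than the last, is what the "deep" in deep learning refers to.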

The sheer complexity that these deep, multi-layered neural networks can model is what makes the creative power of Generative AI possible.

5. The Creative Force: What is Generative AI?

These are the brilliant assistants from our analogy, now capable of not just finding information in the library but writing entirely new books in the style of the authors they've studied.

Generative AI (GenAI) refers to artificial intelligence systems that can create new content, such as text, images, music, or code. These models learn the patterns and structure of the data they were trained on and then use that knowledge to generate novel content that is similar but not identical to the original data. Tools like ChatGPT are a prime example, capable of producing human-like text based on a user's prompt.

For a student, Generative AI offers powerful applications that can enhance learning and creativity. Here are three of the most significant:

  • Content Creation AI tools can help generate articles, social media posts, and creative stories. This can be a powerful way to brainstorm, overcome writer's block, and spark new ideas for assignments and projects.

  • Personalised Learning Generative AI can create customised educational resources, such as study guides tailored to a specific topic or quizzes that adapt to a student's knowledge level. Its ability to answer a broad spectrum of questions makes it a versatile tool for creating more personalised educational experiences.

  • Problem-Solving GenAI can act as a "supercharged research assistant." It can be used to brainstorm arguments for an essay, break complex concepts into simpler terms, or even help debug code, making it an invaluable partner for tackling challenging academic tasks.

Generative AI's ability to create, assist, and personalise content is a driving force behind the current wave of technological innovation.

6. Conclusion: Your Journey Begins

You've just taken a tour through the foundational concepts that make artificial intelligence work: the broad field of AI, the learning engine of Machine Learning, the training methods of Supervised and Unsupervised Learning, the cognitive architecture of Neural Networks, and the creative power of Generative AI.

These technologies are more than just buzzwords; they are powerful tools that are reshaping our world. Understanding these basics is the first step toward using them effectively, ethically, and creatively. This is not the end of your journey—it’s the beginning. The world of AI evolves daily, so keep experimenting, asking questions, and learning. Your next great idea is just a prompt away.

---------------------

A "Performance" Hierarchical AI Dictionary
 

This updated dictionary is organised into hierarchical layers, encouraging users to "level up" their knowledge from passive understanding to active application.
 

Category A: The New Essentials (The "Must-Knows" for 2025)

 

  • Agentic AI: Unlike passive chatbots, Agentic AI refers to systems designed to act autonomously. They perceive their environment, make decisions, and execute actions to achieve specific goals, with features such as adaptability and goal orientation 1, 2.
     

  • Multimodal AI: Systems that can process, understand, and generate across multiple data types—text, images, audio, and video—simultaneously. This enables richer context-aware reasoning (e.g., GPT-4o, Gemini) 3-5.
     

  • Generative AI (GenAI): A subset of AI that generates new content (text, code, images) from learned statistical structures rather than merely analysing existing data. It powers tools from ChatGPT to Sora 6-8.
     

  • Hallucination: When a model confidently generates content that looks plausible but is factually incorrect or entirely fabricated. This is a by-product of predicting the "next likely token" rather than verifying truth 9, 10.
     

  • Large Language Model (LLM): A neural network with billions or trillions of parameters, trained on vast text corpora to predict the next token. These models (such as Grok and Claude) can reason, code, and serve as general-purpose interfaces 11, 12.
     

Category B: Advanced Prompting & Engineering (Following on from the "Precise Prompt Pedagogy")
 

  • Chain-of-Thought (CoT): A prompting technique that forces a model to spell out its intermediate reasoning steps (e.g., "Let's think step by step") before giving a final answer. This significantly boosts accuracy on complex logic problems 13, 14.
     

  • RAG (Retrieval-Augmented Generation): A technique in which the AI retrieves relevant documents from an external source (such as a company database) and injects them into the prompt. This grounds the AI's answer in real data, reducing hallucinations.
     

  • Zero-Shot vs. Few-Shot Learning:

      • Zero-Shot: Asking a model to perform a task without providing any examples of it in the prompt.

      • Few-Shot: Providing 2–5 examples in the prompt (e.g., "Sentiment: Positive") to demonstrate the desired pattern to the AI 17, 18.
     

  • Tree of Thoughts: An emerging technique in which the AI branches its reasoning into multiple paths, evaluates them, and selects the best outcome, similar to how humans explore different possibilities before deciding.
     

  • System Prompts: The "rules of engagement" defined at the start of a conversation. These set the AI's behaviour, format, and constraints (e.g., "You are a JSON formatter").
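The prompting techniques above can be sketched as simple string assembly. The system prompt, example reviews, and labels below are all invented for illustration; a real application would send the assembled text to an LLM API.

```python
# Assemble a few-shot, chain-of-thought prompt as plain text.
system_prompt = "You are a sentiment classifier. Answer Positive or Negative."

few_shot_examples = [
    ("The battery lasts all day.", "Positive"),
    ("The screen cracked in a week.", "Negative"),
]

def build_prompt(new_text):
    lines = [system_prompt, ""]
    # Few-shot: show the model the desired input/output pattern.
    for text, label in few_shot_examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
    lines.append(f"Review: {new_text}")
    # Chain-of-thought cue: ask for intermediate reasoning first.
    lines.append("Let's think step by step, then give the sentiment.")
    return "\n".join(lines)

prompt = build_prompt("Great sound, terrible app.")
```

The system prompt sets the rules of engagement, the labelled pairs are the few-shot examples, and the final line is the chain-of-thought trigger.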
     

Category C: The Technical "Under the Hood" (For Tech Confidence)
 

  • Parameters: Internal variables learned by the model during training. Collectively, billions of parameters encode everything the model "knows".
     

  • Token: The basic unit of text an LLM processes (roughly 0.75 of a word). Context limits and pricing are calculated in tokens, not words.
     

  • Temperature: A setting that controls the randomness of AI output. Low temperature yields predictable, safe answers; high temperature encourages creativity but increases the risk of "hallucinations".


  • Quantisation: Compressing a model by storing its weights in fewer bits (e.g., 4-bit vs. 16-bit). This makes models smaller and faster to run on local devices without significant loss of accuracy.
 

  • Vector Database: A specialised database that stores data as mathematical vectors. It enables "semantic search" (searching by meaning rather than keywords), which is the backbone of RAG systems.
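As one concrete illustration of the temperature setting described above, here is a sketch of temperature-scaled softmax over made-up token scores: dividing the scores by the temperature before normalising is what makes low temperatures confident and high temperatures spread out.

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw scores into probabilities, reshaped by temperature."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]  # invented logits for three candidate tokens

cool = softmax_with_temperature(scores, 0.5)  # low temp: near-certain pick
warm = softmax_with_temperature(scores, 2.0)  # high temp: more even spread
```

With these numbers, the top token's probability is much higher at temperature 0.5 than at 2.0, matching the "predictable vs. creative" trade-off described in the Temperature entry.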

Category D: Ethics & Safety (The Countercultural "Rebellion")
 

  • Alignment: The process of ensuring an AI system pursues human-intended goals and respects constraints (such as safety and fairness) as it scales. This is central to preventing AI from acting in unintended, harmful ways.
     

  • Black Box: An AI system whose internal decision-making process is opaque to humans. We can see the inputs and outputs, but not the rationale for a specific decision.
     

  • Guardrails: Rules or filters implemented to limit the actions an AI can take or the answers it can provide, ensuring it handles data properly and avoids generating unethical content.
     

  • Grounding: The process of linking AI responses to verifiable, factual sources to ensure accuracy and reduce fabrication.

Eddy Jackson MBE
AI Dictionary for Everyone
A Beginner's Guide to Artificial Intelligence (2026 Edition)
Introduction

Artificial Intelligence (AI) is one of the most exciting and rapidly changing fields in technology today. As of February 2026, AI is no longer just something from science fiction films – it is part of everyday life. AI helps recommend videos on streaming services, powers voice assistants on phones, assists doctors in diagnosing illnesses, creates artwork and music, and even helps pupils with homework. Tools like chatbots and image generators have become widely available, making AI accessible to almost anyone with an internet connection.

This dictionary is designed specifically for non-experts: school pupils, teachers, parents, and adults who want to understand AI without needing a technical background. The explanations use simple language, everyday analogies, and real-world examples. We avoid complex mathematics or programming code, focusing instead on what the terms mean, why they matter, and how they affect our lives.

The field of AI has evolved dramatically in recent years. The breakthrough of large language models (LLMs) around 2022–2023 sparked the generative AI boom. By 2026, AI systems are increasingly multimodal (handling text, images, video, and audio together), more capable of reasoning step-by-step, and appearing as "agents" that can perform multi-step tasks autonomously. At the same time, important conversations continue about safety, fairness, job impacts, and ethical use.

This refreshed dictionary includes classic concepts alongside newer terms that have emerged with recent advances, such as AI agents, chain-of-thought reasoning, and retrieval-augmented generation. It draws on established definitions while updating examples to reflect the state of AI in 2026.

Whether you are a pupil exploring AI for a school project, a teacher explaining it in class, or an adult curious about the technology shaping our world, this guide aims to make AI clear and approachable.


Table of Contents
A
  • Agent (AI Agent)

  • Algorithm

  • Alignment (AI Alignment)

  • Artificial General Intelligence (AGI)

  • Artificial Intelligence (AI)

  • Artificial Narrow Intelligence (Narrow AI or Weak AI)

  • Artificial Superintelligence (ASI)

  • Attention Mechanism

B
  • Bias

  • Big Data

C
  • Chain of Thought (CoT)

  • Chatbot

  • Computer Vision

D
  • Data Augmentation

  • Dataset

  • Deep Learning

  • Diffusion Model

E
  • Embedding

  • Emergent Abilities

  • Ethics in AI

  • Explainable AI (XAI)

F
  • Fine-Tuning

  • Foundation Model

G
  • Generative AI

  • Grok

H
  • Hallucination

  • Hyperparameter

I
  • Inference

L
  • Large Language Model (LLM)

M
  • Machine Learning (ML)

  • Model

  • Multimodal AI

N
  • Natural Language Processing (NLP)

  • Neural Network

O
  • Overfitting

P
  • Parameter

  • Prompt

  • Prompt Engineering

R
  • Reasoning (AI Reasoning)

  • Reinforcement Learning

  • Retrieval-Augmented Generation (RAG)

S
  • Scaling Laws

  • Supervised Learning

  • Synthetic Data

T
  • Token

  • Training

  • Transformer

  • Turing Test

U
  • Unsupervised Learning

The 2026 AI Dictionary | Updated 14 February
A
Agent (AI Agent)

An AI agent is a software program that can perform tasks autonomously on behalf of a user. Unlike a simple chatbot that only answers questions, an agent can plan and execute multi-step actions – for example, researching a topic, booking a flight, or managing emails.

In 2026, AI agents are becoming common in productivity tools. Think of them as a helpful digital assistant that doesn't just talk but actually does things. However, agents still need human oversight to avoid mistakes. Agents raise exciting possibilities for automation but also concerns about privacy and control.

Algorithm

An algorithm is a step-by-step set of instructions for solving a problem or completing a task. In everyday life, a recipe is an algorithm: "mix ingredients, bake at 180°C for 30 minutes."

In AI, algorithms are the rules that tell computers how to learn from data or make decisions. All AI systems rely on algorithms – some simple, some very complex. Good algorithms are clear, efficient, and produce reliable results.

Alignment (AI Alignment)

AI alignment refers to the challenge of ensuring that advanced AI systems behave in ways that match human values and intentions. If an AI is powerful but misaligned, it might achieve goals in harmful or unexpected ways.

Researchers work on alignment through careful design, testing, and safety techniques. It is one of the most important long-term issues in AI, especially as systems become more capable.

Artificial General Intelligence (AGI)

AGI is a hypothetical future AI that can understand, learn, and perform any intellectual task that a human can do, across many different domains. Unlike today's AI, which excels at specific tasks, AGI would be flexible and adaptable like a person.

Most experts believe AGI has not yet been achieved in 2026, though progress continues. AGI could bring huge benefits but also significant risks, which is why safety research is emphasised.

Artificial Intelligence (AI)

Artificial Intelligence is the broad field of creating computers or machines that can perform tasks normally requiring human intelligence, such as seeing, speaking, reasoning, or creating.

AI can be narrow (good at one thing) or aim toward general intelligence. Modern AI often learns from large amounts of data rather than following strict programmed rules.

Artificial Narrow Intelligence (Narrow AI or Weak AI)

This is the type of AI we have today – systems that are very good at specific tasks but cannot do anything else. Examples include a chess-playing program, a recommendation algorithm on Netflix, or a voice assistant like Siri.

Narrow AI is powerful within its domain but lacks general understanding. Almost all current AI applications are narrow AI.

Artificial Superintelligence (ASI)

ASI would be an AI that surpasses human intelligence in virtually every way – not just matching humans but exceeding them in creativity, scientific discovery, and problem-solving.

ASI remains theoretical and could be far in the future. Discussions focus on ensuring it would be beneficial if it ever arrives.

Attention Mechanism

The attention mechanism is a technique that helps AI models focus on the most relevant parts of their input, much like how you pay more attention to important words when reading a sentence.

It is a key part of transformer models and enables better handling of long texts or complex data. Without attention, models would treat all information equally, leading to poorer performance.

B
Bias

Bias in AI occurs when a system produces unfair or discriminatory outcomes because its training data reflects human prejudices or imbalances. For example, a hiring AI might favour certain demographics if trained on biased historical data.

Bias is a major ethical concern. Developers work to reduce it through diverse data, testing, and fairness checks.

Big Data

Big data refers to extremely large collections of information that traditional tools cannot easily process. AI thrives on big data because it provides the examples needed for learning patterns.

Sources include social media posts, sensor readings, or transaction records. Handling big data responsibly involves privacy protection.

C
Chain of Thought (CoT)

Chain of Thought is a prompting technique where an AI is encouraged to "think step by step" before giving an answer. This improves reasoning on complex problems.

For example, asking "Let's think step by step: how would you solve this maths problem?" leads to better results than a direct question. CoT has become standard in advanced models by 2026.

Chatbot

A chatbot is a program that simulates conversation with humans, usually through text or voice. Modern chatbots use large language models to understand and generate natural responses.

They are used in customer service, education, and entertainment. Good chatbots feel helpful and human-like, but they can still make errors.

Computer Vision

Computer vision is the field of AI that enables machines to interpret and understand visual information from images or videos. Examples include facial recognition, self-driving cars detecting road signs, or medical scans identifying diseases.

It works by training neural networks on millions of labelled images.

D
Data Augmentation

Data augmentation creates variations of existing training data to make models more robust. For images, this might mean flipping, rotating, or changing colours.

It helps AI generalise better and reduces overfitting when real data is limited.

Dataset

A dataset is a collection of data used to train or test AI models. It can include text, images, numbers, or other information.

Quality datasets are clean, diverse, and representative. Poor datasets lead to poor AI performance.

Deep Learning

Deep learning is a subset of machine learning that uses neural networks with many layers ("deep") to learn complex patterns from raw data. It powers most modern breakthroughs in image recognition, speech, and language.

The "deep" part allows automatic feature discovery without human engineering.

Diffusion Model

Diffusion models are a type of generative AI that create images or other data by starting with random noise and gradually refining it into something meaningful.

They are behind many 2026 image and video generators, producing high-quality, creative results.

E
Embedding

An embedding is a numerical representation of data (words, images, etc.) in a lower-dimensional space where similar items are close together.

Embeddings help AI understand relationships – for example, "king" is close to "queen" in meaning.
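A toy sketch of the "king is close to queen" idea, using invented three-dimensional vectors; real embeddings have hundreds or thousands of dimensions learned from data.

```python
import math

# Invented embeddings: related words get nearby vectors.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.75, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """1.0 means same direction (very similar); near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

king_queen = cosine_similarity(embeddings["king"], embeddings["queen"])
king_apple = cosine_similarity(embeddings["king"], embeddings["apple"])
```

Comparing the two scores shows "king" sitting much closer to "queen" than to "apple", which is exactly the relationship an embedding space is meant to capture.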

Emergent Abilities

Emergent abilities are unexpected capabilities that appear in large AI models as they scale up, even though smaller models lack them. Examples include solving complex reasoning tasks or basic coding.

They show that bigger models can sometimes do qualitatively new things.

Ethics in AI

AI ethics involves principles for ensuring AI is developed and used responsibly – considering fairness, privacy, transparency, and societal impact.

Key issues include job displacement, misinformation, and weaponisation. Many organisations now have ethics guidelines.

Explainable AI (XAI)

Explainable AI focuses on making AI decisions understandable to humans, rather than treating models as "black boxes."

This is important for trust, especially in medicine or law, where people need to know why a decision was made.

F
Fine-Tuning

Fine-tuning takes a pre-trained model and further trains it on specific data to specialise it for a task.

It is efficient and widely used to create custom AI applications without starting from scratch.

Foundation Model

A foundation model is a large, general-purpose AI model trained on broad data that can be adapted for many tasks.

Most modern LLMs and multimodal models are foundation models.

G
Generative AI

Generative AI creates new content – text, images, music, video, or code – in response to prompts.

Popular examples include chatbots for writing and tools for art or video. It has transformed creativity but raises concerns about copyright and misinformation.

Grok

Grok is a large language model developed by xAI, designed to be maximally truth-seeking, helpful, and capable of answering a wide range of questions.

Named after a term meaning deep understanding, Grok aims to advance scientific discovery and provide clear, reasoned responses. As of 2026, it continues to evolve with improved reasoning and multimodal capabilities.

H
Hallucination

A hallucination is when an AI confidently produces incorrect or fabricated information.

It happens because models predict patterns statistically rather than knowing facts absolutely. Users should always verify important information.

Hyperparameter

Hyperparameters are settings that control how an AI model learns, such as learning rate or number of layers. They are set before training, unlike parameters learned from data.

Choosing good hyperparameters is crucial for performance.

I
Inference

Inference is the process of using a trained AI model to make predictions on new data.

It is the "using" phase after training – fast and efficient on devices like phones.

L
Large Language Model (LLM)

An LLM is an AI model trained on vast amounts of text to understand and generate human-like language.

Examples include Grok, GPT series, Claude, and Gemini. They power chatbots, translation, summarisation, and more.

M
Machine Learning (ML)

Machine learning is a subset of AI where systems learn patterns from data rather than being explicitly programmed.

It includes supervised, unsupervised, and reinforcement learning.

Model

In AI, a model is the trained program that makes predictions or generates outputs based on input data.

Models range from simple equations to massive neural networks with billions of parameters.

Multimodal AI

Multimodal AI processes and generates multiple types of data – text, images, audio, video – together.

By 2026, multimodal models can describe images, answer questions about videos, or create illustrations from text descriptions.

N
Natural Language Processing (NLP)

NLP is the branch of AI that helps computers understand, interpret, and generate human language.

It powers translation apps, sentiment analysis, and voice assistants.

Neural Network

A neural network is a computing system inspired by the human brain, consisting of interconnected nodes (neurons) organised in layers.

Information flows through the network, with weights adjusted during training to recognise patterns.
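The flow of information through layers can be shown in miniature. This sketch uses hand-picked weights (real networks learn theirs during training) and a two-input, two-hidden-neuron, one-output layout chosen purely for illustration.

```python
import math

# Illustrative sketch: a tiny two-layer network with hand-picked weights,
# showing how inputs flow through weighted sums and activations.

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron: weighted sum of inputs plus bias, then activation."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

hidden = layer([0.5, 0.8],                     # two input features
               [[0.4, -0.6], [0.3, 0.9]],      # two hidden neurons
               [0.1, -0.2])
output = layer(hidden, [[1.2, -0.7]], [0.05])  # one output neuron
print(round(output[0], 3))
```

Training would nudge those weights, step by step, until the output matches the desired answers.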

O
Overfitting

Overfitting occurs when a model learns the training data too well, including noise, and performs poorly on new data.

It is like memorising answers without understanding the subject.
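The memorising analogy can be made concrete. In this hypothetical task (label a number "big" if it is above 10), a model that merely stores its training examples is perfect on seen data and useless on anything new, while a simple rule generalises.

```python
# Illustrative sketch: memorising training data versus learning the pattern.

train_data = {3: "small", 7: "small", 12: "big", 20: "big"}

def memoriser(x):
    """Overfit: perfect on training examples, lost on anything unseen."""
    return train_data.get(x, "unknown")

def simple_rule(x):
    """Generalises: captures the underlying pattern, not the examples."""
    return "big" if x > 10 else "small"

print(memoriser(12), simple_rule(12))  # both correct on seen data
print(memoriser(15), simple_rule(15))  # only the rule handles new data
```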

P
Parameter

Parameters are the numerical values inside an AI model (often billions in LLMs) that are adjusted during training to capture patterns.

More parameters generally allow more complex learning, but require more computing power.

Prompt

A prompt is the input text or instruction given to a generative AI model to guide its output.

Good prompts are clear and specific.

Prompt Engineering

Prompt engineering is the skill of crafting effective prompts to get the best results from AI models.

It involves techniques like providing examples, asking for step-by-step reasoning, or specifying format.
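Those techniques can be combined mechanically. The sketch below (wording and examples invented) assembles a few-shot prompt that states the task, specifies the output format, and supplies worked examples before the real query.

```python
# Illustrative sketch: building a few-shot prompt with a stated format.

examples = [  # hypothetical labelled examples shown to the model
    ("The launch went brilliantly!", "positive"),
    ("The service was a disappointment.", "negative"),
]

def build_prompt(text):
    lines = ["Classify the sentiment as 'positive' or 'negative'.",
             "Answer with a single word.", ""]
    for sample, label in examples:               # few-shot examples
        lines.append(f"Text: {sample}\nSentiment: {label}\n")
    lines.append(f"Text: {text}\nSentiment:")    # the actual query
    return "\n".join(lines)

print(build_prompt("A superb result for the whole team."))
```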

R
Reasoning (AI Reasoning)

AI reasoning refers to an AI's ability to solve problems logically, draw conclusions, and explain its thought process.

Advanced 2026 models use techniques like chain-of-thought to improve reasoning on maths, science, and planning tasks.

Reinforcement Learning

Reinforcement learning is a type of machine learning where an agent learns by trial and error, receiving rewards or penalties for actions.

It is used in games (like AlphaGo), robotics, and recommendation systems.
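Trial and error with rewards can be shown with a classic toy problem, the two-armed bandit. The payout probabilities below are invented; the agent does not know them and must discover which arm pays more by trying both.

```python
import random

# Illustrative sketch: an epsilon-greedy agent learning which of two
# slot-machine arms pays off more often, purely from reward feedback.

random.seed(0)
true_win_rates = [0.3, 0.8]          # hidden from the agent: arm 1 is better
estimates, counts = [0.0, 0.0], [0, 0]

for step in range(1000):
    if random.random() < 0.1:        # explore: try a random arm occasionally
        arm = random.randrange(2)
    else:                            # exploit: pick the best-looking arm
        arm = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_win_rates[arm] else 0.0
    counts[arm] += 1                 # update running average reward per arm
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # the estimate for arm 1 should end up clearly higher
```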

Retrieval-Augmented Generation (RAG)

RAG is a technique that combines information retrieval with generation – an AI searches for relevant documents before generating a response.

This reduces hallucinations and keeps answers up to date with external knowledge.
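The retrieval half can be sketched with naive keyword matching over a tiny hypothetical document store; a production system would use embeddings and then hand the retrieved text to an LLM for the generation half.

```python
# Illustrative sketch: "retrieve, then generate" with a toy keyword search.

documents = [  # hypothetical knowledge base
    "Acme Ltd was founded in 2003 and is based in Leeds.",
    "The Acme support line is open on weekdays.",
    "Penguins are flightless birds found mainly in the Southern Hemisphere.",
]

def retrieve(question, k=1):
    """Rank documents by how many words they share with the question."""
    words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

question = "When was Acme founded?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt, grounded in retrieved text, goes to the model
```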

S
Scaling Laws

Scaling laws describe how AI model performance improves predictably as model size, data, and compute increase.

They have guided the development of larger, more capable models.
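A scaling law typically takes the form of a power law: loss falls smoothly and predictably as model size grows. The constant and exponent below are loosely modelled on published language-model fits but should be read as purely illustrative.

```python
# Illustrative sketch: a power-law scaling curve (constants are illustrative).

def predicted_loss(num_parameters, constant=8.8e13, exponent=0.076):
    """Loss = (constant / N) ** exponent: smaller for larger models."""
    return (constant / num_parameters) ** exponent

for n in [1e6, 1e9, 1e12]:
    print(f"{n:.0e} parameters -> predicted loss {predicted_loss(n):.2f}")
```

The practical pull of such curves is that they let labs forecast, before spending the compute, roughly how much a ten- or hundred-fold larger model should improve.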

Supervised Learning

Supervised learning trains models on labelled data, where inputs are paired with correct outputs.

It is used for classification (e.g., spam detection) and regression (e.g., price prediction).
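The role of labels can be shown with a toy spam classifier. Every training example below carries a human-provided label, and a new message is classified by its closest labelled example (a one-nearest-neighbour rule on shared words); all messages are invented.

```python
# Illustrative sketch: supervised learning needs labelled (input, output) pairs.

labelled = [  # hypothetical training set: message paired with its label
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("minutes from the board meeting", "not spam"),
    ("agenda for the project review", "not spam"),
]

def classify(message):
    """Return the label of the labelled example sharing the most words."""
    words = set(message.lower().split())
    def overlap(example):
        return len(words & set(example[0].split()))
    return max(labelled, key=overlap)[1]

print(classify("free prize inside"))
print(classify("review the meeting agenda"))
```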

Synthetic Data

Synthetic data is artificially generated data that mimics real data.

It is useful when real data is scarce, sensitive, or biased.
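Generating such data can be as simple as sampling from distributions that mimic the real population's statistics without copying any real individual. All the ranges and place names below are invented.

```python
import random

# Illustrative sketch: fabricating records that resemble real data
# statistically, without containing any real person's information.

random.seed(42)

def synthetic_customer():
    """One fake record drawn from plausible, made-up distributions."""
    return {
        "age": max(18, round(random.gauss(45, 12))),        # bell-shaped ages
        "city": random.choice(["Leeds", "London", "York"]),  # invented spread
        "spend": round(random.uniform(10.0, 500.0), 2),
    }

dataset = [synthetic_customer() for _ in range(100)]
print(dataset[0])
```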

T
Token

A token is a small unit of text (roughly a word or part of a word) that LLMs process.

Models have context windows measured in tokens, limiting how much input they can handle at once.
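The counting idea can be sketched naively. Real tokenisers split text into subword pieces learned from data; this toy version simply splits on spaces, which is enough to show how input length is measured against a (here, invented) context window.

```python
# Illustrative sketch: naive whitespace "tokenisation" and a context window.
# Real tokenisers use learned subword pieces, not simple word splits.

def naive_tokenise(text):
    return text.split()

def fits_context(text, context_window=8):
    """Would this input fit a hypothetical 8-token context window?"""
    return len(naive_tokenise(text)) <= context_window

sentence = "Large language models read text as sequences of tokens"
print(len(naive_tokenise(sentence)))  # 9 words, so 9 naive tokens
print(fits_context(sentence))         # one token over the window
```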

Training

Training is the process of feeding data to an AI model and adjusting its parameters to minimise errors.

It requires massive computing power for large models.

Transformer

The transformer is a neural network architecture introduced in the 2017 paper "Attention Is All You Need" that revolutionised AI with its attention mechanism.

It enables parallel processing and is the basis for most modern LLMs and multimodal models.
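The attention step at the transformer's core can be computed by hand: each position scores every other position (query against key), the scores are softmaxed into weights, and the values are mixed accordingly. The vectors below are tiny and hand-picked for illustration.

```python
import math

# Illustrative sketch: scaled dot-product attention with toy vectors.

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]                      # scaled dot products
    weights = softmax(scores)                       # weights sum to 1
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
print([round(x, 2) for x in out])  # leans towards the value whose key matches
```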

Turing Test

The Turing Test, proposed by Alan Turing in 1950, evaluates whether a machine can exhibit intelligent behaviour indistinguishable from a human in conversation.

Passing it remains a milestone, though modern tests focus on broader capabilities.

U
Unsupervised Learning

Unsupervised learning finds patterns in unlabelled data without specific guidance.

It is used for clustering similar items or anomaly detection.
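Clustering without labels can be shown with a tiny one-dimensional k-means: nothing tells the algorithm which group each number belongs to, yet the two clusters emerge from the data alone. The numbers are invented.

```python
# Illustrative sketch: discovering two clusters in unlabelled numbers
# with a minimal one-dimensional k-means.

def kmeans_1d(points, iterations=10):
    centres = [min(points), max(points)]   # crude initial guesses
    groups = [[], []]
    for _ in range(iterations):
        groups = [[], []]
        for p in points:                   # assign each point to nearest centre
            groups[abs(p - centres[0]) > abs(p - centres[1])].append(p)
        centres = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centres)]  # recompute centres
    return centres, groups

centres, groups = kmeans_1d([1.0, 1.2, 0.8, 9.9, 10.1, 10.0])
print(sorted(groups[0]), sorted(groups[1]))
```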

Index (Alphabetical List of Terms)
  • Agent (AI Agent)

  • Algorithm

  • Alignment (AI Alignment)

  • Artificial General Intelligence (AGI)

  • Artificial Intelligence (AI)

  • Artificial Narrow Intelligence (Narrow AI or Weak AI)

  • Artificial Superintelligence (ASI)

  • Attention Mechanism

  • Bias

  • Big Data

  • Chain of Thought (CoT)

  • Chatbot

  • Computer Vision

  • Data Augmentation

  • Dataset

  • Deep Learning

  • Diffusion Model

  • Embedding

  • Emergent Abilities

  • Ethics in AI

  • Explainable AI (XAI)

  • Fine-Tuning

  • Foundation Model

  • Generative AI

  • Grok

  • Hallucination

  • Hyperparameter

  • Inference

  • Large Language Model (LLM)

  • Machine Learning (ML)

  • Model

  • Multimodal AI

  • Natural Language Processing (NLP)

  • Neural Network

  • Overfitting

  • Parameter

  • Prompt

  • Prompt Engineering

  • Reasoning (AI Reasoning)

  • Reinforcement Learning

  • Retrieval-Augmented Generation (RAG)

  • Scaling Laws

  • Supervised Learning

  • Synthetic Data

  • Token

  • Training

  • Transformer

  • Turing Test

  • Unsupervised Learning
