AI Glossary

We understand that many of the terms related to AI can be hard to remember. Here’s a list of them for reference!

Affect Heuristic – Users’ decisions are sometimes influenced by their emotional reactions to AI outputs rather than an objective assessment of the information. For example, they might favor AI recommendations that elicit positive feelings or align with their emotional state, even if those recommendations are not the most accurate or appropriate.

AI (Artificial Intelligence) – The simulation by computers of processes that typically require human intelligence, with the aim of mimicking, and in some cases surpassing, human abilities such as decision-making, communication, and creativity.

AI Ethics – The consideration of how to ensure that AI technology is used safely and responsibly. Stakeholders such as engineers and policymakers can promote ethical use by implementing systems that support unbiased, safe, and environmentally responsible AI practices.

Algorithm – A set of rules or instructions (typically written in code) that an AI system follows. Algorithms are used to organize and analyze data, enabling data scientists to make predictions, build models, or allow AI systems to make decisions.
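
To make the idea concrete, here is a minimal Python sketch of a rule-based algorithm: a toy spam filter whose keyword list and threshold are invented purely for illustration.

```python
# A toy rule-based algorithm: count spam keywords in a message and compare
# the count to a threshold. The keyword list and threshold are illustrative.
SPAM_KEYWORDS = {"free", "winner", "urgent", "prize"}

def is_spam(message: str, threshold: int = 2) -> bool:
    words = message.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?:") in SPAM_KEYWORDS)
    return hits >= threshold

print(is_spam("URGENT: You are a winner! Claim your free prize!"))  # True
print(is_spam("Lunch at noon?"))                                    # False
```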

Application Programming Interface (API) – A set of protocols that define how two software programs interact with each other.
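
As a sketch of what this looks like in practice, here is a typical web API call in Python using the requests library; the endpoint URL and query parameter below are hypothetical, not a real service.

```python
# A minimal sketch of calling a web API over HTTP. The URL and parameters
# are hypothetical; a real service documents its own endpoints and formats.
import requests

response = requests.get(
    "https://api.example.com/v1/weather",   # hypothetical endpoint
    params={"city": "Boston"},              # hypothetical query parameter
    timeout=10,
)
response.raise_for_status()                 # raise an error on HTTP failure
data = response.json()                      # parse the JSON response body
print(data)
```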

Automation Bias – The tendency to favor information from AI and other automated systems over information from non-automated sources, such as encyclopedias or research websites, or over one’s own judgment. For example, in legal settings, judges may over-rely on AI-generated information.

Availability Heuristic – The tendency to rely on information that is most readily available or recent, which can distort one’s perception of AI’s overall accuracy. For instance, if an AI system recently provided helpful information, users might overestimate its reliability despite potential limitations.

Big Data – Large data sets analyzed to reveal patterns and trends that support human or AI decision-making. It’s called “big” data because companies and organizations can rapidly gather massive amounts of complex information using data collection tools and systems.

Chatbot – A software application designed to imitate human conversation through text or voice commands.

Cognitive Computing – A term often used interchangeably with artificial intelligence, typically emphasizing systems that simulate human thought processes.

Computer Vision – A field of science and technology focused on enabling computers to recognize objects within images and videos and make decisions based on that information.

Confirmation Bias – The tendency to accept AI recommendations that align with one’s preconceived notions or expectations. In hiring, for example, AI-driven assessments that confirm a recruiter’s initial impressions may go unquestioned, perpetuating existing biases.

Data Mining – The process of discovering patterns and trends in large data sets by using techniques from machine learning, statistics, and databases to extract useful information.

Data Science – A field that uses algorithms and processes to gather and analyze big data, identifying patterns and insights that influence business decisions.

Deep Learning – A machine learning technique that uses layers of interconnected computational units called neurons to form an artificial neural network (ANN). Deep learning stands out because its algorithms can optimize results through repetition without human correction, essentially a large-scale form of “guessing and checking.”

Emergent Behavior – When an AI system displays unpredicted or unintended abilities that only arise when individual elements act together as a whole.

Expertise Bias – The tendency to defer excessively to AI systems under the belief that advanced technology must be more accurate than human judgment. This can lead users to undervalue their own expertise and critical thinking, especially in nuanced situations.

Framing Bias and Narrative Fallacy – The way AI outputs are framed or presented can significantly affect their perceived credibility. AI recommendations delivered in a coherent, compelling narrative may appear more convincing, even if they lack strong evidence or are based on incomplete data.

Generative AI – A type of AI technology that creates new content, including text, video, code, and images. These systems are trained on large amounts of data to identify patterns that enable content generation.

Guardrails – Mechanisms and frameworks designed to ensure that AI systems operate within ethical, legal, and technical boundaries. They prevent AI from causing harm, making biased decisions, or being misused.

Hallucination – An incorrect or fabricated response from an AI system that is presented as factual.

Hyperparameter – A configuration setting, chosen before training, that controls an algorithm’s learning process (for example, the learning rate). Hyperparameters are distinct from model parameters, which the learning algorithm optimizes from the data.
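
A short scikit-learn sketch of the distinction: C and max_iter below are hyperparameters set before training, while coef_ and intercept_ are model parameters learned from the data.

```python
# Hyperparameters (set by the practitioner) vs. model parameters (learned).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# C and max_iter are hyperparameters: they control how learning proceeds.
model = LogisticRegression(C=0.5, max_iter=500)
model.fit(X, y)

# coef_ and intercept_ are model parameters: values the algorithm learned.
print(model.coef_.shape, model.intercept_.shape)
```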

Image Recognition – The process of identifying an object, person, place, or text within an image or video.

Large Language Model (LLM) – An AI model trained on vast amounts of text so that it can understand natural language and generate human-like text.

Limited Memory – A type of AI system that learns from real-time events and stores information in a database to improve future predictions.

Machine Learning – A subset of AI in which algorithms mimic human learning by processing data. Over time, these algorithms improve in accuracy at making predictions or classifications without being explicitly programmed for each task. Machine learning focuses on developing models that learn from data to predict trends and behaviors.

Natural Language Processing (NLP) – A type of AI that enables computers to understand spoken and written human language, powering features such as text and speech recognition.

Neural Network – A deep learning model designed to resemble the structure of the human brain. It uses large data sets to perform calculations and generate outputs, powering features such as speech and image recognition.
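
A minimal NumPy sketch of how data flows through a small network; the weights below are random for illustration rather than learned, and the layer sizes are arbitrary.

```python
# Forward pass through a tiny neural network: inputs flow through a hidden
# layer of "neurons" (weighted sums plus a nonlinearity) to an output.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))          # one input example with 4 features

W1 = rng.normal(size=(4, 8))         # weights: input -> hidden layer (8 neurons)
W2 = rng.normal(size=(8, 1))         # weights: hidden layer -> output

hidden = np.maximum(0, x @ W1)       # ReLU activation in the hidden layer
output = hidden @ W2                 # output layer (no activation)
print(output)
```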

Ordering Effects – The sequence in which AI recommendations are presented can strongly influence user trust. If an AI provides accurate recommendations early on, users may develop unwarranted trust even when errors occur later.

Overestimating Explanations – Detailed explanations from AI systems can sometimes create a false sense of security, leading users to trust the AI more than is warranted.

Overfitting – Occurs when a machine learning algorithm performs well only on its training data but fails to generalize to new data, resulting in inaccurate predictions.
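
A quick scikit-learn illustration of the effect: an unconstrained decision tree typically scores perfectly on its training data but noticeably worse on held-out test data.

```python
# Overfitting in miniature: a depth-unlimited decision tree memorizes the
# training set but generalizes less well to unseen test data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0)  # no depth limit: free to memorize
tree.fit(X_train, y_train)

print("train accuracy:", tree.score(X_train, y_train))  # typically 1.0
print("test accuracy: ", tree.score(X_test, y_test))    # typically lower
```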

Pattern Recognition – The use of computer algorithms to analyze, detect, and label regularities in data. This process helps classify data into different categories.

Predictive Analytics – A form of analytics that uses technology to forecast future events or trends based on historical data and patterns.

Prescriptive Analytics – A form of analytics that uses technology to evaluate possible scenarios, performance data, and available resources to help organizations make better strategic decisions.

Prompt – The input a user provides to an AI system to receive a desired output.

Quantum Computing – The use of quantum-mechanical phenomena such as entanglement and superposition to perform computations. Quantum machine learning applies these principles to accelerate processing, and for certain classes of problems it can operate much faster than classical computing.

Reinforcement Learning – A type of machine learning in which an algorithm learns by interacting with its environment, receiving rewards for desirable actions and penalties for undesirable ones.
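
A minimal sketch of the idea using the tabular Q-learning update rule on a made-up two-state environment; the rewards and dynamics are purely illustrative.

```python
# Tabular Q-learning on a toy environment: the agent updates its value
# estimate for each (state, action) pair based on the reward it receives.
import random

n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]   # value estimates
alpha, gamma = 0.1, 0.9                            # learning rate, discount

def step(state, action):
    """Toy dynamics: action 1 in state 0 pays off; everything else doesn't."""
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    return (state + 1) % n_states, reward          # cycle to the next state

state = 0
for _ in range(1000):
    action = random.randrange(n_actions)           # explore randomly
    next_state, reward = step(state, action)
    # Nudge the estimate toward reward + discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # Q[0][1] should end up the largest entry
```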

Representativeness Heuristic – The tendency to treat AI models as analogous to search engines due to their similar interface and interaction style, leading users to overestimate the AI’s accuracy and reliability.

Sentiment Analysis – Also known as opinion mining, this process uses AI to analyze the tone and emotion expressed in text.
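
A naive lexicon-based sketch of the idea; the word lists below are invented for illustration, and production systems typically use trained models instead.

```python
# Lexicon-based sentiment scoring: count positive and negative words.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "sad"}

def sentiment(text: str) -> str:
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product, it is excellent!"))  # positive
print(sentiment("Terrible experience, I hate it."))        # negative
```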

Structured Data – Data that is defined and searchable, typically organized into rows and columns for easier analysis. Examples include phone numbers, dates, and product SKUs.

Supervised Learning – A type of machine learning that learns from labeled input and output data. It’s called “supervised” because the algorithm is guided by known examples.
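
A minimal scikit-learn sketch: the classifier is trained on labeled examples (features paired with known answers) and then scored on data it has not seen.

```python
# Supervised learning: fit on labeled data, then predict unseen labels.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                 # features and known labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)                       # learn from labeled examples

print("accuracy on unseen data:", model.score(X_test, y_test))
```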

Token – A basic unit of text that a large language model uses to process and generate language. A token may be an entire word or part of a word.
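
A toy sketch of the idea; real LLM tokenizers use learned subword schemes such as byte-pair encoding, so the splitting rule below is a hypothetical stand-in that only illustrates how text breaks into small units.

```python
# A naive tokenizer: split on spaces, then peel off a common suffix to
# mimic how subword tokenizers split longer words into pieces.
def naive_tokenize(text: str) -> list[str]:
    tokens = []
    for word in text.split():
        if word.endswith("ing") and len(word) > 5:
            tokens.extend([word[:-3], "##ing"])   # "##" marks a continuation piece
        else:
            tokens.append(word)
    return tokens

print(naive_tokenize("The model is generating tokens"))
# ['The', 'model', 'is', 'generat', '##ing', 'tokens']
```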

Training Data – The information or examples provided to an AI system to enable it to learn, recognize patterns, and generate new content.

Transfer Learning – A machine learning method in which knowledge gained from one task is applied to new but related tasks.

Turing Test – Proposed by computer scientist Alan Turing in 1950 to evaluate a machine’s ability to exhibit intelligence comparable to that of humans, especially in language and behavior.

Unstructured Data – Data that is not organized in a predefined way. To analyze unstructured data, systems must first apply structure through classification or organization.

Unsupervised Learning – A type of machine learning that identifies patterns in unlabeled data, often used to develop predictive models or data clusters.
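
A minimal scikit-learn sketch: k-means groups unlabeled points into clusters based only on their similarity, with no correct answers provided during training. The two blobs of points below are synthetic.

```python
# Unsupervised learning: k-means discovers clusters in unlabeled data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabeled blobs of points around (0, 0) and (5, 5).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5], kmeans.labels_[-5:])    # cluster assignments it found
print(kmeans.cluster_centers_)                    # centers near (0, 0) and (5, 5)
```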

Voice Recognition – Also called speech recognition, this is a method of human-computer interaction in which computers interpret human speech to produce written or spoken outputs.

