Generative AI

What are hallucinations?

"Hallucinations" are AI responses that appear plausible or even authoritative but are factually incorrect, nonsensical, or not grounded in the input data or the real world. When a generative AI model doesn't have access to or can't identify the correct answer to a question, it will often invent information to fill in the gaps in its knowledge base and confidently present this fictional information as a factual answer. Hallucinations cannot be entirely prevented, and, in fact, they're becoming more frequent as AI models become more powerful. One common way to reduce hallucinations is through grounding (see below).

What is grounding and what does it mean for generative AI models to be grounded?

Grounding gives an AI model the ability to connect its responses to a verifiable data source. Grounded AI models can provide links or references to the data source(s) they used to generate their response. Grounding makes it easier to verify the information AI generates, because you can more easily access the source of the information, evaluate it, and confirm that the content supports the answer given by the AI tool. 

Grounded AI tools are usually connected to web search and provide links to the web pages where they found their information, but some AI models can be grounded in data sources that you provide. This means that you can upload files like class notes, lecture slides, videos, and assigned readings, and the AI's responses will be based on these materials. This can help avoid misinformation and hallucinations in the AI model's responses. It also tailors the output to reflect the specific terminology, themes, and content that your instructor is using for your course.
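
If you're curious about how grounding works behind the scenes, the sketch below shows the basic idea in simplified form: the tool first searches a set of trusted documents for passages related to your question, then instructs the model to answer only from those passages and to cite them. The example documents, the retrieve and build_grounded_prompt helpers, and the word-overlap scoring are all invented for illustration; real grounded tools use much more sophisticated search.

```python
# Toy illustration of grounding: find the most relevant passages first,
# then ask the model to answer only from those passages and cite them.
# The documents and the scoring method here are made up for the example.

course_notes = {
    "lecture1.txt": "Photosynthesis converts light energy into chemical energy.",
    "lecture2.txt": "Cellular respiration releases energy stored in glucose.",
}

def retrieve(question: str, docs: dict, top_k: int = 1) -> list:
    """Rank documents by simple word overlap with the question (a stand-in
    for the semantic search a real grounded tool would use)."""
    question_words = set(question.lower().split())
    ranked = sorted(
        docs.items(),
        key=lambda item: len(question_words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_grounded_prompt(question: str, sources: list) -> str:
    """Assemble instructions that tell the model to answer only from the
    retrieved sources and to cite each one by name."""
    source_text = "\n".join(f"[{name}] {text}" for name, text in sources)
    return (
        "Answer the question using ONLY the sources below, citing the source "
        "name for each claim. If the sources don't contain the answer, say so.\n\n"
        f"Sources:\n{source_text}\n\nQuestion: {question}"
    )

question = "How does photosynthesis store energy?"
sources = retrieve(question, course_notes)
print(build_grounded_prompt(question, sources))
```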

With AI tools grounded in files from your class, you can:

  • ask questions about course content and get more reliable answers
  • get detailed explanations and examples of concepts from class
  • have the AI model create study guides and flash cards specific to your class
  • turn your notes and readings into a conversational podcast-style audio file so that you can study on the go
  • and more!

One example of this type of AI tool is NotebookLM, which is free to use if you have a Google account.

What is a large language model (LLM)?

A large language model (LLM) is an AI model designed to understand, interpret, and generate natural language. "Natural language" refers to the constantly evolving languages that humans speak, write, and understand every day, as opposed to more formal languages like computer programming languages or mathematical notation, which have more specific purposes and stricter rules. LLMs are trained on enormous datasets of text and code, and they work by predicting the probability of a sequence of words occurring in a given context, similar to the predictive text feature on some smartphones.
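
To make the "predicting the next word" idea more concrete, here is a toy sketch that counts which words tend to follow which in a small sample of text and then guesses the most likely next word, much like a phone's predictive text. The sample sentences are made up, and real LLMs use neural networks trained on vastly more data rather than simple word counts, so treat this only as an analogy.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in some sample text,
# then predict the most likely next word. Real LLMs use neural networks and
# far more data, but the underlying idea -- estimating the probability of
# the next word given the context -- is the same.

sample_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

# Build a table of next-word counts for each word in the sample.
follows = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the sample text."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # most likely word after "the" in the sample
print(predict_next("sat"))  # most likely word after "sat" in the sample
```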

What is a multimodal model?

A multimodal AI model is an artificial intelligence model that can process and understand information from multiple types of data, which may include, for example, text, images, audio, video, and other kinds of input. Instead of being limited to a single type of input or output (like a language model that only processes text), a multimodal AI can interpret and generate multiple data formats.
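
As a rough sketch of what "multiple types of input" looks like in practice, the example below bundles a text question and an image into a single request. The Part and MultimodalModel classes are hypothetical placeholders invented for this illustration; every real provider's API looks a bit different, but the overall shape of the request is similar.

```python
from dataclasses import dataclass

# Sketch of a request that mixes several kinds of input in one message.
# `Part` and `MultimodalModel` are hypothetical placeholders, not a real
# library; they only illustrate the shape of a multimodal request.

@dataclass
class Part:
    kind: str        # "text", "image", "audio", "video", ...
    content: object  # the text itself, or raw bytes for media

class MultimodalModel:
    def generate(self, parts: list) -> str:
        # A real model would jointly interpret all the parts together;
        # this stub just reports what kinds of input it received.
        kinds = ", ".join(part.kind for part in parts)
        return f"(model response based on: {kinds})"

model = MultimodalModel()
reply = model.generate([
    Part(kind="text", content="What kind of plant is in this photo?"),
    Part(kind="image", content=b"<raw image bytes would go here>"),
])
print(reply)
```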

What are guardrails?

Guardrails are rules or limitations that developers build into AI models to reduce the occurrence of harmful, biased, or inappropriate content. Unfortunately, guardrails aren't 100% effective, so AI responses are sometimes biased, offensive, or inaccurate. 
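
As a very rough illustration, one simple kind of guardrail checks a draft response against a blocklist before the user ever sees it. The blocked topics and refusal message below are invented for the example; real guardrails rely on trained safety classifiers and policies built into the model itself, not a short keyword list.

```python
# Toy illustration of one very simple kind of guardrail: check a draft
# response against a blocklist before showing it to the user. The topics
# and wording here are invented for the example.

BLOCKED_TOPICS = {"how to build a weapon", "someone's home address"}

def apply_guardrail(draft_response: str) -> str:
    """Return the draft response, or a refusal if it touches a blocked topic."""
    lowered = draft_response.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that request."
    return draft_response

print(apply_guardrail("Here is a study plan for your biology exam."))
print(apply_guardrail("Sure, here is how to build a weapon at home."))
```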

AI developers rely on user feedback to improve their AI models, so if you encounter offensive, irresponsible, or inaccurate AI responses, be sure to click the "thumbs down" icon below the response and leave feedback. Developers review these responses and may be able to use them to make future AI models safer and more reliable!

What is machine learning?

Machine learning is a branch of AI that empowers computers to learn from data rather than being programmed with fixed step-by-step rules on how to operate. Machine learning algorithms identify patterns in data, learn from these patterns, and then make predictions or decisions by generalizing from the specific examples they have encountered. This allows systems to improve their performance over time as they are exposed to more data.

The datasets on which these models are trained can be anything from images and text to sensor readings and financial transactions. The better the training data, the better the model becomes at its task. Once trained, a machine learning model can use the learned patterns to make predictions on new, unseen data.
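
Here is a tiny sketch of that learn-then-predict cycle: fit a straight line to a few example (hours studied, exam score) pairs, then use the learned pattern to predict a score for a number of hours the model has never seen. The numbers are made up for the example, and real machine learning models are usually far more complex than a single straight line.

```python
from statistics import linear_regression  # requires Python 3.10+

# Tiny illustration of "learning from data": fit a straight line to a few
# example (hours studied, exam score) pairs, then use the learned pattern
# to predict a score for an unseen number of hours. The numbers are invented.

hours_studied = [1, 2, 3, 4, 5]
exam_scores   = [52, 60, 71, 78, 86]

slope, intercept = linear_regression(hours_studied, exam_scores)

def predict_score(hours: float) -> float:
    """Apply the learned pattern to new, unseen input."""
    return slope * hours + intercept

print(round(predict_score(6), 1))  # predicted score after 6 hours of studying
```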

What is a chatbot?

A chatbot is a computer program designed to simulate conversation with human users, especially over the internet. Its core function is to mimic human-like dialogue: it aims to understand what you say or type and respond in a way that feels natural and relevant. Most chatbots interact primarily through text-based interfaces, but some can also engage through voice.
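
To show the basic shape of a chatbot (read the user's message, choose a response, reply), here is a minimal rule-based example. The keywords and canned replies are invented for illustration; modern chatbots generate their responses with large language models instead of hand-written rules like these.

```python
# A minimal rule-based chatbot loop, just to show the basic shape:
# read what the user types, pick a response, and reply. The keywords
# and replies below are made up for the example.

RULES = {
    "hello": "Hi there! How can I help you today?",
    "hours": "The library is open 8am-10pm on weekdays.",
    "bye": "Goodbye! Good luck with your studying.",
}

def respond(user_message: str) -> str:
    """Return the reply for the first keyword found in the user's message."""
    lowered = user_message.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    return "Sorry, I'm not sure how to help with that."

if __name__ == "__main__":
    print("Chatbot ready. Type 'bye' to quit.")
    while True:
        message = input("You: ")
        print("Bot:", respond(message))
        if "bye" in message.lower():
            break
```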