Extended Cut! How AI Works


Understanding AI is no longer just for tech specialists. By learning about AI, educators can guide students in developing AI literacy and build critical thinking skills for evaluating AI-generated content. Familiarity with AI tools can help teachers enhance their own productivity and create more engaging, personalized learning experiences. As AI continues to evolve and integrate into daily life, educators who grasp its fundamentals will be better equipped to foster ethical awareness and ensure their students are ready for the challenges and opportunities of an AI-driven future.

If you only have a few minutes to learn, check out our Quick Study Nitty Gritty Overview or refer back to a list of AI vocabulary.


AI Video Resources


Essential AI for Educators


What is AI?

Artificial Intelligence (AI) is like a super-smart computer program that can learn, reason, and perform tasks that typically require human intelligence. It's designed to process vast amounts of information, recognize patterns, and make decisions or predictions based on that data.

The Evolution of AI

The term Artificial Intelligence (AI) was coined by computer scientist John McCarthy for the Dartmouth workshop in 1956. It referred to any computer system that can perform tasks that typically require human intelligence. These tasks include learning, problem-solving, understanding language, and recognizing patterns.

AI Transforms through Machine Learning 

Early AI research focused on building expert systems: machines programmed with a set of rules they could apply to anticipate what might happen. These early systems didn't improve with experience; they were limited to what they had been explicitly programmed to do.

In 1959, IBM programmer Arthur Samuel proposed the term machine learning to describe a branch of computer science that uses statistical techniques to allow computers to learn from data without being explicitly programmed. In AI, this allows a system to improve its performance through experience. Instead of being explicitly programmed for every scenario, these systems learn from data and can recognize and predict patterns. In the 1990s, machine learning became the focus of AI research. 

AI Revolution with Neural Networks 

Neural Networks are computing systems that are modeled after the human brain. They consist of interconnected processing units, called "nodes" or artificial "neurons," that process and transmit information.
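To make the idea concrete, here is a minimal Python sketch of a single node. All the numbers are invented for illustration; in a real network, the weights are learned from data rather than hand-picked:

```python
# A single artificial "neuron" (node): multiply each input by a weight,
# add a bias, and pass the total through an activation function.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0  # simple step activation: fire or not

# Two made-up inputs and hand-picked weights (normally learned from data)
print(neuron([0.5, 0.8], [0.9, -0.2], 0.1))  # prints 1.0 (the node "fires")
```

A full network wires thousands or millions of these nodes together in layers, which is where the learned behavior comes from.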

Like other machine learning systems, neural networks must be trained with large data sets. Four types of learning support the training of AI neural networks:

  • Supervised Learning: Data are labeled so that the model learns to understand the association between inputs and outputs. For example, it might learn to identify cats in photos by studying many images labeled "cat" or "not cat." 
  • Unsupervised Learning: The AI finds patterns in unlabeled data. It might group similar items together without being told what the groups should be. This is similar to what happens when you watch Netflix and then receive suggestions about what other shows to watch. 
  • Reinforcement Learning: The model learns by interacting with an environment and receiving rewards or penalties for its actions. For example, when you click on Facebook ads, the system is reinforced to show you similar ads again.
  • Deep Learning: Deep learning refers to large, complex neural networks with many layers. For instance, in order to train autonomous vehicles, the model must learn to detect which pixels constitute road signs, other vehicles, pedestrians, and other obstacles within an image.
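As a concrete illustration of supervised learning, here is a toy Python sketch of a nearest-neighbor classifier. The feature numbers and labels are invented, standing in for measurements a real system might extract from labeled photos:

```python
# Toy supervised learning: 1-nearest-neighbor classification.
# Each training example is (features, label). The two feature numbers
# are hypothetical, standing in for measurements like "ear pointiness"
# and "whisker length" extracted from labeled images.

labeled_data = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "not cat"),
    ((0.2, 0.1), "not cat"),
]

def classify(features):
    # Predict the label of the closest labeled example.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labeled_data, key=lambda pair: distance(pair[0], features))
    return label

print(classify((0.85, 0.75)))  # prints "cat"
print(classify((0.15, 0.25)))  # prints "not cat"
```

The key supervised-learning idea is visible even at this tiny scale: the model never sees a rule for "cat"; it generalizes from labeled examples.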

Machine Learning + Neural Networks = Foundational Models 

In the book Teaching with AI, José Antonio Bowen and C. Edward Watson suggest that machine learning and neural networks are the basis of foundational models. Foundational models are large AI systems trained on vast amounts of data that can be adapted for various tasks. These models are trained on large, diverse datasets, which gives them extensive capabilities that we are still in the process of discovering and understanding. 

Large Language Models (LLMs) 

Large Language Models (LLMs) are a type of foundational model specifically designed to understand and generate human-like text, answer questions, and perform various language-related tasks. One example of a Large Language Model (LLM) is a GPT, which stands for Generative Pre-trained Transformer.

  • Generative means it can create new content, like text or code. 
  • Pre-trained means it's learned from a vast amount of existing data. 
  • Transformer refers to the specific type of AI architecture it uses. 

ChatGPT is an example of a Large Language Model (LLM) developed by the company OpenAI. Although the field is constantly evolving, other LLMs currently defining the field include PaLM (from Google), LLaMA (from Meta), Claude (from Anthropic), Pi (from Inflection), and Grok (from xAI).

Natural Language Processing (NLP)

Natural Language Processing (NLP) opened the door to the mass use of AI. Natural Language Processing is a branch of artificial intelligence that focuses on the interaction between computers and human language. It involves the ability of a computer program to understand, interpret, manipulate, and generate human language in a valuable way. NLP combines computational linguistics (rule-based modeling of human language) with statistical, machine learning, and deep learning models.

Parameters 

The “T” in GPT stands for Transformer, the neural network architecture these models use. During training, the model adjusts millions or billions of internal numerical values, called parameters, that shape its output.

Parameters are like the "knowledge" or "skills" of an AI model that help it determine what to generate. The more parameters a model has, the more complex patterns it can understand and generate. More parameters can also make the AI model slower and more costly to run.

For example: 

  • A model with fewer parameters might be good at simple tasks like identifying basic grammar errors. 
  • A model with many parameters (like GPT-3 or Claude) can handle more complex tasks like writing essays or coding. 
  • GPT-3 had 175 billion parameters, whereas GPT-4 is widely reported (though not officially confirmed) to have roughly 1.8 trillion, along with more memory. GPT-3.5 scored near the bottom 10% of test takers on a simulated bar exam, while GPT-4 reportedly scored near the top 10%.

The number of parameters is often used as a rough measure of a model's capability, although it is not the only factor that matters.  
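To give a rough sense of where parameter counts come from: in a simple fully connected network, each layer contributes one weight per input-output connection plus one bias per output. The layer sizes below are hypothetical, chosen only for illustration:

```python
# Counting parameters in a tiny fully connected neural network.
# A layer with n_in inputs and n_out outputs has n_in * n_out weights
# (one per connection) plus n_out biases (one per output node).

def layer_params(n_in, n_out):
    return n_in * n_out + n_out

# Hypothetical network: 784 inputs -> 128 -> 64 -> 10 outputs
sizes = [784, 128, 64, 10]
total = sum(layer_params(a, b) for a, b in zip(sizes, sizes[1:]))
print(total)  # prints 109386 -- tiny next to GPT-3's 175 billion
```

Even this small example shows how quickly parameter counts grow as layers widen, and why models with billions of parameters demand so much computation.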

Tokens 

A neural network processes only numbers, so all words and images must first be converted into tokens. A token is a small chunk of data (a word, part of a word, or a piece of an image) that is mapped to a number the model can work with. Tokens are the building blocks the AI model uses to understand and generate text.

When you interact with an AI model, it processes your input as a series of tokens and generates its response token by token. In 2017, the revolution in AI technology was the Transformer architecture, which uses "attention" to weight how each token relates to every other token and to process them all in parallel rather than one at a time. This led to faster processing, a more natural, human-like way of interacting, and a merger of separate fields into one. Now AI can treat words, images, code, and music as language.
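Here is a toy Python sketch of tokenization. Real systems split text into sub-word pieces and use vocabularies of tens of thousands of tokens, but the core idea, mapping each piece of text to a number, is the same:

```python
# Toy word-level tokenizer: assign each unique word an integer ID,
# then represent any text as a list of those IDs.

def build_vocab(text):
    vocab = {}
    for word in text.lower().split():
        # Each new word gets the next available ID.
        vocab.setdefault(word, len(vocab))
    return vocab

def tokenize(text, vocab):
    return [vocab[word] for word in text.lower().split()]

vocab = build_vocab("the cat sat on the mat")
print(vocab)                           # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
print(tokenize("the cat sat", vocab))  # [0, 1, 2]
```

The model never sees the words themselves, only these numbers, which is why everything (text, images, code, audio) can be handled once it is tokenized.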

Generative AI 

Generative AI refers to artificial intelligence systems that use deep learning to create new content, such as text, images, or audio.  

Imagine a very smart computer program that has been trained on vast amounts of existing information. This program can then use what it has learned to create new things that seem human-made. The AI model uses its vast mental library to combine and rearrange the knowledge in creative ways. 

For example: 

  • A text-generating AI can write stories and articles or answer questions.
  • An image-generating AI can create new pictures based on text descriptions. 
  • A music-generating AI can compose new songs in various styles. 

These AI models don't simply copy existing work. They learn patterns and concepts, which allows them to produce original content. However, output quality and accuracy can vary, so human oversight remains vital.
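For a hands-on illustration of "generating from learned patterns," here is a toy Markov-chain text generator in Python. It is nothing like a real LLM in scale or method, but it shows how new sequences can emerge from the statistics of existing text rather than from copying:

```python
import random

# Toy generative text model: learn which word tends to follow which,
# then generate new sequences by repeatedly picking a likely next word.
# Real LLMs are vastly more sophisticated, but the core idea -- predict
# the next token from learned patterns -- is similar.

def learn_pairs(text):
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length=8):
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

model = learn_pairs("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))  # output varies, e.g. "the dog sat on the cat ..."
```

Notice that the generated sentence is "new" (it may never appear in the training text) yet entirely determined by patterns learned from that text, which mirrors, at miniature scale, both the creativity and the limits of generative AI.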

Curious how to write a prompt for a Large Language Model? Check out some prompt engineering basics.

Hallucinations and Bias 

Notice anything about how programmers train AI models? It's similar to how humans learn. AI systems can reflect and amplify biases present in their training data or introduced by their creators. For example: 

  • If a language model is trained primarily on text written by one demographic group, it may not represent diverse perspectives accurately. 
  • An image recognition system trained on a dataset with mostly light-skinned faces might perform poorly on darker skin tones. 
  • A hiring algorithm trained on historical data might perpetuate existing gender or racial biases in employment. 

To address these issues, AI developers need to carefully consider data selection, model design, and ongoing testing for fairness and bias. 

Consider this example, explained in the book Teaching with AI by C. Edward Watson and José Antonio Bowen (Watson & Bowen 18): 


When Stable Diffusion, an AI capable of creating photorealistic images, is asked to create images related to high-paying jobs, the images have lighter skin tones than when asked about lower-paying jobs, with three times as many men over women in the high-paying jobs. When asked for images of “doctors,” only 7% of the images generated are women, when 39% of US doctors are women: Stable Diffusion exaggerates existing inequities, which is apparent in images in the internet training set (Nicoletti & Bass, 2023). Images generated by other AI image creators also yield biases.

Adobe’s Firefly AI image generator tries to correct this by making the number of women or Black doctors proportional to the population of that group in the United States: half the images of doctors it generates are women, and 14% of the doctors are Black, even though only 6% of US doctors are Black (Howard, 2023). Firefly has been trained to increase the probability that a request for an image of a Supreme Court justice in 1960 will be a woman, even though Sandra Day O’Connor became the first woman appointed to the Court in 1981.

Bias can come from training data, but the well-intentioned Firefly examples highlight another set of potential problems: human reviewers who rate and provide feedback for the model’s output also have bias. If AIs can create images of the world as it could be or as it is, who gets to choose? Bias can also be hidden in network architecture, decoding algorithms, model goals, and perhaps more worrisome, in the undiscovered potential of these models.

The “G” in GPT stands for generative, which means that AI models are now capable of combining words, images, and concepts in ways that we have never seen before. But models can also “hallucinate”: confidently generate plausible-sounding information that is simply false, which can lead to unpredictability and misinformation.

Other AI Limitations

There are other key limitations of current AI systems beyond hallucinations that are important to consider. 

  1. Lack of common sense reasoning:
    AI often struggles with basic common sense that humans take for granted. It may miss obvious contextual clues or make illogical leaps.
  2. Limited transfer learning:
    AI models typically excel only in the specific domains they're trained for. They sometimes struggle to apply knowledge from one area to another without extensive retraining.
  3. Inability to understand causality:
    Most AI systems can recognize patterns but have difficulty understanding cause-and-effect relationships.
  4. Lack of true understanding:
    AI can process and generate language, but it doesn't truly understand meaning or context the way humans do.
  5. Data dependency:
    AI models are only as good as the data they're trained on. They can perpetuate biases or inaccuracies present in their training data.
  6. Brittleness:
    AI systems can be fragile, performing poorly when faced with inputs slightly different from their training data.
  7. Explainability issues:
    Many advanced AI models, especially deep learning systems, operate as "black boxes," making it difficult to understand how they arrive at their decisions.
  8. High computational requirements:
    Large AI models often require significant computational resources, which can be expensive and energy-intensive.
  9. Lack of adaptability:
    Unlike humans, AI systems generally can't adapt quickly to new situations without retraining.
  10. Ethical decision-making:
    AI lacks the ability to make nuanced ethical judgments, which can be problematic in complex real-world scenarios.
  11. Emotional intelligence:
    While AI can recognize emotions to some extent, it lacks true emotional understanding and empathy.
  12. Creativity limitations:
    Although AI can generate content, its "creativity" is fundamentally based on recombining existing information rather than true novel creation.

Using AI as a tool, humans can elevate their performance and delegate unwanted tasks. But these limitations highlight the fact that while AI has made tremendous strides, it is still far from matching the full range of human cognitive abilities.

The Artificial General Intelligence (AGI) Misunderstanding 

Artificial General Intelligence (AGI) is a theoretical form of AI that would have the ability to understand, learn, and apply intelligence in a way that's comparable to human intelligence. This is often what frightens people. But consider how AGI differs from current AI systems:

  • Versatility:
    • AGI: Would be able to perform any intellectual task that a human can do
    • Current AI: Specialized for specific tasks (e.g., image recognition, language processing)
  • Learning ability:
    • AGI: Could learn and adapt to new situations without additional training
    • Current AI: Requires specific training for each new task or domain
  • Reasoning and problem-solving:
    • AGI: Would have general problem-solving skills applicable across domains
    • Current AI: Excels in predefined problem spaces but struggles with novel situations
  • Self-awareness:
    • AGI: Might possess self-awareness or consciousness (though this is debated)
    • Current AI: Lacks true self-awareness or understanding of its own existence
  • Transfer learning:
    • AGI: Could easily apply knowledge from one domain to another
    • Current AI: Limited ability to transfer learning between different tasks
  • Creativity and innovation:
    • AGI: Could potentially generate truly novel ideas and solutions
    • Current AI: Can create within learned patterns but struggles with true innovation
  • Common sense reasoning:
    • AGI: Would have human-like common sense understanding of the world
    • Current AI: Lacks intuitive understanding of everyday concepts
  • Emotional intelligence:
    • AGI: Might understand and respond to emotions in a human-like way
    • Current AI: Can recognize emotions but doesn't truly understand them

It is important to note that AGI remains a theoretical concept. All current AI systems, including the most advanced ones, are forms of narrow or specialized AI. The development of AGI, if possible, is likely many years or decades away and is the subject of much debate and research in the AI community.

POSSIBILITIES!

The possibilities for AI are truly boundless, but your approach to it will shape its impact on your life. Will you embrace AI thoughtfully and creatively, allowing it to become a powerful tool that elevates your work? Or will you see it as just another intimidating force changing the world around you?


Let's face it: AI is here to stay. Students are already using it, and soon it will be an expected skill. Your colleagues are leveraging AI to streamline their work, or they will be in the near future. Just as computers revolutionized how we process information and tackle daily tasks, AI is transforming the very fabric of how we live, work, and innovate.

We are all at a crossroads. How do you choose to view AI? Is it a source of fear or excitement? Does it feel overwhelming, or do you see it as an opportunity for gradual, step-by-step learning? Remember, it is not about choosing between humans or machines. The real magic happens when we combine our strengths.

So why not choose both? Together, humans and AI can achieve astounding things.


Learn More

Ready to learn more? Terrific! Check out iLearnNH's prompt engineering basics and prompt library for K-12 teachers.

If you have more time, we love the book Teaching with AI: A Practical Guide to a New Era of Human Learning by José Antonio Bowen and C. Edward Watson. It provides the clearest, most comprehensive overview of AI and its impacts on education we have seen. We owe many of our ideas on the AI Literacy Hub to the thoughts and writing of Bowen and Watson.

You can also check out the writing of Mike Kentz, an English teacher who is thinking innovatively and strategically about how to use AI in the classroom.