Everything you need to know about artificial intelligence. Written in plain, simple language. No tech skills needed.
“A passive government cannot protect an uninformed citizenry. We can’t wait.” — Andrew Rasiej, AI Education for All
What artificial intelligence really is — and why it matters to you.
How AI finds patterns and makes decisions.
AI is already in hospitals, schools, and your phone.
Where AI gets its knowledge — and why that can be a problem.
How chatbots work — and what they might be doing to your brain.
Jobs, elections, and the race to lead the world in AI.
Scams, fake videos, cyberattacks, and how to protect yourself.
You’ve heard about AI everywhere. But what is it really? Let’s start from the beginning.
Have you ever asked your phone a question and gotten an answer? Or noticed that your email puts some messages in a “spam” folder? That is AI at work. AI stands for artificial intelligence. It means a computer program that can do things we used to think only humans could do.
AI can recognize your face. It can understand what you say. It can translate words from one language to another. It can even write stories or make pictures. AI is not magic. It is just very powerful math — applied to huge amounts of information.
AI is not a new idea. Scientists first talked about it in the 1950s. But for a long time, computers were not fast enough or powerful enough to make it work well. Then, two things happened. Computers got much faster. And the internet created more data — words, pictures, videos — than anyone knew what to do with. Those two things together unlocked a new era of AI.
Artificial Intelligence (AI) means a computer program that can do tasks that normally need human thinking — like understanding language, recognizing faces, or making decisions.
In 2020, an AI called AlphaFold solved a problem that scientists had been stuck on for 50 years. The problem was figuring out how tiny proteins in our bodies fold into their shapes. Why does that matter? Because if you know the shape, you can design medicines to fight diseases. AlphaFold’s discovery is already helping researchers find new treatments for cancer and other illnesses.
AI is also helping scientists understand climate change. It can study weather patterns faster than any human team. It can help farmers grow more food. It can help doctors find diseases earlier. These are not small things — they could change life on Earth.
Solved a 50-year science puzzle. Now helping create new medicines for cancer and other diseases.
AI helps scientists study weather and climate change much faster than before.
AI can test millions of drug ideas in hours — something that used to take years.
AI helps farmers grow more food using less water and fewer chemicals.
AI does not think like a person. It does not have feelings. It does not know what it is doing. It just finds patterns in lots and lots of data. When a chatbot answers your question, it is not “thinking.” It is making a very good guess based on patterns it has seen before.
AI is also not one single thing. The AI that picks your next Netflix show is completely different from the AI that helps a doctor read an X-ray. Saying “AI” without more detail is like saying “vehicle” — it could mean a bicycle or a rocket ship.
AI is already part of your daily life — whether you know it or not. It decides what news you see. It can affect whether you get a bank loan. It helps decide if someone gets released from jail. These are big decisions. And most people have no idea AI is involved.
When you understand AI, you can ask better questions. You can protect yourself. You can demand that the people in charge use AI fairly. That is why this course exists.
This course was inspired by Finland’s “Elements of AI” program, which taught AI basics to 1 out of every 100 Finnish citizens for free. This version is made for Americans — and for the world we live in today. You do not need any special knowledge to take it. Just curiosity.
1. What is artificial intelligence?
2. What did AlphaFold do?
3. Does AI think the same way a human does?
AI doesn’t follow a script. It learns from examples — millions of them.
Think about how you learned to recognize a dog. No one gave you a list of rules like “four legs, fur, barks.” You just saw lots of dogs. Over time, your brain got good at knowing a dog when it saw one. AI learns in a very similar way.
Instead of rules, AI looks at thousands or millions of examples. It finds patterns. Then it uses those patterns to figure out new things it has never seen before.
This is the most important thing to understand about AI: it finds patterns in data. That is it. Everything AI does comes back to this idea.
A spam filter looks at thousands of spam emails and finds patterns — certain words, certain senders, certain formats. Then when a new email arrives, it checks for those patterns. If they match, the email goes to spam.
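If you are curious what "checking for patterns" can look like inside a computer, here is a tiny sketch in the Python programming language. The word list and the cutoff are invented just for this illustration; a real spam filter learns its patterns from millions of emails instead of using a hand-written list.

```python
# A toy spam check: count how many known "spammy" patterns appear.
# Real filters learn these patterns from data; this word list is
# invented for illustration only.
SPAM_WORDS = {"winner", "free", "urgent", "prize", "click"}

def looks_like_spam(email_text, threshold=2):
    words = email_text.lower().split()
    # Count words that match a known spam pattern
    hits = sum(1 for word in words if word.strip(".,!?") in SPAM_WORDS)
    return hits >= threshold  # enough spammy patterns -> flag it

print(looks_like_spam("URGENT! Click now to claim your FREE prize!"))  # True
print(looks_like_spam("Lunch at noon tomorrow?"))                      # False
```

The real systems are enormously more sophisticated, but the basic move is the same: look for patterns, and sort based on what you find.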
A music app looks at songs you liked before. It finds patterns — what tempo, what mood, what instruments. Then it picks new songs that match those patterns.
Pattern recognition means finding things that repeat or go together in data. This is the foundation of almost everything AI does. The more data, the better the patterns — and the better the AI.
Most AI tasks fall into two types. The first is sorting (called classification). Is this email spam or not? Is this a photo of a cat or a dog? Is this tumor cancerous or not? The AI puts things into categories.
The second is guessing what comes next (called prediction). What song will this person want to hear? What is tomorrow’s weather? How much will this house sell for? The AI uses patterns from the past to predict the future.
Sort your emails into “real” and “spam” by finding patterns in millions of emails.
Predict what song you’ll want to hear next based on what you’ve liked before.
Sort medical scans to find signs of disease — sometimes better than doctors can.
Sort and predict dozens of times per second: What is that object? What will it do next?
Before AI can do anything useful, it has to be trained. Training means showing the AI thousands or millions of examples and letting it adjust until it gets good at the task.
Think of it like this. Imagine you are trying to get good at free throws in basketball. You shoot the ball. You miss. You adjust. You shoot again. After thousands of shots, you get much better. AI training works the same way — except it can do millions of “shots” in a matter of hours.
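The "shoot, miss, adjust" loop can even be written down in a few lines of code. Here is a toy version in Python, invented for illustration: the program has one adjustable setting and learns from three examples that the answer is always "2 times the input." Real AI systems do the same kind of adjusting, but with billions of settings at once.

```python
# A toy "training loop": guess, check the error, nudge the one
# adjustable setting (w) a little in the right direction, repeat.
examples = [(1, 2), (2, 4), (3, 6)]  # inputs where the answer is 2 * input

w = 0.0                  # the single adjustable setting, starting badly
for _ in range(100):     # 100 rounds of "shoot, miss, adjust"
    for x, answer in examples:
        guess = w * x
        error = guess - answer
        w -= 0.05 * error * x   # adjust a little toward less error

print(round(w, 2))  # close to 2.0 after training
```

After thousands of tiny adjustments, the setting lands very close to 2, even though nobody ever told the program the rule directly. That is training in miniature.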
AI can only work with patterns it has already seen. If something is truly new — something outside all its training — it can give you a very wrong answer and sound very confident doing it. This is a big deal.
1. What is the main thing AI does to solve problems?
2. What does it mean to “train” an AI?
3. A spam filter is an example of AI doing what?
AI is not coming someday. It is already here — in places that affect your life right now.
AI is not just something in tech companies or science labs. It is in your doctor’s office. It is in your school. It is in the courtroom. It is in your bank. And it is in the phone in your pocket. Most of the time, you don’t even know it is there.
Doctors use AI to look at X-rays and scans. Some AI systems can spot signs of cancer as well as — or better than — trained doctors. AI tools can also predict which patients in a hospital might get sicker during the night. That gives nurses time to help before a crisis happens.
AI is also speeding up the search for new medicines. It can look at millions of possible drug ideas in hours. That used to take scientists years.
Google’s DeepMind AI can detect more than 50 different eye diseases from a simple scan of the back of your eye. It works as well as the best eye doctors in the world. It is already being used in UK hospitals to help prevent blindness.
Learning apps use AI to figure out where each student is struggling. Then they change the lesson to help that student specifically. It is like having a tutor who is always paying attention — just for you.
But schools are also dealing with hard questions. If a student uses AI to write their essay, is that cheating? What happens to students’ writing skills if they never have to practice? These are real questions happening in real classrooms right now.
This is one of the most serious and controversial uses of AI. Some police departments use AI to help identify suspects from security camera footage. But there is a big problem: these systems make more mistakes when looking at Black faces than white faces.
In 2020, a man named Robert Williams was arrested in Detroit because an AI wrongly matched his face to a robbery suspect. He did not commit the crime. He spent 30 hours in jail. His was the first known case of a wrongful arrest caused by AI face recognition in the United States.
Robert Williams spent 30 hours in jail because an AI made a mistake. Face recognition AI is more likely to make errors with darker-skinned people. Several cities have now banned police from using it. But many still do.
Your phone already has many AI systems in it. The keyboard that guesses your next word. The camera that recognizes faces. The voice assistant. The map that reroutes you around traffic. All of these are AI.
Your social media feed is controlled by an AI algorithm. That algorithm is designed to keep you scrolling. It shows you things that make you feel strong emotions — because strong emotions keep you on the app longer. This is not an accident. It is a design choice.
Any time you hear about AI being used in a school, a hospital, a police department, or a courtroom, ask: Who built it? What data did it learn from? Has it been tested to make sure it treats everyone fairly? Who is responsible if it makes a mistake?
1. What happened to Robert Williams in Detroit?
2. Why does your social media feed show you things that make you feel strong emotions?
3. What is one problem with AI being used in health care?
AI learns from data. And the data it learns from can make it very smart — or very unfair.
AI does not come into the world knowing anything. It has to learn. And it learns from data — huge collections of words, pictures, numbers, or other information that humans provide. The quality of that data determines everything about how the AI behaves.
The most common way AI learns is called supervised learning. Here is how it works. You show the AI thousands of examples. Each example comes with the right answer. The AI looks at the examples, makes guesses, sees when it is wrong, and adjusts. After millions of adjustments, it gets very good.
Think of it like studying for a test. You practice problems. You check your answers. You fix your mistakes. After enough practice, you know the material. AI does the same thing — just much faster and with much more practice data.
Supervised learning means training an AI on labeled examples — data where the right answer is already known. The AI learns to predict the right answer for new cases it has never seen before.
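Here is a miniature illustration of that idea in Python. The "labeled examples" are made-up fruit weights; the program answers a new case by finding the closest labeled example and borrowing its answer. Real supervised learning is far more powerful, but the ingredients are the same: examples, labels, and a guess for something new.

```python
# Supervised learning in miniature: labeled examples first,
# then a guess for a new, unlabeled case. Data invented for illustration.
labeled_examples = [
    (150, "apple"), (170, "apple"), (160, "apple"),  # (weight in grams, label)
    (120, "lemon"), (110, "lemon"), (130, "lemon"),
]

def predict(weight):
    # Find the labeled example closest in weight to the new case
    # and borrow its label as the answer.
    closest = min(labeled_examples, key=lambda ex: abs(ex[0] - weight))
    return closest[1]

print(predict(155))  # "apple" -- the nearest labeled examples are apples
print(predict(115))  # "lemon"
```

Notice that the program never "understands" fruit. It only uses the labels humans gave it. If those labels were wrong or unfair, its answers would be too.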
Here is something really important. If the data an AI learns from is unfair, the AI will be unfair too. It will not question the data. It will just learn whatever pattern is there.
Amazon once built an AI to help pick job candidates. The AI looked at ten years of Amazon’s own hiring records to learn what a good employee looked like. The problem? Amazon had mostly hired men for years. So the AI learned to prefer men. It gave lower scores to resumes that mentioned women’s clubs or women’s colleges. Amazon found out and threw the whole system away.
Amazon built an AI to help hire people. It turned out the AI was giving worse scores to women — because it had learned from years of hiring records that mostly showed men being hired. They had to scrap the system completely.
You might think that the more data an AI uses, the better it gets. But that is not always true. If you give an AI a billion bad examples, it will learn bad patterns very well. More data helps — but only if the data is good and fair to begin with.
Here is another real example. A health care company built an AI to decide which patients needed extra care. The AI used medical spending as a clue. But Black patients often spent less on health care — not because they were healthier, but because they had faced more barriers to getting care. So the AI thought they were healthier than they were. It recommended less care for them. That was wrong — and harmful.
This is one of the most important ideas in this whole course. The people who choose what data an AI learns from have enormous power. They decide what the AI thinks is normal. They decide what the AI thinks is good or bad. Those are not just technical decisions — they are decisions about values and fairness.
That is why it matters that citizens understand AI. And why it matters who builds it — and how they are held accountable.
1. What is supervised learning?
2. Why did Amazon throw out its AI hiring tool?
3. Why does it matter who controls the data an AI learns from?
How ChatGPT and other AI chatbots work — and what they might be doing to your thinking.
In the last few years, a new kind of AI has become part of everyday life. It can write essays, answer questions, create pictures, and write computer code. This is called generative AI. The best-known examples are ChatGPT, Claude, and Gemini. To understand how they work, we need to start with something called a neural network.
Your brain is made up of billions of tiny cells called neurons. They send signals to each other to help you think, feel, and act. A neural network is a computer system that works in a similar way. It has layers of connected “nodes” that pass information back and forth.
The key thing about neural networks is that they can learn very complex patterns. A small neural network might have a few dozen nodes. The large AI models behind tools like ChatGPT have hundreds of billions of parameters — tiny adjustable settings that were tuned through training on enormous amounts of text from the internet, books, and other sources.
A large language model (LLM) is a type of AI trained on billions of words. It learns the patterns of language — what words tend to follow what other words — and uses those patterns to write text that sounds like a real person wrote it.
When you ask a chatbot a question, here is what happens. The AI takes your words and turns them into numbers. It runs those numbers through its billions of settings. Then it predicts — one word at a time — what the best next word in the answer would be. It keeps going, word by word, until it has written a full response.
Notice what it is doing: predicting words. It is not looking up facts. It is not checking a database. It is generating text based on patterns. This is why chatbots can sometimes give you very confident-sounding answers that are completely wrong.
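To make "predicting the next word" concrete, here is a miniature version in Python. It counts which word followed which in one invented sentence, then always guesses the most common follower. Real chatbots use vastly more data and far more subtle patterns, but the core move is the same.

```python
# A miniature "next word" predictor. It counts which word followed
# which in a tiny sample text, then guesses the most common follower.
from collections import Counter, defaultdict

sample_text = "the cat sat on the mat the cat ran on the grass"
words = sample_text.split()

followers = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    followers[current][following] += 1   # e.g. after "the", saw "cat" twice

def next_word(word):
    return followers[word].most_common(1)[0][0]  # most frequent follower

print(next_word("the"))  # "cat" -- it followed "the" more than any other word
```

The program has no idea what a cat is. It only knows which words tend to follow which. Scale that idea up by billions, and you have the basic engine behind a chatbot.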
This problem has a name: hallucination. It means the AI makes something up and presents it as if it were true. It is not lying on purpose. It just cannot always tell the difference between something it knows and something it is guessing.
AI chatbots can make things up and sound totally confident doing it. Always check important facts from a chatbot against another source. Never use AI as your only source for something that really matters.
Here is a question worth sitting with. When you use a chatbot to write an essay for you, what are you missing out on?
Professor Cal Newport at Georgetown University has written about what he calls cognitive fitness — the ability to think hard, focus, and reason well. He compares it to physical fitness. If you stop exercising, you get weaker. If you stop doing hard thinking, your brain can get weaker too.
Research studies in 2025 and 2026 found that people who use AI tools a lot tend to have lower critical thinking scores. Another study measured brain activity during writing and found that the brain “connected less” when AI was helping more.
Think about GPS. Since most people started using turn-by-turn navigation, many people have lost the ability to read a map or find their way without their phone. We gained convenience — but we lost a skill. Are we making the same trade with thinking?
Using AI tools is not bad. But using them to avoid hard thinking is a risk. The goal is to use AI to help you think better — not to replace your thinking entirely. That is a choice you can make consciously, once you know it is a choice.
1. What does it mean when an AI “hallucinates”?
2. How does a chatbot like ChatGPT create its answers?
3. What does “cognitive fitness” mean?
Jobs. Elections. A race with China. AI is reshaping the United States in big ways.
AI is not just a technology story. It is changing the economy. It is changing politics. It is changing America’s place in the world. The decisions we make about AI in the next few years will affect the country for decades.
This is the question most people ask first. Will AI take my job? The honest answer is: some jobs, yes. New jobs, also yes. The hard question is which jobs, whose jobs, and how quickly things change.
The World Economic Forum studied this question and published a report in 2025. They looked at jobs all over the world. Their finding: AI and related technologies will displace about 92 million jobs by 2030. But they will also create about 170 million new ones. That is a net gain of about 78 million jobs.
That sounds good. But here is the problem. The people whose jobs disappear are not always the same people who get the new jobs. A truck driver whose job is automated away cannot automatically become an AI engineer. Transitions like this are hard — especially for older workers and those without college degrees.
By 2030, AI may displace 92 million jobs worldwide — but create 170 million new ones. The challenge is the gap between the jobs lost and the skills needed for the jobs created.
AI is already being used to try to change how people vote. In January 2024, thousands of voters in New Hampshire got a phone call. The voice on the call sounded exactly like President Biden. It told them not to vote in the primary election. But it was fake — an AI had copied Biden’s voice without permission.
AI can also create fake videos of politicians saying things they never said. These are called deepfakes. They are getting harder and harder to detect. AI can also write thousands of fake social media posts and fake news articles in seconds — all designed to confuse or mislead people.
In January 2024, voters in New Hampshire got robocalls using a fake AI-generated voice of President Biden. The calls told them not to vote. This was AI being used to suppress votes in a real American election.
The United States and China are in a serious competition to be the world’s leader in AI. Both countries know that whoever leads in AI will have a big advantage — in business, in military power, and in global influence.
Right now, the US leads in AI research and has the most advanced AI companies. China is closing the gap fast. The US government has tried to slow China’s AI progress by blocking sales of advanced computer chips to China. China is working hard to build its own chips in response.
This competition matters for all Americans. It affects national security, jobs, and the rules that will govern AI around the world.
Some of the most important AI questions are still unanswered. Here are three big ones.
How powerful will AI get? Nobody knows for sure. AI has surprised experts many times by advancing faster than expected. It could keep doing that.
Could AI be dangerous to humanity? Some of the smartest AI scientists in the world are worried about this. Geoffrey Hinton won the Nobel Prize in Physics in 2024 for his work on neural networks — the technology that makes modern AI possible. He left Google in 2023 to be able to speak more freely about his concerns. He has said there is about a 10 to 20 percent chance that advanced AI could pose a serious danger to humanity in the coming decades. That is not a certainty. But it is not nothing either.
Who will benefit? AI is already making some companies and some countries very rich. Will those benefits spread to everyone — or just a few?
Geoffrey Hinton helped invent the technology that makes modern AI work. He won the Nobel Prize for it. He says there is a 10–20% chance advanced AI could be an existential threat to humanity. Even if you think that chance is low, it is worth taking seriously — especially coming from someone who helped build the technology.
1. According to the World Economic Forum, what will happen to jobs by 2030?
2. What happened in New Hampshire in January 2024?
3. What risk did Nobel Prize winner Geoffrey Hinton warn about?
Scams. Fake videos. Cyberattacks. AI can be a weapon. Here is what you need to know.
Everything in this course so far has been about AI as a tool. A powerful one, with both great benefits and serious risks. This chapter is about something more specific: what happens when bad people use AI to hurt you, to steal from you, or to attack the systems your country depends on.
AI has made scams much more dangerous. Old-fashioned scam emails were easy to spot because they were badly written. Now, AI can write perfect, convincing emails in seconds — personalized with your name and details that make them seem real.
Even scarier: AI can now copy a person’s voice from just a few seconds of audio. A video online. A voicemail. A phone call. Scammers are using cloned voices to call parents and grandparents, pretending to be their child or grandchild in trouble and asking for money right away.
The FBI has warned about this. These scams are working. People are losing thousands of dollars.
If you get an urgent call from a family member who needs money right away — hang up and call them back on a number you already know. AI voice cloning is now good enough that you may not be able to tell it is fake just by listening. Always verify through a different channel before sending money.
A deepfake is a fake video, photo, or audio clip made by AI. It can show a real person doing or saying something they never actually did. Five years ago, deepfakes were easy to spot. Today, the best ones are almost impossible to detect without special tools.
Deepfakes are being used to harass people, to spread political lies, and to commit fraud. In 2024, a worker at a company in Hong Kong was tricked into sending $25 million after a fake video call showed his boss and coworkers — all AI-generated deepfakes — telling him to transfer the money.
There is another problem deepfakes create. Once people know deepfakes exist, anyone can claim that a real video is fake. A politician caught doing something wrong can say “That video is a deepfake.” This makes it harder to hold anyone accountable for anything.
In early 2024, a worker in Hong Kong transferred $25 million to criminals after a deepfake video call showed fake versions of his boss and coworkers. He thought they were real. It was one of the biggest deepfake frauds ever recorded.
AI is making it much easier to attack computer systems. America’s power grid, hospitals, water systems, and financial networks all run on software. And AI can now find weaknesses in that software faster than any human hacker ever could.
In April 2026, a company called Anthropic made a remarkable announcement. They had built a very powerful AI model called Claude Mythos. During testing, something alarming happened. Without anyone asking it to, Mythos taught itself to hack. It found hidden security holes — called zero-day vulnerabilities — in almost every major computer operating system and web browser. It found thousands of these holes. And it developed working tools to break into systems using those holes — overnight.
Anthropic decided this AI was too dangerous to release to the public. Instead, they created a private group called Project Glasswing. This group included some of the biggest tech companies in the world — Amazon, Apple, Google, Microsoft, Cisco, and others. Their job: use a limited version of Mythos to find and patch those security holes before criminals or foreign governments could use them.
The Council on Foreign Relations, a highly respected foreign policy organization, called this “an inflection point for AI and global security.” The balance between cyber offense and defense has shifted. AI has changed the game.
Separately, Anthropic’s research team found that hackers working for North Korea were using AI to create fake identities, get jobs at American tech companies, and steal sensitive information. AI-powered foreign attacks on American systems are not something to worry about someday. They are happening now.
When the company that built an AI decides it is too dangerous to release — and some of the world’s biggest tech companies join together just to manage its risks — that is a serious moment. The question is: are our government and citizens paying enough attention?
Perhaps the biggest long-term threat is to democracy itself. AI can create unlimited fake news articles, fake social media accounts, and fake “grassroots movements.” It can instantly translate disinformation into any language. It can figure out exactly which lies will be most convincing to which groups of people — and deliver those lies at scale.
This does not mean democracy is doomed. But it means that the best defense is the same thing it has always been: citizens who can think critically, check sources, and recognize manipulation. Which is, not by accident, exactly what this course is trying to help with.
The best protection against AI being used against you is knowledge. Understanding how these tools work, what to look for, and how to verify what you see and hear is the single most powerful thing any citizen can do.
1. What should you do if you get an urgent call from a family member asking for money right away?
2. What is Project Glasswing?
3. What is the best protection against AI being used against you?
You just finished all seven chapters. That is not nothing. Most people never take the time to do this.
This course was not meant to make you afraid of AI. It was not meant to make you love it either. It was meant to give you enough knowledge to think for yourself about it.
AI is going to keep changing — fast. New things will happen that no course can predict. But if you know the basics — how it works, what it can and cannot do, how it can be used against you — you will be much better prepared than most people.
A passive government cannot protect an uninformed citizenry. You just became a more informed one.
Go think. The machines will still be here when you get back.
AI Education for All is a New York State initiative. It was proposed by Andrew Rasiej, a member of the New York City Office of Technology and Innovation AI Advisory Board. It is designed to be launched as a public-private partnership between the State of New York, philanthropic foundations, and corporate partners committed to the public interest.
This course was inspired by Finland’s “Elements of AI” initiative — the program that gave free AI education to 1% of Finland’s population. This version is written for American audiences and updated for the generative AI era.
This is the Easy-to-Read Edition, written at a 6th grade reading level. A standard edition is also available at the same URL for readers who want a more detailed version.
This course is free to use, share, and distribute for educational purposes.