By Glenn Kleiman

The new AI, generative artificial intelligence, has received enormous attention since OpenAI made ChatGPT freely available in November 2022. ChatGPT provides a simple-to-use chat interface that makes its impressive AI capabilities accessible to everyone. Within days, millions of people were using it, and it grew to 100 million active users within two months. Microsoft (using GPT), Google, Anthropic, Hugging Face, and other AI developers quickly released similar tools, so multiple generative AI language systems are now available, along with AI tools for generating images, videos, computer code, data visualizations, and more. Impressive examples of AI-generated materials quickly became available online.

The release of generative AI products triggered an avalanche of news stories, podcasts, videos, scholarly articles, and discussions about AI’s impact on how we work, learn, play, communicate, create, gather information, conduct research, make decisions, and accomplish all sorts of other tasks. Analyses of how AI is already changing many industries and occupations, and predictions of forthcoming changes, are published daily. There is also widespread attention to the limitations, unintended hazards, potential malevolent uses, and societal dangers of the powerful new AI tools, ranging from biased and inaccurate information to, some say, the possible extinction of the human species. Both high hopes and grave fears abound as AI technology advances and we recognize that this is just the beginning of the AI revolution.

The rapid advances, widespread use, and projections for the future of AI require educators to address its impact on teaching and learning. How does AI change what students need to learn? Can teachers use AI to be more effective with their students? Can schools mitigate the risks involved in students and teachers using AI? Can it help address the long-standing opportunity and achievement gaps that were exacerbated by the pandemic? Can it help improve teachers’ working conditions to better attract and retain dedicated teachers? Can it enable administrators to be more effective managers and communicators?

Following are eight things you need to know.

1. The new generative AI is fundamentally different from prior technologies

The principle that computers do exactly what they are programmed to do and nothing more has been taught since the earliest days of computers. Learning to program involves writing detailed step-by-step instructions in a language the computer can process.
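As a minimal illustration of this principle, here is a short Python function (an invented example, not from the original article) in which every step is spelled out explicitly; the computer carries out exactly these instructions and nothing more:

```python
# A classical program spells out every step; the computer does
# exactly what these instructions say and nothing more.
def average(numbers):
    total = 0
    for n in numbers:                # step 1: add up all the values
        total += n
    return total / len(numbers)      # step 2: divide by the count

print(average([80, 90, 100]))        # → 90.0
```

The program can compute averages because a person wrote the steps for computing averages; it cannot do anything it was not explicitly instructed to do.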

AI systems built before the new generative ones followed the same principle and are now called rules-based or classical AI. Each classical AI tool is programmed with a specific set of rules. For example, an AI tool to recognize pictures of animals would be programmed with rules that enable it to use size, shape, color, body parts (e.g., tails, wings, horns, fins, snouts), and many other distinguishing features to determine whether a picture shows a dog, cat, lion, tiger, penguin, eagle, or another animal. The success of the animal recognition tool would depend on the accuracy and completeness of the programmed rules. While classical AI systems produced many advances and important insights, rules-based picture recognition systems never came close to matching human abilities.
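The rules-based approach to the animal recognition task described above can be sketched as a chain of hand-written tests. This toy Python classifier is purely illustrative (the features and rules are invented for the example), but it shows the key limitation: the system can apply only the rules a person wrote, and fails on anything the rules do not cover.

```python
# A toy rules-based classifier: every distinguishing feature must be
# hand-coded, and the system can do nothing beyond these rules.
def classify_animal(features):
    if "wings" in features:
        if "swims" in features:
            return "penguin"
        return "eagle"
    if "mane" in features:
        return "lion"
    if "barks" in features:
        return "dog"
    return "unknown"         # any animal the rules don't cover

print(classify_animal({"wings", "swims"}))   # → penguin
print(classify_animal({"scales", "fins"}))   # → unknown
```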

The new generative AI systems, or models as they are called, work very differently. They are built with machine learning algorithms that extract patterns from enormous data sets. The extracted information is captured in digital neural networks that are somewhat modeled on the neurons and synapses of the human brain. Once the neural network is created, requests to the AI model trigger algorithms that use the information in the network to generate responses. That is, generative AI systems build a knowledge structure — the neural network — from the data on which they are trained and then use that knowledge to generate responses.

For example, in contrast to a classical rules-based animal picture recognition system, a generative AI model is trained on a large data set containing thousands of labeled pictures of each different animal. It creates a neural network that captures features that distinguish each category of animal, so it can then identify animals in new pictures. Generative systems’ ability to recognize pictures now matches that of humans. Most impressively, these systems can also generate new and novel pictures, as shown in the examples below of AI-generated pictures of a penguin and an eagle in the styles of Picasso and Da Vinci. The examples also show that generative AI models can create multiple outputs in response to a single prompt. Generative AI models often do things that surprise even their creators since they are not limited to tasks for which they have been programmed.
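The machine-learning approach described above can be sketched, at a drastically reduced scale, with a single perceptron: the classifier’s “rules” (its weights) are not written by hand but are adjusted from labeled examples. This is a toy illustration with invented features and data, not how production image models are built (those use deep networks trained on millions of pictures), but it shows learning from data rather than explicit programming.

```python
# A minimal learned classifier: weights are adjusted from labeled
# examples instead of being hand-coded as rules.
# Features: [has_wings, swims]; label 1 = penguin, 0 = eagle.
data = [([1, 1], 1), ([1, 0], 0), ([1, 1], 1), ([1, 0], 0)]

weights, bias = [0.0, 0.0], 0.0
for _ in range(10):                           # a few training passes
    for x, label in data:
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        err = label - pred                    # learn from each mistake
        weights = [w + err * xi for w, xi in zip(weights, x)]
        bias += err

def predict(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

print(predict([1, 1]))   # → 1 (classified as penguin)
print(predict([1, 0]))   # → 0 (classified as eagle)
```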


Created by prompting DALL-E to create a picture of a penguin and an eagle in the style of Picasso.

Created by prompting DALL-E to create a picture of a penguin and an eagle in the style of Leonardo da Vinci.

Generative AI is made possible through a combination of powerful computers, enormous training data sets, machine learning algorithms used to process the training data, digital neural networks encompassing the information captured during the training process, and large financial investments. The resulting models are called foundation models since many applications serving different purposes can be built upon them. The advent of these models can be seen as analogous to the advent of electricity, which makes so many different things possible, or of the cell phone, which provides the foundation for tools for navigation, shopping, finance, entertainment, media production, and so much more — just take a look at the apps on your phone.

The generative AI systems that most immediately impact education are called large language models (LLMs) since they are trained on text and generate text outputs. OpenAI’s GPT (used in both ChatGPT and Microsoft Bing), Google’s Bard, and Anthropic’s Claude are major examples. It is difficult to fathom the scale of these systems — GPT-3.5 was trained on the equivalent of three times the text in the Library of Congress, the training process required 1,024 high-speed computers working nonstop for more than a month, and the resulting neural network has 175 billion connections. GPT-4’s neural network is estimated to be about a thousand times larger, with 170 trillion connections.

In this article, “AI” refers to the new generative AI models; “classical” or “rules-based” refers to the prior AI systems.

2. Generative AI can accomplish many tasks, often with surprising proficiency

Generative AI can accomplish many tasks that previously required human intelligence, often vastly exceeding human capabilities. For example, AI systems have learned to play complex games like chess and Go at levels beyond those of the greatest human masters. LLMs have received enormous attention for their ability to write messages, blogs, reports, poems, songs, plays, jokes, books, and nearly every other form of text. They provide many examples of going beyond the capabilities that the creators of these systems expected, such as being able to mimic the styles of known authors (see the example in the next section). Other AI systems create novel pictures in response to text prompts; analyze X-ray and MRI images to help diagnose medical problems; make predictions and recommendations; solve scientific problems; and drive cars, navigating busy streets. The list of generative AI capabilities goes on and on.

Ness Labs provides a visual summary of the AI creativity landscape that shows logos of AI products grouped into linguistic, visual, musical, coding and other categories of creativity. Their graphic, shown below, provides a sense of the rich array of AI products available as of November 2022.


The Artificial Creativity Landscape, from Ness Labs, November 2022

A recent summary of applications of generative AI provides examples of AI generating images, videos, music, speech, text, and computer code. It then describes industry-specific applications for healthcare, banking, gaming, fashion, travel, marketing, human resources, customer service, and education. There is more information about educational applications in later sections of this article.

AI systems are now being integrated to extend their capabilities. For example, language, image, and speech systems are being integrated so all three modes can be used for inputs and outputs. That enables AI models to describe images in text and speech, create pictures from verbal descriptions, generate infographics and visual displays of data, and write and illustrate books, to give just a few examples. AI systems are also being connected to internet search tools, so they can incorporate up-to-date information from the web along with the knowledge in their neural networks. They are likewise being linked to powerful special-purpose applications, such as the Wolfram Alpha mathematics engine, that provide capabilities missing from the original model (ChatGPT on its own is poor at mathematics, for example).
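The tool-integration idea can be sketched as a simple dispatcher: requests a language model handles poorly (arithmetic, in this sketch) are routed to a dedicated tool instead. Everything here is hypothetical; `ask_model`, `calculator_tool`, and the crude routing rule are invented stand-ins, not a real API or a real routing method.

```python
# Hypothetical sketch of tool integration: route arithmetic requests
# to a calculator tool; send everything else to the language model.
ARITH_CHARS = set("0123456789+-*/(). ")

def calculator_tool(expression):
    # A real system would call a math engine such as Wolfram Alpha;
    # here we evaluate a restricted arithmetic expression instead.
    if not set(expression) <= ARITH_CHARS:
        raise ValueError("not a pure arithmetic expression")
    return eval(expression)

def ask_model(prompt):
    # Stand-in for a call to a language model (not a real API).
    return f"[model-generated text for: {prompt}]"

def answer(prompt):
    if set(prompt) <= ARITH_CHARS:       # crude routing rule
        return calculator_tool(prompt)
    return ask_model(prompt)

print(answer("12 * (3 + 4)"))            # → 84, from the tool
print(answer("write a haiku"))           # handled by the model stand-in
```

Real deployments use far more robust routing (the model itself decides when to call a tool), but the division of labor is the same: the model supplies language, the tool supplies exact computation.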

The widely used productivity tools from Microsoft and Google will soon have AI capability built in, so you will be able to ask your word processor and slide presentation program to draft, revise, critique, summarize, translate, and edit text as you write. This will provide AI that can serve as a co-author, editor, muse, or ghostwriter built into the tools students and teachers use daily, impacting how writing is done and taught.

The technology continues to develop, with more powerful and integrated systems on the horizon, so we have just begun to uncover what generative AI will be able to do.

3. Educators have both high hopes and grave fears about AI

The hopeful vision is that AI will enable teachers to do more of what only teachers can — build caring relationships with students; understand their needs, backgrounds, and cultures; guide and inspire their learning; and help prepare them for their futures. In this vision, AI partners with teachers to provide customized learning resources, digital tutors, and new learning experiences involving simulated, augmented, and virtual reality environments that engage students in active and collaborative learning. This view sees AI as helping to provide pathways to success for each student, enabling them to overcome barriers to learning and progress at their own pace. Since AI can serve all students, the positive vision sees it as helping to address the persistent opportunity and achievement gaps among student groups that have increased since the start of the pandemic. Overall, the hopeful vision is that AI will help drive educational reforms that prepare students for their further education, personal and occupational success, and constructive civic involvement in the global, digital, AI-permeated world.

The fearful vision is that AI will result in students spending more time interacting with onscreen and robotic digital agents rather than teachers, coaches, and peers. It sees the claims of personalizing learning for each student as a facade since AI will not really know the students, their communities, and their cultures. It sees AI-based educational resources as rife with biases, misinformation, over-simplification and formulaic, boring presentations. It sees students misusing AI to do their work for them, so they don’t engage in productive learning. It sees attempts to address students’ social and emotional needs with AI as misguided and likely more damaging than helpful. It sees AI increasing the educational equity gaps, with students in privileged areas taught to control AI and use it productively while students in less privileged areas experience AI-directed rote learning to prepare for tests. The negative vision also sees policymakers seeking to replace teachers with AI and increasing student-teacher ratios to save funding. Overall, the fearful vision is that AI will lead to the dehumanization of education and the loss of the guidance, connections, caring, insights, and inspiration that good teachers provide to their students.

The challenge for educators is to understand the potential and the dangers of AI and to design and implement its use in ways that positively impact students and improve the education system overall while avoiding the potential negative impacts.

4. Generative AI is very different from human intelligence

A starting point to distinguish AI and human intelligence is to consider how each learns. AI learns from the data it is fed, while humans learn from a wide range of experiences in the world. LLMs, for example, can be said to have only book (or text) knowledge in that all they “know” is derived from the enormous amount of text on which they are trained. Human learning is much richer, involving experience in the physical world, motivation to accomplish goals, modeling of other people, and interactions within families, communities and cultures, as well as learning from texts and other media. Human babies have innate abilities to learn to perceive the world, develop social interactions, and master spoken languages. Human learning is ongoing throughout one’s life. It involves developing an understanding of causes and effects, human emotions, a sense of self, empathy for others, moral principles, the ability to understand the many subtleties of human interactions, and so much more that makes humans human. AI will never match the richness of human experience, knowledge and intuition.


As an example of how human and machine intelligence differ, consider all you know and feel about people “breaking bread” together. An AI system can generate recipes modeled on information from its training texts about recipes, foods, taste, and nutrition. It can be asked to produce recipes that use certain ingredients, meet specific dietary restrictions or preferences, and satisfy other requirements. It can provide recipes but cannot make them in an actual kitchen or taste the results, so following an AI recipe can lead to a delicious dish or a disaster.

AI has never experienced the taste, aroma and texture of fine food, the joy of preparing a delicious meal and sharing it with family and friends, the relief of a hot bowl of soup on a cold night and a cold drink on a hot day, or the many other experiences we have each had with food. AI might produce reasonable-sounding language about all those things, but it simply mimics information it has absorbed without any human-like understanding. While AI can write a description of the family Thanksgiving dinner picture below, it cannot come close to capturing what humans will see and feel about it.

5. Generative AI has significant limitations and risks

Large language models such as ChatGPT and Google Bard have been the focus of intense discussions about the impact of AI on education. As described above, these models are trained on enormous amounts of text taken from the Internet and other digital text resources. The resulting models are able to provide responses to a vast array of requests, often generating surprisingly good responses. However, these models also have many limitations and pose risks that are especially concerning when used by students. The major limitations and risks are described below.
  • LLMs are trained at a certain time, and the information in the neural network is not updated regularly, so these models can provide outdated information and are unable to respond well when asked about events that occurred after they were trained. For example, GPT-4 can provide information about how the Hubble Telescope, which was launched in 1990, contributed to our understanding of the universe, but it is unable to provide information about discoveries with the Webb Telescope, which was launched in December 2021, several months after GPT was trained.
  • LLMs can fabricate quotes, statistics, and facts of all sorts, often in convincing ways. These false facts are called hallucinations in the AI world and reflect the fact that LLMs do not have built-in fact-checking capabilities. For example, I prompted GPT-3.5 to provide “some references by Glenn Kleiman about the Hubble telescope” and it immediately generated “The Hubble Space Telescope: A New Window on the Universe, by Glenn D. Kleiman, Scientific American, Vol. 262, 4, October 1990, pp. 34–41,” with a realistic title, journal, volume, date, and page numbers (while giving me a new middle initial). However, I have never written on that topic or published in that journal, nor has anyone else with the same name.
  • Many websites and social media included in LLM training data sets contain racist, sexist, homophobic, transphobic, xenophobic, conspiratorial, threatening, violent, and other types of biased and toxic information. Unfortunately, this information is then included in LLM neural networks and can appear in responses generated by the systems. The garbage in, garbage out principle applies to generative AI.
  • LLMs can provide outputs that lack cultural and linguistic sensitivity and inclusivity since so much of their training data was created by people in the category that has been labeled with the acronym WEIRD: Western, Educated, Industrialized, Rich and Democratic.
  • LLMs are not reliably attuned to the social and emotional states of users, so they can provide very poor responses to requests about social and emotional issues, including major problems such as depression and thoughts of suicide.
  • AI models can be used intentionally to deceive and misinform. They can quickly produce massive misinformation campaigns, producing variations of postings intended to deceive different audiences. This is, of course, a major concern in future elections. AI can also be used to create what are called deep fakes, making it appear, for example, that someone said things they never said or met with people who they have never been with.
  • Additional risks stem from the fact that unlike traditional computer programs, in which one can check the flow of information through the programmed instructions, generative AI lacks transparency and explainability. That is, we cannot view and check the data AI uses and the process it follows to generate responses, but can only evaluate the outputs it produces.
  • The use of generative AI tools by teachers and students raises concerns about privacy and security. Will the information entered into them be used to train future AI systems or for other commercial purposes? Will facial and voice recognition be used to track where people go and what they say? In education, this leads to concerns about meeting the COPPA (Children’s Online Privacy Protection Act) and FERPA (Family Educational Rights and Privacy Act) requirements for U.S. schools and relevant requirements in other countries.
  • A risk specific to education is that AI can be used in ways that hinder rather than enhance learning, as discussed further below.

The heart of the matter is that the core machine learning and neural network approach to creating generative AI models does not discriminate between constructive, accurate, and appropriate output and destructive, misleading and inappropriate output.

AI researchers and developers are well aware of all the limitations and risks described above and are working on approaches to mitigate them. These approaches, which are in various stages of development and implementation, include the following, among others:

  • LLMs are now being connected to the web, with Microsoft Bing (which uses GPT) being the first to do so. This enables the AI tool to combine updated information with that contained in the neural network, helping to remedy the lack of up-to-date information.
  • Data sets used to train new AI models are being curated to reduce undesirable content. In addition, filters are being added to check and refine LLM outputs before they are provided to users. These filters are created using a reinforcement learning approach, in which people rate a large number of outputs as appropriate or inappropriate to train the model to avoid problematic outputs. While curation and filtering can help mitigate the problems of inappropriate content, they will not solve all the problems. The process of curating content and creating filters raises difficult questions about who decides and what criteria are used to designate responses as unacceptable. In addition, the models can be coaxed to bypass the filters. One example is an LLM refusing to respond to a direct request about how to build a bomb, but then providing bomb-building instructions when the request was made in the context of writing a movie script about a terrorist act.
  • Work is underway to improve how AI addresses users’ social and emotional needs. This is a particular concern for robots being designed to interact with and help people.
  • There are also ongoing efforts to provide more transparency into how AI models generate their responses and how to better align responses with the intentions of the prompts provided to the model.
  • Legislation is being proposed in the U.S., the EU, and elsewhere to address the risks of AI, including toxic output and threats to privacy and security, and to hold AI developers responsible for the negative impacts of their products. Legislation is likely to require that AI-generated materials contain a “watermark” so they can be traced back to the AI system that produced them.
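The rating-based filtering described in the list above can be sketched very crudely: human ratings of example outputs train a scorer that screens new outputs before they reach users. This toy word-scoring sketch uses invented data and is far simpler than the reinforcement learning approaches actually used, but it illustrates the core idea that ratings, not hand-written rules, shape the filter.

```python
# Toy sketch: learn per-word scores from human-rated examples, then
# use the scores to screen new outputs (1 = acceptable, 0 = not).
from collections import defaultdict

rated = [("how to bake bread", 1), ("how to build a bomb", 0),
         ("write a poem about spring", 1), ("bomb making steps", 0)]

score = defaultdict(float)
for text, ok in rated:                 # ratings drive the scores
    for word in text.split():
        score[word] += 1 if ok else -1

def passes_filter(output):
    return sum(score[w] for w in output.split()) >= 0

print(passes_filter("bake a poem"))    # → True
print(passes_filter("build a bomb"))   # → False
```

As the surrounding text notes, such filters are imperfect: a rephrased request can score well even when the underlying intent is the same.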

The limitations and risks of AI are widely recognized, and efforts are underway in industry, government agencies, university research labs, and other organizations to address them. Nonetheless, educators need to be aware of these issues and consider how to protect students and teachers from the potential negative impacts of using AI tools.

6. Students can use generative AI in ways that enhance learning and in ways that hinder learning

The power and flexibility of generative AI models open many ways in which they can be used in learning activities. Some have been quickly recognized and explored by many teachers; others are currently being explored in university and industry research labs but will soon be built into products for classroom use. To consider just a few examples for education, generative AI can provide:


  • Personalized tutors that present information, provide ongoing assessments of students’ learning, and adjust the materials to meet the needs of each student;
  • Immediate constructive feedback to guide students, ranging from early elementary students as they read aloud to advanced students as they draft research reports;
  • Interactive characters in simulations and in augmented and virtual reality environments that enable students to actively explore historical events, other cultures, scientific phenomena, and more;
  • Support for students who have visual or auditory impairments by converting among text, speech, and sign language, and by describing visual materials or sounds in ways the individual can process;
  • Translations across languages to support students who are learning English and students who are learning other languages;
  • Guidance for students as they write, from brainstorming ideas, overcoming writer’s block, creating strong arguments, and drafting original stories to polishing and editing their writing;
  • Real-time analysis of interactions among groups of students and real-time suggestions to help facilitate the group process;
  • Powerful tools to enhance students’ creativity in art, photography, music, and dance.

7. AI can enable teachers to do more of what only teachers can do

AI can provide productive assistants for teachers. A few examples of how AI assistants can support teachers include the following:


  • Help teachers plan lessons and adapt lessons to meet the needs of individual students;
  • Create and grade assessments and analyze the results to inform instructional decisions;
  • Assist with required record-keeping and reporting;
  • Help communicate with families by drafting letters to parents, translating materials for non-English speaking families, and providing real-time translations during meetings with members of those families;
  • Monitor and facilitate classroom interactions, for example coaching the teacher to ask open questions and to engage all students;
  • Provide suggestions for solving learning, behavioral, classroom management, and other problems teachers encounter.

To use AI effectively, teachers need opportunities to learn about AI and how they can put it to effective use, guidance for implementing it in their classrooms, and support for trying new ways of teaching with technology. As part of their professional learning, they need to experience using AI to enhance their own learning, time with colleagues to consider how it can be used most effectively, and coaching to improve how they employ AI. A significant commitment is needed, in both preservice preparation and in-service professional learning programs, to enable teachers to make effective use of the potential of AI to enhance their students’ learning.

8. Educators need to embrace AI to prepare students for their futures

AI has infiltrated many aspects of our lives and powers much of what we do, from creating our media to developing new medicines to running our factories. Its uses will expand as AI agents continue to become our partners in education, work, play, and social and civic life. We are on the cusp of the AI age, and educators need to embrace the changes required to prepare students for their lives in the global, digital, AI world.

Since ChatGPT became freely available during the 2022–2023 school year, schools have been grappling with how it and other AI tools can be used and misused. Some first reacted by banning its use for school assignments but quickly found that banning AI was unproductive and unrealistic. Bans also present equity issues since some students have access to AI on their own computers while others do not.

Educators recognize that their schools need to quickly put in place guidelines for the appropriate and productive uses of AI. Developing and implementing AI guidelines is complex, and this is all so new that there are no well-established and tested models for schools and districts to build upon. Schools need to develop criteria and processes for selecting AI tools for use by teachers and students. They need clear guidance for what uses of AI are acceptable while providing flexibility for different grade levels, subject areas, and learning activities. They need processes to follow when it appears students may have misused AI to do their work for them. They need to consider how AI can support students with disabilities, students with learning differences, and students learning English. They need to incorporate AI in Individualized Education Programs (IEPs), school improvement plans, equity initiatives, and other plans and programs. Most importantly, they need to build well-designed professional learning opportunities into teachers’ schedules to support them in learning to use AI effectively.

The use of AI tools for writing provides a good example of the issues schools need to consider. Certainly, students inputting their assignments as prompts to an AI system and then submitting the output as their work constitutes AI-age plagiarism and is detrimental to learning. But can students use AI to help them conduct research, brainstorm ideas, produce initial outlines, or draft suggested starting paragraphs to help them overcome writer’s block? Can AI serve the role of a reviewer, much as a peer might, to provide constructive feedback at appropriate points during the writing, revising, and editing process? Do the guidelines differ for different grade levels, subject areas, and types of assignments? Is the real goal for students to become proficient writers without using modern writing tools or to become proficient writers who use those tools well? Is there a progression of learning writing basics without AI and then gradually learning to put AI tools to good use? These are some of the many questions educators need to address in the near term. Parallel questions arise across the curriculum since generative AI brings new capabilities to every field.

In addition to enhancing current practices with AI, educators and policymakers also need to begin considering more significant reforms. To a large extent, our schools continue to operate on a model designed more than 100 years ago to prepare students for the industrial age. It is time to develop new models to prepare students for their lives in the global, digital, AI age.

About the author

Glenn Kleiman is a Senior Advisor at the Stanford Graduate School of Education, where his work focuses on the potential of AI to enhance teaching and learning.