While sometimes feared, technology is often a perfect match for us humans. Technology, of course, depends on humans to create it. Once created and properly applied, technologies increase humans’ productivity and efficiency. We have seen this synergistic pairing from the advent of fire, through the invention of the wheel and countless other technologies, leading us to today and artificial intelligence (AI).
AI is a machine’s ability to perform cognitive functions like those of the human brain, including interacting with an external environment, perception, reasoning, problem solving, decision making and even creative production of prose, music, images and more. Many of us have interacted with AI-empowered machines for years, often without realizing it. Every verbal request to voice assistants like Siri, Alexa and Google Assistant is dependent on AI, as are many of the chatbots that pop up when navigating websites.
Applied AI is the application of artificial intelligence to everyday scenarios, and it is evolving at a rapid pace. Recently, generative AI tools like ChatGPT and GPT-4 have entered the mainstream and are having a profound impact on business and education alike, with the potential to increase productivity, quality and efficiency. However, when misapplied, those very same tools pose significant threats and erode trust.
What is Generative AI?
Generative AI learns from existing data to generate — at scale — new content that reflects the characteristics of the training data but does not repeat it. As a result, generative AI can produce a wide variety of new content, including images, videos, music, speech, text, software code and product designs.
How Does Generative AI Work?
Generative AI begins with a prompt. ChatGPT, for instance, responds to conversational text prompts to generate new content. Prompts for other solutions take the form of text, image, video, design, musical notes, or any input that the system processes.
Large language models (LLMs) then take over. Those LLMs, like the ones driving ChatGPT, have been trained through deep learning algorithms to recognize, generate, translate, and summarize enormous quantities of written language and textual data.
After ingesting the original prompt, the generative AI platform’s algorithms deliver the new content in any of several forms, including essays, mathematical solutions, or even “deep fake” videos. Recent new features in this quickly evolving space now allow users to refine initial results with feedback about the style, tone, voice, and other elements they want any subsequently generated content to reflect.
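The prompt-then-generate flow described above can be illustrated with a deliberately tiny sketch: a toy bigram model that learns word-to-word transitions from a small training corpus, then continues a one-word “prompt.” This is not how a real LLM works internally (LLMs use deep neural networks trained on vast text collections); the toy frequency table only illustrates the core idea of producing new text that reflects, without verbatim repeating, its training data.

```python
import random
from collections import defaultdict

# Toy "training corpus" -- a stand-in for the enormous text
# collections a real LLM is trained on.
corpus = (
    "the student asked a question and the teacher answered the question "
    "then the teacher asked the student a harder question"
).split()

# "Training": learn bigram transitions, i.e., which words follow which.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(prompt_word, length=8, seed=0):
    """Continue a one-word 'prompt' by sampling the learned transitions."""
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    word = prompt_word
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:  # dead end: no word ever followed this one
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Because the model samples from observed transitions, different seeds yield different continuations of the same prompt, loosely mirroring how generative AI produces varied output for identical prompts.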
Use Cases & Benefits of Generative AI in Schools
Generative AI exploded onto the scene in late 2022, and now, less than a year later, the potential applications in education appear endless. Perhaps the most promising aspect is the potential to create personalized, interactive learning content and experiences for each student.
A few of the many use case examples of generative AI in education include:
- Generate questions based on a student’s current level of understanding or achievement.
- Deliver real-time feedback and assessments, empowering students and teachers to identify strengths to leverage, opportunities to improve, and any additional support needed.
- Create personalized, adaptive lesson plans based on a student’s assessed level of aptitude and performance.
- Develop interactive learning activities, including games and simulations, to engage students across various subjects.
- Provide step-by-step hints and suggestions to accelerate a student’s problem-solving and learning.
- Generate email and other messages for teachers to send to parents.
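The first and third use cases above, generating questions matched to a student’s current level, can be sketched in miniature. A production system would prompt an LLM; here, hypothetical difficulty-banded templates (invented for illustration) stand in so the adaptive-selection logic is visible.

```python
import random

# Hypothetical question templates, banded by difficulty.
QUESTION_BANK = {
    "beginner": [
        "What is {a} + {b}?",
        "Which is larger, {a} or {b}?",
    ],
    "intermediate": [
        "What is {a} x {b}?",
        "If you have {a} groups of {b} items, how many items in total?",
    ],
    "advanced": [
        "What is the remainder when {a} * {b} is divided by 7?",
    ],
}

def pick_level(score):
    """Map an assessment score (0-100) to a difficulty band."""
    if score < 40:
        return "beginner"
    if score < 75:
        return "intermediate"
    return "advanced"

def generate_question(score, a, b):
    """Fill a template appropriate to the student's assessed level."""
    level = pick_level(score)
    template = random.choice(QUESTION_BANK[level])
    return level, template.format(a=a, b=b)

level, question = generate_question(score=55, a=6, b=7)
print(level, "->", question)
```

The design point is the feedback loop: each new assessment score re-selects the difficulty band, so the material adapts as the student progresses.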
With regard to a school district and its teaching resources, generative AI has the potential to act as a teacher’s assistant, delivering individualized, real-time feedback and expanding the teaching capabilities at every educational level.
Of course, the flipside of generative AI’s benefits to educators contains the technology’s limitations, challenges and perceived threats.
Limitations & Challenges of Generative AI In Education
Like all beneficial technologies, generative AI also arrives with its fair share of limitations, challenges and potential for abuse. A primary concern lies in the potential for bias in generated educational content, as a platform’s algorithms are only as good — as unbiased — as the data on which they are trained. As a result, education materials could be created that perpetuate or even amplify stereotypes or prejudices.
Additionally, while generative AI systems are immensely intelligent and powerful, they currently lack the creativity, originality, polish and that certain je ne sais quoi that teachers and students can deliver so well. Both students and teachers have the creativity to “think outside the box”; generative AI programs, however, are limited by the data they have been provided.
Then, of course, generative AI in education may widen the digital divide between students with and without ready access to technology, adversely impacting underserved and minority students. There is also the standing threat to students’ data security, privacy and online safety.
However, the biggest challenge of generative AI in education might be the threat of unethical use of the technology. Naturally, plagiarism, the practice of taking someone else’s work or ideas and passing them off as one’s own, is a dominant concern. But the challenge of generative AI as a crutch goes much deeper, with the threat of students relying too heavily on the technology to provide immediate answers stunting their ability to think critically and solve problems.
Moving Forward with Generative AI in Education
Without question, generative AI has the potential to transform education. However, as the adage from Marvel’s Spider-Man goes, “With great power comes great responsibility.” It is important for school districts to stay on the leading edge of technology-empowered education, and that responsibility to the community starts now.
First, school districts must draft policies based on a clear understanding of emerging technologies. Unfortunately, there is no central source of recommended best practices currently. Thus, school districts should collaborate across the entire K-12 spectrum to develop AI norms.
Policies will begin to shape teaching curricula, but districts must first train educators on what AI is, what it isn’t, and how it can be used effectively in learning. Administrators, teachers and students alike will require advanced levels of media literacy and digital citizenship to identify misused AI-generated content.
In that regard, there’s no time to waste. According to Walton Family Foundation research released in May 2023, 51 percent of teachers reported having used ChatGPT within months of its release, with 40 percent using it at least once a week. In that same study, 75 percent of students believed ChatGPT could help them learn faster, and 73 percent of teachers agreed.
Secondly, districts need to effectively monitor generative AI app usage to help decision makers understand how the solutions are being used and how to best incorporate them into the curriculum. Leading EdTech solutions like Lightspeed Filter™ and Lightspeed Digital Insight™ can provide those answers.
The proper EdTech solutions can not only track which generative AI apps teachers and students are using, but also enable schools to identify the most effective apps and make informed decisions about how to incorporate them into curricula or educator professional development training. Comprehensive solutions like Lightspeed Digital Insight also ensure schools can monitor generative AI app usage while protecting student data.
Finally, school districts must fully embrace teaching digital literacy and data ethics to all K-12 students. The AI genie is out of the bottle, and it’s never going back in. Curricula need to be developed and deployed that help students effectively and safely explore the space. Furthermore, their education needs to include how generative AI and other technology tools can be used to manipulate information, and how we all must be diligent in not only evaluating the accuracy and bias of AI data sources but also maintaining a high degree of integrity in ethically using AI’s output.
Conclusion
There’s little doubt generative AI will continue to blossom and eventually evolve into becoming a staple in education. But school districts, teachers and students must all learn how to use the technology responsibly and ethically. In addition to the establishment of new policies and teaching methods, IT infrastructure will need to be strengthened to ensure the technology is effectively integrated, while at the same time ensuring equity, privacy and safety.
For AI-related resources, be certain to visit the International Society for Technology in Education (ISTE) AI focus page. Additionally, please connect with Lightspeed Systems and share your AI insights!