
AI's changing role in education: Monthly highlights of 2023

By Lulu Gao

Dec 28, 2023

Number 2023 surrounded by education- and AI-related icons

Illustrated by AI, written by humans.

January - Fear of cheating

Large language models were skyrocketing in quality of output and virality of adoption. ChatGPT set a new record for the fastest-growing user base, hitting an estimated 100 million monthly active users by the end of January.

This incited a fear of cheating in education, causing a string of ChatGPT bans in school districts in Seattle, LA, NYC, and more. Teachers also scrambled to try AI writing detectors like ZeroGPT, but could not reliably rein in use of AI.

ChatGPT banned in a school


February - AI is here to stay

A report by Global Market Insights projected the market for AI in education to reach $30 billion by 2032.

News media outlets also began covering not just the initial shock and concern over AI in education, but also some of the positive possibilities. These included how AI could boost teacher productivity by automating repetitive tasks, and how AI could tutor students, giving them personalized and immediate feedback.

Pros and cons scale about AI

March - The Generative AI Race Continues; GPT-4 is released

Microsoft added DALL-E to Bing, which had just integrated ChatGPT in February. Google launched Bard, its own AI chatbot. Anthropic released Claude, a competitor to ChatGPT. OpenAI unveiled GPT-4, a model with more advanced reasoning capabilities, a longer context length, and more varied input types compared to GPT-3.5, which had brought OpenAI to the world stage just a few months prior.

Logo compilation of Microsoft, Google, Claude, and ChatGPT

This burst of development in general AI models foreshadowed the growth in AI tools for education. As the landscape of AI tools for education grew increasingly crowded, educators needed to discern what exactly they wanted out of an AI learning experience and how to achieve it.

April - Fear turns into curiosity

Google searches for the phrase “personalized learning AI” peaked in April. Educators were starting to consider how AI could be the key to solving age-old education challenges like Bloom’s 2-sigma problem.

Google Trends screenshot showing spike in searches for "personalized learning AI" in April 2023.

Italy banned and then unbanned ChatGPT within the span of this single month. The original ban was instated over concerns about users’ control of their data privacy and age verification for people attempting to access the platform. Student data privacy would prove to be a continuing concern for educators and their AI strategy.

Late April is also when the first version of Flint was released. Over 60% of startups from the Y Combinator Summer ‘23 cohort were working on some form of AI, but only two companies focused on using AI to improve education:

We at Flint had built an AI copilot for teachers—a tool that any K-12 teacher could use to create classroom activities, worksheets, or lesson plans.

Initial Flint landing page screenshot showing the software's focus on generating teaching materials.

May - Guidelines for AI in schools

The U.S. Department of Education released its first-ever report on AI in education, in which officials shared insights and recommendations for integrating AI into education. These included:

  1. Emphasize Humans-in-the-Loop

  2. Align AI Models to a Shared Vision for Education

  3. Design AI Using Modern Learning Principles

  4. Prioritize Strengthening Trust

  5. Inform and Involve Educators

  6. Focus R&D on Addressing Context and Enhancing Trust and Safety

  7. Develop Education-specific Guidelines and Guardrails

June - Talks of regulating AI

President Biden met with leaders in the AI ethics and development space to discuss how best to move forward with developing safe, secure, and transparent AI technology. The final voluntary agreements are outlined in this fact sheet from the White House and were agreed upon by leading AI companies Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

Image of President Joe Biden at a podium to speak to a room of people

Commitments these companies made include:

  1. “Sharing information across the industry and with governments, civil society, and academia on managing AI risks”

  2. “Developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system”

  3. “Prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy”

The transparency called for in #1 may conflict with these companies’ profit motives, #2 remains an unsolved engineering problem, and the incentive to fully commit to #3 is unclear, since such research could threaten a company’s reputation. As all of these commitments are voluntary, we’ll have to keep an eye on how legislation progresses and how transparent companies are willing to be with their audiences.

July - AI exploration led by teachers, not students

Quizlet’s research on the state of AI in education found teachers outpacing students in AI usage and optimism. Beyond asking about how effective AI is for teaching and studying, the study also covered how AI might help students rebound post-pandemic and how AI can bring about more equity in education quality for students regardless of background. Teacher views on all these topics were generally more hopeful.

To explore how AI can foster deeper learning and push the current boundaries of education, we refocused Flint to address tutoring. The new and improved Flint aimed to help schools embrace AI. Our platform was built to create new modes of teaching through flexible, interactive, and personalized AI learning experiences, not just to help with existing teacher tasks.

New Flint landing page with messaging about helping schools embrace AI

August - Enterprise and upskilling

OpenAI launched ChatGPT Enterprise—a version of ChatGPT catered to businesses that would address the privacy and security qualms that had kept companies from allowing use of AI in the workplace. However, the waitlist is thousands of companies long, which has kept many businesses—educational institutions included—from moving forward with exploring AI solutions.

Results from a survey by IBM gave insights into AI’s projected impact on the workforce. The data showed a large scope of disruption across industries and a growing need for upskilling to leverage AI.

September - Turns out, cheating concerns were overblown

The International Journal for Educational Integrity published research showing how rates of cheating didn’t increase because of AI. This was not only reassuring for educators, but also took the conversation to the next step: how can AI be used to change assessment to be inherently harder to plagiarize? As we discussed in a previous blog post, AI detection is untrustworthy and ethically questionable, so the new data on overblown cheating concerns allowed for more focus on how AI can revolutionize learning.

Student high-fiving an AI robot in a classroom

October - First U.S. AI executive order

President Joe Biden signed a new executive order mandating safety, equity, and civil rights assessments, along with research into how AI impacts the labor market. There were eight core components to the order:

  1. New safety and security standards

  2. Consumer privacy protection

  3. Equity and civil rights guidance

  4. Consumer protection

  5. Support for labor market implications

  6. Innovation and competition promotion

  7. International partnerships

  8. Guidance for federal agencies’ use and procurement of AI

This set of guidelines and mandates was deemed “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust” and built on the voluntary commitments from June.

November - The Altman Affair

Sam Altman, the CEO of OpenAI, was fired, hired by Microsoft, and rehired as CEO of OpenAI, all in less than a week. Out of OpenAI’s 770 employees, 702 signed a letter to the board threatening to leave if Altman was not reinstated. The drama reflected the strong opinions around AI’s mission of benefiting the world versus the pursuit of profit.

Sam Altman speaking and sitting in a chair

December - Google Gemini challenges ChatGPT! Or does it…?

Google launched Gemini, a direct competitor to ChatGPT. Gemini was claimed to outperform ChatGPT on many benchmarks, including massive multitask language understanding (MMLU). Google’s explanation was that Gemini was trained on multimodal data, meaning it can natively and simultaneously analyze text, images, video, audio, and code. This new type of AI reasoning, branching beyond text, will continue to open doors for how AI can be applied to education.

However, the authenticity and novelty of Gemini’s demo video were questioned in an op-ed from Bloomberg. Google soon admitted that the video “was edited and did not involve an actual spoken prompt”. Furthermore, the improved performance that Gemini’s launch so heavily referenced was really only a few percentage points better than GPT-4, which at the time had been released nine months prior.

Whether Google, OpenAI, or a different AI player will pull ahead in the race for LLM supremacy remains to be seen. What we know at Flint is that we’ll be watching intently, working to incorporate the best technology into our own platform and bring the power of AI into the hands of teachers and students.

Our Predictions for 2024

2023 proved that generative AI has great potential to disrupt the field of education. This year also showed us that AI technology can and does move at breakneck speed. The big question will always be: “What’s next?” After reflecting on the past 12 months, we are excited to share our predictions for the next 12.

How AI models will improve

Scientists building an AI robot together
  • Concerns about hallucinations will basically disappear. Sam Altman has said OpenAI plans to iron out hallucinations within the next 1-2 years. There’s doubt that they will ever fully disappear, but as models improve and people learn how to manage and look for hallucinations, concern about them will soon subside.

  • “Prompt engineering” will be reserved for edge cases (e.g. advanced custom workflows). Consumer AI models will be so good that they’ll produce great results with zero prompt engineering. You can already see the difference between GPT-4 and GPT-3.5: GPT-4 can create stellar results from plain-language prompts.

  • AI will be accurate enough to generate images that can be used academically for STEM use cases like biology and physics diagrams. This will unlock even more personalized learning potential and a smoother experience for both teachers and students.

  • AI will be able to remember more info accurately. Context windows have gone from 8k to 100k within this year for most models and this is likely to improve even further. This means AI tools will be able to better personalize to each student. Imagine an AI model being able to remember how a student struggled with a writing topic previously, or where in math they’ve needed extra help over the course of the semester—an AI tutor could augment learning over the course of units, classes, and school years.

  • Speech will become cemented as a major way we interact with AI. Verbal conversations with AI will feel almost as natural as speaking to a human. Even though most people are still using text-based interactions right now, we at Flint have seen a rise in interest in and use of verbal assignments. Speech-to-text transcription is getting insanely good—with some models, like OpenAI’s Whisper, performing better than humans. The improvement in quality, combined with an increase in speed, means speech-to-text latency will be less and less of an issue. Plus, in many cases, speech assignments are less easily plagiarized.
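To make the context-window prediction above concrete, here’s a rough sketch of the kind of budgeting an AI tutor has to do when deciding how much student history fits in its prompt. Everything here is our own illustration, not any real API: the `trim_history` helper is hypothetical, and the ~4-characters-per-token estimate is just a common rule of thumb (real models expose exact tokenizers).

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)


def trim_history(notes: list[str], context_window: int, reserved: int = 1000) -> list[str]:
    """Keep the most recent notes that fit in the context window,
    reserving some room for the current question and the reply."""
    budget = context_window - reserved
    kept: list[str] = []
    for note in reversed(notes):  # walk from newest to oldest
        cost = estimate_tokens(note)
        if cost > budget:
            break  # out of room; older notes get dropped
        budget -= cost
        kept.append(note)
    return list(reversed(kept))  # restore chronological order


# 39 weeks of (made-up) tutoring notes, each a few hundred tokens long.
notes = [f"Week {i}: " + "struggled with fractions. " * 31 for i in range(1, 40)]

small = trim_history(notes, context_window=8_000)    # 8k-era window
large = trim_history(notes, context_window=100_000)  # 100k-era window
print(len(small), len(large))  # prints "34 39"
```

With an 8k window the tutor has to forget the earliest weeks, while a 100k window retains the whole semester—which is exactly why longer context enables the per-student personalization described above.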

How AI will be used by students, educators, and parents

Students, educators, and parents huddled around AI technology in a classroom

As more people use AI for learning and teaching, it’ll become a total no-brainer for educational institutions to create precise AI adoption protocols. There will be a few elements to this change:

  • Tutoring centers outside and within schools will need to use AI to become even more valuable to each student and flexible enough to deliver value to more students. If these centers fail to embrace AI and prove their unique value, many parents and students will simply opt for self-learning with AI tools.

  • 1-1 tutoring will be able to scale to any kid who needs it. This will really take off when AI is truly as good as a human tutor, which we believe will happen in many subjects in the next 12 months.

  • AI will change how teachers teach, on a massive scale. We’ve seen at Flint how AI’s value isn’t in saving teachers time, but in enabling them to spend more 1-1 time with students and rotate as a facilitator while their students get personalized practice and assessment from AI. Assessment will shift toward deeper learning skills like critical thinking and creativity, and teachers will act more as guides for learning while AI handles concept and fact recitation.

Spark AI-powered learning at your school.

In your demo, we’ll start by learning about your school’s AI strategy and then explore how Flint can help.
