From Answers to Agency: Equity in AI-Integrated Math
If AI can generate the answer instantly, what does it mean for a student to actually know math? Explore six intentional, system-level actions to build independent math thinkers, especially for students furthest from opportunity.
AI can already solve most of the problems we assign to students in secondary math classrooms. That's not speculation; it's observable.
So here are the questions leaders should be asking:
If AI can generate the answer instantly, what does it mean for a student to actually know math? If AI can do the procedures, what should count as real proficiency?
These are not philosophical questions. They are system design questions.
Because the system is already under strain. Recent NAEP data show a troubling trend. From 2022 to 2024, students identified as economically disadvantaged saw declines in math performance. Both their 25th percentile scores and their average scores dropped. Meanwhile, students who are not economically disadvantaged improved, particularly at the higher end of performance, where 75th percentile scores grew. The achievement gap is widening.
Before we even consider AI’s potential effects, the data tells us that our current math systems are producing divergent outcomes. Students furthest from opportunity are losing ground while higher-performing, economically advantaged students continue to accelerate. In other words, the system is already stratifying mathematical success before AI even enters the equation.
As leaders consider AI integration, it is important to recognize that the technology sits inside this broader system design. AI does not create that dynamic. It exposes a pendulum we have been swinging for decades.
In math education, our priorities tend to swing toward conceptual understanding, then back to procedural fluency. Toward deeper thinking, then back to test prep. Toward rigor, then back to speed and correctness. Underneath those swings sits a belief system about what counts as strong mathematical fluency, what counts as strong instruction, and what we believe the purpose of learning actually is.
Too often, the pendulum lands on correctness. Being “good at math” becomes synonymous with getting the right answer quickly. Strong instruction becomes efficient delivery. Learning becomes finishing. And if correctness is the goal, AI wins. But if explanation, transfer, and conceptual fluency are the goal, then AI forces us to rethink what we value and what we incentivize.
For leaders committed to equity, this moment where a widening achievement gap meets the rapid emergence of generative AI calls for intentional, system-level action. It requires that we:
Align around AI as an ethical support structure.
Incentivize deep learning over surface correctness.
Normalize explicit conversations about ethical AI use.
Design homework that structures AI use thoughtfully.
Redefine what counts as mathematical strength.
Use unplugged modeling and error analysis to assess transfer.
This is not about banning AI or embracing it uncritically. It is about designing secondary math systems that build independent thinkers, especially for students furthest from opportunity.
1. Align around AI as an Ethical Support Structure
Before policies and procedures, there must be alignment.
Many students don’t struggle with math because they are incapable. They struggle because multi-step reasoning feels overwhelming. AI can break down steps, provide alternative representations, and slow down explanations. Used thoughtfully, that support can reduce anxiety.
But support via AI can easily slide into substitution.
Leaders must clearly communicate that AI’s role is scaffolding, not supplanting. Across classrooms, expectations should remain consistent:
Students must demonstrate independent transfer.
Written explanation is required.
Conceptual connections matter.
Students must be able to reason without the tool present.
AI can reduce fear without reducing rigor—but only if systems define its role explicitly.
2. Incentivize Deep Learning over Surface Correctness
Grading is one of the most powerful leadership levers because it conveys what is valued and what gets incentivized.
If grading systems reward speed and right answers, AI becomes an optimization tool. If grading systems reward reasoning, revision, and explanation, AI cannot replace the intellectual work that matters.
Leaders should ask:
What does our grading policy actually incentivize?
What do common assessments measure?
Do pacing expectations prioritize coverage over understanding?
Students need regular opportunities to demonstrate understanding in discussions, performance tasks, and collaborative modeling environments where explanation matters more than completion.
AI does not erode thinking. Incentive structures do.
3. Normalize Explicit Conversations about Ethical AI Use
Silence invites misuse.
Students need clarity about the ethical line between using AI to deepen learning and using it to supplant thinking, then presenting the output as their own work.
Schools should develop shared language and norms around:
When AI use strengthens understanding.
When AI use merely produces answers.
How to reflect on its impact on learning.
Teachers can regularly ask:
When was AI helpful for learning?
When was it helpful only for finishing?
What did it clarify?
Could you solve a similar problem independently?
The ethical line is clear: Is AI strengthening your thinking or replacing it?
Leaders play a critical role in ensuring these conversations are systemic, not isolated to individual classrooms.
4. Design Homework That Structures AI Use Thoughtfully
Rather than trying to create “AI-proof” assignments, schools should design with reality in mind.
Consider a multi-step linear equation:
3(2x − 5) + 4 = 2x + 7.
Instead of assigning ten nearly identical problems, reduce volume and increase variation. Change coefficients. Introduce fractions. Alter structure. Focus on flexibility over repetition.
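To make the multi-step load concrete, here is one possible solution path for the equation above, written out step by step (other valid orderings exist):

```latex
\begin{align*}
3(2x - 5) + 4 &= 2x + 7 \\
6x - 15 + 4 &= 2x + 7 && \text{distribute the 3} \\
6x - 11 &= 2x + 7 && \text{combine constants} \\
4x &= 18 && \text{collect like terms} \\
x &= \tfrac{9}{2} && \text{divide both sides by 4}
\end{align*}
```

Each line is a distinct place where a student can stumble, which is exactly why AI-generated error analysis can point to a specific misconception rather than just judging the final answer.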
Below is an example of a structured, student-facing AI-assisted task:
Attempt the equation independently.
Photograph your solution.
Optionally request AI error analysis.
Identify the misconception it surfaces.
Generate a targeted mini-quiz.
Complete it without assistance.
Explain why the correction now makes sense.
AI is not required. It is an option for guided inquiry. The student remains cognitively responsible. Professional learning for teachers should include task design that preserves rigor while acknowledging AI’s presence.
5. Redefine What Counts as Mathematical Strength
AI excels at procedural generation. It does not excel at meaning-making. So how must our definition of mathematical strength evolve when procedures are automated?
Take a percent discount problem:
A $120 jacket is discounted by 25%. What is the sale price?
AI computes instantly. But mathematical strength is not just arriving at the correct number. It includes:
Explaining 25% in multiple representations.
Justifying why multiplying by 0.75 works.
Comparing subtracting 25% versus multiplying by 75%.
Analyzing what happens if the discount is applied twice.
Detecting flawed reasoning.
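The equivalences and the double-discount trap in the list above can be made explicit with the actual arithmetic:

```latex
\begin{align*}
120 - 0.25 \times 120 &= 120 - 30 = 90 && \text{subtract the discount} \\
120 \times 0.75 &= 90 && \text{keep 75\% in one step} \\
120 \times 0.75^{2} &= 120 \times 0.5625 = 67.50 && \text{25\% off, applied twice} \\
120 \times 0.50 &= 60 && \text{the common wrong guess for two 25\% discounts}
\end{align*}
```

A student who can explain why two successive 25% discounts leave 56.25% of the price rather than 50% is demonstrating proportional reasoning that a single AI-generated answer does not reveal.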
If correctness alone defines proficiency, AI will always outperform students. But if systems prioritize explanation, structure, proportional reasoning, and transfer, then students must engage in deeper thinking that AI cannot complete for them.
This is not just a classroom shift. It is a systems decision. That definition of mathematical strength must be reflected in assessments, grading policies, and curriculum guidance.
Students still need conceptual fluency to ensure that technology functions as a scaffold rather than a substitute. We have navigated moments like this before, particularly with the integration of graphing calculators into math classrooms. Initially, the focus centered on whether students should use them at all. Over time, the more important shift was clarifying what students needed to understand even with the tool in hand, including interpreting graphs, connecting representations, and reasoning about structure rather than simply generating outputs.
The question now is whether we apply that same lesson. As AI takes on more procedural work, will we redesign our systems to prioritize mathematical agency, or will we allow the tool to further obscure what students actually understand?
6. Use Unplugged Modeling and Error Analysis to Assess Transfer
If AI supports homework, transfer must be assessed without it.
Instructional leaders should encourage classroom routines where students:
Annotate worked solutions.
Diagnose subtle reasoning errors.
Model problems visually by hand.
Reconstruct logic independently.
If a student can detect a faulty step, explain it, and rebuild the reasoning without assistance, then AI supported learning. If they cannot, then AI replaced it. Unplugged modeling is not anti-technology. It is verification of independence.
The Leadership Shift
This moment is not about banning AI or AI-proofing tasks. It is about clarifying what math education is for. AI does not weaken math education. It reveals whether we have been building thinkers or answer producers. The shift is not technological. It is structural and cultural.
It lives in grading policies.
In assessment design.
In pacing expectations.
In professional learning.
In the messages leaders send about what counts as strong math.
And for students furthest from opportunity, this shift carries even greater weight. In a system where gaps are already widening, AI will either accelerate inequity or expand access to deeper mathematical thinking. That outcome is not determined by the tool. It is determined by how we design around it.
So here is the leadership question:
If AI disappeared tomorrow, would our students still know how to think?
If students can explain why solutions make sense, even without the tool, then AI has expanded agency. If they cannot, the issue is not the technology. It is the system we designed.
And the good news is that systems can be redesigned.
Photo at top by Katerina Holmes.
