Math Survey Results Shared: Mostly Aligned, Some Not So Much

by Sugi Sorensen
March 28, 2026

In an email dated February 26, 2026, LCUSD Associate Superintendent for Student Services James Cartnal shared a link to Math Instructional Priorities Survey: La Cañada Unified School District, a detailed report on the results of the LCUSD Math Instructional Priorities Survey prepared by the district’s consultant partner, Hanover Research. Hanover administered the survey to LCUSD K-8 parents, LCUSD staff who teach or support math instruction, and LCHS 7/8 students over the period January 12-23, 2026.

Recall that the ostensible purpose of the survey was to inform Hanover’s creation of a high-quality instructional materials rubric, which in turn was supposed to be used by the district as the primary instrument for screening potential math textbook curricula for its current elementary math instructional materials adoption. Things didn’t exactly go to plan: the district selected six semi-finalist curricula in December 2025, then whittled that list down to four finalists before the rubric was ever created.

Overview of Survey Results

Cartnal had initially reported at the second elementary math adoption parent meeting on January 28, 2026, that 587 students, 536 families, and 66 staff members, or 1,189 total, had responded to the survey:

Figure 1: Slide 20 from Jim Cartnal’s presentation slides at the LCUSD K-8 Math Adoption – Parent Informational Meeting #2 on 01/28/26.

The Hanover report, however, states that there were 1,342 respondents, with the parent count at 698.1 What accounts for that 162-parent discrepancy is never explained. One clue to the source may be found in Hanover’s Methodology slide (on page 5), which includes the note that “After data collection, Hanover identified and removed low-quality respondents.” Yet Hanover never explains what constituted “low-quality,” nor do they list the counts removed from any survey question. We have reverse-engineered those counts and address them in the Methodological Problems section below.

Survey count issues aside, the longer Hanover survey report comports with Cartnal’s ‘rapid results’ slides from the January 28th second parent informational meeting, summarized on slide 21:

Figure 2: Slide 21 from J. Cartnal’s presentation to parents at the 01/28/26 meeting.

Hanover’s Recommendations align with Cartnal’s expressed key findings, though they are framed to elevate conceptual goals over practical ones. More on that in a minute. Hanover listed three recommendations to LCUSD for its elementary math curriculum adoption:

  1. Balance an emphasis on conceptual understanding with a solid foundation of procedural fluency.
  2. Prioritize resources that support explicit teacher modeling followed by guided student practice to help students gain confidence.
  3. Incorporate practices and materials that help students develop a deeper understanding of mathematics and explain their thinking without overemphasizing formal mathematical communication.2

Areas of Alignment

The good news is parents and staff are mostly aligned on the salient instructional priorities that should drive the current elementary math curriculum selection, at least as far as those priorities were presented in the survey itself:

  • Both parents (81%) and staff (80%) selected Developing problem-solving and critical thinking as the highest priority in response to the question, “Which of the following should be top priorities for math instruction in LCUSD? Please select up to five options.”
  • Building strong conceptual understanding of mathematical ideas was parents’ second-highest priority (64%) and staff’s third-highest priority (77%) on the same question.
  • Building procedural fluency (like math facts) and accuracy was staff’s second-highest priority (78%) and parents’ fourth-highest priority (46%).
  • Parents, students, and staff all highly value explicit instruction by teachers followed by guided student practice in answer to the question “How important are each of the following instructional practices for supporting effective math instruction?”:
Figure 3: Slide 34 of Hanover’s summary report on Math Instructional Priorities Survey for La Cañada Unified School District (February 2026).
  • The last area of alignment between parents and staff was a distinct lack of prioritization of inquiry-oriented (i.e., constructivist) instructional practices like promoting student collaboration and discussion (only 7% of parents and 19% of staff prioritized it) and digital online delivery of instruction (11% of parents and only 3% of staff prioritized it). This shared preference for paper-and-pencil over digital delivery showed up in responses to the question “Which of the following should be prioritized for supporting effective math instruction in LCUSD?”, where students, parents, and staff all ranked paper-and-pencil practice highest among the eight options:
Figure 4: Slide 27 from Hanover’s summary report on Math Instructional Priorities Survey for La Cañada Unified School District (February 2026).

To summarize, parents and staff both want a curriculum that develops mathematical problem-solving ability, conceptual understanding, and procedural fluency through explicit teacher-led instruction and paper-and-pencil practice.

The Conceptual Understanding Confusion

Hanover’s elevation of conceptual understanding as its top recommendation to LCUSD stems from two distinct problems, both avoidable had the survey been designed properly. First, neither Cartnal nor Hanover properly framed the two fundamentally different approaches to mathematics instruction that undergird how all math curricula are created. Second, in the later priority questions the survey mixed vague, aspirational goals (like conceptual understanding and encouraging students to see themselves as capable math learners) into the same response lists as specific pedagogical and diagnostic items. That confusion wound up artificially elevating the vague, motherhood-and-apple-pie conceptual and student self-efficacy items above the useful practical ones.

The first issue is that the organizing axis for all math curriculum design over the past four decades is whether instruction relies on teacher-led direct explicit instruction or student-centered inquiry learning. Every curriculum that LCUSD is evaluating sits somewhere on this continuum, and where it sits determines everything about how students experience math instruction: whether the teacher models procedures before students practice them or whether students are asked to discover strategies and procedures through exploration; whether standard algorithms are taught early and practiced to fluency or treated as one option among many; whether lessons follow the ‘I do, we do, you do’ gradual release instructional sequence that LCHS Math Department Chair Juan Nuñez so eloquently described at the second parent informational meeting on January 28th, 2026, or a “here’s a challenging word problem, figure it out on your own” productive struggle session. The entire survey could have been vastly simplified and made more informative by simply asking respondents some version of: “Should math instruction primarily involve the teacher showing students how to solve problems followed by structured practice, or should it primarily involve students exploring problems and constructing their own understanding?”

To understand the fundamental differences between these two competing frameworks, consider the following concise comparison:

Embedded slides: “Math Instruction: Two Competing Frameworks,” a presentation contrasting Constructivism and Explicit Instruction, with historical context on the ‘Math Wars’ and tables comparing the two frameworks on theoretical basis, locus of instructional guidance, teacher role, the relationship between conceptual and procedural understanding, views on memorization, worked examples, the role of practice, the importance of accuracy, standard algorithms, timed tests, and technology.

Everyday Mathematics is a constructivist math curriculum, one of the three most prominent examples of inquiry-learning curricula to emerge from the reform-math oriented 1989 NCTM Curriculum Standards.3 Parents understood this in 2016 when they stood in firm opposition to Everyday Mathematics’ adoption by LCUSD.

Math In Focus, in contrast, is a traditional, explicit-instruction-oriented math curriculum. Parents understood this well in 2016 too, along with the fact that pairing these two fundamentally different math curricula back-to-back, as LCUSD was proposing to do, would lead to a train wreck at the 6th-grade interface between Everyday Mathematics and Math In Focus, an avoidable calamity LCMP has written about previously and one LCUSD is presently trying to rectify.

Hanover never framed the distinction nor asked the question about which approach survey respondents preferred. And because they never asked it, everything downstream became ambiguous.

Consider what happened when the survey asked about conceptual understanding without specifying an instructional framework. A parent who wants explicit instruction can endorse conceptual understanding because they understand it the way a mathematician or cognitive scientist does — as the deep knowledge of why procedures work that comes from careful teacher explanation and worked examples. A staff member trained in constructivist pedagogy endorses the same phrase meaning something entirely different — student-led exploration, discovery of multiple strategies, sense-making through productive struggle. Both groups checked the same box; the survey recorded agreement; Hanover reported consensus. But there is no consensus. There are two groups using the same words to describe mutually incompatible instructional approaches.

This isn’t a hypothetical ambiguity. It was visible in the survey data itself. Consider the internal contradiction in the combined results: 93% of parents rated “demonstrating how to approach and solve math problems” as very or extremely important (slide 34), their highest-rated instructional practice for supporting math instruction. That is explicit modeling, a hallmark of direct instruction. And 91% of teachers rated “teachers first explain and demonstrate how to do a math procedure, then support students as they practice so they can learn the steps confidently” as their top instructional practice on the same survey question.

But 67% of parents and 63% of staff also rated “teaching math through exploration” as very or extremely important in response to the question on how to facilitate effective math instruction (slide 41). That’s the hallmark of constructivist inquiry learning. A coherent respondent can’t hold both of these together as high priorities, because they compete directly with each other for limited classroom instructional time. The survey’s failure to force a choice between these approaches meant it generated a picture of apparent agreement on everything, which in turn allowed Hanover to write recommendations that sound balanced while being effectively ambiguous as curriculum selection guidance.

When Hanover wrote in its second major Recommendation – “prioritize resources that support explicit teacher modeling followed by guided student practice,”4 they were describing the core sequence of explicit/direct instruction. But they never named it. They didn’t say direct instruction. They didn’t say explicit instruction. They didn’t cite Rosenshine or Haring and Eaton’s Instructional Hierarchy. They didn’t cite the IES Practice Guides that recommend explicit instruction for struggling students. They didn’t cite Project Follow Through. They didn’t cite Kirschner, Sweller, and Clark (2006).5 They described the practice without naming the pedagogical framework, which means the recommendation carries no weight as a guide for distinguishing among curricula.

And this matters enormously for the adoption. If you know a curriculum is built on direct instruction principles, you can predict that it will feature teacher-led lessons, worked examples, scaffolded practice, systematic review, and explicit teaching of standard algorithms. If you know it’s built on inquiry/constructivist principles, you can predict student exploration, multiple strategies, productive struggle, collaborative problem-solving, and delayed introduction of standard algorithms. These are observable, verifiable design features. But because the survey never established this framework, the adoption committee has no systematic way to sort the candidates. They will likely end up evaluating individual features in isolation — “Does it have manipulatives? Does it mention fluency? Does it have students explain their reasoning?” — rather than evaluating the overall instructional architecture.

This is exactly how the district ended up with the incoherent list of six semi-finalists we examined in a previous article. When the committee window-shopped curricula at LACOE’s publishers’ instructional materials fair on December 12, 2025 without a clear framework for distinguishing instructional approaches, the result was essentially random selection — a grab-bag that happens to include three inquiry programs, two explicit instruction programs, and one hybrid, not because anyone deliberately chose that mix, but because nobody had a principled basis for recognizing the differences. The district entourage that attended that publishers’ fair, absent a coherent and well-designed instructional materials rubric, absent the framing of constructivism versus explicit instruction, couldn’t distinguish between a program that says it develops fluency through constructivist exploration and a program that actually develops fluency through systematic practice, because they had never been taught the vocabulary or the framework to make that distinction.

Parent and Staff Misalignment on Priorities

Hanover acknowledged parent-staff divergences in math instructional priorities but downplayed them.6 The survey data actually reveal dramatic, statistically significant parent-staff splits that Hanover noted in passing but that never informed its recommendations:

Figure: “Parent vs. Staff: Top Priorities for LCUSD Math Instruction,” charting the largest parent-staff gaps, which reveal fundamentally different visions of what math education is for. Question: “Which of the following should be top priorities for math instruction in LCUSD? Please select up to five options.” Hanover survey, slides 24–25; parents n=668, staff n=69. Source: Hanover Research, “Math Instructional Priorities Survey – LCUSD,” February 2026. Analysis by the Institute for Mathematics Instruction.

As can be seen above, the parent-staff splits on top instructional priorities were stark. Priorities parents favored more:

  • Preparing for advanced HS math: Parents 44%, Staff 4% (parents 40 points higher)
  • STEM foundations: Parents 33%, Staff 4% (parents 29 points higher)
  • Preparing students for math coursework in college or university: Parents 24%, Staff 8% (parents 16 points higher)

And priorities staff favored more:

  • Procedural fluency/accuracy: Staff 78%, Parents 46% (staff 32 points higher)
  • Supporting diverse learning needs: Staff 42%, Parents 22% (staff 20 points higher)
  • Improving student confidence and engagement in mathematics: Staff 64%, Parents 44% (staff 20 points higher)

On the importance of facilitating effective math instruction (slides 41-42), only one item revealed a wide split in preference. Staff’s second-highest-rated item (92% Very Important or Extremely Important) was “ensuring all students have fair opportunities to learn math by adjusting lessons and support so everyone can grow and succeed in math.” Meanwhile, just 65% of parents rated this item Very Important or Extremely Important, a 27-point split. All other items on this question differed by less than 12 points, with a mean difference of just 4.7 points.

These are not small differences. All were noted by Hanover as statistically significant.
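Hanover’s significance markings are easy to sanity-check. Below is a minimal sketch of a two-sided two-proportion z-test applied to the procedural fluency gap, using only the percentages and group sizes reported in the survey (staff 78% of n=69, parents 46% of n=668); the endorsement counts are back-calculated from those rounded percentages, an assumption on our part rather than figures Hanover published:

```python
from math import sqrt, erfc

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # SE of the difference
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))                      # two-sided normal tail
    return z, p_value

# Procedural fluency/accuracy: staff 78% of n=69 vs. parents 46% of n=668.
# Counts are back-calculated from the reported (rounded) percentages.
staff_yes, staff_n = round(0.78 * 69), 69      # ~54 staff
parent_yes, parent_n = round(0.46 * 668), 668  # ~307 parents

z, p = two_prop_ztest(staff_yes, staff_n, parent_yes, parent_n)
print(f"gap = {staff_yes/staff_n - parent_yes/parent_n:+.1%}, z = {z:.2f}, p = {p:.1e}")
# gap = +32.3%, z = 5.11, p = 3.2e-07  (comfortably significant)
```

Running the same test on each of the six gaps listed above gives p-values well below 0.05, consistent with Hanover’s own significance notes, though the “select up to five” format means the proportions are not strictly independent, so treat the test as an approximation.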

Parents are dramatically more focused on preparing students for advanced coursework in high school and beyond. Staff overwhelmingly favor conceptual understanding and supporting diverse learners. Yet Hanover’s recommendations read as if there is a shared consensus. That is only partially true:

Figure: “The Differentiation Asymmetry,” charting how staff prioritize identifying and supporting struggling learners but not advancing ready learners. Question: “Which of the following should be top priorities for math instruction in LCUSD? Please select up to five options.” Hanover survey, slides 24–25; parents n=668, staff n=69. Source: Hanover Research, “Math Instructional Priorities Survey – LCUSD,” February 2026. Analysis by the Institute for Mathematics Instruction.

Parents’ top priorities clustered around preparation and mastery: developing problem-solving and critical thinking (81%), conceptual understanding (64%), connecting to real-world applications (50%), and building procedural fluency (46%), together with preparing their students for advanced high school math courses (44%), STEM foundations (33%), and advanced math coursework in college (24%). This is a portrait of parents who see math instruction foundationally — as building the skills and knowledge base their children need to succeed in increasingly rigorous coursework and eventually in STEM careers.

Staff’s top priorities clustered around classroom process and equity: problem-solving/critical thinking (80%), procedural fluency (78%), conceptual understanding (77%), improving student confidence and engagement (64%), and supporting diverse learning needs through differentiated instruction (42%). At the same time, staff seemed not to care at all about preparing students for HS math (4%), college math (8%), or building STEM foundations (4%). That is roughly three teachers and administrators out of 69 who valued preparing students for advanced high school math and future STEM careers. This is a portrait of teachers focused on managing heterogeneous classrooms, reaching struggling learners, and implementing their preferred pedagogical frameworks.

The divergence likely reflects a structural difference in time horizons: teachers are accountable for grade-level outcomes within a single year, while parents are tracking a longer arc — from elementary foundations through high school mathematics and into college and career. Stakeholders optimizing for different endpoints will weight instructional priorities differently even when they agree on the vocabulary.

Parent and Staff Misalignment on Everyday Math

In addition to the misaligned math instructional priorities in the eyes of parents and teachers, another stark difference unmentioned by Hanover in their survey report is the significant disconnect in perception of the current curricula in use in LCUSD schools — Everyday Mathematics in K-5 and Math In Focus in grades 6-8. There was a 14-point gap between parents (38%) and staff (52%) in answer to the question, “how effective is LCUSD’s current K-5 math instruction at ensuring students to perform (sic) at grade level in math?”7 Not surprisingly, students’ perception (43%) was closer to parents’ than to staff’s.

In contrast, parents (54%) and staff (55%) were in near-perfect alignment in their view of Math In Focus, with students (63%) liking MiF even more than their parents and teachers do. The unmentioned revelation also buried in the data is that students perceive Math In Focus as far more effective than Everyday Mathematics: the share of students rating MiF ‘Very Effective’ or ‘Extremely Effective’ (63%) was nearly 50% larger than the share for EM (43%).

The disparity in perception is likely greater than the survey results revealed, because questions 25 and 26 were both poorly worded, asking respondents to rate how effective LCUSD’s current K-5 or 6-8 math instruction was at “ensuring students to perform (sic) at grade level in math.” The survey did not distinguish between the curriculum, the teachers, or the peer support in the classroom. Good teachers know how to compensate for a bad curriculum.

In an interesting historical note, La Canada Math Parents conducted a survey of LCUSD community members in the spring of 2017, two years into the Everyday Mathematics adoption at LCUSD, and asked a similar question (question 4) about respondents’ perception of their child’s math textbook curriculum.8 At that time, only 32% of survey respondents thought Everyday Mathematics was either “Above Average” or “Excellent.” So parents’ perception of Everyday Mathematics has not improved much over the past decade. That LCUSD at every point over the past ten years has either downplayed, minimized, or outright ignored parental concerns about Everyday Mathematics is deeply troubling.

Student and Staff Misalignment on LCUSD Math Quality

The most striking feature of the student-staff comparison across question 28 (“In your opinion, what are the greatest strengths of current math instruction in LCUSD? Please select up to three options”) and question 29 (“In your opinion, which areas of math instruction in LCUSD need the most improvement?”) is not any single gap but the way the two questions contradict each other when read together. On Q28, staff’s highest-rated strength is “teachers make math clear and easy to understand” at 57% — their most confident self-assessment across all nine options. Yet on Q29, students’ second-highest complaint is “making math concepts easier to understand” at 42%, while only 25% of staff identify this as needing improvement. Staff believe instructional clarity is their greatest asset; students say it is among their greatest unmet needs. Both cannot be right.

The same contradiction appears with engagement: staff and students tie on Q28, both rating “lessons keep students engaged” at 23% — neither group sees it as a current strength. But when asked what needs fixing on Q29, a full 53% of students — an outright majority, the single highest response from any group on any item across both questions — identify engagement as a top-three improvement area, while only 34% of staff agree.

Compounding the problem is poor question design. The proffered responses are ambiguous. The two highest-rated student responses on Q29 — “keeping students engaged during lessons” at 53% and “supporting students who learn at different speeds” at 44% — appear on the surface to be clear signals, but both are ambiguous items that likely aggregate students with opposite experiences into the same response. A strong math student bored by instruction on material she mastered months ago through outside supplementation would select ‘keeping students engaged’ because the lesson offers her nothing new — the problem is insufficient challenge. A struggling student who can’t follow the lesson and has given up trying would select the same item because the instruction has left him behind — the problem is insufficient scaffolding. Both students check the same box, but one is saying “go faster” and the other is saying “slow down.”

The same double meaning affects the response “supporting students who learn at different speeds.” An advanced student frustrated by teaching to the middle of the class in a heterogeneous classroom — the student who finishes the worksheet in five minutes and spends the remaining time with nothing productive to do — would select this because the classroom structure denies him the acceleration he’s ready for. A student who consistently falls behind would select it because the classroom moves too fast for her to consolidate understanding before the next topic arrives. The survey item collapses both experiences into a single undifferentiated count, and the 44% figure tells the district nothing about which direction the speed mismatch runs or what proportion of students are on each side. This is not a minor ambiguity — it is the central question the adoption process needs to answer, and the survey instrument was structurally incapable of answering it.

Staff’s 71% on the same “adapting to different speeds” item almost certainly reflects a unidirectional reading: teachers thinking about the challenge of reaching struggling learners in mixed-ability classrooms, consistent with their prioritization of differentiated instruction and screener/intervention tools elsewhere in the survey. But the student response cannot be read the same way. In a district where 84% of students score at or above grade level on the CAASPP and where outside math supplementation through programs like Russian School of Mathematics, Art of Problem Solving, and private tutoring is widespread, a substantial fraction of the students selecting this item are likely on the high end — students whose experience of “different speeds” is being held to a pace well below what they’re capable of. The survey’s failure to disaggregate this item into its two constituent populations means the data cannot distinguish between students who need the pace slowed down and students who need it accelerated, which is precisely the distinction that matters most for curriculum selection and the one LCUSD has steadfastly refused to acknowledge.

Methodological Problems

Though generally reasonable, Hanover’s analysis of the LCUSD Math Priorities Survey suffered from several methodological problems. I previously mentioned the following bullet point Hanover included on its Methodology page (slide 5): “After data collection, Hanover identified and removed low-quality respondents.” The next bullet point also stated that “’Don’t Know or Not Applicable’ responses, and equivalent, are often excluded from the figures and analysis in order to focus on respondents who did express an opinion”:

Figure 5: Slide 5 from Hanover’s survey report indicating methodology notes in how they analyzed survey results.

Hanover never mentions how many survey responses were removed from individual question analyses, which is unfortunate because ‘Don’t Know’ or ‘Not Applicable’ responses (hereafter referred to as “DK/NA responses”) actually convey useful information, most importantly whether a survey question was poorly written.

During the first Elementary Math Parent Adoption Meeting on November 5th, 2025, parents strongly warned that the educator-loaded jargon of the survey questions would lead to significant non-responses and DK/NA responses, and would thus elevate staff opinions, given that staff understand the jargon better. This warning turned out to be prescient. There was significant attrition in parent response rates as the survey progressed:

La Cañada Unified School District  ·  Hanover Research Math Priorities Survey (January 2026)

Figure: Survey Response Attrition by Question, charting the percentage decline in parent and staff respondents from the baseline established at Q11, the first substantive question (parents baseline n = 693; staff baseline n = 69). Notes on the chart:

  • Q16 (staff only): Parents were not routed to Q16, so the parent line breaks there; this reflects survey routing, not respondent abandonment.
  • Q26 (grades 6–8 only): Excluded from both lines. It was shown only to respondents with a grades 6–8 student (parents n = 179; staff n = 11), so the apparent drop reflects audience filtering, not survey fatigue or abandonment.
  • Question ordering: The horizontal axis reflects the actual survey administration sequence; Hanover’s internal question numbers are non-sequential (Q21–Q24 preceded Q17–Q20 in the live instrument).
  • Baseline: Q11 is the first question shown to both parents and staff. Q1–Q10 were demographic screeners for which Hanover did not report group-level n values; declines are relative to Q11.

Source: Hanover Research, Math Instructional Priorities Survey — LCUSD, January 2026. Analysis by the Institute for Mathematics Instruction.

As you can see from the chart above, by question 27 almost a third of parent respondents (205 parents) had either given up attempting to answer the increasingly opaque questions or had been dropped by Hanover as “low-quality,” while only 8 teachers (11.6%) had dropped. But readers of Hanover’s summary would never know this, because Hanover dropped non-responses from the denominator in their question response rate calculations. They did not even state the number of omitted responses; we had to reverse-engineer them.
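The reconstruction itself is simple arithmetic on the per-question n values printed beneath Hanover’s charts. A minimal sketch in Python, using only the Q11 baseline and the Q27 endpoint figures cited above (the intermediate questions are computed the same way from the report’s per-question n values):

```python
# Reconstructing respondent attrition from Hanover's reported per-question n values.
# Only the Q11 baseline and the Q27 endpoint cited in this article are shown here.
reported_n = {
    "Q11": {"parents": 693, "staff": 69},  # baseline: first substantive question
    "Q27": {"parents": 488, "staff": 61},  # 693 - 205 parents; 69 - 8 staff
}

baseline = reported_n["Q11"]
for question, counts in reported_n.items():
    for group, n in counts.items():
        dropped = baseline[group] - n
        print(f"{question} {group}: n={n}, dropped={dropped} "
              f"({dropped / baseline[group]:.1%} of baseline)")

# Q27 parents: n=488, dropped=205 (29.6% of baseline)
# Q27 staff: n=61, dropped=8 (11.6% of baseline)
```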

By comparing parent, staff, and student n values across survey questions, we were able to quantify what lies behind Hanover’s ambiguous methodological notes on slide 5 that “after data collection, Hanover identified and removed low-quality respondents” and that “‘Don’t Know’ or ‘Not Applicable’ responses, and equivalent, are often excluded from the figures and analysis in order to focus on respondents who did express an opinion.”

On slide 35, for instance, all ten fluency items appeared on the same survey page — a respondent who saw one item saw all ten:

Figure 6: Slide 35 from Hanover’s summary report on Math Instructional Priorities Survey for La Cañada Unified School District (February 2026).

Yet n ranged from 649 to 668 across those items. That 19-respondent spread can only reflect DK/NA exclusions, because if someone abandoned the survey entirely they’d be missing from all items equally. This means Hanover silently removed DK/NA responses from the denominator for each individual item, calculating percentages only from those who expressed a substantive opinion, and presented those inflated percentages without disclosing the count of excluded DK/NA responses.
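The arithmetic effect of that silent exclusion is easy to see. Here is a minimal sketch using the slide-35 figures (668 respondents on the page, 19 DK/NA exclusions on the most-excluded item); the endorsement count of 400 is a hypothetical of ours, purely for illustration:

```python
# How dropping DK/NA responses from the denominator inflates a reported percentage.
page_n = 668                             # everyone who saw the item (all ten shared a page)
dk_na = 19                               # DK/NA responses silently excluded by Hanover
substantive_n = page_n - dk_na           # 649, the denominator Hanover actually used
endorsed = 400                           # hypothetical count rating the item important

pct_full = endorsed / page_n             # DK/NA kept in the denominator
pct_reported = endorsed / substantive_n  # DK/NA excluded, as Hanover did

print(f"full denominator:    {pct_full:.1%}")      # 59.9%
print(f"Hanover denominator: {pct_reported:.1%}")  # 61.6%
print(f"inflation:           {pct_reported - pct_full:+.1%}")  # +1.8 points
```

With only 19 exclusions the inflation is modest; to the extent DK/NA selections ran higher on the later, more confusing questions, the same arithmetic inflates apparent agreement by correspondingly more.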

Hanover’s survey design document itself reveals the mechanism. It states that Hanover recommends using forced response on all questions, meaning respondents must select something before advancing. But the survey includes “Unsure” or DK/NA options on most questions. So respondents who found a question confusing had two choices: guess at an answer they didn’t understand, or select “Unsure.” Those who selected “Unsure” were then excluded from the analysis. The parents who told Cartnal directly and repeatedly at the November meeting that the questions were incomprehensible — Greg Alexanian, Larry Brown, Stéphane Valladier, Tiffany Shea — predicted exactly this outcome.

Survey methodology research has long established that elevated DK/NA rates are primarily indicative of question design failure rather than respondent deficiency. Krosnick’s (1991) theory of survey satisficing demonstrates that when questions are difficult to interpret, respondents less familiar with the topic will disproportionately select ‘Don’t Know’ rather than guess — and that this tendency increases as respondents progress through a questionnaire.9 Tourangeau, Rips, and Rasinski (2000) further established that comprehension failures are fundamentally a question design problem, not a respondent quality problem.10 By excluding these responses from their analysis denominators, Hanover effectively penalized the respondents most affected by their own confusing question design.

Conclusion

The Hanover survey report is not without value. It accurately documents areas of genuine alignment: parents, staff, and students all want explicit teacher modeling, paper-and-pencil practice, and a curriculum that develops problem-solving ability and procedural fluency. Those findings are real, and the district appears to have taken them seriously. But the report’s primary failures are structural, not incidental, and they compounded one another in ways that systematically distorted what the survey told the district.

The most consequential failure was designing a single survey instrument for fundamentally different audiences without accounting for those differences. Parents who spend ten years tracking their children’s mathematical trajectory, teachers trained in the vocabulary of instructional pedagogies, and middle school students evaluating their own classroom experience do not share the same conceptual map of the questions they were being asked. A properly designed study would have used separate instruments for each group, calibrated to each group’s knowledge and perspective. By deploying a single survey instrument — one loaded with educator jargon that parents at the first parent informational meeting in November explicitly warned Cartnal they could not reliably interpret — Hanover guaranteed that questions would be understood differently by different respondents, and that the responses would be collapsed as if they meant the same thing. They didn’t. The apparent consensus Hanover reported was overstated, and the misalignment in responses was ignored.

The second failure was interpretive. Wherever a question was ambiguous — and some of the substantive questions were — Hanover resolved the ambiguity in favor of the conceptual-understanding reading. When 64% of parents and 77% of staff both endorsed “building strong conceptual understanding,” Hanover recorded agreement. It did not ask whether a parent endorsing conceptual understanding meant something compatible with what a constructivist-trained educator meant by the same phrase. It never framed the axis around which all K-12 math curricula actually vary — teacher-led explicit instruction versus student-centered inquiry learning — which is the only context in which conceptual understanding becomes a diagnostic rather than an aspirational term. Without that frame, the report’s central recommendation to “balance conceptual understanding with procedural fluency” is fraught. It is a sentence that every curriculum vendor from the most constructivist to the most explicit-instruction-oriented will happily endorse, because it is unfalsifiable as written.

The methodological failures reinforced both structural problems. Excluding DK/NA responses from the denominator without disclosing removal counts inflated apparent consensus on every item where confused respondents self-selected out. Removing ‘low-quality’ respondents without defining quality or reporting counts created a further gap between the surveyed population and the analyzed one. The attrition analysis we reconstructed from Hanover’s own reported n values — a nearly three-to-one differential in parent versus staff drop-off rates across Q11 through Q29 — shows that the survey’s progressive difficulty fell almost entirely on parents. Staff, who by training speak this vocabulary fluently, completed the instrument at close to full strength. Parents, who are ultimately the primary stakeholders in an elementary curriculum adoption, were systematically filtered out of their own survey as the questions grew more opaque.

The cumulative result is a report that was adequate at describing where there was broad, genuine agreement, and inadequate — structurally incapable — of capturing where the most important disagreements lay. Those disagreements, visible in the stark parent-staff divergences on STEM preparation, advanced coursework readiness, and differentiation direction, were acknowledged by Hanover in passing and then set aside when it came time to write recommendations. A well-designed survey would have made those disagreements the centerpiece of its findings, because they are precisely what curriculum selection has to navigate and balance.

The survey was supposed to inform Hanover’s so-called high-quality instructional materials evaluation rubric — the instrument the district is now using to score the four finalist curricula. As we will see in the next article, the rubric inherits every problem the survey generated and adds several of its own: selectively borrowed language from a Science of Mathematics curriculum evaluation rubric stripped of its diagnostic substance, five scored indicators with no grounding in the empirical math education literature, and the complete omission of the features that most reliably distinguish high-quality from low-quality math curricula. A mediocre report built on a flawed survey has produced a flawed rubric. That has been the record of Hanover’s work for La Cañada Unified, from the pedagogically misleading best practices document delivered in January, to the survey instrument parents warned Cartnal about in November, to the evaluation rubric delivered in March. The pattern is consistent enough that the district should weigh it carefully before treating any Hanover output as authoritative guidance for what is, ultimately, a consequential and largely irreversible decision.


  1. “Respondent Characteristics (2025-26),” in Math Instructional Priorities Survey: La Cañada Unified School District, Hanover Research, February 2026, p.60. ↩︎
  2. Ibid., p.7. ↩︎
  3. It would take a book-length treatment to explain the origins of the NCTM’s and math reformers’ infatuation with constructivism, which is beyond the scope of this article. A less-lengthy overview of the difference between the two fundamental frameworks for math instruction is provided in an article I wrote three years ago — “Thoughts on Effective Math Instruction.” ↩︎
  4. Hanover, p.7. ↩︎
  5. Paul Kirschner, John Sweller, and Richard Clark, “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching,” Educational Psychologist, 41:2, pp.75-86 (2006). ↩︎
  6. Hanover, p.10. ↩︎
  7. Ibid., p.48. ↩︎
  8. Question 4 (“What is your opinion of your child’s primary math textbook curriculum?”), Elementary Math Survey Results – June 6, 2017, La Canada Math Parents, p.11. ↩︎
  9. Krosnick, J. A. (1991). “Response strategies for coping with the cognitive demands of attitude measures in surveys,” Applied Cognitive Psychology, 5(3), 213–236. ↩︎
  10. Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The Psychology of Survey Response. Cambridge University Press. ↩︎