How is Policy Affecting Classroom Instruction?

Executive Summary

Five-plus years into the experiment with new “college- and career-ready standards” (of which the Common Core is the most notable and most controversial example), we know little about teachers’ implementation and the ways policy can support that implementation. This paper uses new state-representative teacher survey data to characterize the degree of standards implementation across three states—Kentucky, Ohio, and Texas. We also investigate teachers’ perceptions of the extent to which the policy environment supports them in implementing the standards. We find a great deal of variation in perceptions of policy, with Ohio teachers perceiving policy to be less supportive than Kentucky or Texas teachers. Teachers in all states are mostly implementing the content in the new standards, but they are also teaching a good deal of content they should not—content that has been de-emphasized in their grade-level standards. Contrary to our theory, perceptions of policy do not explain much of the variation in instruction. If greater attention is not paid to supporting teachers as they implement the standards and reduce coverage of de-emphasized content, we worry the standards will not have much effect.


Introduction

The Common Core State Standards (CCSS) have sucked much of the oxygen out of the room when it comes to discussing K-12 educational standards. The mere mention of “Common Core” has become politically toxic for many American parents and for politicians in both parties. But regardless of the political controversies, it’s worth asking whether teachers believe in these standards, and whether they might be changing their instruction as a result. Even in non-Common Core states, “college- and career-ready standards” place new demands on instruction that merit investigation. We find some evidence that teachers are implementing the content in new standards as expected, but we also find that they are covering content that should have been de-emphasized. We see little evidence that teachers’ beliefs about state policy are associated with their instructional choices.

As investigators in the Center for Standards, Alignment, Instruction, and Learning (C-SAIL), which is funded by the Institute for Education Sciences (IES), we understand that the term “Common Core” often distracts from a broader discussion of college- and career-readiness standards. The Common Core is one such set of standards, but certainly not the only one, as states have drifted towards their own individualized standards post-Common Core. Texas, after all, never adopted Common Core.

Despite their longstanding place at the heart of education policy, standards-based reforms have only a modest record of success. Certainly, there is some evidence that the accountability pressures that typically come with standards-based reforms can induce student learning gains. [1] But a wide variety of quantitative and qualitative research finds that standards implementation—the extent to which teachers use the standards in classroom instruction—is typically moderate, at best. [2]

This most recent wave of standards-based reform, which started with the CCSS, calls for even more ambitious instructional change than previous versions (at least in some states). Given these sweeping changes, we wanted to know: have we gotten better at implementing standards? Are teachers changing their instruction to match the standards? What are states doing to help teachers adapt?

To answer these questions, we draw on state-representative survey data collected by the C-SAIL project team. [3] Our data and methods are described in full in the appendix, which can be found in the downloadable PDF of this report. While our analysis is not causal, it provides suggestive evidence on the ways policy can encourage stronger standards implementation and, through that, better student achievement outcomes.

How were teachers in our three states different?

To examine teachers’ implementation of standards in this new era with these new, more ambitious standards, we draw on the “policy attributes theory.” [4] This theory has been used to study teachers’ responses to education policy for several decades, and it is the theoretical framework that undergirds C-SAIL’s research. The framework posits that five attributes are related to successful policy implementation:

Specificity: How extensive, detailed, and/or prescriptive a policy is.

Authority: How policies gain legitimacy and status through persuasion (e.g., rules or law, historical practice, or charismatic leaders).

Consistency: The extent to which policies are aligned and how policies relate to and support each other. When the policy system is consistent, standards and tests align with each other.

Power: How policies are reinforced and enacted through systems of rewards and sanctions. Policies that have power include incentives for compliance.

Stability: The extent to which policies change or remain constant over time.

Table 1 shows the distribution of the attributes among all teachers across the three states. Texas ranks highest in specificity, power, and stability, while Kentucky ranks highest in consistency and authority. According to our theory, differences among states in these policy attributes help explain why teachers may or may not align their instruction with their respective state standards. Every pairwise comparison was statistically significant except for consistency, which differed significantly only between Ohio and Kentucky; power, which did not differ significantly across any of the three states; and stability, which differed significantly only between Texas and Kentucky.

Table 1. Teacher policy attribute means across states

                 Texas teachers     Ohio teachers      Kentucky teachers
                 (n = 564-586)      (n = 379-405)      (n = 390-436)

Specificity      3.03 (0.89)        2.38 (0.99)        2.75 (0.78)
Consistency      2.82 (0.62)        2.71 (0.56)        2.91 (0.55)
Authority        2.56 (0.69)        2.30 (0.63)        2.78 (0.54)
Power            2.68 (0.67)        2.50 (0.63)        2.56 (0.64)
Stability        2.51 (1.01)        2.44 (0.95)        2.29 (1.02)

Note: Numbers are means and (standard deviations). In general, a 2.0 indicates relatively weak or “somewhat disagree” attitudes, whereas 3.0 represents relatively higher or “somewhat agree” attitudes.
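
To make the comparisons in Table 1 concrete, the following is a minimal sketch of how attribute means by state and pairwise differences could be computed from a teacher-level data file. The file name, column names, and the choice of Welch t-tests are assumptions for illustration; this is not the project's actual analysis code.

    # Illustrative sketch: attribute means by state and pairwise comparisons.
    # Assumes a teacher-level file with a 'state' column and one 1-4 index per
    # attribute; file and column names are hypothetical.
    from itertools import combinations

    import pandas as pd
    from scipy import stats

    ATTRIBUTES = ["specificity", "consistency", "authority", "power", "stability"]

    teachers = pd.read_csv("teacher_survey.csv")  # hypothetical file

    # Means and standard deviations by state (the quantities reported in Table 1).
    print(teachers.groupby("state")[ATTRIBUTES].agg(["mean", "std"]).round(2))

    # Pairwise Welch t-tests between states for each attribute.
    for attr in ATTRIBUTES:
        for a, b in combinations(sorted(teachers["state"].unique()), 2):
            x = teachers.loc[teachers["state"] == a, attr].dropna()
            y = teachers.loc[teachers["state"] == b, attr].dropna()
            t, p = stats.ttest_ind(x, y, equal_var=False)
            print(f"{attr}: {a} vs. {b}: t = {t:.2f}, p = {p:.3f}")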

Do teachers’ perceptions of policy predict their coverage of standards?

After examining differences in how teachers perceive the attributes of their policy environment, we wanted to see how well these attributes predicted what teachers do in the classroom—specifically, whether teachers were more likely to cover the content emphasized in their state’s standards.

In our predictive models, we wanted to account for several important factors that are likely related to teachers’ content coverage. Thus, we control for teacher experience (novice or not) and the following four classroom-level variables: percentage of high-achieving students, percentage of low-achieving students, percentage of students with individualized education programs (IEPs), and percentage of English language learners (ELLs).
Analytic approach
First, we report mean coverage of standards-emphasized and standards-de-emphasized content by grade level, subject, and state.

Second, we run a series of teacher-level regressions in which the dependent variables are coverage indices for the emphasized content and our focal predictors are the five policy attributes, controlling for the variables listed above. That is, we examine whether teachers who report higher specificity, consistency, authority, power, and stability in their policy environments are more likely to cover the content emphasized in their state’s standards.
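
To illustrate the structure of these models, here is a minimal sketch of a teacher-level regression fit separately by state and subject. The data file and column names are hypothetical; the sketch shows the general approach rather than the project's actual analysis code.

    # Illustrative sketch: OLS of emphasized-content coverage on the five policy
    # attributes plus the controls listed above, one model per state-subject cell.
    # File and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    teachers = pd.read_csv("teacher_survey.csv")  # hypothetical file

    formula = (
        "emphasized_coverage ~ specificity + consistency + authority + power"
        " + stability + novice + pct_high_achieving + pct_low_achieving"
        " + pct_iep + pct_ell"
    )

    for (state, subject), cell in teachers.groupby(["state", "subject"]):
        result = smf.ols(formula, data=cell).fit()
        print(f"{state}, {subject} (n = {int(result.nobs)})")
        print(result.params.round(3))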

To what extent are teachers teaching standards-aligned content?

Figures 1 and 2 show mean teacher-reported coverage of the standards-emphasized and de-emphasized content for elementary teachers and then high school teachers. Across states, grade levels, and subjects, responses range from 2.57 to 3.68 on our four-point scale (1=no coverage, 2=minor coverage, 3=moderate coverage, and 4=major coverage). From these figures, several patterns emerge.

First, teachers generally report covering the standards emphasized content regardless of state, grade, and subject. The mean coverage score is always greater than 3, ranging from 3.07 for secondary math teachers in Kentucky to 3.59 for secondary ELA teachers in Texas.

Second, teachers also report substantial coverage of the de-emphasized content, often at levels comparable to their coverage of the emphasized content.

Third, there are some clear subject and grade-level patterns in the data. In elementary school, mathematics teachers report covering more emphasized content and less de-emphasized content—the pattern we might hope for. This is true for elementary mathematics in all three states, and in all three cases the differences are statistically significant. In contrast, elementary school ELA teachers report covering more de-emphasized content than emphasized content in all three states, though these differences are not statistically significant. In high school, the pattern is exactly the opposite—mathematics teachers report covering more de-emphasized content and less emphasized content, while ELA teachers report covering more emphasized content and less de-emphasized content. These findings were quite consistent across all three surveyed states.

Overall, the results of this analysis suggest that teachers believe they are covering the content emphasized in the standards. However, they also report covering the de-emphasized content, often just as much as they cover the emphasized content. There are some subjects, grades, and states where teachers seem to do better at emphasizing the content in the standards, especially in elementary mathematics, but this is not the norm.
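
As an illustration of how the emphasized-versus-de-emphasized contrasts noted above could be examined, here is a minimal sketch using paired t-tests within each state, grade, and subject cell. The file and column names are hypothetical, and the project's actual tests may differ.

    # Illustrative sketch: within each state/grade/subject cell, compare teachers'
    # reported coverage of emphasized vs. de-emphasized content with a paired t-test.
    # File and column names are hypothetical.
    import pandas as pd
    from scipy import stats

    teachers = pd.read_csv("teacher_survey.csv")  # hypothetical file

    for key, cell in teachers.groupby(["state", "grade_band", "subject"]):
        paired = cell[["emphasized_coverage", "deemphasized_coverage"]].dropna()
        diff = (paired["emphasized_coverage"] - paired["deemphasized_coverage"]).mean()
        t, p = stats.ttest_rel(paired["emphasized_coverage"],
                               paired["deemphasized_coverage"])
        print(f"{key}: mean difference = {diff:.2f}, t = {t:.2f}, p = {p:.3f}")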

Which policy attributes are related to teachers’ coverage of the standards?

Table 2 below shows which of the policy attributes significantly predict coverage of emphasized standards content. [6] No attribute was a consistent predictor across all three states. Taken together, these results show a) quite weak relationships between perceived policy and reported instruction within states (much weaker than has been found elsewhere), and b) differences across states in the relationships between the attributes and content coverage. [7] Where coefficients were statistically significant, they were more often in the expected positive direction than not.

In general, the results are more supportive of policy predicting instruction in ELA than in mathematics. Though few coefficients reach statistical significance, 14 of the 15 ELA coefficients across the three states are greater than zero, compared to just 9 of 15 in mathematics. Whether the policy attributes are more salient for teachers of ELA than of mathematics is a worthwhile question for future investigation.

Table 2. Predictive model for emphasized content using the policy attributes for content coverage

                    ELA emphasized                      Mathematics emphasized
                TX          OH          KY          TX          OH          KY

Specificity     0.116***   -0.013       0.046       0.032      -0.035       0.065
Consistency     0.013       0.133*      0.110       0.054      -0.021       0.149
Authority       0.132**     0.093       0.067      -0.089       0.045      -0.128
Power           0.065       0.076       0.025       0.110      -0.066       0.112
Stability       0.017       0.048       0.030      -0.093*      0.063       0.056
N               201         144         165         180         118         111
R-squared       0.236       0.154       0.102       0.057       0.172       0.123

Note: Cells report regression coefficients; standard errors not shown. * p < 0.05, ** p < 0.01, *** p < 0.001

Conclusion

The latest college- and career-readiness standards sought to encourage particular shifts in the content of teachers’ instruction. We do not take a stance on whether these desired changes were “good”; we report whether they had the desired effect. We find that one aspect of the intended shift seems to have occurred—teachers are teaching the content emphasized in the standards (though we cannot say they would not have been teaching this content if the standards did not exist). But another part of the intended shift—moving away from certain content—has not occurred. [8]

The ability of teachers’ policy perceptions to predict instruction is limited. However, the finding that specificity and authority predict coverage of emphasized content among Texas English Language Arts teachers is encouraging; there may be something particular to Texas that explains the unique associations we see there. Texas has been a nonparticipant in many of the multi-state attempts at standards-based reform, yet it may be having more success.

Texas did not participate in Race to the Top or adopt the Common Core State Standards. Yet Texas teachers perceive policy to be more specific than teachers in the other two states do, indicating they believe their districts provide more guidance on how to cover the standards. This distinction may be important as future policymakers weigh the efficacy of federal or cross-state initiatives against state-based ones. Or it may suggest that larger states simply have greater capacity for this time-intensive and expensive work.

We know that the standards will not matter much if they do not change what teachers teach. We found that teachers are covering content emphasized by their state’s new standards, but teachers are also still covering content not emphasized in the standards. This runs counter to the idea that teachers should focus their instructional efforts on the (already comprehensive) topics and skills in the grade-level standards. Overall, it seems clear that states and districts could provide more support in helping teachers move away from certain content, which we know from previous research is a challenge for teachers. Without these shifts, we cannot say that the policy has been well implemented, which makes it even more difficult to decide whether the standards have a chance to improve student outcomes.

— Adam Edgerton, Morgan Polikoff and Laura M. Desimone


Adam Edgerton is a PhD Student in Education Policy at the University of Pennsylvania Graduate School of Education. Morgan Polikoff is an Associate Professor at the Rossier School of Education at USC. Laura M. Desimone is a Professor of Education Policy at the University of Pennsylvania Graduate School of Education.

This post originally appeared as part of Evidence Speaks, a weekly series of reports and notes by a standing panel of researchers under the editorship of Russ Whitehurst.

The author(s) were not paid by any entity outside of Brookings to write this particular article and did not receive financial support from or serve in a leadership position with any entity whose political or financial interests could be affected by this article.


Notes:

1. For a review of this literature see Figlio, D. N., & Loeb, S. (2011). School Accountability. In E. A. Hanushek, S. Machin & L. Woessmann (Eds.), Handbook of the Economics of Education (Vol. 3, pp. 383-421). North-Holland, The Netherlands: Elsevier.

2. See for instance Polikoff, M. S. (2012c). Instructional alignment under No Child Left Behind. American Journal of Education, 118, 341–368.

3. This analysis examines select data from a survey administered to teachers in Texas, Ohio, and Kentucky during the spring of 2016. We employed a stratified random sampling technique designed to ensure the sample was representative of districts in each state. Forty-two Texas districts, forty-two Ohio districts, and eighty-nine Kentucky districts were included in the sample. In each district, we sampled up to two elementary schools and two high schools, making sure to capture representative samples of public, private, and charter schools based on demographics. In each of these elementary schools, we sampled two fifth-grade math teachers, two fourth-grade ELA teachers, one teacher of students with disabilities (SWDs), and one teacher of English Language Learners (ELLs). In each high school participating in the study, we sampled two English Language Arts (ELA) teachers and one teacher in each of the following specialties or subjects: SWD, ELL, algebra I, algebra II, and geometry. We chose these three math subjects because they are the most common high school math courses; including them thus maximizes the number of high school target-course responses we obtained. Further, we wanted to identify math classes enrolling students who were likely to be required to take the state mathematics assessment. In total, 603 of 1,089 sampled Texas teachers responded, for a response rate of 55 percent; 417 of 654 sampled Ohio teachers responded, for a rate of 64 percent; and 554 of 1,731 sampled Kentucky teachers responded, for a rate of 32 percent.

4. For an early description of the policy attributes theory see Porter, A., Floden, R., Freeman, D., Schmidt, W., & Schwille, J. (1988). Content determinants in elementary school mathematics. In D. A. Grouws & T. J. Cooney (Eds.), Perspectives on research on effective mathematical teaching (pp. 96-113). Hillsdale, NJ: Lawrence Erlbaum Associates.

5. Cohen, D. K. (1990). A revolution in one classroom: The case of Mrs. Oublier. Educational Evaluation and Policy Analysis, 12(3), 327-345.

6. We tested a baseline model with no covariates (i.e., only the five policy attributes and no other predictors), and then a full model, which included novice teacher and percentage of high-achieving students, low-achieving students, students with IEPs, and percentage of ELL students. Adding the covariates did not change results for the policy attributes, so we report results of the full model.

7. See for instance Polikoff, M. S. (2012c). Instructional alignment under No Child Left Behind. American Journal of Education, 118, 341–368.

8. We note that our survey included only a small slice of the content in the complete standards. Results might be different depending on what content we ask teachers to report.
