Start with the Questions to Use Data to Improve Educator Preparation Quality

States and educator preparation programs (EPPs) are hungry for information about how well they are preparing graduates to meet the needs of students in the classroom. While every state has the capacity to share information about graduates' classroom performance with EPPs, too many still do not. Title II of the Higher Education Act (HEA) requires states to collect and report certain data points about EPPs in order to receive federal funds. Unsurprising to those of us here at DQC, a recent US Government Accountability Office (GAO) report found that most states consider this Title II data collection burdensome, duplicative, and ultimately useless in helping improve the quality of EPPs. In short, too much data is being collected without first starting with the questions that leaders and educators have about EPPs.

The report had several key findings:

  • Officials from 3 of 5 case-study states and 7 of 14 EPPs reported that collecting data on the effectiveness of EPP graduates is challenging.
  • Federal reporting requirements are often duplicative: 48 states told GAO that they are either not using some sections of the Title II data reports or that they already collect most of the useful Title II data elements through other mechanisms.
  • “Very few” states actually use the Title II data to inform state funding decisions or district hiring practices, and 8 of 14 EPPs reported to GAO that “very little” of their Title II data was useful in assessing their own performance.

Despite these shortcomings, it is evident that both EPPs and states want—and need—better data in order to be transparent and continually improve the quality of educator preparation. More than half of states reported that they review some information about graduate effectiveness when evaluating EPPs for approval or renewal. Thirty states evaluate graduate effectiveness through district and principal surveys, while 15 look at outcomes data, including student test scores. One case-study state uses data on graduate effectiveness to help EPPs identify shortcomings and bolster the training they provide to teaching candidates. Ten additional states reported to GAO that they will either implement or expand the use of graduate effectiveness data as part of their approval process for EPPs.

So while the data currently being collected does not align with the needs of states and preparation programs, the desire to use data is there. Federal policymakers have a tremendous opportunity to support states and EPPs in their quest for transparency and continuous improvement by starting with the questions, reducing duplication and burden, and ultimately transforming what is now a cumbersome process into one that fuels the kinds of change states want in educator preparation.