More Americans, including policymakers, are realizing the value of early education. Yet Head Start, the nation’s largest early education program, continues to come under scrutiny. Since the Head Start Impact Study found that, on average, the measurable school readiness benefits of Head Start were no longer discernible in early elementary school, some policymakers have questioned whether the program is a worthwhile investment.

Instead, what policymakers should take from the Impact Study is that there is room for improvement in Head Start. The program has the potential to better the lives of our nation’s most vulnerable children, and it often does. Other research has shown that Head Start children are not only better prepared for kindergarten, but also continue to benefit into adulthood. The question policymakers should be asking is this: how can we make Head Start more effective so that all attendees see long-lasting benefits?

Last week Sara Mead and Ashley LiBetti Mitchel of Bellwether Education Partners, along with Results for America, the Volcker Alliance, and the National Head Start Association, released a new paper attempting to answer this question. Moneyball for Head Start: Using Data, Evidence, and Evaluation to Improve Outcomes for Children and Families offers a host of recommendations for improving the program based on the “Moneyball” principles of using data and evaluation to ensure that taxpayer money is invested in the most effective and efficient manner.

Head Start is a large program (it was allocated $9.168 billion in the FY 2016 budget), and it’s in both the federal government’s and families’ best interest for it to work effectively. While many Head Start advocates argue that increased funding would improve program quality, this paper offers ideas for how the program can change to better utilize the funding it already has.

The federal government released proposed updates to the Head Start Performance Standards last summer, which are heavily grounded in research and make strides in using data to inform continuous improvement in Head Start programs. (New America has published its take on the proposed standards.) Moneyball for Head Start applauds the steps taken in the proposed standards, but argues that greater reform is needed to truly foster a culture of continuous improvement.

Mead and LiBetti Mitchel organize recommendations into three distinct categories: those for the local grantees, those for federal oversight, and those related to research and evaluation.

At the local level, grantees need the tools and capacity to collect and analyze data so that they can make informed decisions. Data on everything from family demographics to child assessments to family engagement to staff qualifications can help inform teachers and program directors so that they can best meet children and families’ needs. The authors explain that “Head Start grantees collect and report data on a variety of outcomes, but effectively using this information to improve quality and outcomes requires a high level of intentionality, planning, and expertise in analyzing, interpreting, and acting on data.” While some high-performing Head Start programs already collect and utilize data to inform continuous improvement, many don’t have the right data or don’t know how to effectively use the data they do have.

The authors suggest that Head Start programs build capacity by working with other Head Start grantees and researchers to form Networked Learning Communities. These groups would work together to analyze and share data and identify trends. This is already happening, both formally and informally, in some places such as Minnesota, but the federal government could encourage the formation of more such communities by allocating technical assistance dollars toward them.

There’s more room for reform in the area of federal oversight than simply reallocating technical assistance dollars. Currently, accountability in Head Start revolves mostly around grantees’ compliance with the Head Start Performance Standards and other basic measures, such as financial solvency and state licensing standards. The report calls for the federal government to measure grantee performance using more meaningful, results-based measures that can differentiate grantees based on quality. They suggest that such performance measures include child outcomes, family outcomes, and program quality data in addition to the types of data already collected. Ideally, the Office of Head Start would use these data to analyze trends, identify high performers, and help programs improve.

But one of the primary challenges of implementing such a proposal is that policymakers don’t always have a clear sense of what to measure or how to measure it. Take child outcomes, for instance: many questions need to be answered. Which specific outcomes are most important? What tools are valid and reliable? Do grantees have the capacity to use these tools? What training would they need? How should the data impact program performance ratings?

More and better research and evaluation is needed to answer these and other tough questions, such as which key elements make Head Start most effective. The report recommends spending one percent of annual Head Start funding on research and evaluation, up from the less than 0.25 percent currently used, to learn more about what constitutes quality in Head Start and to develop tools to measure it.

While many of the recommendations offered in this paper sound fairly straightforward, they would require commitment from multiple groups of stakeholders and a shift in the existing Head Start culture. And everyone knows that culture is difficult to change. It will take time to identify the right measures, create valid tools, and build centers’ capacity to use data to drive continuous improvement. This process could be strengthened by partnering with the philanthropic community and the private sector, and by coordinating with other government agencies and programs, such as state pre-K and child care, that are also thinking about and making headway in these areas.