The Impact of Learning Time on Academic Achievement*

Prepared for the Faculty Fellows Research Program
Center for California Studies
California State University, Sacramento
December 13, 2011

Su Jin Jez, Ph.D.
Assistant Professor
Department of Public Policy and Administration
California State University, Sacramento
Sacramento, CA 95819-6081
(916) 278-6557 voice
(916) 278-6544 fax
[email protected]

Robert W. Wassmer, Ph.D.
Professor and Chairperson
Department of Public Policy and Administration
California State University, Sacramento
Sacramento, CA 95819-6081
(916) 278-6304 voice
(916) 278-6544 fax
[email protected]

*We thank the Center for California Studies (CENTER) at Sacramento State for the provision of a California State University Faculty Fellows Grant that allowed us to study this topic at the request of the California Senate's Office of Research (SOR). The opinions and findings given here are only ours and do not necessarily represent those of the CENTER or SOR. We also appreciate insights from Ted Lascher and from participants in the Sacramento State College of Social Sciences and Interdisciplinary Studies' Seminar where we presented an earlier draft of this work. We are also grateful for the research assistance in data gathering by Tyler Johnstone, a student in the Sacramento State Master's Program in Public Policy and Administration.

The Impact of Learning Time on Academic Achievement

ABSTRACT

As schools aim to raise student academic achievement levels and districts wrangle with decreased funding, it is essential to understand the impact that learning time has on academic achievement. Using regression analysis and a data set drawn from California's elementary school sites, we find a statistically significant and positive relationship between the number of instructional minutes in an academic year and school-site standardized test scores. More specifically, about 15 more minutes of school a day (or about an additional week of classes over an academic year) relates to an increase in average overall academic achievement of about 1.0 percent, and a 1.5 percent increase in average achievement for disadvantaged students, even after controlling for student and school characteristics. This same increase in learning time yields an expected 37 percent gain in the average growth of socioeconomically disadvantaged achievement from the previous academic year. To place this impact in the context of other influences found important to academic achievement, a similar increase in achievement occurs only with an increase in fully credentialed teachers of nearly seven percentage points. These findings offer guidance regarding the use of extended learning time to increase academic performance. Moreover, they also suggest caution in reducing instructional time as the default approach to managing fiscal challenges.

INTRODUCTION

Given the continued focus on the academic underperformance of primary and secondary public school students in the United States, policymakers continue to explore interventions to raise such performance. Educational leaders often recommend the use of extended learning time (ELT) as such an intervention. President Obama's Education Secretary Duncan expressed support for the use of federal stimulus funds for ELT in public schools (Wolfe, 2009).
In addition, many educational reform organizations and think tanks have heavily promoted such an option (for examples see Aronson, Zimmerman, & Carlos, 1999; Farbman & Kaplan, 2005; Little, Wimer, & Weiss, 2008; Pennington, 2006; Princiotta & Fortune, 2009; Rocha, 2007; Stonehill et al., 2009).

While conventional wisdom may expect a positive relationship between additional hours in the classroom and higher standardized test scores, the scholarly evidence from empirical research on this subject is relatively thin. Voluntary after-school programs are frequently cited as evidence that extending the learning day raises participants' academic performance (Farbman & Kaplan, 2005; Farland, 1998; Learning Point Associates, 2006). However, the success of after-school programs for only those who volunteer to participate does not necessarily support the mandatory extension of the school day as a policy to raise all students' test scores. Worth noting is that little of the existing research has focused on a broad range of schools that exhibit the type of socioeconomic diversity present in many public schools in the United States. This is important due to the documented challenges that such diversity presents to raising the overall academic performance of students. Furthermore, school districts struggling to balance budgets during times of fiscal stress, and contemplating a decrease in teaching hours as a way to do it, need to understand the impact of this strategy on academic outcomes. Especially helpful would be a comparison of the effect of an expected reduction in learning time with the effect calculated for an alternative reduction in other inputs into a school site's academic outcomes.

The State of California offers a contemporary example. As part of the fiscal year 2011-12 state budget agreed upon by California's Governor Brown and the state's Legislature, a budgetary trigger was set in the agreement that if $4 billion in anticipated revenues does not materialize in January 2012, mandated cuts in the current budget year's expenditure go into place. One of the proposed cuts is a reduction of $1.5 billion in state support for K-12 public education, made up through seven fewer classroom instructional days (see ). Such a reduction would be over and above the decrease from 180 to 175 school days allowed by California legislation in 2008, and that most of its school districts had implemented by 2010 to offset continuing imbalances in their budgets (see crisis ). So what exactly would it mean for the achievement of learning outcomes if California – or for that matter, any state – reduced its required public school days by seven percent (down to 168 days from a previously required amount of 180 in 2008)? The current literature on this topic is unable to offer a reliable prediction.

Accordingly, we provide an empirical examination of how differences in classroom time at a sample of public elementary school sites affect measures of average standardized test scores recorded at these sites. We measure this impact through a statistical method (regression analysis) that allows us to control for other explanatory factors besides learning time that may cause differences in observed standardized test scores. Our results offer a way to estimate the effectiveness of extended learning time as a strategy to improve student achievement and close the achievement gap. More relevant to current challenges, these results also enable us to predict how student achievement would change if learning time decreases.

Next, we review the relevant literature that seeks to understand how learning time influences academic achievement. Following that, we describe the theory, methods, and data that we use for our empirical examination. Then we share the results of the regression analysis, focusing on the impact of extended learning time on academic achievement. The final section concludes with a discussion of the implications for policy and practice.

LITERATURE REVIEW

Using the economic logic of a production process, the more time spent producing something (holding the other inputs into production constant), the greater should be the quantity and/or quality of the output produced. Employing such reasoning, conventional wisdom among many policymakers is that increasing the time students spend learning offers a simple and obvious way to improve educational outcomes. However, a search of the previous literature on the relationship between learning time and learning outcomes yielded little research that rigorously tests this conventional wisdom. Previous research did consistently indicate that the more time students spend engaged in learning, the higher the expected levels of academic outcomes (Borg, 1980; Brown & Saks, 1986; Cotton & Savard, 1981).
Yet the relationship between just the time allocated to learning and student academic outcomes – without controls for the effective use of that time – remains unclear. This lack of clarity results from missing or insufficient controls for selection bias and other confounding factors, thereby making causal conclusions from the existing literature on this subject tenuous. We offer next a review of previous research that aimed to assess how an increased allocation of time devoted to learning affects measures of academic achievement.

Our literature review begins with a description of a meta-analysis whose findings summarize much of the literature in the field. Next, we report upon two studies that have done well in their attempts to deal with these methodological concerns. Later we review a few studies whose reported findings we are less confident in due to methodological concerns.

In a recent meta-analysis, Lauer et al. (2006) reviewed 35 different post-1985 studies that focused on whether the voluntary attendance of after-school programs by at-risk students raised their academic achievement relative to a non-attending control group. They found that such studies generally offer statistically significant, but small in magnitude, effects of these programs on the math and reading achievement of at-risk students. For the impact on reading, students who participated in the after-school programs outperformed those who did not by 0.05 of a standard deviation from the mean for the fixed-effects model, and 0.13 standard deviations for the random-effects model. For the impact on mathematics, students who participated in the after-school programs outperformed those who did not by 0.09 standard deviations for the fixed-effects model, and 0.17 standard deviations for the random-effects model.

The Lauer et al. (2006) findings offered a general representation of the results reported in nearly all the empirical studies we reviewed. In short, voluntary extended learning programs tended to exert only a small (if any) impact on the measured academic achievement of those participating in them.
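The fixed- and random-effects pooling behind effect sizes like these can be sketched with generic inverse-variance weighting. The study-level effects and variances below are invented for illustration; they are not Lauer et al.'s actual data:

```python
import numpy as np

def pool_effects(effects, variances, random_effects=False):
    """Inverse-variance pooled effect size for a meta-analysis.

    Fixed-effects model: each study is weighted by 1 / v_i.
    Random-effects model: a between-study variance estimate
    (DerSimonian-Laird tau^2) is added to each study's variance
    before weighting, which pulls weights closer to equal.
    """
    d = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    if random_effects:
        d_fixed = np.sum(w * d) / np.sum(w)
        q = np.sum(w * (d - d_fixed) ** 2)            # Cochran's Q statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(d) - 1)) / c)       # DerSimonian-Laird tau^2
        w = 1.0 / (v + tau2)
    return np.sum(w * d) / np.sum(w)

# Hypothetical study effects (in standard-deviation units) and variances
fixed = pool_effects([0.02, 0.10, 0.25], [0.01, 0.04, 0.09])
random = pool_effects([0.02, 0.10, 0.25], [0.01, 0.04, 0.09], random_effects=True)
```

Because the fixed-effects model weights precise (large) studies most heavily while the random-effects model spreads weight more evenly, the two pooled estimates can differ noticeably, as in the reading results above (0.05 versus 0.13).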
Such findings make it difficult to predict whether any change in the amount of learning time at a school site would have a measurable impact on the academic outcomes of students at the site. We are also hesitant to place a great deal of confidence in these findings due to methodological concerns present in many of these studies. These concerns include the voluntary, and small in scale, nature of the ELT programs observed, and inadequate controls for other factors besides learning time that drive differences in academic performance. The likely result of using data generated from participants who voluntarily decided to extend their learning time is the inherent "selection bias" of attracting higher achieving (or perhaps more driven to succeed) students to participate in ELT programs. This results in uncertainty as to whether their observed higher achievement after the ELT program is due to the program itself, or to non-measured personal characteristics that caused students to enroll voluntarily in the program.

Dynarski et al. (2004) offered an experimental (and a quasi-experimental) evaluation of the 21st Century Learning Centers Program. This large and federally funded program provided extended learning opportunities to students, with the aims of improving academic outcomes and offering non-academic enrichment activities. The authors' use of an experimental design to assess effectiveness offered a reasonable way to control for the selection bias of those who voluntarily participated in such a program being, on average, more engaged in learning than those who did not. Dynarski and colleagues were able to use an experimental design and address the problem of selection bias through an unplanned oversubscription to the program, which allowed a random assignment of those wanting to participate as the actual participants. The comparison was then between this treatment group and those who wanted to participate, but for whom a spot was not available. Accordingly, the authors' findings only allow us to draw inferences about students who wanted to participate in such a program.

Furthermore, the Dynarski et al. study compared the treatment and control groups to see if they were similar in other characteristics.
The groups were not significantly different in gender, race/ethnicity, grade level, mother's age, academic traits, or disciplinary traits (with the one exception that the elementary school control sample was less likely to do homework). For elementary school students, the evaluation found no significantly discernible influence on reading test scores or grades in math, English, science, or social studies between those enrolled in the 21st Century Learning Centers Program and the control group that was not. The authors also examined middle school students, but without a randomly assigned control group. Instead, they used a rebalanced sample based on propensity score matching – matching each participant to a non-participant based on how alike they are. The treatment and control groups were similar for all characteristics, except that the treatment group had lower grades, less-regular homework habits, more discipline problems, and felt less safe in school than the control group. For middle school students, there were again few differences in academic achievement between the extended-learning treatment and control groups. For both elementary and middle school students, across research designs, Dynarski et al. found little effect of the after-school program on students' academic achievement.

Alternatively, Pittman, Cox, and Burchfiel (1986) utilized exogenous variation in the school year to analyze the relationship between school year length and student performance. Such exogenous variation arose when severe weather led to schools closing for a month in several counties in North Carolina during the 1976-77 academic year. During that academic year, students took their standardized test after missing, on average, 20 days of school. The authors made year-to-year and within-grade comparisons of individual student test scores for both before and after the shortened school year. Cross-sectional and longitudinal analyses also studied two cohorts of students impacted by the weather. Pittman, Cox, and Burchfiel reported no statistically significant differences between the academic performance of students in the shortened school year in comparison to other non-shortened years.
However, teachers reported that students were more motivated in the year with severe weather, which may have led to increased active learning time in school.

Vandell, Reisner, and Pierce (2007) sought to evaluate the impact of only "high quality" after-school programs on academic and behavioral outcomes. The researchers whittled down a list of 200 programs to just 35 programs that they deemed as offering "evidence of supportive relationships between staff and child participants and among participants, and on evidence of rich and varied academic support, recreation, arts opportunities, and other enrichment activities" (p. 2). The 35 programs studied were free, offered programming four to five days each week, had strong partnerships with community-based organizations, and served at least 30 students who were largely minority, low-income students in high-poverty neighborhoods. The evaluation of 2,914 students occurred over a two-year period. Only 80 percent of the elementary school sample and 76 percent of the middle school sample remained at the end of the second year of the survey. It is not clearly stated how the control group was chosen, and the authors do not compare the groups to ensure that they are similar.

To evaluate the impact of the after-school programs, Vandell, Reisner, and Pierce used two-level (student and school) random-intercept hierarchical linear models (HLM), a form of regression analysis. HLM is useful when the researcher nests the unit of analysis (in this case, a student) in groups (in this case, a school) that are not independent. The authors analyzed elementary and middle school students separately and controlled for a number of background characteristics, including family income and structure, and mother's educational attainment.
They found that elementary school students who participated regularly over the two years of the study increased their percentile placement on math test scores by 12 to 20 points (depending on the model) as compared to those who spent their after-school hours unsupervised, while middle school students who participated regularly improved their math test score percentile placement by 12 points over those who spent their after-school hours unsupervised.

Vandell, Reisner, and Pierce (2007) thus found large, positive impacts of high quality after-school programming. Their focus on only high quality programs was unique and clarified that only the "best" of the programs may have an impact. However, as noted previously, the issue of selection bias was again present in this evaluation. Students who chose to participate in an after-school program are likely very different from those who chose not to do so. The authors did not discuss this issue, nor did the discussion of their model leave the reader feeling that their methods adequately adjusted for these differences. What we can confidently conclude from this study is that students who choose to participate in a high quality after-school program, and do so regularly, have better outcomes than students who do not. We cannot say with any certainty that such cream-of-the-crop after-school programs would have the same measured positive academic effects on other types of students.

In another study, Farmer-Hinton (2002) examined a mandatory, two-hour, after-school remediation program and found that after one year (approximately one month more of learning compared to non-participants), participants had increased math and reading achievement. The author used HLM and controlled for individual and institutional factors to isolate the impact of the after-school program. These controls included student retention, race, gender, and family income, as well as school-wide student mobility, percent African American, and percent in poverty. The use of such a model allowed the researcher to be more rigorous in assessing causality, but key controls like parental education are still absent. Of further concern is the fact that funds to support the after-school program were awarded competitively. Unfortunately, Farmer-Hinton offered no discussion of the selection criteria used. This competitive process introduced bias into her findings in at least two ways. First, school principals who applied for the funds are likely more shrewd about getting extra resources for their school. Such shrewdness may translate into other ways they found to increase student achievement.
Second, the district could have chosen the school sites that received funds based upon some trait indicating they would be able to garner greater gains from implementing the program.
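The two-level random-intercept HLM used in studies like Farmer-Hinton's and Vandell et al.'s can be sketched with statsmodels' MixedLM. All variable names, effect sizes, and data below are invented for illustration; this is not a reconstruction of either study's model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, per_school = 20, 30
n = n_schools * per_school

school = np.repeat(np.arange(n_schools), per_school)
school_effect = rng.normal(0.0, 5.0, n_schools)[school]  # shared shift per school
treat = rng.integers(0, 2, n)                            # program-participation flag
income = rng.normal(50.0, 10.0, n)                       # family-income control
score = (600.0 + 8.0 * treat + 0.5 * income              # true treat effect = 8
         + school_effect + rng.normal(0.0, 10.0, n))

df = pd.DataFrame({"score": score, "treat": treat,
                   "income": income, "school": school})

# Two-level random-intercept model: students (level 1) nested in schools (level 2)
fit = smf.mixedlm("score ~ treat + income", df, groups=df["school"]).fit()
print(fit.params["treat"])
```

The `groups` argument is what makes this hierarchical: each school receives its own random intercept, so the standard error on `treat` accounts for students within a school not being independent observations.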

Frazier and Morrison (1998) examined kindergarteners and found that those in a 210-day extended school year exhibited better beginning-of-first-grade outcomes in reading, math, general knowledge, and perceived competence than kindergartners enrolled in only a 180-day traditional school year. The study used both raw scores and growth rates to measure these academic outcomes, but failed to explain how to interpret both of these metrics. Kindergarteners enrolled in the extended school year were matched with kindergarteners enrolled in traditional school years based on background characteristics and magnet school attendance. While the matched groups look largely the same, one cohort of the extended-year students had mothers with statistically significantly more education and greater employment levels than their matched traditional-year peers. Given this, controlling for these variables would have made sense when analyzing differences in outcomes, but the authors simply compared means and score changes with one-way analysis of variance (ANOVA).

Another study, by Hough and Bryde (1996), matched six full-day kindergarten programs with similar half-day kindergarten programs based on location, school size, and student characteristics. The authors then used ANOVA to compare the outcomes of full- and half-day programs and found that full-day students outperformed half-day students on most outcomes. However, the size of the performance difference between full-day and half-day kindergarten students was not clear, as the authors did not interpret the metrics used to evaluate achievement.
Moreover, the authors could have strengthened causal claims by controlling for school, class, student, and family characteristics known to confound the relationship between outcomes and full-day enrollment.

The methodological and data problems in prior studies of the relationship between learning time and academic outcomes, and the inconsistent findings reported from them, clearly indicate a need for further research on this topic. Next, we describe the theory, methodology, and data used in our regression estimation of the influence of learning time on academic achievement.

METHODOLOGY AND DATA

Methodology

We situate our research firmly within the large number of empirical studies that already exist on the causal links between school inputs and academic performance produced at a school site. The consensus among these production-based studies is that student and social inputs (largely out of the control of educators and policymakers) explain more than half of the variation in school scores (Hanushek, 1986 and 2010).

Accordingly, we focus here on how the inputs that a school site has control over (including instructional time) contribute to its academic performance. We concentrate on the effect of differences in learning time at California elementary public school sites (in the form of regular academic hours) in academic year 2005-06 on differences in standardized test performance. The statewide-collected Academic Performance Index (API) measures academic performance at a California elementary school site based on a state-specified compilation of standardized test scores. In California, a school site's API ranges from a low of 200 to a high of 1,000, with a score of 800 considered proficient. A further description and details on the API calculation for the year used (2005-06) in this study is de05b.pdf .

We assess the influences of inputs into academic output as measured by both a school site's overall API score (base) and the change in its API score from the previous academic year (growth). California reports these measures for all students at a school site, and for students within specific subgroups (Latino, African American, Asian, White, and Socioeconomically Disadvantaged) for which a significant number of a certain type attends a school site. Though we examined the influence of learning time on all these groups, we only report regression results for the one subgroup (Socioeconomically Disadvantaged) on which learning time exerted a statistically significant influence.

Following Fisher (2007, Chapter 13), we divide the inputs expected to exert an influence into student, social, and school categories. Thus, we model the production of an average standardized API score at school site "i" as:

API_i, API Growth_i, Socioeconomically Disadvantaged API_i, or Socioeconomically Disadvantaged API Growth_i
    = f(Student Inputs_i, Social Inputs_i, School Inputs_i),

where

Student Inputs_i = f(Percentage Students African American_i, Percentage Students Asian American_i, Percentage Students Latino_i),

Social Inputs_i = f(Percentage Students Reduced Price Meals_i, Percentage Students Gifted and Talented_i, Percentage Migrant Education Program_i, Percentage Students English Lang Learners_i, Percentage Parents College Educated_i, Percentage Parents Grad School Educated_i, Percentage Parents Survey Response_i),

School Inputs_i = f(Academic Year Teaching Minutes_i, Dummy Year Round Calendar_i, GradeKto3 Average Class Size_i, Grade4to6 Average Class Size_i, Percentage Teachers Full Credential_i, Percentage District Budget to Teachers_i, Enrollment_i).

We realize that the inputs placed in the student and social categories are interchangeable based upon the perspective taken. We base our placement on the viewpoint that student inputs are ones that are inherent to students (race and ethnicity) and unchanged by a student's social environment. We are limited in the number of control variables we can actually measure based on what data are publicly available. That said, the specific ones chosen control for a number of student, social, and school inputs that determine differences in average standardized test scores. Thus, we are optimistic that this model will allow us to capture the independent influence of Academic Year Teaching Minutes on average standardized test scores at a California public elementary school site. We estimate the above model using regression analysis, which allows the calculation of regression coefficients for each explanatory variable.
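An estimation of this kind can be sketched as follows. The data here are synthetic and the variable names are hypothetical shorthand for a few of the inputs listed above; this illustrates the technique (OLS with heteroskedasticity-robust standard errors), not the paper's actual results:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a school-site sample of 310 observations
rng = np.random.default_rng(42)
n = 310
minutes = rng.normal(54000, 2000, n)        # Academic Year Teaching Minutes
pct_credential = rng.uniform(70, 100, n)    # Percentage Teachers Full Credential
pct_meals = rng.uniform(0, 100, n)          # Percentage Students Reduced Price Meals
api = (500 + 0.002 * minutes + 0.8 * pct_credential
       - 1.2 * pct_meals + rng.normal(0, 20, n))

df = pd.DataFrame({"api": api, "minutes": minutes,
                   "pct_credential": pct_credential, "pct_meals": pct_meals})

# OLS with heteroskedasticity-robust (HC1) standard errors
fit = smf.ols("api ~ minutes + pct_credential + pct_meals", df).fit(cov_type="HC1")
print(fit.params["minutes"])   # expected API change per additional teaching minute
```

The coefficient on `minutes` is read exactly as described in the text: the expected change in the API score from one more teaching minute in the academic year, holding the other explanatory variables constant.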
If deemed statistically significant, a regression coefficient measures the expected impact of a one-unit change in an explanatory variable on the dependent variable, holding all other explanatory variables constant. The standard errors calculated for the regression coefficients in this analysis are robust to heteroskedasticity concerns that are likely to be present. We offer next a description of how and where the data were gathered for the regression analysis, and descriptive statistics for all variables used in it.

Data

Our study was constrained by the limited amount of information collected on the number of school minutes in an academic year at a California public elementary school site. We were frankly surprised to learn that in California, and even throughout the United States, information on public school learning time is rarely collected. In California, a statewide attempt to assemble data on learning minutes (as measured by "allocated class time") at the state's school sites was last made for the 2005-06 academic year, as a required element in data submitted to the California Department of Education as part of its School Accountability Report Card Program (for a description see ). We were further surprised to learn that the required reporting of these data was weakly enforced, and the data are therefore not available for all the state's school sites. This is the case even though, for the past 13 years, it has been a requirement for all public schools to complete and publish their School Accountability Report Card.

We assembled our sample of California school sites by first gathering a list of the 5,087 elementary schools in California that existed in academic year 2005-06 and had greater than 500 students enrolled. With a desire to gather a random sample of more than 500 of these schools to guarantee an adequate number of degrees of freedom in our analyses, we then sorted them in order of enrollment and chose every ninth school.
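The systematic (every-ninth-school) sampling step can be sketched as follows. The `(school_id, enrollment)` pairs are hypothetical stand-ins for the real list:

```python
import random

# Hypothetical list of 5,087 elementary schools with enrollments above 500
random.seed(1)
schools = [(sid, random.randint(501, 1200)) for sid in range(5087)]

# Sort by enrollment, then take the 9th, 18th, 27th, ... schools
by_enrollment = sorted(schools, key=lambda s: s[1])
sample = by_enrollment[8::9]

print(len(sample))   # 565
```

Sorting by enrollment before stepping through the list spreads the sample evenly across school sizes, which is what makes the approach "quasi-random" rather than a simple random draw.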
This resulted in a shortened list containing 565 sites. This list fell to 546 due to some sites not reporting standardized test scores in the desired years. We then contacted the school district offices for each of these 546 sites to see if they could provide 2005-06 Academic Year Teaching Minutes. Only 166 (or about 30 percent) of these sites had collected the desired information. Because we deemed 166 too small a sample size, we then went back to the same school districts that we knew had data on school-site instructional minutes for a portion of the original 565 sites. This second effort resulted in a final sample of 310 California school sites for which we had 2005-06 instructional data. For these 310 sites, we next collected the other needed dependent and explanatory variables. We believe this quasi-random approach to gathering the data sample helped us to minimize selection bias. Furthermore, with the exception of some explanatory variables losing their statistical significance due to a smaller sample size, regressions run using only the initial purely random sample of 166 school sites yielded results essentially similar to the results we report for all 310 sites.

The four dependent variables used in this study measure: (1) the academic performance of all students at a school site, (2) the average academic performance of those defined as "socioeconomically disadvantaged" by the California Department of Education (having both parents without a high school degree and/or the student receiving a reduced-price or free lunch), and (3 and 4) the change in each of these measures from the previous year. Data on these are all from the 2005-06 academic year. We also tried other group-specific (African American, Asian American, and Latino) API base and growth scores for California school sites, but found that Academic Year Teaching Minutes never exerted a statistically significant influence on them.

Student input control variables include the percentages of students in the school who were African American, Asian American, and Latino. These account for the three major racial/ethnic minority groups in California.
Since whites and all other non-white groups are unaccounted for, the regression coefficients on these explanatory variables represent the expected effect of substituting one percent of a site's student population falling into the excluded category with the
