have been moving toward an 'all-Regents'¹ secondary program on their own initiative, perhaps in anticipation of the requirements now being promulgated by the Regents. The study is organized around three primary questions:

1. What explains the willingness or ability of school districts to increase Regents achievement examination participation rates?
2. What have been the effects on student performance?
3. What have been the changes in resource allocation behavior?

We turn next to a description of the data and methods we employed and then to an overview of our findings. We conclude with a discussion about implications for policy.
2. Data and methods
2.1. Data

Our findings are based on two complementary sources of data. We began by taking advantage of data that are routinely collected by the State Education Department. These data include information about the Regents testing program in addition to resource allocation information that can be gleaned from the State's Basic Education Data System. We are indebted to members of the staff at the State Education Department for the assistance we received in gaining access to these data.²

We have been fortunate to be able to complement these statewide data with the results of a series of case studies that were conducted during the summer of 1997 at ten sites around the State where significant efforts have been made to move the schools toward an 'all-Regents' program. The site visits were conducted by staff from several BOCES under the leadership of the District Superintendent of the Otsego Northern Catskills (ONC) BOCES.³
¹ Hereafter, we shall use the term 'all-Regents' to describe instances where substantive increases have occurred in the percentage of students taking Regents achievement examinations. We shall also use the term 'Regents exam' to refer to the content-based achievement exams that historically have been taken by college-bound students in New York State.

² We are particularly grateful for the assistance provided by Nick Argyros, Mark Barth, Sam Corsi, George Cronk, Diane Hutchinson, and Edward Lalor.

³ The following individuals participated in the development and conduct of the case study research: Nick Argyros, Mark Barth, John Bishop, Jim Carter, Jim Collins, Judy Fink, Marla Gardner, John Grant, Samid Hussain, William Miles, David Monk, Edward Moore, Joan Moriarty, and Marie Warchol.
2.2. Statewide data and analysis

We were particularly interested in measures of participation by students in the Regents achievement examination program. The State Education Department maintains a database that includes measures of the percentage of students writing each of the achievement examinations. These examinations are taken at different points during the year (the exams are most typically administered in June of each year, but some are also offered in January and, in some cases, August), and students take them at different points in their respective high school programs (e.g., some students will take the Course I mathematics exam at the end of 8th grade while others will do so at the end of 9th grade).
We measured the level of participation in the Regents exam program for each district by averaging the exam-specific participation rates for the English, Course I mathematics, Global Studies, and US History examinations. The base pupil count we used for the calculation of these percentages was the average enrolment statistic for grades 9–12 that is maintained by the State Education Department. It is possible that these averages mask important differences across the exams, but we believe that the average figure provides the best available overall index of the degree to which a district has moved forward with an 'all-Regents' approach to its secondary program.
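To fix ideas, the participation index for district d can be written as (the notation is introduced here only for exposition)

P_d = \frac{1}{4} \sum_{e \in \{\text{English},\ \text{Course I math},\ \text{Global Studies},\ \text{US History}\}} \frac{T_{d,e}}{E_d},

where T_{d,e} is the number of students in district d writing exam e and E_d is the district's average grade 9–12 enrolment.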
With participation as our starting point, we moved in two directions to identify additional variables of interest for our analyses. On the one hand, we sought insight into the antecedents of changes in participation rates. We were particularly interested in knowing more about what gives rise to a given district's inclination to increase its percentage of students participating in the Regents testing program. Toward this end, we identified a series of district background characteristics that we believe could have bearing on student participation rates. These included: district type (i.e., urban, suburban, or rural), full value property wealth per pupil, the incidence of poverty as measured by the presence of students in the free and reduced-price lunch program, and district size as measured by enrolment. Given the longitudinal nature of the inquiry, we also became interested in the effects of changes that have occurred in these structural characteristics over the period we studied (1992–96).
On the other hand, we were interested in learning more about the effects of changes in participation rates
on various phenomena including measures of pupil per- formance and district resource allocation behaviors. The
desire to move in this direction prompted us to collect data about pupil performance on Regents exams, drop-
out rates, district spending levels per pupil, and district professional staffing levels on a subject specific basis.
Unfortunately, the only Regents test score perform- ance data that were available to us for the entire State
are measures of the percentages of test takers passing each exam. It would have been much preferable to have averages of students' scores, but these data are not maintained by the State. Our student test score performance measure is the change in the percentage of test takers who passed the Regents exams. The statistic we relied upon is an average of passing rates for the four Regents examinations that we have been considering: Course I mathematics, English, Global Studies, and US History. For our drop-out statistic, we relied upon calculations from the State Education Department.
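In the same expository notation, the performance measure is the change in the average passing rate,

\Delta R_d = \frac{1}{4} \sum_{e=1}^{4} \left( R_{d,e}^{1996} - R_{d,e}^{1992} \right),

where R_{d,e}^{t} is the percentage of test takers in district d who passed exam e in year t.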
District spending levels were obtained from the School Financial Master File (SFMAST) data files, and subject-specific professional staffing levels were obtained from the Institutional Master File (IMF) and the Personnel Master File (PMF) of the Basic Education Data System (BEDS) that is collected and maintained by the State Education Department. Again, because of the longitudinal nature of the inquiry, we collected these data for the 1992–93 as well as for the 1996–97 school years. Several of our variables are simple difference scores that are based on data drawn from these two years.
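That is, for any district characteristic x (a generic placeholder rather than a variable name drawn from the data files), the difference score takes the form

\Delta x_d = x_d^{1996\text{--}97} - x_d^{1992\text{--}93}.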
We needed to make decisions about how to treat the Big 5 city districts (New York City, Buffalo, Syracuse, Rochester, and Yonkers) in these analyses. Because of the unique features of the Big 5 districts, we treated them separately from the other districts in our sample and did not include them in our regression analyses.
Our analysis of the statewide data relies heavily on a series of multiple regression models using ordinary least squares (OLS) and weighted least squares (WLS) estimating techniques.⁴ The WLS models are weighted by grade 8–12 enrolments, as it is the students enrolled in grades 8–12 who are most likely to be affected by participation in the Regents examinations.⁵ In general, we find the effects of most of the predictor variables to be fairly consistent across the two estimation methods.

We report our results step by step in the order suggested by the questions we used to guide the analysis. We begin with an analysis of relationships between district background structural characteristics and the degree of student participation in the Regents achievement examination program. We next turn to a series of analyses that give information about the effects of increases in participation on student performance, both in terms of test score results and decisions about persistence in school. We then look to see if we can identify effects of increases in participation on resource allocation practices by school districts. We are particularly interested in both changes in overall spending and staffing levels and shifts in how existing resources have been allocated.

⁴ To control for the simultaneity bias that arises from the endogeneity of the change-in-participation variable, we initially planned to instrument the change in participation between 1992 and 1996 with measures of districts' preferences, lagged changes in participation, and administrative turnover. However, these measures were not available to us.

⁵ The results are similar when we weight by total district enrolment.
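To make the estimation strategy concrete, a stylized version of these models can be written as (a sketch only; the exact set of regressors varies across the specifications reported below)

y_d = \beta_0 + \mathbf{x}_d' \boldsymbol{\beta} + \varepsilon_d,

estimated by OLS and, alternatively, by WLS with weights w_d equal to district d's grade 8–12 enrolment, i.e. by minimizing \sum_d w_d \left( y_d - \beta_0 - \mathbf{x}_d' \boldsymbol{\beta} \right)^2. Here y_d denotes the outcome of interest (for example, the 1992–96 change in the participation rate or in the passing rate), and \mathbf{x}_d collects the district background characteristics and, where relevant, the change-in-participation variable.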
2.2.1. Case study data and analysis

One of the purposes of the case studies was to allow us to look more closely at the experiences of districts that had moved forward with an 'all-Regents' approach and to find out more about what the change meant in the lives of pupils, teachers, administrators, board members and taxpayers. In our selection of sites we sought places where there was a significant increase in the percentage of students taking the Regents exams. We defined 'significant' to mean a 25 percentage point increase in the percentage of students taking Regents exams between 1992 and 1996. We were also interested in places where the results in terms of performance looked encouraging. Specifically, we looked at what happened to the percentage of students passing these exams and focused our attention on sites where performance either held steady or increased. In other words, we were looking explicitly for districts that appeared to be having success with the all-Regents initiative. We were also interested in achieving a spread of districts across the different regions of the State. We made sure that we had representatives of different types of districts (urban, suburban, and rural).⁶

Finally, we compared the districts that we identified using these methods with a list of districts that were nominated by the District Superintendents of the State as good examples of 'all-Regents' programs. Where possible, we focused attention on the nominated districts.⁷
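Expressed in the measures introduced above, the primary screen we applied was, roughly,

P_d^{1996} - P_d^{1992} \geq 25 \text{ percentage points} \qquad \text{and} \qquad \Delta R_d \geq 0,

that is, a large jump in participation accompanied by a passing rate that held steady or improved.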
Table 1 provides a characterization of the ten sites. The site visits took place over one or two days each in the summer and early fall of 1997. Interviews were conducted with administrators, teachers, and, to a lesser degree, school board members and community leaders. Questions were asked about the antecedents of the increase in participation and the consequences for students, teachers and school officials, and taxpayers. The interviews were taped and the results were later compiled. In this paper, we will deal with findings that pertain to the decision to move forward with an 'all-Regents' program as well as with findings about changes in the use of resources.
⁶ Our desire to have all three types of districts represented within the sample forced us to accept one site that did not meet the 25 percentage point increase criterion. This was an urban district whose gain in participation was high relative to the experiences of other similar urban districts.

⁷ The District Superintendent nomination process was organized by District Superintendent James Carter in his capacity as chair of the District Superintendents' Committee on Standards.
Table 1
Characteristics of the 'all-Regents' case study sites

Region            Number of sites
Eastern                 2
Northern                2
Western                 2
Central                 1
Southern Hudson         2
Long Island             1
Total                  10

By district type, the ten sites comprise 2 urban, 6 suburban, and 2 rural districts.
3. Results