School Board Challenge: Find Just One Valid Independent Study by 1/31/06
In early January I received an email from one of the Alpine School District Board members:
Oak, You've already got your mind made up but if by chance you are struck with a perfectly good moment of open-mindedness, you could check out this web site: http://www.comap.com/elementary/projects/arc/tristate%20achievement%20full%20report.htm
Providing you with ALL information is helpful.
Why is it that you seem so destructive? I'm sure you will find teachers who have not appreciated the time it has taken to learn a new style of teaching math, but you will find no one who has been terminated because of it.
Every good teacher will take advantage of the option of giving each student what they need as far as the math goes. We still believe that standards-based math is the basis for the curriculum. By the way, the Governor wants, as part of his education agenda, a "deeper understanding of math." You can't get that with just drill. Happy new year to you! Sincerely, Name Withheld
Open-mindedness has been the quest of this author from the very beginning. Less than a year ago I was neutral on Investigations math, until I actually started doing research to learn the facts for myself. So I gladly took up the opportunity to read the COMAP study and then presented my findings at the January 2006 school board meeting.
Here's the glowing conclusion from the COMAP study:
“…The principal finding of the study is that the students in the NSF-funded reform curricula consistently outperformed the comparison students: All significant differences favored the reform students; no significant difference favored the comparison students. This result held across all tests, all grade levels, and all strands, regardless of SES and racial/ethnic identity. The data from this study show that these curricula improve student performance in all areas of elementary mathematics, including both basic skills and higher-level processes. Use of these curricula results in higher test scores.”
Unfortunately (for the board), the study had a tiny independence problem in that TERC (the group that created and publishes Investigations Math) was the financial backer of the group that created, administered, and published the study.
Dr. Jim Milgram of Stanford said this study is akin to the tobacco company studies of the '50s and '60s telling everyone, "smoking is safe, we've tested it."
In preparation for presenting the material to the board, I spent several hours researching and contacting people. One person I was directed to was Sandra Stotsky, who was the assistant commissioner of education in Massachusetts when this study was performed there in 2000. She wrote:
“I am aware of several major problems with the MA part of the study. (1) As the Executive Summary admits, mostly high-income "white" schools were using the "reform" programs in the MA grade 4 sample; (2) no information is given on the supplemental tutoring that exists in these suburban communities (a hard factor to get information on without labor-intensive exploration at each school); (3) no information is given about supplemental curriculum materials the teachers themselves may have used; all we are told is that the schools that were contacted said they fully used the reform program. I know that many teachers in these high-income schools use supplemental materials to make up for the "reform" programs' deficiencies; (4) no information is given on the amount of professional development the "reform" teachers had (a huge amount in all probability) in comparison to the teachers in the comparison group (if no new math program, no professional development); (5) no information is given on the amount of time spent on math in the reform schools compared to the comparison group (the "reform" programs require a lot more time per week than most schools had been allotting math for many years; for example, I discovered that one Newton elementary school with top scores was considered a model because it taught math one hour each day!); and, probably most important and relevant, (6) the MCAS grade 4 math test was originally designed with a great deal of advice from TERC. TERC also shaped the math standards in the 1995 standards document that were being assessed by this test in 2000 (this is acknowledged in the intro to that document). TERC's supporters (and EM supporters) were on the assessment advisory committees that made judgments about the test items and their weights for the math tests. It is well-known that the grade 4 test reflects "constructivist" teaching of math. In other words, the grade 4 test in MA in 2000 favored students using a "reform" program.”
I also contacted Dr. Jim Milgram at Stanford. He serves on NASA's Advisory Council and says that these programs have been around for decades; if they were effective, NASA, IBM, and others would be actively seeking out high schoolers who went through them. He also said it is generally acknowledged that no valid study has ever been performed showing the effectiveness of these programs. I will post his entire fascinating email at the bottom of this page, but first I want to give you the quotes from his review of Connected Math that I read to the board at the meeting. I gave each member of the board a copy of this document and asked them to read it. ftp://math.stanford.edu/pub/papers/milgram/reportoncmp.html
Page 1 of 22: “If one visits the web site of the program (Connected Math), http://www.math.msu.edu/cmp/Index.html, one finds two preprints, presumably using rigorous methodology and statistical analysis, that are advertised as showing the benefits of CMP. Unfortunately, as we see in the appendix to this report, both studies are fatally flawed and deceptively presented. Additionally, at the website one will find a strong endorsement of the program by the AAAS. They grade it as one of the most effective programs for teaching middle school mathematics. Unfortunately, this too must be taken with a grain of salt, as is also discussed in the appendix. In fact, it is generally acknowledged that there are no reputable studies showing that any of the NSF developed mathematics programs actually benefit students in testable ways.
Leaving aside these issues, we turn to the program itself.”
(Oak's comment: What a sense of humor he has! He then proceeds to closely examine the three grade levels of Connected Math and skillfully shows this curriculum to be one of the worst available.)
Page 2 of 22: “Overall, the program seems to be very incomplete, and I would judge that it is aimed at underachieving students rather than normal or higher achieving students. In itself this is not a problem unless, as is the case, the program is advertised as being designed for all students. In fact, as indicated, there is no reputable research at all which supports this.”
Page 20 of 22: “There is a second paper extolling CMP at the CMP website, by Reys et al. In this paper the statistics are done well but the "control group" is not realistic. The paper looks at three programs: CMP, another similar program, and a "control group" that consists of teachers who seem to share the same philosophy as the developers of CMP but are teaching without the assistance of any books or course materials. In other words the control group consists of teachers who are just winging it.
Unfortunately, this kind of statistical analysis, poorly done and misleading, appears to be very common in research on NSF funded programs, and the errors all seem to be in the direction most favorable to the programs.”
Besides presenting this material to the board, I challenged the board on three things:
1) Don't ever let someone from the district contradict me after I leave without telling me about it and giving me a chance to respond (this happened in either October or November).
2) The board has acknowledged that teachers may teach anywhere on the spectrum from Investigations to traditional math, wherever they think their students need to be, so now put your money where your mouth is and fund teacher choice of curriculum. (The statement giving teachers authority to teach without fear of reprimand was read at the November board meeting and was a direct result of my October visit to the board.) http://www.oaknorton.com/ASD%20November%20Board%20Meeting%20Statement.doc
3) Find JUST ONE VALID INDEPENDENT STUDY by the end of January showing that these constructivist programs are on par with or better than traditional programs. If they find one, I agreed to publish it on my website and email it to all of you. I didn't make them promise to drop the programs if they couldn't find one, but hopefully the search for an untainted study will make some people think about what's happening. Of course I'll keep you posted on this.
Below is the text of Dr. Milgram's email to me and then a final exchange with him.
Dr. James Milgram (PhD Math, Stanford University, Advisory Council for NASA): There were very good reasons that responsible researchers at the US Department of Education, in their study of US middle school math programs, who were very aware of the ARC study you refer to below, graded it methodologically unsound; see the discussion on the What Works Clearinghouse page http://www.whatworks.ed.gov. The very first problem I ran into came when I went to the ARC Center website and looked at the page entitled "About the ARC Center": http://www.comap.com/elementary/projects/arc/aboutarc.htm
What does one find? The center is entirely supported by the three programs it is testing. This matters because (one need only think of the tobacco studies of the '50s and '60s) one unfortunately cannot assume that research coming directly from parties with a financial interest in the outcomes is reliable.
Indeed, in the recent past I've looked closely at a number of reports by IMP and Core-Plus, and have found, when looking at the way they crunched data, that the methods were deliberately skewed to get the results they wanted.
To give you an idea of the seriousness of the problem in the education research community, there was a recent paper by a researcher affiliated with a major school of education that I was asked to look at. After a close study of the paper, I realized that the data had been essentially faked. This was a bit delicate, since the data quoted was mostly accurate, but the material was only selectively quoted (only those parts of the data that could be interpreted as supporting the author's position), and the tests constructed for the study had been deliberately written in such a way that the desired results would be achieved. (It has been observed that the only thing education research has really shown over all these years is that students are likely to learn what they are taught and not learn what they are not taught, whether what they are taught is correct and useful or not.) The university in question is currently taking account of my concerns and trying to deal with the situation.
The current reality is that, whatever the claims of improved test scores, the students coming out of our schools today simply do not match up to their competition. I was just appointed to NASA's Advisory Council (with people like Neil Armstrong, etc.) because, as they told me when they invited me onto the board, NASA simply can't hire engineers and scientists today since they have to hire US citizens. (My belief is, and it is supported by the work I did for the Smith-Richardson and Fordham Foundations on state math assessments, that the state assessments are so badly flawed that the numbers they give are essentially meaningless for any but the worst performing students.)
Companies like TI, Intel, and IBM have already made policy decisions to move all manufacturing and virtually all research out of the US (and out of Europe as well in some cases), and this is not to save money. I'm sure you are aware of the fact that GM and Ford are on the verge of bankruptcy, shutting down over 20 plants between them and letting go 30% of their US and Canadian workforces. The root causes are bad financial decisions based on the belief that they could effectively compete with foreign products. Again, it is not the costs so much as the fact that these foreign products are simply better.
The thing to keep in mind is that the programs you mention have been in place for a long time. TERC, Everyday Math, etc. all date from the late 1980s and have been extensively used since the early '90s. There has been plenty of time to see actual outcomes: young men and women in the workforce that employers are glad to hire, solid engineers, and competent young scientists. They simply are not there. I can't tell you the number of times I've heard your story. A district naively goes the "reform" route, and then 5 or 6 years later very privately drops those programs and tries to fix the resulting damage.
Fixing outcomes in mathematics is not that simple. The reality is that the teachers K-8 are functionally mathematically illiterate these days. (Not all of them, of course, but the vast majority.) What happens as a result is that they are not able to teach the subject reliably, regardless of the curriculum used.
Yours,
Jim Milgram
(For those who frown on the Fordham Foundation: I asked Dr. Milgram why it is viewed negatively by some in the education establishment. His reply was:)
“The head of the Fordham Foundation is Chester Finn, a former US Assistant Secretary of Education. His focus then and now has been on trying to get good outcomes out of the educational process, as he is very much aware of the extreme urgency of the situation. Yes, in a sense he can be regarded as extreme since he has been willing to criticize educational outcomes in this country, and the prevailing view currently seems to be ‘don't rock the money tree.’”
Yours,
Jim Milgram
Another follow-up email to Dr. Milgram:
Oak: In your prior email you said: "the state assessments are so badly flawed that the numbers they give are essentially meaningless for any but the worst performing students." Rather than rely on my own interpretation of this, will you please give me a short description of what you mean by saying that only the worst-performing students can have valid scores?
Thanks,
Oak
Dr. Milgram’s reply:
The average number of mathematically incorrect questions on these tests has been consistently in the 20-25% range, and this goes for NAEP as well. With a situation like this, only the kids who know absolutely nothing and get very poor scores will be measured more or less accurately. Students in the mid to high ranges could either know the incorrect things that are wanted on the incorrect questions and have little idea of actual mathematical content, or they could know well the math they should know, and they will essentially be guessing on the incorrect questions.
Of course, the kids who get virtually all the questions "right" are a very interesting subgroup. A few of them will actually know the math and understand the mindset that lies behind the incorrect questions as well. They could fully understand that they are giving incorrect answers to incorrect questions, but that the answers they give are what these silly adults want. And yes, I can assure you that a few kids like that exist.
Of course, others in this group will have simply studied very hard and learned the correct responses to the given stimuli in all cases. Probably this is the largest subgroup of the high scorers, but such students have absolutely no idea of the actual mathematics they are supposed to be learning.
Yours,
Jim Milgram
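(Oak's comment: To see why flawed questions blur the mid-to-high score range, here is a back-of-the-envelope sketch. This is my own illustration, not Dr. Milgram's; only the roughly 25% flawed-question rate comes from his email, and every other number here is a made-up assumption.)

```python
# A rough sketch of Dr. Milgram's point about flawed test questions.
# Assumptions (mine, not from the source): 25% of items are mathematically
# incorrect, and a guess matches the "wanted" answer 1 time in 4.
FLAWED = 0.25          # assumed fraction of mathematically incorrect items
SOUND = 1 - FLAWED     # fraction of sound items
GUESS = 0.25           # assumed chance of guessing the "wanted" answer

def expected_score(p_sound_correct, p_flawed_match):
    """Expected fraction of items marked 'right', given the probability of
    answering a sound item correctly and of matching the wanted (incorrect)
    answer on a flawed item."""
    return SOUND * p_sound_correct + FLAWED * p_flawed_match

# Student A: solid mathematics, no coaching -- gets every sound item right
# but can only guess at what the flawed items "want".
student_a = expected_score(1.0, GUESS)

# Student B: drilled on the test's expected responses -- matches the wanted
# answer on every flawed item, but (in this made-up scenario) pattern
# matching carries them through only 75% of the sound items.
student_b = expected_score(0.75, 1.0)

print(student_a, student_b)  # both come out to 0.8125: identical scores,
                             # very different mathematical knowledge
```

With these made-up numbers, the two students are indistinguishable by score alone, which is the sense in which mid-to-high scores stop measuring actual mathematical knowledge.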