Culturally Responsive Assessment Archive 2012
James Hiramoto, Ph.D.
James earned his MA and PhD in Educational Psychology from UC Berkeley's School Psychology Program. He has 17 years of experience as a school psychologist. He advises and provides trainings for superintendents, school administrators, teachers, and special education staff. He has over 8 years of experience as a university professor and director, training school psychologists at the master's and doctoral levels. His areas of expertise align with the subjects he teaches and presents at state and international conferences. These areas include cognitive ability, neuropsychological, alternative, and culturally responsive assessment; crisis planning, management, and intervention; educational research methodology and statistics; program evaluation; consultation; and special education law.
Click a topic below to expand the full question and answer.
Reliability and Validity of Tests
My child’s special education assessment report says, “The tests used to assess your child have been determined to provide reliable and valid results for the purposes for which they have been used.” What do they mean by this and who is making this determination?
Thank you for your question. Variations of the statement you quote are found in (or should be found in) all assessment reports for special education purposes.
Your question has two parts, so first let’s look at Part 1: “What do they mean by this?” Here are a few definitions to help answer your question.
Warning! Statistics Ahead!
Reliability refers to a test's consistency. A ruler is reliable because it gives us the same units of measurement every time we use it. That is, an inch is always an inch. If our ruler were not reliable, comparisons between measured items would be meaningless.
Validity has to do with accuracy, not consistency. Validity isn't a simple yes-or-no answer; it is a measure of how close a test comes to measuring what it intends to measure. Reliability is necessary for validity, but reliability alone does not make a test valid. That is, a broken clock is 100% reliable because it always gives you the same measurement (consistency). However, a broken clock is valid only twice a day.
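The broken-clock distinction between consistency and accuracy can be sketched in a few lines of code. This is only an illustration with made-up numbers, not part of any assessment procedure:

```python
# A sketch of reliability (consistency) vs. validity (accuracy).
# All numbers below are hypothetical, chosen only to illustrate the idea.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# A broken clock: perfectly consistent readings...
broken_clock = [3.00, 3.00, 3.00, 3.00]   # it always reads 3:00
true_times   = [1.15, 4.30, 8.45, 11.00]  # what a working clock would show

# Reliability: how little the readings vary across repeated "measurements".
spread = max(broken_clock) - min(broken_clock)
print("Spread of readings:", spread)  # 0.0 -> perfectly consistent (reliable)

# Validity: how close the readings come to the truth.
avg_error = mean([abs(reading - truth)
                  for reading, truth in zip(broken_clock, true_times)])
print("Average error:", avg_error)    # large -> consistent but not accurate
```

The broken clock scores perfectly on consistency (zero spread) yet badly on accuracy, which is exactly why reliability alone is not enough to make a test valid.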
In the case of cognitive ability testing (aka intelligence testing), test designers want their tests to be both reliable and valid. The test makers do their best to ensure that, under the same conditions, a person's score does not vary much from one administration to the next (test-retest reliability). In addition, they compare how the same individuals perform on other tests of cognitive ability (construct validity). Finally, they compare their cognitive ability results to tests of academic achievement for predictive validity.
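Both test-retest reliability and predictive validity are commonly reported as correlation coefficients. Here is a minimal sketch of that calculation, assuming invented scores for five hypothetical students; real coefficients come from the large norming samples described in a test's Technical Manual:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient: the usual statistic behind
    reliability and validity coefficients reported in test manuals."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores (not from any real instrument).
iq_first    = [95, 102, 110, 88, 120]   # first administration
iq_retest   = [97, 100, 112, 90, 118]   # same test, later administration
achievement = [90, 101, 108, 85, 119]   # academic achievement scores

# A coefficient near 1.0 means the two score sets move together closely.
print("Test-retest reliability:", round(pearson_r(iq_first, iq_retest), 2))
print("Predictive validity:", round(pearson_r(iq_first, achievement), 2))
```

With these invented scores both coefficients come out high, which is the pattern test makers aim to demonstrate: stable scores across administrations, and cognitive scores that track achievement.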
Information on a test’s reliability and validity can be found in the Technical Manual or sometimes the Administration Manual of the test.
Part 2 of your question: “Who is making this determination?”
The answer is that both the test maker and the specialist using the test with your child are determining the test’s reliability and validity.
As mentioned previously, the test maker goes to great lengths to demonstrate to professionals in the field that the test is reliable and valid in their Technical Manuals or Administration Manuals. There is usually at least one chapter dedicated to reliability and validity in these manuals.
The specialist using the test with your child is making the case that, because they are using the test in the way it was designed and intended, it will provide reliable and valid results for your child. Two things have to be true here:
- The specialist administered the test in the way it was designed.
- The specialist either verified that the test is reliable and valid by checking the technical manual, or assumed/trusted that the test maker did this correctly.
#1 is a professional judgment call and can only be addressed by the specialist. #2, however, is something we never assume here at the Diagnostic Center. We carefully review the reliability and validity data of each test using an analysis worksheet that we developed (see below), and we examine how each test was developed. As you can see from our Analysis of Test Reliability/Validity worksheet, the Diagnostic Centers carefully determine:
- Were there appropriate samples for test validation?
- Is the reliability sufficiently high to warrant using the test as a basis for making decisions concerning an individual student?
- Is it an accurate predictor of performance?
- Are there sufficient test items to measure the skill being assessed?
- What are the limitations of the test? (from various perspectives)
- Does the manual indicate that the test was reviewed by a cultural bias review panel?
- If so, how many individuals were consulted, what were their qualifications, and how was their input used?
We hope that school district specialists are being just as diligent in checking for reliability and validity. If not, it is concerned parents like you who raise these questions, improving the assessment and evaluation of all children.
Thank you again for your question. I hope I addressed your question to your satisfaction. If not, please ask me again and let me know where you’d like me to clarify or add more detail.