Misconception: TVAAS is based on a "black box" methodology.

TVAAS is based on established statistical models that have been in use across many industries for decades and, in some instances, centuries. These models are designed to work well with large amounts of information and to accommodate common issues with student testing, such as non-random missing data. While the underlying program code for the models and algorithms used in Tennessee is proprietary, the TVAAS methodologies and algorithms have been published in the open literature for almost 20 years. Details about the TVAAS models are available in the references below:

  • On the statistical models upon which Tennessee's reporting is based: "Statistical Models and Business Rules" available at https://tvaas.sas.com/support/TVAAS-Statistical-Models-and-Business-Rules.pdf.
  • On the Tennessee Value-Added Assessment System: Millman, Jason, ed. Grading Teachers, Grading Schools: Is Student Achievement a Valid Evaluation Measure? Thousand Oaks, CA: Corwin Press, 1997.

TVAAS in Theory

Although TVAAS reporting relies on a complex modeling approach, this statistical rigor is necessary to provide reliable estimates. More specifically, the TVAAS models attain their reliability by addressing critical issues inherent in student testing data, such as students with missing test scores and the measurement error associated with any individual test score.

Moreover, the TVAAS modeling is sufficiently well documented that value-added experts and researchers have replicated the models for their own analyses. In doing so, they have validated and reaffirmed the appropriateness of the TVAAS modeling. The references below include recent studies by statisticians from the RAND Corporation, a non-profit research organization:

  • On the choice of a complex value-added model: McCaffrey, Daniel F., and J.R. Lockwood. 2008. "Value-Added Models: Analytic Issues." Prepared for the National Research Council and the National Academy of Education, Board on Testing and Accountability Workshop on Value-Added Modeling, Nov. 13-14, 2008, Washington, DC.
  • On the advantages of the longitudinal, mixed model approach: Lockwood, J.R. and Daniel F. McCaffrey. 2007. "Controlling for Individual Heterogeneity in Longitudinal Models, with Applications to Student Achievement." Electronic Journal of Statistics 1: 223-52.
  • On the insufficiency of simple value-added models: McCaffrey, Daniel F., B. Han, and J.R. Lockwood. 2008. "From Data to Bonuses: A Case Study of the Issues Related to Awarding Teachers Pay on the Basis of the Students' Progress." Presented at Performance Incentives: Their Growing Impact on American K-12 Education, Feb. 28-29, 2008, National Center on Performance Incentives at Vanderbilt University.

TVAAS in Practice

TVAAS includes two main statistical models, each described briefly below.

  • The growth standard methodology (also known as the multivariate response model or MRM) used in value-added analyses is a multivariate, longitudinal, linear mixed model. In other words, it is conceptually a multivariate repeated-measures ANOVA model. The growth standard methodology is used when there are clear "before" and "after" assessments from which to form a reliable gain estimate. In Tennessee, this is used for TCAP reporting in Mathematics and English Language Arts in grades 4-8.
  • The predictive methodology (also known as the univariate response model or URM) used in value-added analyses is conceptually an analysis of covariance (ANCOVA) model. The predictive methodology is based on the difference between expected scores and actual scores for students. In Tennessee, this is used for EOC reporting and for TCAP reporting in Social Studies for grades 6-8.
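
The conceptual core of these two models can be sketched in a few lines. The sketch below is illustrative only, using fabricated data and hypothetical classroom groupings; a plain least-squares fit stands in for the full longitudinal mixed-model machinery, and nothing here reflects the proprietary TVAAS implementation. It shows the growth standard idea (a mean gain between "before" and "after" scores) and the predictive idea (mean difference between actual scores and ANCOVA-style expected scores).

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated data: 300 students, prior-year scores, and a hypothetical
# assignment to one of three classrooms with made-up classroom effects.
n = 300
prior = rng.normal(50, 10, n)              # prior achievement scores
group = rng.integers(0, 3, n)              # hypothetical classroom labels
effect = np.array([-2.0, 0.0, 3.0])        # fabricated true classroom effects
score = 5 + 0.9 * prior + effect[group] + rng.normal(0, 5, n)

# Growth standard (MRM) idea: with clear "before" and "after" assessments,
# the estimate is conceptually an average gain.
mean_gain = (score - prior).mean()

# Predictive (URM) idea: regress current score on prior score to get an
# expected score for each student (ANCOVA-style), then summarize the
# difference between actual and expected scores by group.
X = np.column_stack([np.ones(n), prior])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
expected = X @ coef

for g in range(3):
    va = (score[group == g] - expected[group == g]).mean()
    print(f"classroom {g}: mean (actual - expected) = {va:+.2f}")
```

With an intercept in the regression, the residuals average to zero over all students, so a classroom's mean (actual - expected) measures how its students performed relative to what the prior scores alone would predict.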