Bayesian Component Reliability- An F-35 Case Study

A challenging aspect of a system reliability assessment is integrating multiple sources of information, such as component, subsystem, and full-system data, along with previous test data or subject matter expert (SME) opinion. A powerful feature of Bayesian analyses is the ability to combine these multiple sources of data and variability in an informed way to perform statistical inference. This feature is particularly valuable in assessing system reliability where testing is limited and only a small number of failures (or none at all) are observed....
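As a hedged illustration of the general idea (not the method used in the case study), a conjugate Beta-Binomial update shows how prior component data and new system trials can be pooled into one posterior reliability estimate; the trial counts below are hypothetical:

```python
# Hypothetical example: pool earlier component trials with new system-level
# trials via a conjugate Beta-Binomial update on the success probability.
def posterior_reliability(prior_successes, prior_failures, new_successes, new_failures):
    """Posterior mean reliability under a uniform Beta(1, 1) baseline prior."""
    a = 1 + prior_successes + new_successes
    b = 1 + prior_failures + new_failures
    return a / (a + b)

# 20 earlier component trials (19 successes) plus 5 new system trials (all successes)
print(round(posterior_reliability(19, 1, 5, 0), 3))
```

The same machinery extends to expert opinion by encoding it as pseudo-counts in the prior.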

2019 · Rebecca Medlin, V. Bram Lillard

Challenges and New Methods for Designing Reliability Experiments

Engineers use reliability experiments to determine the factors that drive product reliability, build robust products, and predict reliability under use conditions. This article uses recent testing of a Howitzer to illustrate the challenges in designing reliability experiments for complex, repairable systems. We leverage lessons learned from current research and propose methods for designing an experiment for a complex, repairable system. Suggested Citation Freeman, Laura J., Rebecca M. Medlin, and Thomas H....

2019 · Laura Freeman, Thomas Johnson, Rebecca Medlin

D-Optimal as an Alternative to Full Factorial Designs- a Case Study

The use of Bayesian statistics and experimental design as tools to scope testing and analyze data related to defense has increased in recent years. Planning a test using experimental design will allow testers to cover the operational space while maximizing the information obtained from each run. Understanding which factors can affect a detector’s performance can influence military tactics, techniques and procedures, and improve a commander’s situational awareness when making decisions in an operational environment....
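To make the D-optimality criterion concrete, the sketch below (a toy illustration, not the case-study design) compares det(X'X) for a full 2x2 factorial against a three-run subset under a main-effects model; larger determinants mean more precise coefficient estimates:

```python
import numpy as np

def d_criterion(X):
    """D-optimality criterion: determinant of the information matrix X'X."""
    return np.linalg.det(X.T @ X)

# Main-effects model matrix [1, x1, x2] at coded levels +/-1.
full = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]])  # 2^2 factorial
reduced = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1]])          # one run dropped

print(d_criterion(full), d_criterion(reduced))
```

A D-optimal search picks the run subset that maximizes this determinant for a fixed budget, which is why it can approach full-factorial precision with fewer runs.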

2019 · Keyla Pagan-Rivera

Demystifying the Black Box- A Test Strategy for Autonomy

The purpose of this briefing is to provide a high-level overview of how to frame the question of testing autonomous systems in a way that will enable development of successful test strategies. The brief outlines the challenges and broad-stroke reforms needed to get ready for the test challenges of the next century. Suggested Citation Wojton, Heather M, and Daniel J Porter. Demystifying the Black Box: A Test Strategy for Autonomy. IDA Document NS D-10465-NS....

2019 · Heather Wojton, Daniel Porter

Designing Experiments for Model Validation- The Foundations for Uncertainty Quantification

Advances in computational power have allowed both greater fidelity and more extensive use of computational models. Numerous complex military systems have a corresponding model that simulates their performance in the field. In response, the DoD needs defensible practices for validating these models. Design of Experiments and statistical analysis techniques are the foundational building blocks for validating the use of computer models and quantifying uncertainty in that validation. Recent developments in uncertainty quantification have the potential to benefit the DoD in using modeling and simulation to inform operational evaluations....

2019 · Heather Wojton, Kelly Avery, Laura Freeman, Thomas Johnson

Handbook on Statistical Design & Analysis Techniques for Modeling & Simulation Validation

This handbook focuses on methods for data-driven validation to supplement the vast existing literature for Verification, Validation, and Accreditation (VV&A) and the emerging references on uncertainty quantification (UQ). The goal of this handbook is to aid the test and evaluation (T&E) community in developing test strategies that support model validation (both external validation and parametric analysis) and statistical UQ. Suggested Citation Wojton, Heather, Kelly M Avery, Laura J Freeman, Samuel H Parry, Gregory S Whittier, Thomas H Johnson, and Andrew C Flack....

2019 · Heather Wojton, Kelly Avery, Laura Freeman, Samuel Parry, Gregory Whittier, Thomas Johnson, Andrew Flack

Impact of Conditions which Affect Exploratory Factor Analysis

Some responses cannot be observed directly and must be inferred from multiple indirect measurements, for example human experiences accessed through a variety of survey questions. Exploratory Factor Analysis (EFA) is a data-driven method to optimally combine these indirect measurements to infer some number of unobserved factors. Ideally, EFA should identify how many unobserved factors the indirect measures help estimate (factor extraction), as well as accurately capture how well each indirect measure estimates each factor (parameter recovery)....
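One common data-driven answer to the factor-extraction question is Horn's parallel analysis, which keeps only factors whose eigenvalues exceed those expected from random data. The sketch below uses synthetic data (not data from the study) with two known underlying factors:

```python
import numpy as np

rng = np.random.default_rng(0)

def n_factors_parallel(data, n_sims=200):
    """Horn's parallel analysis: retain factors whose correlation-matrix
    eigenvalues exceed the mean eigenvalues of same-shaped random data."""
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.zeros(p)
    for _ in range(n_sims):
        r = rng.standard_normal((n, p))
        sims += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    return int(np.sum(obs > sims / n_sims))

# Two underlying factors, six indicators, n = 500 (synthetic example).
f = rng.standard_normal((500, 2))
loadings = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
x = f @ loadings.T + 0.5 * rng.standard_normal((500, 6))
print(n_factors_parallel(x))
```

With strong loadings and a large sample the method recovers the true factor count; the article's interest is in how such recovery degrades as these conditions worsen.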

2019 · Kevin Krost, Daniel Porter, Stephanie Lane, Heather Wojton

Initial Validation of the Trust of Automated Systems Test (TOAST)

Trust is a key determinant of whether people rely on automated systems in the military and the public. However, there is currently no standard for measuring trust in automated systems. In the present studies we propose a scale to measure trust in automated systems that is grounded in current research and theory on trust formation, which we refer to as the Trust in Automated Systems Test (TOAST). We evaluated both the reliability of the scale structure and criterion validity using independent, military-affiliated and civilian samples....
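A standard statistic for the reliability of such a multi-item scale is Cronbach's alpha; the sketch below (with made-up item scores, not TOAST data) shows the computation:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: internal-consistency reliability of a k-item scale.
    `items` is an (n_respondents, k_items) array of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Tiny synthetic example: three items rated by five respondents.
scores = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 3, 2], [4, 4, 4]])
print(round(cronbach_alpha(scores), 2))
```

Values near 1 indicate that the items move together and plausibly measure a common construct such as trust.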

2019 · Heather Wojton, Daniel Porter, Stephanie Lane, Chad Bieber, Poornima Madhavan

M&S Validation for the Joint Air-to-Ground Missile

An operational test is resource-limited and must therefore rely on both live test data and modeling and simulation (M&S) data to inform a full evaluation. For the Joint Air-to-Ground Missile (JAGM) system, we needed to create a test design that accomplished dual goals: characterizing missile performance across the operational space and supporting rigorous validation of the M&S. Our key question: which statistical techniques should be used to compare the M&S to the live data?...
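One candidate technique for such a comparison is the two-sample Kolmogorov-Smirnov test; the sketch below (with notional, randomly generated miss distances, not JAGM data) compares a small live sample against a large simulated one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Notional miss-distance samples: a few live shots vs. many simulated runs.
live = rng.normal(loc=10.0, scale=2.0, size=12)
simulated = rng.normal(loc=10.5, scale=2.0, size=500)

# Two-sample Kolmogorov-Smirnov test of whether both samples could come from
# the same distribution; a large p-value fails to reject agreement.
stat, p = stats.ks_2samp(live, simulated)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
```

The practical difficulty the abstract alludes to is that "fail to reject" with a small live sample is weak evidence of agreement, which is why the choice of technique matters.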

2019 · Brent Crabtree, Andrew Cseko, Thomas Johnson, Joel Williamson, Kelly Avery

Managing T&E Data to Encourage Reuse

Reusing Test and Evaluation (T&E) datasets multiple times at different points throughout a program’s lifecycle is one way to realize their full value. Data management plays an important role in enabling, and even encouraging, this practice. Although Department-level policy on data management is supportive of reuse and consistent with best practices from industry and academia, the documents that shape the day-to-day activities of T&E practitioners are much less so....

2019 · Andrew Flack, Rebecca Medlin

Operational Testing of Systems with Autonomy

Systems with autonomy pose unique challenges for operational test. This document provides an executive level overview of these issues and the proposed solutions and reforms. In order to be ready for the testing challenges of the next century, we will need to change the entire acquisition life cycle, starting even from initial system conceptualization. This briefing was presented to the Director, Operational Test & Evaluation along with his deputies and Chief Scientist....

2019 · Heather Wojton, Daniel Porter, Yevgeniya Pinelis, Chad Bieber, Michael McAnally, Laura Freeman

Pilot Training Next- Modeling Skill Transfer in a Military Learning Environment

Pilot Training Next is an exploratory investigation of new technologies and procedures to increase the efficiency of Undergraduate Pilot Training in the United States Air Force. IDA analysts present a method of quantifying skill transfer from simulators to aircraft under realistic, uncontrolled conditions. Suggested Citation Porter, Daniel, Emily Fedele, and Heather Wojton. Pilot Training Next: Modeling Skill Transfer in a Military Learning Environment. IDA Document NS D-10927. Alexandria, VA: Institute for Defense Analyses, 2019....

2019 · Daniel Porter, Emily Fedele, Heather Wojton

Reproducible Research Mini-Tutorial

Analyses are reproducible if the same methods applied to the same data produce identical results when run again by another researcher (or you in the future). Reproducible analyses are transparent and easy for reviewers to verify, as results and figures can be traced directly to the data and methods that produced them. There are also direct benefits to the researcher. Real-world analysis workflows inevitably require changes to incorporate new or additional data, or to address feedback from collaborators, reviewers, or sponsors....

2019 · Andrew Flack, John Haman, Kevin Kirshenbaum

Sample Size Determination Methods Using Acceptance Sampling by Variables

Acceptance Sampling by Variables (ASbV) is a statistical testing technique used in Personal Protective Equipment programs to determine the quality of the equipment in First Article and Lot Acceptance Tests. This article intends to remedy the lack of existing references that discuss the similarities between ASbV and certain techniques used in different sub-disciplines within statistics. Understanding ASbV from a statistical perspective allows testers to create customized test plans, beyond what is available in MIL-STD-414....
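One textbook route to a customized plan is the normal (Wallis) approximation for a single-sided, sigma-unknown variables plan; the sketch below uses illustrative risk levels, not values from the article or MIL-STD-414:

```python
import math
from scipy.stats import norm

def asbv_plan(p1, p2, alpha, beta):
    """Wallis normal approximation for a single-sided variables plan, sigma unknown.
    p1: acceptable fraction defective (accepted with probability 1 - alpha)
    p2: rejectable fraction defective (accepted with probability beta)
    Returns (n, k): sample size and acceptability constant."""
    z_p1, z_p2 = norm.ppf(1 - p1), norm.ppf(1 - p2)
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    k = (z_p1 * z_b + z_p2 * z_a) / (z_a + z_b)
    n = (1 + k**2 / 2) * ((z_a + z_b) / (z_p1 - z_p2)) ** 2
    return math.ceil(n), k

n, k = asbv_plan(p1=0.01, p2=0.05, alpha=0.05, beta=0.10)
print(n, round(k, 2))  # accept the lot if (USL - xbar) / s >= k
```

Framing ASbV this way connects it to standard hypothesis-testing sample-size calculations, which is the article's central point.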

2019 · Thomas Johnson, Lindsey Butler, Kerry Walzl, Heather Wojton

Statistics Boot Camp

In the test community, we frequently use statistics to extract meaning from data. These inferences may be drawn with respect to topics ranging from system performance to human factors. In this mini-tutorial, we will begin by discussing the use of descriptive and inferential statistics. We will continue by discussing commonly used parametric and nonparametric statistics within the defense community, ranging from comparisons of distributions to comparisons of means. We will conclude with a brief discussion of how to present your statistical findings graphically for maximum impact....

2019 · Kelly Avery, Stephanie Lane

Survey Testing Automation Tool (STAT)

In operational testing, survey administration is typically a manual, paper-driven process. We developed a web-based tool called Survey Testing Automation Tool (STAT), which integrates and automates survey construction, administration, and analysis procedures. STAT introduces a standardized approach to the construction of surveys and includes capabilities for survey management, survey planning, and form generation. Suggested Citation Finnegan, Gary M, Kelly Tran, Tara A McGovern, and William R Whitledge. Survey Testing Automation Tool (STAT)....

2019 · Kelly Tran, Tara McGovern, William Whitledge

The Effect of Extremes in Small Sample Size on Simple Mixed Models- A Comparison of Level-1 and Level-2 Size

We present a simulation study that examines the impact of small sample sizes in both observation and nesting levels of the model on the fixed effect bias, type I error, and the power of a simple mixed model analysis. Despite the need for adjustments to control for type I error inflation, our findings indicate that smaller samples than previously recognized can be used for mixed models under certain conditions prevalent in applied research....
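A minimal version of such a simulation (illustrative only, not the study's actual design) generates null data with random group intercepts and shows how an analysis that ignores the grouping inflates type I error for a group-level covariate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def naive_type1_rate(n_groups=6, n_per_group=5, n_sims=2000, alpha=0.05):
    """Simulate null data with random group intercepts, then test a group-level
    covariate with an ordinary regression that ignores the grouping. The
    rejection rate shows how far type I error drifts from the nominal alpha."""
    rejections = 0
    for _ in range(n_sims):
        u = rng.normal(0, 1, n_groups)                          # random intercepts
        y = np.repeat(u, n_per_group) + rng.normal(0, 1, n_groups * n_per_group)
        x = np.repeat(rng.normal(0, 1, n_groups), n_per_group)  # group-level covariate
        r = stats.linregress(x, y)
        rejections += r.pvalue < alpha
    return rejections / n_sims

print(naive_type1_rate())
```

A mixed model with a random intercept per group is the usual remedy; the study's question is how few groups and observations that remedy can tolerate.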

2019 · Kristina Carter, Heather Wojton, Stephanie Lane

The Purpose of Mixed-Effects Models in Test and Evaluation

Mixed-effects models are the standard technique for analyzing data with grouping structure. In defense testing, these models are useful because they allow us to account for correlations between observations, a feature common in many operational tests. In this article, we describe the advantages of modeling data from a mixed-effects perspective and discuss an R package—ciTools—that equips the user with easy methods for presenting results from this type of model. Suggested Citation Haman, John, Matthew Avery, and Heather Wojton....

2019 · John Haman, Matthew Avery, Heather Wojton

Use of Design of Experiments in Survivability Testing

The purpose of survivability testing is to provide decision makers with relevant, credible evidence about the survivability of an aircraft that is conveyed with some degree of certainty or inferential weight. In developing an experiment to accomplish this goal, a test planner faces numerous questions: What critical issue or issues are being addressed? What data are needed to answer the critical issues? What test conditions should be varied? What is the most economical way of varying those conditions?...

2019 · Thomas Johnson, Mark Couch, John Haman, Heather Wojton