<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Quality and Reliability Engineering International on Test Science Research Document Library</title>
    <link>https://research.testscience.org/venues/quality-and-reliability-engineering-international/</link>
    <description>Recent content in Quality and Reliability Engineering International on Test Science Research Document Library</description>
    <generator>Hugo -- 0.129.0</generator>
    <language>en-us</language>
    <copyright>Institute for Defense Analyses</copyright>
    <lastBuildDate>Wed, 01 Jan 2020 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://research.testscience.org/venues/quality-and-reliability-engineering-international/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Circular Prediction Regions for Miss Distance Models under Heteroskedasticity</title>
      <link>https://research.testscience.org/post/2020-circular-prediction-regions-for-miss-distance-models-under-heteroskedasticity/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>https://research.testscience.org/post/2020-circular-prediction-regions-for-miss-distance-models-under-heteroskedasticity/</guid>
      <description>Circular prediction regions are used in ballistic testing to express the uncertainty in shot accuracy. We compare two modeling approaches for estimating circular prediction regions for the miss distance of a ballistic projectile. The miss distance response variable is bivariate normal and has a mean and variance that can change with one or more experimental factors. The first approach fits a heteroskedastic linear model using restricted maximum likelihood, and uses the Kenward-Roger statistic to estimate circular prediction regions.</description>
      <content:encoded><![CDATA[<p>Circular prediction regions are used in ballistic testing to express the uncertainty in shot accuracy. We compare two modeling approaches for estimating circular prediction regions for the miss distance of a ballistic projectile. The miss distance response variable is bivariate normal and has a mean and variance that can change with one or more experimental factors. The first approach fits a heteroskedastic linear model using restricted maximum likelihood, and uses the Kenward-Roger statistic to estimate circular prediction regions. The second approach fits the analogous Bayesian model with unrestricted likelihood modifications, and computes circular prediction regions by sampling from the posterior predictive distribution. The two approaches are applied to an example problem, and are compared using simulation.</p>
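<p>As a rough illustration of the posterior-predictive idea (not the paper's model — the dispersion value and the circular-normal simplification below are assumed), a circular prediction region can be estimated by simulating bivariate-normal misses and taking a quantile of the radial miss distance:</p>

```python
import math
import random

random.seed(0)

# Hypothetical posterior-predictive draws of miss distance: bivariate-normal
# deviations (x, y) from the aim point with an assumed common dispersion.
sigma = 2.0                      # assumed dispersion, not a value from the paper
n = 100_000
radii = [math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
         for _ in range(n)]

# A 90% circular prediction region is the circle whose radius is the
# 90th percentile of the simulated radial misses.
radii.sort()
r90 = radii[int(0.90 * n)]

# Closed-form check: for a circular bivariate normal, the radial miss is
# Rayleigh(sigma), whose 90th percentile is sigma * sqrt(-2 * ln(0.10)).
r90_exact = sigma * math.sqrt(-2.0 * math.log(0.10))
```

<p>In the paper's heteroskedastic setting the dispersion is not a single constant but varies with experimental factors, which is what makes the region estimation nontrivial.</p>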
<h4 id="suggested-citation">Suggested Citation</h4>
<blockquote>
<p>Johnson, Thomas H., John T. Haman, Heather Wojton, and Laura Freeman. “Circular Prediction Regions for Miss Distance Models under Heteroskedasticity.” Quality and Reliability Engineering International 37, no. 7 (November 2021): 2991–3003. <a href="https://doi.org/10.1002/qre.2771">https://doi.org/10.1002/qre.2771</a>.</p>
</blockquote>
<h4 id="slides">Slides:</h4>
<embed src="slides.pdf" width="100%" height="700px" type="application/pdf">

<h4 id="paper">Paper:</h4>
<embed src="paper.pdf" width="100%" height="700px" type="application/pdf">

<h4 id="poster">Poster:</h4>
<embed src="poster.pdf" width="100%" height="700px" type="application/pdf">

]]></content:encoded>
    </item>
    <item>
      <title>Analysis of Split-Plot Reliability Experiments with Subsampling</title>
      <link>https://research.testscience.org/post/2018-analysis-of-split-plot-reliability-experiments-with-subsampling/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://research.testscience.org/post/2018-analysis-of-split-plot-reliability-experiments-with-subsampling/</guid>
      <description>Reliability experiments are important for determining which factors drive product reliability. The data collected in these experiments can be challenging to analyze. Often, the reliability or lifetime data collected follow distinctly nonnormal distributions and include censored observations. Additional challenges in the analysis arise when the experiment is executed with restrictions on randomization. The focus of this paper is on the proper analysis of reliability data collected from a nonrandomized reliability experiment.</description>
      <content:encoded><![CDATA[<p>Reliability experiments are important for determining which factors drive product reliability. The data collected in these experiments can be challenging to analyze. Often, the reliability or lifetime data collected follow distinctly nonnormal distributions and include censored observations. Additional challenges in the analysis arise when the experiment is executed with restrictions on randomization. The focus of this paper is on the proper analysis of reliability data collected from a nonrandomized reliability experiment. Specifically, we focus on the analysis of lifetime data from a split-plot experimental design. We outline a nonlinear mixed-model analysis for a split-plot reliability experiment with subsampling and right-censored Weibull distributed lifetime data. A simulation study compares the proposed method with a two-stage method of analysis.</p>
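<p>A minimal sketch of the data-generating step in such a simulation study — Weibull lifetimes right-censored at a fixed test time; the shape, scale, and censoring values below are assumed for illustration, not taken from the paper:</p>

```python
import random

random.seed(1)

# Assumed Weibull lifetime distribution and test window (illustrative values).
shape, scale, censor_time = 1.5, 100.0, 120.0

lifetimes = [random.weibullvariate(scale, shape) for _ in range(500)]

# A unit that survives past the test window is right-censored: we record the
# censoring time and flag the observation rather than the true lifetime.
observed = [min(t, censor_time) for t in lifetimes]
censored = [t > censor_time for t in lifetimes]
```

<p>The analysis step would then fit a mixed model that respects both the censoring flags and the split-plot grouping of the runs.</p>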
<h4 id="suggested-citation">Suggested Citation</h4>
<blockquote>
<p>Medlin, Rebecca M., Laura J. Freeman, Jennifer L.K. Kensler, and G. Geoffrey Vining. “Analysis of Split-Plot Reliability Experiments with Subsampling.” Quality and Reliability Engineering International 35, no. 3 (2019): 738–49. <a href="https://doi.org/10.1002/qre.2394">https://doi.org/10.1002/qre.2394</a>.</p>
</blockquote>
<h4 id="paper">Paper:</h4>
<embed src="subsampling_paper.pdf" width="100%" height="700px" type="application/pdf">

]]></content:encoded>
    </item>
    <item>
      <title>Power Approximations for Reliability Test Designs</title>
      <link>https://research.testscience.org/post/2018-power-approximations-for-reliability-test-designs/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://research.testscience.org/post/2018-power-approximations-for-reliability-test-designs/</guid>
      <description>Reliability tests determine which factors drive system reliability. Often, the reliability or failure time data collected in these tests tend to follow distinctly nonnormal distributions and include censored observations. The experimental design should accommodate the skewed nature of the response and allow for censored observations, which occur when systems under test do not fail within the allotted test time. To account for these design and analysis considerations, Monte Carlo simulations are frequently used to evaluate experimental design properties.</description>
      <content:encoded><![CDATA[<p>Reliability tests determine which factors drive system reliability. Often, the reliability or failure time data collected in these tests tend to follow distinctly nonnormal distributions and include censored observations. The experimental design should accommodate the skewed nature of the response and allow for censored observations, which occur when systems under test do not fail within the allotted test time. To account for these design and analysis considerations, Monte Carlo simulations are frequently used to evaluate experimental design properties. Simulation provides accurate power calculations as a function of sample size, allowing researchers to determine adequate sample sizes at each level of the treatment. However, simulation may be inefficient for comparing multiple experiments of various sizes. In this document, we present a closed form approach for calculating power, based on the noncentral chi-squared approximation to the distribution of the likelihood ratio statistic. The solution can be used to compare multiple designs and accommodate trade-space analyses between power, effect size, model formulation, sample size, censoring rates, and design type. To demonstrate the efficiency of our approach, we provide a comparison to estimates that are generated using Monte Carlo simulation.</p>
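<p>The flavor of such a closed-form power calculation can be sketched for a single-parameter test, where the noncentral chi-squared CDF with one degree of freedom reduces to normal CDFs; the noncentrality value below is an assumed illustration, not a number from the paper:</p>

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ncx2_cdf_df1(x, lam):
    """Noncentral chi-squared CDF with 1 degree of freedom:
    P(chi'^2_1(lam) <= x) = Phi(sqrt(x) - sqrt(lam)) - Phi(-sqrt(x) - sqrt(lam))."""
    return norm_cdf(sqrt(x) - sqrt(lam)) - norm_cdf(-sqrt(x) - sqrt(lam))

alpha = 0.05
crit = 3.8415          # chi-squared(1) critical value at alpha = 0.05
lam = 8.0              # assumed noncentrality (grows with sample and effect size)

# Power = probability the likelihood ratio statistic exceeds the critical
# value under the alternative, approximated by the noncentral distribution.
power = 1.0 - ncx2_cdf_df1(crit, lam)
```

<p>Because the noncentrality parameter is a simple function of sample size and effect size, this kind of expression supports the trade-space comparisons the paper describes without rerunning simulations.</p>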
<h4 id="suggested-citation">Suggested Citation</h4>
<blockquote>
<p>Johnson, Thomas H., Rebecca M. Medlin, and Laura Freeman. “Power Approximations for Failure-Time Regression Models.” Quality and Reliability Engineering International 35, no. 6 (2019): 1666–75. <a href="https://doi.org/10.1002/qre.2467">https://doi.org/10.1002/qre.2467</a>.</p>
</blockquote>
<h4 id="slides">Slides:</h4>
<embed src="slides.pdf" width="100%" height="700px" type="application/pdf">

<h4 id="paper">Paper:</h4>
<embed src="paper.pdf" width="100%" height="700px" type="application/pdf">

]]></content:encoded>
    </item>
    <item>
      <title>Regularization for Continuously Observed Ordinal Response Variables with Piecewise-Constant Functional Predictors</title>
      <link>https://research.testscience.org/post/2016-regularization-for-continuously-observed-ordinal-response-variables-with-piecewise-constant-functional-predictors/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://research.testscience.org/post/2016-regularization-for-continuously-observed-ordinal-response-variables-with-piecewise-constant-functional-predictors/</guid>
      <description>This paper investigates regularization for continuously observed covariates that resemble step functions. The motivating examples come from operational test data from a recent United States Department of Defense (DoD) test of the Shadow Unmanned Air Vehicle system. The response variable, quality of video provided by the Shadow to friendly ground units, was measured on an ordinal scale continuously over time. Functional covariates, altitude and distance, can be well approximated by step functions.</description>
      <content:encoded><![CDATA[<p>This paper investigates regularization for continuously observed covariates that resemble step functions. The motivating examples come from operational test data from a recent United States Department of Defense (DoD) test of the Shadow Unmanned Air Vehicle system. The response variable, quality of video provided by the Shadow to friendly ground units, was measured on an ordinal scale continuously over time. Functional covariates, altitude and distance, can be well approximated by step functions. Two approaches for regularizing these covariates are considered, including a thinning approach commonly used within the DoD to address autocorrelated time series data, and a novel “smoothing” approach, which first approximates the covariates as step functions and then treats each “step” as a uniquely observed data point. Data sets resulting from both approaches are fit using a mixed model cumulative logistic regression, and we compare their results. While the thinning approach identifies altitude as having a significant impact on video quality, the smoothing approach finds no evidence of an effect. This difference is attributable to the larger effective sample size produced by thinning. System characteristics make it unlikely that video quality would degrade at higher altitudes, suggesting that the thinning approach has produced a Type 1 error. By accounting for the functional characteristics of the covariates, the novel smoothing approach has produced a more accurate characterization of the Shadow’s ability to provide full motion video to supported units.</p>
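<p>The contrast between the two regularization approaches can be sketched on a toy step-function covariate (the values are invented): thinning keeps every k-th record, while the step-collapsing "smoothing" idea keeps one observation per constant run:</p>

```python
from itertools import groupby

# A covariate recorded continuously but resembling a step function
# (made-up altitude records, one per time tick).
altitude = [100, 100, 100, 200, 200, 300, 300, 300, 300, 200]

# Thinning: keep every k-th record to reduce autocorrelation.
k = 2
thinned = altitude[::k]                               # [100, 100, 200, 300, 300]

# Step-collapsing: one observation per run of equal consecutive values.
steps = [value for value, _run in groupby(altitude)]  # [100, 200, 300, 200]
```

<p>Thinning can still retain repeated values from long steps, inflating the effective sample size, whereas collapsing to steps yields one data point per genuinely distinct covariate level — the mechanism behind the Type 1 error the paper attributes to thinning.</p>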
<h4 id="suggested-citation">Suggested Citation</h4>
<blockquote>
<p>Avery, Matthew, Mark Orndorff, Timothy Robinson, and Laura Freeman. “Regularization for Continuously Observed Ordinal Response Variables with Piecewise-Constant Functional Covariates.” Quality and Reliability Engineering International 32, no. 6 (2016): 2033–42. <a href="https://doi.org/10.1002/qre.2037">https://doi.org/10.1002/qre.2037</a>.</p>
</blockquote>
<h4 id="paper">Paper:</h4>
<embed src="paper.pdf" width="100%" height="700px" type="application/pdf">

]]></content:encoded>
    </item>
    <item>
      <title>Examining Improved Experimental Designs for Wind Tunnel Testing Using Monte Carlo Sampling Methods</title>
      <link>https://research.testscience.org/post/2010-examining-improved-experimental-designs-for-wind-tunnel-testing-using-monte-carlo-sampling-methods/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://research.testscience.org/post/2010-examining-improved-experimental-designs-for-wind-tunnel-testing-using-monte-carlo-sampling-methods/</guid>
      <description>In this paper we compare data from a fairly large legacy wind tunnel test campaign to smaller, statistically-motivated experimental design strategies. The comparison, using Monte Carlo sampling methodology, suggests a tremendous opportunity to reduce wind tunnel test efforts without losing test information.</description>
      <content:encoded><![CDATA[<p>In this paper we compare data from a fairly large legacy wind tunnel test campaign to smaller, statistically-motivated experimental design strategies. The comparison, using Monte Carlo sampling methodology, suggests a tremendous opportunity to reduce wind tunnel test efforts without losing test information.</p>
<h4 id="suggested-citation">Suggested Citation</h4>
<blockquote>
<p>Hill, Raymond R., Derek A. Leggio, Shay R. Capehart, and August G. Roesener. “Examining Improved Experimental Designs for Wind Tunnel Testing Using Monte Carlo Sampling Methods.” Quality and Reliability Engineering International 27, no. 6 (October 2011): 795–803. <a href="https://doi.org/10.1002/qre.1165">https://doi.org/10.1002/qre.1165</a>.</p>
</blockquote>
<h4 id="paper">Paper:</h4>
<embed src="paper.pdf" width="100%" height="700px" type="application/pdf">

]]></content:encoded>
    </item>
    <item>
      <title>Designing Experiments for Nonlinear Models—an Introduction</title>
      <link>https://research.testscience.org/post/2009-designing-experiments-for-nonlinear-models-an-introduction/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://research.testscience.org/post/2009-designing-experiments-for-nonlinear-models-an-introduction/</guid>
      <description>We illustrate the construction of Bayesian D-optimal designs for nonlinear models and compare the relative efficiency of standard designs with these designs for several models and prior distributions on the parameters. Through a relative efficiency analysis, we show that standard designs can perform well in situations where the nonlinear model is intrinsically linear. However, if the model is nonlinear and its expectation function cannot be linearized by simple transformations, the nonlinear optimal design is considerably more efficient than the standard design.</description>
      <content:encoded><![CDATA[<p>We illustrate the construction of Bayesian D-optimal designs for nonlinear models and compare the relative efficiency of standard designs with these designs for several models and prior distributions on the parameters. Through a relative efficiency analysis, we show that standard designs can perform well in situations where the nonlinear model is intrinsically linear. However, if the model is nonlinear and its expectation function cannot be linearized by simple transformations, the nonlinear optimal design is considerably more efficient than the standard design.</p>
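<p>For a one-parameter nonlinear model the idea can be sketched directly; the exponential-decay model, the prior guess at the parameter, and the "standard" design point below are all illustrative assumptions, not taken from the paper. The locally D-optimal single run maximizes the squared parameter sensitivity, and any other design point can be scored by its relative efficiency:</p>

```python
from math import exp

# Illustrative one-parameter nonlinear model: eta(x; theta) = exp(-theta * x).
theta = 0.5                     # assumed prior guess at the parameter

def info(x, theta):
    """Fisher information contribution of a single run at x (up to a constant):
    the squared sensitivity d(eta)/d(theta) = -x * exp(-theta * x)."""
    return (x * exp(-theta * x)) ** 2

# The sensitivity x * exp(-theta * x) is maximized where (1 - theta * x) = 0,
# so the locally D-optimal single design point is x = 1 / theta.
x_opt = 1.0 / theta
x_std = 1.0                     # an arbitrary "standard" design point

# Relative D-efficiency of the standard point against the optimal one.
d_eff = info(x_std, theta) / info(x_opt, theta)
```

<p>Because x_opt depends on the unknown theta, a Bayesian D-optimal design averages this criterion over a prior distribution rather than fixing a single guess.</p>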
<h4 id="suggested-citation">Suggested Citation</h4>
<blockquote>
<p>Johnson, Rachel T., and Douglas C. Montgomery. “Designing Experiments for Nonlinear Models—an Introduction.” Quality and Reliability Engineering International 26, no. 5 (July 2010): 431–41. <a href="https://doi.org/10.1002/qre.1063">https://doi.org/10.1002/qre.1063</a>.</p>
</blockquote>
<h4 id="paper">Paper:</h4>
<embed src="paper.pdf" width="100%" height="700px" type="application/pdf">

]]></content:encoded>
    </item>
  </channel>
</rss>
