
David Rist Prize

The Rist Prize, named after David Rist (an early Director) and first presented in 1965, is awarded annually. Until 2005 the prize was awarded for the best paper submitted in response to a call for papers. Beginning in 2006, the criterion was changed to the best implemented study submitted in response to a call for entries. Final judging takes place the Monday before the MORS Symposium (MORSS).

The Rist Prize now recognizes the practical benefit sound operations research can have on “real life” decision making and seeks the best implemented military operations research study from those submitted in response to a call for entries. The call solicits abstracts with letters of endorsement for implemented recommendations from studies or other operations research-based efforts (e.g., analyses or methodology improvements) that influenced major decisions or practices. Entries from individuals or teams submitted in response to this call are eligible for consideration for the Rist Prize. Cash prizes may be awarded: $3,000 to the winner and $1,500 split among the honorable mentions.

Nominate a deserving study today.
Click here to download the call for entries.

Download Required Forms Here

Contractor Disclosure Form (Form 712A)
Government Disclosure Form (Form 712B)
Contractor/Government Abstract Form (Form 109A/B)


Winners

2013- "LtCol David Scott, Capt Aaron Burciaga, Ms. Cynthia Cheek, Ms. Janice Scoggins for their work "USMC Logistics Operations Analysis's Combat Active Replacement Factors Statistical Analysis Tool (CARF-STAT): New Methods for Developing Limited Data Sets for Predicting US Marine Corps Combat Losses." 

The runners-up were Mr. Ryan Graziano and Mr. John "Greg" Heck for their work "CAA's Arlington National Cemetery Project." The other finalists were Mr. David House, Mr. Earl Eck, Dr. Ted Meyer, and Mr. Brian Zacherl for their work “JIEDDO J9 SAB ISR Team's Effectiveness Analysis of C-IED ISR Assets,” and COL Andrew O. Hall, LTC Marie L. Hall, CDR Keith Williams, and COL John Mike Scott for their work “Joint Staff J33's Global Force Management and the Allocation Process.”

2012 - MAJ Matt Dabkowski and COL Bradley Pippin for their work "Force Design/Force Mix: Building the Best Army Possible with Reduced End-Strength.”

The runner-up was Dr. Gregg M. Burgess, PhD, for his work “Cross-Intelligence Cost-Benefit Assessment of National Intelligence Programs." The other finalists were Lt Col Alex J. Barelka, PhD, PMP, Lt Col Maurice Azar, MSc, Capt Travis Herbranson, MSc, Mr. Henry Collis, MSc, Mr. John-Paul Gravelines MA, and Ms. Dixie O’Donnell, MSc, for their work “The International Security Assistance Force (ISAF) Strategic Communication Assessment Program (SCAP).”


2011 - Prof. Milind Tambe, Prof. Fernando Ordonez, Mr. Manish Jain, Mr. James Pita, Prof. Christopher Kiekintveld, Erroll Southers, and James B. Curren for their work "Software Assistants for Patrol Planning at LAX, Federal Air Marshals Service (FAMS), and Transportation Security."

The runner-up team consisted of LTC David A. Smith, PhD, Center for Army Analysis, and co-authors Mr. Steven Goode and Ms. Justine Blaho for their work "Afghan National Security Force Growth Retention Analysis." The other finalists were Dr. John Crino, OSD/CAPE/IWD, and co-authors Col(S) Scott Frickenstein, Lt Col Trevor Benitone, Dr. Adam Grissom, and Dr. David Orletsky for their work "Non-Standard Rotary-Wing Study."


2010 – The 2010 Rist Prize Winner was the combined team from HQ USAF/A9 and USSTRATCOM, who gave the presentation “Integer Programming Guides Negotiators for Nuclear Arms Reduction.”  Team members included Mr. John Andrews from HQ USAF/A9 and Mr. Patrick McKenna and Ms. Karen Phipps from USSTRATCOM. 


The runner-up was the team of COL Kent Miller, LTC Robert Kewley, MAJ Mark Zais, MAJ Matthew Dabkowski, MAJ Chris Bachmann, and LTC (Ret) Michael Kwinn (HQDA DCS G-1) for their work “Analysis of Unit and Individual BOG: Dwell in Steady-State ARFORGEN.” The other finalist in this year’s Prize competition was the team of Donald W. Amann, Stephen R. Reise, and Brian C. Montgomery from the Johns Hopkins University Applied Physics Laboratory for their work on “Weapons Cache Attribution and Prediction Analysis.”


2009 - The 2009 Rist Prize Winner was Mr. Joseph Mlakar of the Marine Corps Combat Development Command for his work on “Applying Crime Mapping and Analysis Techniques to Forecast Insurgent Attacks in Iraq.”

The runner-up was LTC James Ware of the Center for Army Analysis for his work in “Stryker Fleet Sustainment Analysis.”  The other finalist in this year’s Rist Prize competition was a team from the U.S. Army Materiel Systems Analysis Activity, led by Mr Jeffrey Corley, on “Mine Resistant Ambush Protected (MRAP) Survivability Technology Studies.”


2008 - The 2008 Rist Prize Winner was MAJ Steven J. Sparling and Robert F. Dell for their study entitled Optimal Distribution of Resources for Non-Combatant Evacuation (endorsed by Brig Gen Angelella, USPACOM). Their MORS mentor was Dr. William G. Lese, FS.

The five finalists in alphabetical order were:

  • LTC Robert D. Bradford and Tucker Hughes (CAA)
  • John Duke, Erik Adams, Roger Burley, Preston Dunlap and Lt Col Tim Smetek (OSD PA&E SAC)
  • Steven C. Pearson (OPNAV)
  • MAJ Steven J. Sparling (CAA) and Robert F. Dell, Ph.D. (NPS)
  • John (Jack) Zeto (CAA)

The Rist Prize runner-up was LTC Robert D. Bradford and Tucker Hughes for their study entitled Army Reserve Stationing Study (endorsed by LTG Stultz, Office of the Chief, Army Reserve). Their MORS mentor was Dr. Andrew G. Loerch, FS.


2007 - “Countering Radio Controlled Improvised Explosive Devices,” Dr. Edward S. Michlovich

Showing the effectiveness of an operation designed to prevent something from happening has always been a challenge. The inability to prove an effect can cast doubt on the value of the operation and lead to a classic question: When is an uncertain gain worth the known costs associated with it? In mid-2004, operational commanders were facing such a situation. In this case, the question was magnified by the exigent circumstances of war, where lives were quite directly in the balance.

An operation intended to help protect soldiers and marines from radio-controlled IED (RCIED) detonations was being conducted in Iraq. However, there were serious concerns about the operation’s cost, particularly the ramifications of high utilization on certain assets. In the absence of evidence that the operation was effective, some consideration was given to curtailing or terminating it. To inform decisions on the future of the operation, the Marine Corps asked if I could find a way to assess effectiveness.

I gathered data from units in theater and devised a technique, based on bootstrap statistical theory, to pull out meaningful results. The methodology is conceptually straightforward, consisting of three fundamental steps: calculate a metric relevant to effectiveness of the operation (IED frequency, for example); develop the distribution of values that the metric would take if the operation were not effective; compare the latter to the former. This becomes the equivalent of a statistical significance test, with the proposition that the asset had no effect being the null hypothesis.

The most challenging part of the methodology is the second step. This is where bootstrap theory comes in—it allowed me to use the data from when the asset was not employed to generate estimates of IED frequency if the asset had not been effective. Repeating the calculation many times in a Monte Carlo-like fashion allowed me to generate the necessary statistical distribution.
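To make the approach concrete, below is a minimal sketch of this kind of bootstrap significance test in Python, run on synthetic weekly incident counts. The data, variable names, and 95% threshold are illustrative assumptions only and do not reproduce the original theater analysis.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative synthetic data: weekly IED incident counts with and without
# the protective asset employed (invented, not the original theater data).
counts_without_asset = rng.poisson(lam=12, size=40)  # baseline weeks
counts_with_asset = rng.poisson(lam=8, size=20)      # weeks the asset operated

observed_rate = counts_with_asset.mean()

# Step 2: bootstrap the "no effect" distribution by resampling baseline weeks
# to estimate what the rate would look like if the asset made no difference.
n_boot = 10_000
boot_rates = np.array([
    rng.choice(counts_without_asset, size=len(counts_with_asset), replace=True).mean()
    for _ in range(n_boot)
])

# Step 3: compare the observed rate to the null distribution.
p_value = (boot_rates <= observed_rate).mean()
print(f"observed rate: {observed_rate:.2f}, bootstrap p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis of no effect at 95% confidence.")
```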

Analysis of the initial months of the operation revealed a correlation between the use of the asset and a marked reduction in IED frequency. I found a high statistical confidence (>95%) associated with the correlation; in statistical terms, I could reject the null hypothesis of no-effect. The analysis further showed that the operation’s effectiveness was limited only by the paucity of assets dedicated to it.

In early 2005, the Director, Joint IED Defeat Task Force, hosted my presentation of the analysis to Commander, U.S. Central Command, GEN John Abizaid. As a result of the analysis, he requested additional assets be devoted to the operation. Indeed, since the initial analysis, use of the asset has tripled. Follow-on analysis verified the original findings — the operation continued to be effective in countering RCIEDs. Calculations indicate that the continued and expanded use of the asset — which this analysis is generally credited with — has likely resulted in the prevention of hundreds of casualties in Iraq. CNA has since expanded the analysis to related operations as well.


2006 - "Army Force Generation Model Simulation," LTC Steven Stoddard, LTC Mark Brantley, LTC Clark Heidelbaugh, et al

Background: The Army continually examines its force structure and its ability to meet strategic requirements. Demand for forces is driven by national strategy, a force planning construct (e.g., “1-4-2-1”), and ongoing operations. Supply of forces is constrained by unit lifecycles (training, readiness, deployments, and recovery), transformation, AC and RC force levels, and rotations. The Army developed the Army Force Generation concept (ARFORGEN) to manage the supply of forces over a variety of demand scenarios. The Center for Army Analysis developed the Army Force Generation Model (AFGM) Simulation study to model ARFORGEN and determine the appropriate size of the force.

Prior to this effort, no existing model appropriately replicated the cyclical readiness that would exist under ARFORGEN. Also, no existing rotation model adequately captured the nuances of a fully rotational Army, such as variable rotation durations, in-theater overlap to accomplish battle hand-off, and rotation policy as a model output (vice a policy input). In light of these issues, we developed our own model, called MARATHON.

Summary of Methodology: We implemented the MARATHON model as a discrete-event simulation built in ProModel. This allows for deterministic and stochastic arrival and processing of contingency operations, as well as visual validation that the Army generated forces as expected. In particular, this visual aspect of the model provides great insight to decision-makers who might be less comfortable with a mathematical optimization model.

MARATHON allows us to simulate the flow of active and reserve component units through their respective operational readiness cycles. Each cycle begins with a non-available period (when AC units are reset and RC units are not available for Title 10 operations), followed by periods when units train until they are ready and available, deploy, recover, and transform (as necessary). MARATHON allows us to examine a variety of force structure options and force generation policies by illustrating gaps or redundancies in capabilities, as well as associated deployment tempos. These factors drive the Army’s force structure and force management decisions. The Army adopted MARATHON to analyze its force structure for the 2005 Quadrennial Defense Review (QDR) as well as other analytical efforts. We also used the simulation to model various courses of action that supported the approval and implementation of the ARFORGEN concept.
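As a rough illustration of the kind of cyclical-readiness logic MARATHON simulates, the following self-contained Python sketch steps a small fleet of notional units through reset, train, and available phases against a steady demand. The phase lengths, unit count, and demand level are invented for illustration; they are not MARATHON's actual structure or parameters (the real model was built in ProModel).

```python
from dataclasses import dataclass

# Illustrative 24-month readiness cycle (assumed, not MARATHON's actual values).
PHASES = [("reset", 6), ("train", 6), ("available", 12)]

@dataclass
class Unit:
    name: str
    clock: int = 0        # months into the current phase
    phase_index: int = 0  # index into PHASES
    deployed: bool = False

    @property
    def phase(self):
        return PHASES[self.phase_index][0]

def advance_one_month(unit):
    """Move a unit one month forward, rolling into the next phase as needed."""
    unit.clock += 1
    if unit.clock >= PHASES[unit.phase_index][1]:
        unit.clock = 0
        unit.phase_index = (unit.phase_index + 1) % len(PHASES)

def make_unit(name, offset_months):
    """Place a unit at a given offset within the cycle to stagger the fleet."""
    unit = Unit(name)
    for _ in range(offset_months):
        advance_one_month(unit)
    return unit

def step(units, demand):
    """Advance the fleet one month and task available units against demand."""
    filled = 0
    for unit in units:
        advance_one_month(unit)
        unit.deployed = unit.phase == "available" and filled < demand
        filled += unit.deployed
    return filled

# Eight notional units staggered evenly through the cycle, run for 5 years
# against a steady demand for five deployed units.
units = [make_unit(f"BCT-{i}", offset_months=3 * i) for i in range(8)]
unmet = 0
for month in range(60):
    filled = step(units, demand=5)
    unmet += max(0, 5 - filled)
print(f"unit-months of unmet demand over 5 years: {unmet}")
```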

To conduct our analyses, we developed two major supply and demand scenarios, as well as more than 30 different demand scenarios for sensitivity analysis. The first major scenario consisted of the real demand the Army faced from 2002 through 2004 along with anticipated near-term requirements. We modeled this scenario using the programmed force structure. This scenario provided the principal means to validate the model. We developed the second major scenario based on the Strategic Planning Guidance Analytical Agenda, including OSD-vetted vignettes for lesser contingency operations. We modeled this scenario against future force-structure alternatives.

Impacts: In support of the QDR analysis effort, we conducted five separate analyses:

  • Brigade Combat Teams (BCTs). We analyzed the number of BCTs the Army needs to meet operational requirements and estimated the stress on the force at various levels of commitment. This analysis covered both current and future demand scenarios with corresponding force structure options. Our conclusions advised HQDA regarding force sufficiency and force stress in both the near-term and future cases. These conclusions also included assessments of alternative force structure options. In particular, we determined the circumstances under which the Army could “run out of BCTs” and how frequently units can expect to deploy.
  • Support Structure. We analyzed how well the planned Army force structure will meet operational requirements in potential demand scenarios. This analysis included all deployable Army units across the Active and Reserve Components. We identified which types of units are unable to meet operational commitments as well as which types of units are likely to be over- or under-stressed. We also identified which unit types have improper AC/RC balance and which types of units should and should not be managed under ARFORGEN.
  • Sustain-Surge. We analyzed a variety of scenarios that combined different levels of sustained, steady-state operations and surge operations. For example, if the Army is maintaining a sustained commitment of X BCTs, it has the capacity to surge with Y additional BCTs. As the level of sustained commitment increases, rotational stress increases while surge capability decreases. This analysis identified the frontiers between various commitment levels, stress thresholds, and force sufficiency.
  • ARFORGEN. We developed MARATHON to replicate ARFORGEN in accordance with emerging concepts from HQDA G-3 and Forces Command (FORSCOM). Because we created the model simultaneously with the evolution of the ARFORGEN concept, we were able to directly impact ARFORGEN development by G-3 and implementation by FORSCOM. In particular, we assessed that if ARFORGEN is implemented with flexible lifecycles, it will save the Army 2-4 BCTs of AC force structure. We also assessed that the Army can maintain the greatest number of available units by evenly distributing capabilities over time. Both of these conclusions were accepted by HQDA for ARFORGEN implementation.
  • Access to the Reserve Component. We employed MARATHON to examine access to the Reserve Component. This analysis showed how various policies regarding RC access impact Army force structure, force sufficiency, and deployment tempos. In particular, it showed which policies are best for different types of units and operational conditions. ASA(M&RA) accepted the results of this research and directed dissemination to HQDA (G-1, G-3, and G-8), FORSCOM, NGB, and USAR.

Our methodology will continue to have far-reaching impacts on the Army:

  • We developed a personnel extension to the model in support of HQDA G-1 and Human Resources Command. This model extension allows us to examine various personnel policies under ARFORGEN by simulating the movement of soldiers through their careers, to include assignment to units that are moving through ARFORGEN operational readiness cycles. The model shows how personnel policies affect unit fill-rates and soldiers’ availability for schools and assignments.
  • We are developing an extension to the model to analyze equipping issues for HQDA G-8. This model allows us to examine assignment policies for training equipment, deployment equipment, and pre-positioned stocks of equipment. It also allows us to examine the effects of cyclic readiness and deployments on decisions to modernize, replace, or recapitalize equipment.

Summary of Implementation:

  • Development of the MARATHON simulation model; briefed at G-8 OPD (Jan 05) and used for all of CAA's ARFORGEN-related analyses (Jan 05 – present).
  • Analysis of ARFORGEN concept; used by HQDA G-3 for ARFORGEN approval (Jun 05); used by FORSCOM for ARFORGEN implementation (Apr 05 – Oct 05)
  • Analysis of force structure (BCTs as well as CS/CSS structure); used for QDR and Operational Availability 06 (Mar – Sep 05)
  • Analysis of BCT force structure in the near term; used by HQDA (Aug 05) and for the Heavy-Infantry mix decision (Oct 05)
  • Analysis of sustained vs. surge operations (BCTs as well as CS/CSS structure); used for QDR (Aug-Oct 05)
  • Analysis of access to the Reserve Component; used for QDR (Oct 05)
  • Development of the MARATHON-PER simulation; delivered to HQDA G-1 for personnel analysis (May 05)
  • Development of the MARATHON-EQUIP simulation; used to analyze the Equipment Maneuver Plan for HQDA G-8 for ACP DP41 (working)

2005 - "Air Ambulance Analysis-Iraq," John Zeto, Mark Brantley, Galeraye Collins, et al

Abstract: In March 2004, Forces Command (FORSCOM) projected difficulty in continuing to source air ambulance units at the status quo level for continued Phase IV stability and support operations (SASO) in Operation Iraqi Freedom (OIF). FORSCOM, in concert with the Joint Staff, therefore mandated that Combined Joint Task Force-7 (CJTF-7, later re-designated Multinational Corps–Iraq [MNC-I]) validate its requirement for air ambulance helicopters at the individual platform level.

Lacking a doctrinal method to produce the estimate, the CJTF-7 Surgeon’s Office requested reachback support through the CAA forward-deployed analysts to provide an analytical solution to FORSCOM’s mandate. AAA-Iraq is the methodology the CAA study director, LTC John Zeto, developed and used over the subsequent four months to quantify the requirement, first in support of CJTF-7 and later in support of MNC-I.

The general methodology for this study occurred in three phases. First, initial data collection and statistical analysis of OIF casualty-producing incidents were performed to produce probability distributions of by-region casualty streams and accompanying patient flow values.

Second, the analyst team developed and employed a stochastic simulation to determine baseline medical evacuation capability. During this second phase, the analyst team engaged CAA’s cartographers and their extensive geographical information system (GIS) capabilities to produce a map of Iraq overlaid with stratified bands of geographic medical evacuation coverage. Although initially tangential to the primary research question (i.e., the number of air ambulances), the products produced by CAA’s cartographers proved invaluable, both to the analysts in the interpretation of simulation output and to the in-theater sponsor in comprehending and implementing the study results.

Finally, in the third phase, the analysts conducted sensitivity analysis to determine the impact, locations, and ultimately a minimum requirement for potential reductions in medical evacuation assets.
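The sketch below gives a toy Monte Carlo flavor of the second and third phases: it estimates the fraction of randomly located incidents reachable within a response-time threshold as the basing laydown shrinks. The planar theater, base locations, cruise speed, and threshold are all invented assumptions, not the AAA-Iraq model's inputs.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Assumed inputs (illustrative only): helicopter basing sites on a
# 400 km x 400 km planar "theater", cruise speed, and a coverage threshold.
SPEED_KM_PER_MIN = 4.0   # roughly 240 km/h
THRESHOLD_MIN = 60.0     # evacuation response-time standard

def coverage_fraction(bases, n_incidents=10_000):
    """Monte Carlo estimate of the fraction of incidents reachable in time."""
    incidents = rng.uniform(0, 400, size=(n_incidents, 2))
    # Distance from each incident to its nearest basing site.
    dists = np.min(
        np.linalg.norm(incidents[:, None, :] - bases[None, :, :], axis=2), axis=1
    )
    return np.mean(dists / SPEED_KM_PER_MIN <= THRESHOLD_MIN)

full_laydown = np.array([[100, 100], [100, 300], [300, 100], [300, 300], [200, 200]])
# Sensitivity analysis: remove basing sites one at a time and watch coverage fall.
for n_bases in range(len(full_laydown), 1, -1):
    frac = coverage_fraction(full_laydown[:n_bases])
    print(f"{n_bases} basing sites -> {frac:.1%} of incidents within {THRESHOLD_MIN:.0f} min")
```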

This presentation describes the analysis methodology, provides the analytical results, and highlights medical and war fighter insights pursuant to the research question. Included in the presentation is the initial analysis (version 1.0) conducted from March – April 2004, and an update (version 2.0) conducted from May – June 2004. The results of the former, version 1.0, served as the cornerstone upon which the number and locations of air ambulances currently in Iraq are based. It is expected the latter, version 2.0, will serve as the cornerstone for the foreseeable future.

The AAA-Iraq analysis was directed by LTC John Zeto; the study team members included MAJ Mark Brantley, Ms Galeraye Collins, Mr John Bott, Dr Karsten Engelmann, MAJ Andrew Farnsler, MAJ Micheal Pannell, Mr Stewart Smith, and MAJ Stephanie Tutton.


2004 - “Modeling Effectiveness and Uncertainty of a Computer Network Attack,” Mark A. Gallagher and Bud Whiteman

Abstract: Strategic Command developed a planning factor paradigm for determining Computer Network Attack (CNA) probability of mission success. The CNA Weapon Effectiveness Working Group approved this paradigm. The presentation depicts an application of the paradigm for an experiment. We apply a Bayesian approach based on component reliabilities to develop a distribution for the success rate. The resulting distribution provides senior decision makers with the expected success rate and the uncertainty of that estimate. We are beginning an Advanced Concept Technology Demonstration (ACTD) program to develop component models and gather associated data to support broader applications of this approach.
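A minimal sketch of one way such a Bayesian component-reliability calculation can be set up is shown below. The three components, their test counts, the uniform priors, and the series-system assumption are all illustrative and are not drawn from the original work.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical mission components in series, each with (successes, trials)
# from testing. Numbers are invented for illustration only.
component_tests = {"access": (18, 20), "exploit": (14, 16), "effect": (9, 10)}

# Beta(1, 1) prior on each component reliability, updated with the test data;
# mission success is the product of component reliabilities (series system).
n_draws = 100_000
mission = np.ones(n_draws)
for successes, trials in component_tests.values():
    mission *= rng.beta(1 + successes, 1 + trials - successes, size=n_draws)

mean = mission.mean()
lo, hi = np.percentile(mission, [5, 95])
print(f"expected mission success rate: {mean:.2f} (90% credible interval {lo:.2f}-{hi:.2f})")
```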


2003 - "Input-Output Modeling for Effects-Based Operations," Mark A. Gallagher, Anthony W. Snodgrass and Gregory J. Ehlers

Abstract: Leontief developed the input-output model to make macro-economic assessments based on the interdependencies of various production sectors within a region. We discuss the concept of Effects-Based Operations (EBO) and its inherent requirement for analytical modeling to assess the effects of potential actions. We propose that the input-output model may be used to assess the direct and indirect impact of military operations in an enemy country. We present the input-output model and demonstrate how it can be used to assess the impacts of a variety of military strategies against a region or nation.
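For readers unfamiliar with the model, the short sketch below works a toy example of the Leontief relationship x = (I - A)^(-1) d and the combined direct and indirect effect of degrading one sector. The three sectors and all coefficients are invented for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative 3-sector technical-coefficient matrix A (power, fuel, transport):
# A[i, j] is the input from sector i needed per unit of sector j's output.
A = np.array([
    [0.10, 0.30, 0.20],   # power
    [0.20, 0.05, 0.25],   # fuel
    [0.15, 0.20, 0.05],   # transport
])
final_demand = np.array([100.0, 80.0, 60.0])

# Leontief: total output x solves x = Ax + d, i.e. x = (I - A)^(-1) d.
leontief_inverse = np.linalg.inv(np.eye(3) - A)
baseline_output = leontief_inverse @ final_demand

# Direct plus indirect effect of a 40% cut in final demand on the power sector.
struck_demand = final_demand * np.array([0.6, 1.0, 1.0])
struck_output = leontief_inverse @ struck_demand

print("baseline output:   ", baseline_output.round(1))
print("output after strike:", struck_output.round(1))
print("percent reduction:  ", (100 * (1 - struck_output / baseline_output)).round(1))
```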


2001 - "The Development of an Information-Based Direct Fire Attrition Structure in AWARS," H. Kent Pickett and W Peter Cherry

Abstract: Over the past 30 years, the Army has relied on a Lanchester-based attrition structure in its aggregate-level combat models. This structure, commonly called the Bonder/Farrell Attrition Algorithm, is based on parameters describing weapon performance (probability of kill given a shot and probability of detection given target exposure), target exposure, and firer preference for particular targets. By the mid-1990s, it became apparent that the Napoleonic process represented by such combat simulations, in which units march together and fight until one breaks and runs, was not the battle Army commanders expected to fight in the near term or the future. This paper describes a new approach to simulating the direct fire process in the Army Warfighting Simulation (AWARS). The AWARS methodology uses the Bonder/Farrell attrition rate process but also attempts to represent other aspects of the battle affecting internal unit coherence, in particular the state of information a unit has about other friendly and enemy units and about its own situation. The paper describes the overall architecture of the AWARS model affecting the direct fire process: unit geometry, the representation of key parameters in orders, situation maps, and the timing structure driving the simulation. Finally, a set of succeeding states is described that allows units to establish successively higher levels of unit command, control, and coherent fire (providing more accurate fields of fire and better knowledge of enemy location and intentions). The paper also describes the ability of the methodology to represent battle “lulls,” the times when individual vehicle crews simply do not engage targets at their most efficient rate.
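As a generic point of reference (and not the AWARS algorithm itself), the following sketch integrates a simple Lanchester aimed-fire attrition model in which each side's attrition coefficient is composed from firing rate, detection probability, and kill probability, in the spirit of the Bonder/Farrell rate calculation. All numbers are illustrative assumptions.

```python
# Generic Lanchester-style aimed-fire attrition loop (Euler integration).
# Coefficients below are invented; they do not come from AWARS or the paper.

blue, red = 100.0, 120.0
# kills per shooter per minute = firing rate * P(detect) * P(kill | shot)
blue_rate = 2.0 * 0.4 * 0.15   # blue shooters attrit red
red_rate = 1.5 * 0.4 * 0.12    # red shooters attrit blue
dt, t = 0.1, 0.0

while blue > 1 and red > 1 and t < 120:
    d_red = -blue_rate * blue * dt   # aimed fire: losses scale with shooter count
    d_blue = -red_rate * red * dt
    red, blue = max(red + d_red, 0.0), max(blue + d_blue, 0.0)
    t += dt

print(f"after {t:.0f} min: blue={blue:.1f}, red={red:.1f}")
```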


2000 - "Why Skill Matters in Combat Outcomes: and How to Include It in Combat Modeling," Michael Fischerkeller, Wade Hinkle and Stephen D. Biddle

Abstract: Combat assessment and force balance methodologies will play important roles during the next Quadrennial Defense Review in analyzing the capabilities of postulated forces, in planning scenarios, and in determining the proper balance between readiness and modernization. Research at IDA suggests that the analytic tools currently used for these purposes may substantially undervalue the contributions of military skill and advanced operational concepts. Our work in this area has won two awards: the MORS 1997 Barchi Prize and a 1999 MORSS Medal for Excellence in Operations Research. This project used a combination of statistical analysis of historical data, combat simulation experimentation, and close examination of critical historical cases to develop and initially test a formal set of hypotheses about how technology, skill, and operational concepts interact to produce combat outcomes. This presentation to the MORS community will offer a summary of the research-to-date and a discussion of whether and how the resulting mathematical model ought to be included in QDR-related efforts to improve existing analytic tools.


1999 - "Signals from Space: The Next-Generation Global Positioning System," Lee J. Lehmkuhl, David J. Lucia and James K. Feldman

Abstract: The Global Positioning System (GPS) is a constellation of satellites that provides precise navigation and timing information to military and civilian users worldwide. GPS signals from space guide cruise missiles and rental cars, and allow us to track the locations of railroad boxcars, golf carts, and soldiers in the field. As the provider of this national and international asset, the US has a vested interest in seeing that GPS remains the premier space-based navigation system, and has embarked on a GPS modernization program. Improvements in signal generation and processing technology now allow us to consider new signal structures, which will greatly improve the usefulness of GPS for military and civilian users. Choosing between these new signals, however, presents senior decision makers with a host of both technical and operational tradeoffs, many between competing military and civilian interests. The decision analysis presented here modeled the value of GPS to different user communities and quantified the tradeoffs. The results allowed the GPS Independent Review Team to recommend a new signal with superior military value that also meets all civilian technical performance requirements.
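The snippet below sketches the general form of an additive multi-attribute value comparison of the kind such a decision analysis might use to weigh candidate signal designs for different user communities. The candidates, attributes, weights, and scores are invented and do not reflect the GPS Independent Review Team's actual model or data.

```python
# Toy additive multi-attribute value model: score each candidate signal for
# each user community as a weighted sum of attribute scores (all values invented).
candidates = {
    "Signal A": {"jam_resistance": 0.9, "civil_accuracy": 0.6, "spectrum_compat": 0.7},
    "Signal B": {"jam_resistance": 0.7, "civil_accuracy": 0.9, "spectrum_compat": 0.8},
}
community_weights = {
    "military": {"jam_resistance": 0.6, "civil_accuracy": 0.1, "spectrum_compat": 0.3},
    "civilian": {"jam_resistance": 0.1, "civil_accuracy": 0.6, "spectrum_compat": 0.3},
}

for name, scores in candidates.items():
    values = {
        community: sum(weights[attr] * scores[attr] for attr in scores)
        for community, weights in community_weights.items()
    }
    print(name, {c: round(v, 2) for c, v in values.items()})
```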


1998 - "The Generation, Use, and Misuse Of "PKs" in Vulnerability/Lethality Analyses," Paul H. Deitz and Michael W. Starks

Abstract: Beginning with World War II and its aftermath, the area of ballistic vulnerability/lethality (V/L) was first defined as a specific discipline within the field of ballistics. As the field developed, various practices and metrics emerged. In some cases metrics were developed that were abstractly useful but bore no direct relationship to field observables. In the last decade, as issues concerning Live-Fire strategies have gained importance, increased attention has been focused on V/L with the intent of bringing greater rigor and clarity to the discipline. In part this effort has taken the form of defining a V/L Taxonomy, a method of decomposing a series of concatenated complex processes into separable, less-complex operations, each with certain specifiable properties and relationships.

Using the Taxonomy, this paper describes how the most commonly used V/L metrics are functions of platform aggregate damage, reduced platform capability, and reduced platform military utility. We show that these three distinct and separable classes of metrics are linked by operators that are multivariate, stochastic, and nonlinear. We also show that it is useful to form probability distributions with respect to initial and boundary conditions in order to characterize damage, capability, and utility. Many defense community studies ignore these distinctions to the detriment of fundamental clarity. Examples are given and potential remedies described.
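The toy Monte Carlo chain below illustrates the idea of distinct, stochastically linked damage, capability, and utility metrics formed as distributions over initial conditions. The components, angle dependence, and thresholds are invented for illustration and are not taken from the paper's taxonomy.

```python
import numpy as np

rng = np.random.default_rng(seed=11)

# Toy chain from shot conditions to damage, capability, and utility,
# producing distributions rather than a single "PK" (all mappings invented).
n_shots = 50_000
hit_angle = rng.uniform(0, 90, n_shots)          # initial condition (degrees)

# Stochastic damage operator: component kill probabilities depend on angle.
p_mobility_kill = 0.2 + 0.4 * (hit_angle / 90)
p_firepower_kill = 0.5 - 0.3 * (hit_angle / 90)
mobility_lost = rng.random(n_shots) < p_mobility_kill
firepower_lost = rng.random(n_shots) < p_firepower_kill

# Nonlinear capability operator: remaining capability from surviving components.
capability = (1 - 0.6 * mobility_lost) * (1 - 0.8 * firepower_lost)

# Utility operator: a platform below 50% capability contributes nothing to the mission.
utility = np.where(capability >= 0.5, capability, 0.0)

print(f"mean capability: {capability.mean():.2f}")
print(f"mean military utility: {utility.mean():.2f}")
print("P(utility == 0):", np.round((utility == 0).mean(), 2))
```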
