1. Introduction
Autonomous underwater vehicles (AUVs) are mechatronic robotic systems that can navigate underwater while untethered from any other system. For the purpose of this study, underwater gliders, which use a buoyancy change engine for propulsion but otherwise share many of the attributes of AUVs, are considered AUVs. Because ships are expensive to operate and satellites cannot observe many ocean features, or are restricted to observing the near surface, AUVs are an effective technology for sampling the ocean (Singh et al. 2004; Webb et al. 2001; Eriksen et al. 2001; Rudnick et al. 2016).
Early studies on propeller-driven AUV risk and reliability presented analysis methodologies with examples from a range of deployments (Podder et al. 2004; Griffiths and Trembanis 2007; Griffiths et al. 2009; Brito et al. 2010, 2012). More recently, two independent studies of underwater glider reliability have shown a significant difference in performance between gliders maintained and deployed by their developers and those deployed by purchasers (Brito et al. 2014). Rudnick et al. (2016) examined the operation of the Spray underwater glider by the development and operations team at the Scripps Institution of Oceanography. Their survival analysis concluded that for a 100-day mission, the probability of survival for the Spray glider was 0.83. For this calculation the authors counted faults that would lead to premature mission abort, although in some cases the mission was not actually aborted, because the main aim was to demonstrate a target mission length rather than to gather scientific data. In contrast, an analysis of commercially available gliders, operated by nondevelopers, concluded that the probability of a deep glider surviving a 90-day mission without premature mission abort was 0.5 (Brito et al. 2014). Differences in survival estimates have also been observed for the risk of vehicle loss, with Rudnick et al. (2016) reporting a survival probability of 0.95 for a 100-day mission and Brito et al. (2014) reporting a survival probability of 0.8 for a 100-day mission. In their survival analysis with respect to vehicle loss, Rudnick et al. (2016) considered as failures both faults that led to a loss of control over buoyancy and vehicle losses. In the study by Brito et al. (2014), the authors considered vehicle losses as failures. Both studies argue that understanding and eliminating failure modes are key to increasing the probability of successful mission completion and survival.
At this stage it is important to distinguish between failures and faults, as both terms are used in this manuscript. Our definition of failure is aligned with that adopted by the British Standard (BSI 1991, p. 5), which states that failure is “the termination of the ability of an item to perform a required function.” A failure is the result of a component fault or of human error. A component fault is caused by a defect, and human error is caused by a person’s lapse, slip, mistake, or violation (Reason 1990). For simplicity, in this paper we use fault to encapsulate both a component defect and a human error. In the work presented by Brito et al. (2014) and Rudnick et al. (2016), mission abort and AUV loss were considered mission failures. In this paper, failure is defined as the termination of the ability of an AUV to perform its required function, which can potentially lead to AUV loss. Generally, failure mitigation (risk mitigation) is defined as the process of annulling the consequence of failure or its likelihood of occurrence (Subramanian et al. 1996). In this paper, failure mitigation (risk mitigation) is achieved by reducing the likelihood of failure occurrence.
While both Brito et al. (2014) and Rudnick et al. (2016) give emphasis to the role of mitigation through failure prevention and correction, neither presents an analytical framework for updating the risk profile based on a structured assessment of the understanding and elimination of failure modes and the subsequent effect on field results. Such an analytical approach would have to be based on probability theory, as is proposed in this paper.
The novelty of the technology makes it impossible to obtain past data on the probability of failure mitigation. Therefore, the assessment of the probability of failure mitigation for this type of technology must rely on expert subjective judgment. The same approach has been adopted in space exploration (Feather and Cornford 2003).
Feather and Cornford (2003) presented a hazard management framework to monitor and update the likelihood of design failure modes occurring. The system, Defect Detection and Prevention (DDP), is a probabilistic model. The key assumption is that each failure may have a number of prevention, analysis, control, and test (PACT) methods. The efficiency of each PACT in mitigating the failure is assessed by a group of experts in the field. The DDP system considers that multiple PACTs may have adverse or positive effects on a failure mode; PACTs are not independent, and they may themselves introduce a failure into the system. The probability model aggregates all these effects to quantify the likelihood of failure mitigation. Brito et al. (2012) also used a probabilistic model of failure mitigation. However, in contrast to Feather and Cornford (2003), the mitigation actions were considered to be independent.
In this paper we present a Bayesian approach to updating the risk profile of an AUV, based on the preimplementation perceived effectiveness of the mitigation by subject matter experts and the observed performance during subsequent missions. We present a case study of the International Submarine Engineering Ltd. (ISE) Explorer AUV to illustrate the application of the method, based on the initial failure and survivability data presented in Brito et al. (2012).
This paper is organized as follows. Section 2 presents a summary of the methods used for estimating AUV survival. Section 3 presents the data of the ISE Explorer campaigns in the Arctic in 2010 and 2011. Section 4 presents the method proposed for updating the risk profile of an autonomous vehicle based on the probability of failure mitigation and field results. Section 5 presents the application of the method to the ISE case study. Section 6 presents the conclusions.
2. Survival analysis for autonomous underwater vehicles
For a successful AUV mission, the observation gathered is that the vehicle survived at least the mission time t or the mission distance x(t). In statistical survival modeling, an observation whose record ends without a death is denoted as censored. For AUVs, successful missions are modeled as right censored, meaning that the vehicle has survived at least the time or distance recorded for that mission. Some missions, however, end in failure; in statistical modeling, this event is represented as death. A survival function, or survival distribution, is a mathematical function that captures the probability that an individual or a system will survive beyond a specific time. One can make assumptions about the shape of the survival distribution, and parametric maximum likelihood methods can then be used to fit the chosen distribution to the data. Rudnick et al. (2016) used an exponential distribution to model Spray glider reliability, but other models exist, such as the lognormal and Weibull distributions.
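As a note on the parametric route described above (a standard result, not specific to the cited studies), for an exponential survival model with right-censored observations the maximum likelihood estimate of the failure rate is:

```latex
% Standard MLE for an exponential survival model with right censoring:
% d is the number of observed failures and the sum runs over all observed
% mission durations, failed and censored alike.
\[
  \hat{\lambda} = \frac{d}{\sum_i t_i}, \qquad
  \hat{S}(t) = e^{-\hat{\lambda}\, t}.
\]
```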
Nonparametric methods can also be used to estimate the probability of survival for a population, without making any assumption with respect to the shape of the distribution (Kalbfleisch and Prentice 2002). A number of studies have used nonparametric methods for estimating the survivability of autonomous underwater vehicles (Griffiths et al. 2003; Podder et al. 2004; Brito et al. 2014; Rudnick et al. 2016), including the Kaplan–Meier estimator (Kaplan and Meier 1958).
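A minimal sketch, in Python, of the standard Kaplan–Meier product-limit estimator applied to hypothetical right-censored mission records follows; the distances and failure indicators are invented for illustration and are not drawn from any of the datasets cited above.

```python
# Minimal sketch of the standard Kaplan-Meier product-limit estimator for
# right-censored mission records (hypothetical data, not the ISE dataset).
import numpy as np

def kaplan_meier(distances, failed):
    """Return (event_distances, survival) for right-censored data.

    Ties are not handled specially; this is a sketch, not a library.
    """
    order = np.argsort(distances)
    d = np.asarray(distances, dtype=float)[order]
    f = np.asarray(failed, dtype=bool)[order]
    n_at_risk = len(d)
    event_distances, survival, s = [], [], 1.0
    for dist, fail in zip(d, f):
        if fail:                          # mission ended in failure ("death")
            s *= 1.0 - 1.0 / n_at_risk    # product-limit step
            event_distances.append(dist)
            survival.append(s)
        n_at_risk -= 1                    # failed or censored: leaves risk set
    return event_distances, survival

# Hypothetical missions: distance travelled (km) and whether each ended in failure.
dists  = [5, 12, 20, 31, 45, 60, 90, 150]
failed = [1,  0,  1,  0,  0,  1,  0,   0]
for x, s in zip(*kaplan_meier(dists, failed)):
    print(f"S({x:.0f} km) = {s:.3f}")
```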
These studies depend on the use of mission data collected in the target operational environment. For missions in extreme environments, such as under ice, there is a paucity of mission data collected from the target environment, making it impossible to use conventional survival methods to estimate the mission risk. To address this problem, Brito et al. (2010) developed an extended version of the Kaplan–Meier estimator. Their estimator uses mission data collected in a benign environment and expert subjective judgment on the impact of those data in the targeted extreme environment.
The extended Kaplan–Meier survival estimator,


This estimator was used to inform operational decision-making for AUV deployments in extreme environments (Brito et al. 2010, 2012). There are two types of risk mitigation that lead to a reduction in the likelihood of failure: 1) monitoring distance and 2) failure prevention and correction. Details of each are presented in the following subsections.
a. Mitigation with monitoring distance


The choice of the most suitable monitoring distance is informed by both the slope of the survival distribution and the practicality and cost of its implementation. The cost and the practical challenges of implementing the monitoring distance are not discussed in this paper.
With respect to the survival distribution, if the slope of the survival distribution is constant with distance, then there is no gain in survival from implementing a monitoring distance. If, on the other hand, the survival profile shows a steep slope over the first tens of kilometers and then plateaus at greater distances, then there is a benefit to be gained from implementing a monitoring distance. The optimum monitoring distance is the distance at which the survival profile becomes closest to flat.
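One plausible way to express the benefit of a monitoring distance d, assuming that surviving the monitored leg simply conditions the survival function (an assumption made here for illustration, not necessarily the equation used by the authors), is:

```latex
% Conditional survival beyond a monitoring distance d (illustrative form):
\[
  S(x \mid X > d) = \frac{S(x)}{S(d)}, \qquad x \ge d,
\]
% which exceeds S(x) whenever S(d) < 1; the gain is largest when the survival
% profile drops steeply before d and is nearly flat afterwards.
```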
Depending on the environment, a monitoring distance can be easy or hard to implement. The monitoring mission must allow operators to test the functionality of the AUV; therefore, the AUV must be within communication range of the pilot or control team. Range also matters for recovery: implementing a monitoring mission implies that the AUV can be recovered at any time if a fault occurs that needs correcting before committing to the main missions. Different environments affect the ability to communicate with, or to recover, the AUV, and the monitoring distance must be defined with an understanding of these constraints.
b. Failure prevention and correction






3. ISE Explorer case study
The ISE Explorer is an autonomous underwater vehicle developed by ISE of Port Coquitlam, British Columbia, Canada. The AUV has a length of 7.4 m and a body diameter of 0.74 m and is depth rated to 5000 m. The weight of the AUV varies from one mission to another, depending on the payload and battery configuration; for the 2010 and 2011 Arctic campaigns, the weight was 1870 kg. Propulsion is by a propeller, with energy for propulsion, control, and communication supplied by lithium-ion Exide batteries (30 modules of 1.6 kWh each, 48 kWh in total). The maximum range is 450 km at 1.5 m s−1 (Kaminski et al. 2010; Crees et al. 2010).
In this case study, we consider the operational data gathered for vehicle B05 during the Arctic operations in 2010 and 2011. The initial risk analysis for the operations in the Arctic was presented in Brito et al. (2012).
The dataset consisted of 32 missions; the fault data are presented in Table 1. For each fault, experts were asked to assess the likelihood of a fault leading to vehicle loss and the likelihood of a failure being mitigated in light of the mitigation action discussed with the engineering team. The expert judgments were elicited at two separate workshops. The first workshop was held in Halifax, Nova Scotia, Canada, from 8 to 10 December 2009; the second workshop was held in Vancouver, British Columbia, Canada, in 2011. The deployments within the dataset were from the following events:
Fabrication and assembly (May–September 2009)
Builder sea trials (8 September–12 October 2009)
First homing and positioning trials (16 November–4 December 2009)
Second homing and positioning trials (4 January–28 January 2010)
Mission testing (22 February–12 March 2010)
Arctic survey (4 May–22 May 2010)
Vancouver trials (17 February–22 February 2011)
Bedford Basin trials (14 June–15 June 2011)
Probability of failure mitigation for all faults presented in Brito et al. (2012). Column 1 is the fault reference number. Column 2 contains the values of


The full fault description and mission details are provided in Brito et al. (2012). In the same paper, the authors present the results of the expert judgment elicitation. A formal expert judgment elicitation was conducted in order to elicit from a group of five experts two risk assessments for each fault. First, for each fault, the experts were asked to agree on the probability of the fault leading to vehicle loss. Following the completion of this process, the experts were then asked to agree on the probability that the failure mitigation strategy proposed by the engineers would correct the failure. Brito et al. (2012, p. 1693) present a table of the agreed expert assessments for all 51 failures in the dataset. When the authors plotted the density distribution of the probability of failure mitigation, they realized that there were three distinct modes in this distribution. The first mode, at zero, comprised assessments for which the failure had not been understood and for which a mitigation plan had not been developed. The second mode, at 0.5, comprised failures for which a mitigation strategy had been developed but not tested. The third mode, at 0.9, comprised failures for which a mitigation plan had been developed and tested.
4. Method for updating risk based on mitigation effectiveness
Previous research has assumed that the probability of failure mitigation was a fixed value (Brito et al. 2010, 2012). Once agreed in the workshop by a group of experts, its value is assumed to remain constant. Our argument now is that in reality each mission is a test of the mitigation action or strategy. Therefore, the result of the test can be used to update the probability that the failure was mitigated. This can be modeled using Bayesian theory, which captures the rationale that belief in a hypothesis is influenced by new observations (or evidence). The posterior parameter distribution p(θ|D) is inferred from the observed data D. The prior distribution p(θ) represents any known information regarding θ before D is observed.
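Written out, the relationship described above is the standard form of Bayes’ rule for a parameter θ and data D:

```latex
% Bayes' rule for the parameter theta given observed data D.
\[
  p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}
  \propto p(D \mid \theta)\, p(\theta).
\]
```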
Before any AUV missions take place, experts agree on the probability of failure mitigation. This probability tends to be specified as a single value between 0 and 1 rather than as a probability density function. The probability of mitigation agreed at the workshop is the prior for the probability of failure mitigation, which must be updated in light of successful missions as well as recurrences of failures.
There are two key problems. First, we must model the prior probability of failure mitigation in a way that allows its value to be updated based on field observations. Second, we must have a means of conducting the Bayesian inference. These problems are discussed in sections 4a and 4b, respectively.
a. Modeling the prior
In the process of building the risk model, the experts agree on the probability that the failure correction action will remove the failure. In Brito et al. (2012), this is denoted as the probability of failure mitigation and it is represented by
There is uncertainty associated with the estimate of




The beta function is the normalization constant for the beta distribution.
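Although the equations of this subsection are not reproduced here, the conclusions (section 6) state that the elicited value was taken as the mean of the beta prior and that a variance had to be assumed. A minimal Python sketch of one way to obtain the beta parameters by moment matching, with purely illustrative numbers, is:

```python
# Sketch: construct a beta prior whose mean equals an elicited probability of
# failure mitigation, with an assumed variance (both numbers illustrative).
from scipy.stats import beta

def beta_from_mean_var(mean, var):
    """Method-of-moments beta parameters; requires var < mean * (1 - mean)."""
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

a, b = beta_from_mean_var(mean=0.5, var=0.03)     # illustrative elicited value
prior = beta(a, b)
print(f"alpha={a:.2f}, beta={b:.2f}, mean={prior.mean():.2f}, var={prior.var():.3f}")
```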










b. Bayesian inference


The same convention is used as in section 4a, because θ is again the probability of failure mitigation.
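As a sketch of the inference step, assuming each post-assessment mission is treated as a binomial trial in which a “success” means the failure did not recur, the conjugate beta-binomial update takes the following form; the prior parameters and trial counts below are illustrative and are not taken from Table 1.

```python
# Sketch: conjugate beta-binomial update, treating each post-assessment mission
# as a trial in which "success" means the failure did not recur.
# Prior parameters and trial counts are illustrative, not taken from Table 1.
from scipy.stats import beta

def update(alpha, beta_param, n_missions, n_recurrences):
    """Posterior beta parameters after n_missions with n_recurrences."""
    successes = n_missions - n_recurrences
    return alpha + successes, beta_param + n_recurrences

a0, b0 = 3.67, 3.67                       # illustrative prior with mean 0.5
a1, b1 = update(a0, b0, n_missions=12, n_recurrences=1)
posterior = beta(a1, b1)
print(f"posterior mean={posterior.mean():.2f}, var={posterior.var():.4f}")
```

With a prior mean of 0.5, twelve missions with a single recurrence move the posterior mean to about 0.76 while reducing the variance, which is consistent with the behavior reported in section 5, where the posterior probability of mitigation increases even when one or two failures reemerge.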




5. Bayesian updated judgments
To illustrate the application of the Bayesian inference approach, we applied it to the ISE Explorer AUV B05. Twelve missions were performed after the failure mitigation assessment process in January 2010. Of these, six missions (dives 51–56) were carried out in the Arctic, four missions took place off Vancouver on 17, 18, 21, and 22 February 2011, and two missions were carried out in the Bedford Basin on 14 and 15 June 2011. Here we apply Bayesian inference to update the likelihood that each failure had been mitigated.
a. Updating failure risk
Figure 1 presents the cumulative distribution functions (CDFs) of the probability of failure mitigation for failures 9, 13, 15, 40, 42, and 35b. These are typical examples from the three modes of the probability of failure mitigation distribution, covering failures that did and did not recur.

Fig. 1. Density distribution for the mean of the probability of fault mitigation for failures 9, 13, 15, 35b, 42, and 40.
The posterior probability of failure mitigation increases even if one or two failures emerge during subsequent trials, indicating that the failures are rarer than the prior (expert) probabilities imply.
The updated values for all probabilities of mitigation are presented in Table 1. Where the prior
The table shows that most failures did not reemerge in subsequent missions.
Figure 2 presents a summary of the effect of the probability of mitigation, before and after subsequent missions, for failures for which the probability of loss was greater than 0.01.

Fig. 2. Probability of loss given fault, mitigation, and trials results, ranked in order of unmitigated probability of loss given fault, for those faults above 0.01. The upper point of the “error bar” is the unmitigated data, the main point is after a priori estimated mitigation, and the lower error bar is the posterior after Bayesian inference.
In Brito et al. (2012) the authors show that the distribution of the probability of mitigation can be trimodal. Figure 3 shows the pdf of the probability of mitigation before any field evidence of the effectiveness of the failure mitigation (gray) and after incorporating the subsequent missions (black). The three modes of the distribution identified in Brito et al. (2012) (gray) are still evident in the a posteriori distribution (black).

Fig. 3. Probability of failure mitigation: the prior probability of mitigation distribution (gray) and the posterior probability of mitigation distribution (black).
b. Reliability growth
Having calculated the posterior for the probability of failure mitigation (see Table 1), it is then possible to calculate the updated survival profile for the autonomous underwater vehicle using the extended version of the Kaplan–Meier estimator, Eq. (1). Figure 4 shows the survival distribution.

Fig. 4. Survival distribution for the ISE Explorer vehicle deployment in the Arctic: the survivability without mitigation (solid line), the survivability considering the mitigation based on the prior belief (dashed line), and the survivability considering the updated mitigation (dotted line).
Figure 4 shows that for long missions, the largest increase in survivability comes from addressing the historical failures with their a priori estimated probability of effective mitigation. Comparing the mitigated and Bayesian-updated survival profiles, for a mission of between 57 and 324 km the probability of survival increases by 1.6% (see Fig. 4). These results are based on the mean estimate of the updated probability of mitigation for each failure. Table 1 also shows a reduction in variance from the prior estimate of the probability of mitigation to the posterior estimate. The results therefore show that, with the Bayesian inference, there is an increase both in the probability of survival and in confidence.
From the survival distribution for the unmitigated case, it can be seen that the probability of survival decreases by approximately 15% over the first 31 km of a mission. This is the most significant slope in the survival distribution before it plateaus, and it informed the implementation of mission 51. Two other test missions were conducted, missions 52 and 53. For missions 51 and 52, the vehicle was constantly monitored using short-range localization (SRL). Table 2 presents the missions conducted by the ISE Explorer vehicle B05 in the Arctic during the 2010 campaign.
All missions conducted by ISE Explorer B05 in the Arctic campaign in 2010.


Table 3 presents the survival estimates for missions 51–53, considering the unmitigated, mitigated, and Bayesian updated survival profiles. The probability of survival for missions 52 and 53, given the implementation of a monitoring distance of 31 km, was calculated using Eq. (2).
Survival estimates for ISE Explorer’s missions 51–53. The monitoring mission of 31 km corresponds to mission 51.


For mission 51, the survivability of the AUV increased by 0.06 from the mitigated survival profile to the Bayesian-updated survival profile. For missions greater than 31 km, the probability of survival from the Bayesian-updated risk profile is 0.116 higher than the probability of survival estimated using the mitigated survival profile.
Table 4 presents the survival estimates for missions 55 and 56. The survival profile contains data up to a distance of 334 km; therefore, the probability of survival for mission 54 cannot be calculated. The product rule was used to estimate the overall probability of surviving both survey missions.
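The product rule referred to here amounts to the following, under the assumption that the two survey missions can be treated as independent trials:

```latex
% Overall survival of two consecutive survey missions under an independence
% assumption (illustrative statement of the product rule).
\[
  P(\text{survive 55 and 56}) = P(\text{survive 55}) \times P(\text{survive 56}).
\]
```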
Survival estimates for ISE Explorer’s Arctic survey missions.


Considering no failure mitigation, for the case where a monitoring distance of 87 km is implemented (missions 51 and 52), the probability of surviving missions 55 and 56 increases by 17.7%. If, on the other hand, we consider only failure correction with the Bayesian inference and no monitoring distance, the probability of survival increases by 26.2%. This shows the importance of quantifying the impact of failure correction and of updating the risk profile based on subsequent missions.
6. Conclusions
Autonomous underwater vehicles are complex systems where managing the risk of premature abort or loss is very important. While designers and manufacturers will have sought to design and build reliable components, hardware subsystems, and software, a vehicle in the hands of users who were not the developers is still likely to show faults and failures. Developers and experts are efficient at managing risk through mitigation—that is to say, describing and understanding the failures that emerge, coming up with solutions, and then testing those solutions. However, this is usually a subjective process without an analysis of the likely impact of the fault or the effectiveness of failure mitigation on the probability of a vehicle surviving a mission.
This paper injects transparency into this process. It argues that while an initial assessment can be obtained for the probability of successful mitigation for a failure, the actual probability of mitigation must be updated based on the results of subsequent missions.
Tracking reliability growth is required in order to ensure effective outcomes from the deployment of autonomous systems. In this paper we present a Bayesian formalism for tracking the reliability growth of autonomous underwater vehicles. Each mission was treated as a binomial trial. We applied the method to update the risk profile of the ISE Explorer AUV following the Arctic campaign in 2010.
Another potential application of this approach within the AUV context is updating the availability prediction. In an AUV deployment there are several phases, for example, pretest, posttest, vehicle overboard (if being launched from a ship), and so on. Availability is the likelihood of a successful sequence of phases taking place (Brito and Griffiths 2011). In an actual field deployment, the transition from one phase to the next can be considered a binomial trial. Similar to what has been presented in this paper, the a priori probability of a successful transition can be updated using the results of field deployments, thus allowing the availability prediction of an AUV to be updated.
In our view this study has highlighted some limitations in the expert judgment elicitation, as the practice of eliciting a single value for the probability of mitigation presents a problem. Experts tend to assign a probability of 0 if the event is very unlikely and a probability of 1 if it is very likely. In the elicitation of the probability of loss given a failure, the elicitation process encourages the experts to assign a probability distribution, such as a beta distribution. A similar approach should be adopted for eliciting the probability of failure mitigation; doing so would allow the decision-maker to update the risk for those failures in light of subsequent missions. This research shows that the practice of eliciting a single probability value for the probability of failure mitigation can be problematic: it forces the analyst to make assumptions when modeling the probability of mitigation, and in making these assumptions the analyst may introduce bias. This is evident in Fig. 3, where the probability of failure mitigation is represented as a number of counts. The probabilities of failure mitigation provided by the experts were discrete values 0, 0.1, …, in increments of 0.1, which allowed us to model the probability of failure mitigation as a number of counts. Had a more detailed probability of failure mitigation been elicited, it could instead have been represented as a density function. In the case study presented in this paper, we modeled the probability of failure mitigation with a beta distribution. To match a beta distribution to the probability of failure mitigation, we assumed that the mean of the beta distribution was the probability of failure mitigation. The width of the prior (expert) probability distributions is also not known; to model it, we assumed that the prior variance was the same as the variance for the mode of the prior probability of failure mitigation distribution, calculated using Eq. (7). Alternatively, we could have assumed that the mode of the beta distribution was the probability of failure mitigation, or have made another assumption for the variance. This is, in our view, one limitation of this research, and it highlights the need for analysts to elicit a probability distribution for the probability of failure mitigation. To our knowledge this is not current practice in risk modeling.
Acknowledgments
The authors thank the anonymous reviewers for their very insightful comments. We also thank the experts who took part in the expert judgment elicitation conducted in Halifax and in Vancouver. This work was partly funded by the Natural Environment Research Council Grant NE/I015647/1 awarded to Mr. Gwyn Griffiths as Principal Investigator.
APPENDIX
Derivation of the Bayesian Equation for Three Variables
Let θ be a vector of probabilities of success for several independent binomial trials. Let n be a vector of the number of trials and let m be a vector of the number of successes.
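Under the stated independence assumption, and assuming for illustration a beta prior on each component (this is the standard conjugate result, not necessarily the exact derivation given here), the posterior factorizes as:

```latex
% Posterior over a vector of independent binomial success probabilities,
% assuming a Beta(alpha_i, beta_i) prior on each theta_i (requires amsmath).
\[
  p(\boldsymbol{\theta} \mid \mathbf{n}, \mathbf{m})
  \propto \prod_i \binom{n_i}{m_i}\,
  \theta_i^{\,m_i} (1 - \theta_i)^{\,n_i - m_i}\, p(\theta_i),
\]
\[
  \theta_i \mid n_i, m_i \sim
  \mathrm{Beta}(\alpha_i + m_i,\; \beta_i + n_i - m_i).
\]
```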









REFERENCES
Brito, M. P., and G. Griffiths, 2011: A Markov chain state transition approach to establishing critical phases for AUV reliability. IEEE J. Oceanic Eng., 36, 139–149, https://doi.org/10.1109/JOE.2010.2083070.
Brito, M. P., G. Griffiths, and P. Challenor, 2010: Risk analysis for autonomous underwater vehicle operations in extreme environments. Risk Anal., 30, 1771–1788, https://doi.org/10.1111/j.1539-6924.2010.01476.x.
Brito, M. P., G. Griffiths, J. Ferguson, D. Hopkin, R. Mills, R. Pederson, and E. MacNeil, 2012: A behavioral probabilistic risk assessment framework for managing autonomous underwater vehicle deployments. J. Atmos. Oceanic Technol., 29, 1689–1703, https://doi.org/10.1175/JTECH-D-12-00005.1.
Brito, M. P., D. A. Smeed, and G. Griffiths, 2014: Underwater glider reliability and implications for survey design. J. Atmos. Oceanic Technol., 31, 2858–2870, https://doi.org/10.1175/JTECH-D-13-00138.1.
BSI, 1991: Quality vocabulary—Part 3: Availability, reliability and maintainability terms; Section 3.1—Guide to concepts and related definitions. British Standards Institution Publ. BS 4778-3.1:1991, 32 pp.
Crees, T., and Coauthors, 2010: UNCLOS under ice survey—An historic AUV deployment in the Canadian high arctic. Proc. OCEANS 2010 MTS/IEEE, Seattle, WA, IEEE, 8 pp., https://doi.org/10.1109/OCEANS.2010.5664438.
Eriksen, C. C., T. J. Osse, R. D. Light, T. Wen, T. W. Lehman, P. L. Sabin, J. W. Ballard, and A. M. Chiodi, 2001: Seaglider: A long-range autonomous underwater vehicle for oceanographic research. IEEE J. Oceanic Eng., 26, 424–436, https://doi.org/10.1109/48.972073.
Feather, M. S., and S. L. Cornford, 2003: Quantitative risk-based requirements reasoning. Requir. Eng., 8, 248–265, https://doi.org/10.1007/s00766-002-0160-y.
Griffiths, G., and A. Trembanis, 2007: Eliciting expert judgment for the probability of AUV loss in contrasting operational environments. 15th International Symposium on Unmanned Untethered Submersible Technology 2007, Autonomous Undersea Systems Institute, 494–510.
Griffiths, G., N. W. Millard, S. D. McPhail, P. Stevenson, and P. G. Challenor, 2003: On the reliability of the Autosub autonomous underwater vehicle. Underwater Technol., 25, 175–184, https://doi.org/10.3723/175605403783101612.
Griffiths, G., M. Brito, I. Robbins, and M. Moline, 2009: Reliability of two REMUS-100 AUVs based on fault log analysis and elicited expert judgment. 16th Annual International Symposium on Unmanned Untethered Submersible Technology 2009 (UUST 09), Autonomous Undersea Systems Institute, 451–463.
Kalbfleisch, J. D., and R. L. Prentice, 2002: The Statistical Analysis of Failure Time Data. 2nd ed. Wiley Series in Probability and Statistics, Wiley, 489 pp.
Kaminski, C., T. Crees, J. Ferguson, A. Forrest, J. Williams, D. Hopkin, and G. Heard, 2010: 12 days under ice—An historic AUV deployment in the Canadian High Arctic. Proc. 2010 IEEE/OES Autonomous Underwater Vehicles, Monterey, CA, IEEE, 11 pp., https://doi.org/10.1109/AUV.2010.5779651.
Kaplan, E. L., and P. Meier, 1958: Nonparametric estimation from incomplete observations. J. Amer. Stat. Assoc., 53, 457–481, https://doi.org/10.1080/01621459.1958.10501452.
O’Hagan, A., C. E. Buck, A. Daneshkhah, J. R. Eiser, P. H. Garthwaite, D. J. Jenkinson, J. E. Oakley, and T. Rakow, 2006: Uncertain Judgments: Eliciting Experts’ Probabilities. Statistics in Practice, Wiley, 338 pp.
Podder, T. K., M. Sibenac, H. Thomas, W. Kirkwood, and J. G. Bellingham, 2004: Reliability growth of autonomous underwater vehicle—Dorado. OCEANS ’04: MTS/IEEE Techno-Ocean ’04 (OTO’04) Conference Proceedings, Vol. 2, IEEE, 856–862, https://doi.org/10.1109/OCEANS.2004.1405576.
Reason, J., 1990: Human Error. Cambridge University Press, 302 pp.
Rudnick, D. L., R. Davis, and J. T. Sherman, 2016: Spray underwater glider operations. J. Atmos. Oceanic Technol., 33, 1113–1122, https://doi.org/10.1175/JTECH-D-15-0252.1.
Singh, H., A. Can, R. Eustice, S. Lerner, N. McPhee, O. Pizarro, and C. Roman, 2004: Seabed AUV offers new platform for high-resolution imaging. Eos, Trans. Amer. Geophys. Union, 85, 289–296, https://doi.org/10.1029/2004EO310002.
Subramanian, S., L. Elliott, R. V. Visnuvajjala, W. T. Tsai, and R. Mojdehbakhsh, 1996: Fault mitigation in safety-critical software systems. Proceedings: Ninth IEEE Symposium on Computer-Based Medical Systems, IEEE Computer Society Press, 12–17.
Webb, D. C., P. J. Simonetti, and C. P. Jones, 2001: SLOCUM: An underwater glider propelled by environmental energy. IEEE J. Oceanic Eng., 26, 447–452, https://doi.org/10.1109/48.972077.