A measure for decompression stress

I have been thinking about what might be a good measure of decompression quality or stress. This comes up as O’Dive, with their end-user Doppler device, claim to provide exactly that in their mobile phone application. As said before, I have some doubts about their approach, so let’s try to come up with something better.

As always here on TheTheoreticalDiver, everything is entirely theoretical; I have zero empirical data of my own. But with this disclaimer, here we go:

I think we all agree that decompression stress arises when tissue pressure exceeds ambient pressure, and that the duration of this excess pressure is also relevant.

But measuring the excess pressure in mbar is probably not very useful, as this is not a natural measure of decompression stress (and the relation might well depend on the tissue under consideration, as well as on ambient pressure and a million other things). The gradient factor approach is to normalise this excess pressure by the maximally allowed excess pressure (which is the definition of the gradient factor), and this sounds like a good start.

But rather than comparing to plain vanilla 100/100 Bühlmann, I would like to be more flexible and compare to your decompression model of choice. So I would like to take

\(q(t) := \max\left(0, \frac{p_i(t) - p_{amb}(t)}{p_{i,max}(t) - p_{amb}(t)}\right)\)

as a measure of momentary stress where p_i,max is the maximal allowed inert gas pressure in tissue number i as given by your decompression model.

This could be plain Bühlmann, in which case this expression is just your momentary gradient factor. Or, if you believe that excess pressure is worse at depth (as the bubbles produced there will grow later on in shallower water), you take p_i,max as given by Bühlmann corrected by your favourite settings of GF_low and GF_high. Or, if you insist on using VPM-B, you use the maximal momentary M-value as predicted by that model. Strictly speaking, this is an extension of that model to real (as opposed to planned) dives, but at least in Subsurface, we have found a way to compute it (such that it pretty much agrees on planned dives).

Obviously, if you decompress exactly as prescribed by your model, q=1 during decompression (possibly a bit lower due to the fact that most people decompress in steps of 10ft/3m rather than continuously). But

\(q\le 1\)

is the definition of not violating the ceiling.

The ceiling being what it is, violating it should be bad (and the more, the worse), whereas staying well below the ceiling gives you additional conservatism, which you probably have to pay for in terms of extended decompression time.

So, my proposal would be to compute the \(L^\lambda\) norm of the function q(t) for a somewhat large \(\lambda\), as this punishes going above q=1:

\(\left(\int q(t)^\lambda\, dt\right)^{1/\lambda}=\left(\int \max\left(0, \frac{p_i(t) - p_{amb}(t)}{p_{i,max}(t) - p_{amb}(t)}\right)^\lambda dt\right)^{1/\lambda}\)

To me, this seems to be a reasonable measure of deco stress. Maybe it’s worthwhile to compute this for a number of real profiles and compare it to deco outcomes (risk of DCS; I wish I had access to the DAN data… or at least Doppler results).
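In case you want to experiment, here is a minimal sketch of the computation, assuming you already have tissue and ambient pressures sampled along the profile; the arrays, the function name and the choice of \(\lambda=4\) are my own illustration, and p_i_max would come from your deco model of choice:

```python
# A minimal sketch (not Subsurface code): the proposed stress measure for one
# tissue, from pressures sampled along a dive profile.
import numpy as np

def deco_stress(t, p_i, p_amb, p_i_max, lam=4.0):
    """L^lambda norm of q(t) = max(0, (p_i - p_amb) / (p_i_max - p_amb)).

    t: sample times in minutes; the p_* arrays are pressures in bar on the
    same time grid. A larger lam punishes exceeding q = 1 more strongly.
    """
    q = np.maximum(0.0, (p_i - p_amb) / (p_i_max - p_amb))
    integral = np.trapz(q**lam, t)   # trapezoidal rule for the time integral
    return integral**(1.0 / lam)
```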

What do you think?

In defence of bar l

In a recent article in GUE’s InDepth blog, Richard Walker argues that the correct unit to discuss the amount of (free) gas in a cylinder is l (litre) and not bar l (bar times litre) because that is the correct unit for a volume.

I would like to argue that while l is indeed the correct unit for a volume, it is not the right unit for an amount of gas: the volume of the gas is not its amount. The amount is measured by counting the number of molecules or (if you don’t like big numbers) the number of moles of gas. Let’s look at the ideal gas law, the mother of all gas calculations (and of course including Boyle-Mariotte):

\(pV=nRT\)

It’s the n on the right hand side (or nR, depending on whether you count molecules or moles) that measures the amount. As the temperature stays pretty much constant during diving (or at least we don’t take changes into account in our calculations), you could include it in the constant as well. Then the RHS (and thus the LHS) describes the diver’s version of the amount of gas.

And that is not changed when feeding the gas through a compressor (let’s still ignore the temperature change), when emptying the cylinder at the surface, or when breathing it at 30m of depth at ambient pressure. This is the amount.

And you should measure it in the unit of this equation, which is pressure times volume, which in SI derived units is conveniently expressed in bar l.

Of course, you can compute the volume this amount of gas fills at some pressure (be it 232 bar or 1 bar or 4 bar in the above examples): simply divide by that pressure. But when gas planning, you should plan the amount of gas you need (you will need it at different pressures), and that is measured (invariantly) in bar l.
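As a toy illustration (my own numbers, and ignoring real-gas compressibility at high pressure), consider an 11 l cylinder filled to 232 bar:

```python
# The amount of gas in bar*l is invariant; the volume it fills depends on pressure.
cylinder_volume = 11.0     # l (water capacity)
fill_pressure = 232.0      # bar

amount = cylinder_volume * fill_pressure   # 2552 bar*l, the quantity to plan with

print(amount / 1.0)   # 2552 l when released at the surface (1 bar)
print(amount / 4.0)   # 638 l when breathed at 30 m (4 bar ambient)
```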

I am sorry Rich, but I believe you are wrong here.

Gold

Nothing profound to say these days (in which I guess more people practice the theoretical side of diving). But yesterday, I was watching the first few episodes of season 3 of Money Heist (La casa de Papel, Haus des Geldes) on Netflix. And not only is the professor complaining that he had too little time to work out the plan, it seems this also applies to the script writers.

I am willing to forgive questionable approaches to tracking satellite phones and hacking into mobile phones. But when it comes to theoretical diving, there is a limit!!1!

I don’t want to spoil too much; let’s just say there is a flooded vault and they are diving to extract bars of gold:

Wasn’t anybody aware that the density of gold is almost twice the density of lead? A good delivery bar weighs 12.4 kg, so the woman holding two of those on outstretched arms has spent a lot of time in the gym. Also, when swimming two of those to the exit, it will be challenging to maintain neutral buoyancy…

When I was a student at DESY (the German particle accelerator lab in Hamburg), even in the theory building we used lead blocks the size of half bricks as door stops, monitor mounts etc. Those came from the experimentalists, who used them for radiation protection. The first time you tried to pick one up, you got the impression it was screwed to the floor, as your brain expected to pick up something the weight of a brick. But no: the density of concrete is about twice that of water, while lead is about 11 kg per litre and gold is 19 kg per litre.
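For the buoyancy question, a quick back-of-the-envelope estimate (taking 19.3 kg/l for gold):

\(V = \frac{12.4\,\mathrm{kg}}{19.3\,\mathrm{kg/l}} \approx 0.64\,\mathrm{l} \quad\Rightarrow\quad W_{\mathrm{apparent}} \approx 12.4\,\mathrm{kg} - 0.64\,\mathrm{kg} \approx 11.8\,\mathrm{kg}.\)

Each bar displaces well under a litre of water and so loses less than a kilogram to buoyancy; a pair of them still pulls the diver down with more than 23 kg.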

Rating decompression

How good was the deco of my dive? For those of us who strive to improve our diving, this is a valid question when looking for ways to optimise how we get out of the water. In Subsurface, for example, we provide various pieces of information, including the individual tissues’ ceilings as well as the heat-map.

Recently, on ScubaBoard, I learned about a new product on the market: O’Dive, the first connected sensor for personalised dives. It consists of a Bluetooth-connected ultrasound Doppler sensor together with an app on your mobile phone. The user has to upload the profile data of the dive to the Subsurface cloud, from where the app downloads it and connects it with the Doppler data (for this, it asks for your Subsurface username and password?!?, the first place where you might ask if this is 100% thought through). Then it displays a rating (in percent) for your deco and offers ways to improve it.

That sounds interesting. The somewhat ambitious price tag (about 1000 Euros for the sensor plus 1.50 Euros per dive analysed), however, prevented me from just buying it to take it for a test. And since it’s a commercial device, they don’t exactly say what they are doing internally. But in their information material they give references to scientific publications, mainly of one research group in southern France.

A fraction of those are conference proceedings and articles in very specialised journals that neither my university’s library nor the Rubicon Archive nor SciHub have access to, but a few of the papers I could get hold of. One of those is “A new biophysical decompression model for estimating the risk of articular bends during and after decompression” by J. Hugon, J.-C. Rostain, and B. Gardette.

That one is clearly in the tradition of Yount’s VPM-B model (using several of the bubble formulas that I have talked about in previous posts). They use a two-tissue model (fitting diffusion parameters from dive data) and find an exponential dependence of the risk of decompression sickness on the free gas in one of the tissues:

Figure 2 from their paper

 Note, however, that the “risk of decompression sickness” is not directly measured but is simply calculated from the

\(PrT = P\sqrt t\)

parameter (P is the ambient pressure in bar during the dive while t is the dive time in minutes, i.e. a characterisation of a dive profile compared to which a spherical cow in a vacuum looks spot on, but this seems to be quite common, see also https://thetheoreticaldiver.org/wordpress/index.php/2019/06/16/setting-gradient-factors-based-on-published-probability-of-dcs/) using the expression

\(r_{DCS} = 4.07\, PrT^{4.14},\)

which apparently was found in some COMEX study. Hmm, I am not convinced Mr. President.

The second reference I could get hold of are slides from a conference presentation by Julien Hugon titled “A stress index to enhance DCS risk assessment for both air and mixed gas exposures”, which sounds exactly like what the O’Dive claims to do.

There, it is proposed to compute an index

\( I= \frac{\beta (PrT-PrT^*)}{T^{0.3}} . \)

Here \(PrT^* = 12\,\mathrm{bar}\sqrt{\mathrm{min}}\), T is the total ascent time, and \(\beta\) depends on the gas breathed: 1 for air, 0.8 for nitrox and 0.3 for trimix (hmm again, that sounds quite discontinuous). As you might notice, what does not go into this index is the depths at which you spend your ascent time, while PrT keeps growing with time indefinitely (i.e. there is never any saturation).
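If I read the slides correctly, the computation would look something like the sketch below; the beta values and the PrT* threshold are quoted from the presentation, while the function name and the use of an average pressure over the dive are my own guesses:

```python
# A sketch of the index as I read the slides; everything not quoted from the
# presentation is my assumption.
from math import sqrt

def hugon_index(p_avg_bar, dive_time_min, ascent_time_min, gas="air"):
    beta = {"air": 1.0, "nitrox": 0.8, "trimix": 0.3}[gas]
    prt = p_avg_bar * sqrt(dive_time_min)   # PrT = P * sqrt(t), grows indefinitely
    prt_star = 12.0                         # PrT* in bar*sqrt(min)
    return beta * (prt - prt_star) / ascent_time_min**0.3
```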

This index is then corrected according to a Doppler count (which is measured by a grade from 0 to 5):

Index correction according to Hugon et al.

This corrected index is then supposed to be a good predictor for the probability of DCS.

I am not saying that this is really what is going on in the app and the device; these are only speculations. But compared to 1000 Euros plus 1.50 Euros per dive, looking at the ceilings and the heat-map in Subsurface sounds like quite good value for money to me.

Why are tissues independent?

This is the second instalment in a series of posts inspired by my reading about the SAUL deco model. Paradoxically, I want to report on something that I realised while reading the articles about that model, even though Saul Goldman, SAUL’s inventor, kind of comes to the opposite conclusion.

For the Bühlmann model, I have argued before that the different tissue half times are not really parameters of the model: If you have a tissue in contact with an environment (be it breathing gas or blood) it is pretty natural that gas diffuses between the environment and the tissue with a rate that is proportional to the difference of partial pressures

\(\dot p_i = d_i (p_a - p_i)\).

Here pi is the partial pressure in tissue number i, pa is the ambient partial pressure and di is the diffusion constant (essentially ln(2) divided by the half-time) of that tissue. This is the differential equation that governs all on- and off-gassing of tissues. To complete the Bühlmann model, one only has to specify a maximal tissue partial pressure in relation to the ambient pressure (or better a minimal ambient total pressure for a given tissue partial pressure) for which Bühlmann assumes an affine relation, i.e.

\(p_a \ge b_i (p_i - a_i).\)

It is important that the coefficients ai and bi for the tissue number i are really parameters of the model that need to be determined empirically (and for which there are tables). The diffusion constants (and thus tissue half-times), however, are not.

You only need to make sure you satisfy the inequality for all possible tissues, that is for all possible half-times. To do so, you try to do the calculation for all relevant time scales. In the case of (non-saturation) diving, the relevant times go from a few minutes (the shortest relevant time scale of ascents and descents) to a few hours.

Of course, you cannot calculate for all possible times. But you make sure you cover this range of times with many representative times. And a little further thought reveals that you should spread those representative times geometrically (so that the range of diffusion constants is also covered geometrically), as in the sketch below.
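In code, this could look as follows (a sketch of my own: the end points roughly span the standard Bühlmann compartments, and for constant ambient pressure the differential equation above has the familiar exponential solution):

```python
# Cover the relevant time scales with geometrically spaced half-times and
# on-gas each "tissue" with the exponential solution of the ODE above.
import numpy as np

half_times = np.geomspace(4.0, 635.0, num=16)   # minutes, geometric spread
d = np.log(2) / half_times                      # diffusion constants d_i

def tissue_pressures(p0, p_a, t_min):
    """p_i(t) = p_a + (p_i(0) - p_a) * exp(-d_i t) for constant ambient p_a."""
    return p_a + (p0 - p_a) * np.exp(-d * t_min)

# e.g. all tissues start at surface pN2 and see 30 min at 4 bar breathing air:
print(tissue_pressures(0.79, 0.79 * 4.0, 30.0))
```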

To sum up, you need to know the a’s and the b’s, but for the d’s, you essentially cover all possible values. It is important that for this result we had to know nothing about the actual tissues of the human body, like bones, nerves, muscles or whatever, since all that is relevant is that they have some diffusion constant in the relevant range.

So far, this was all old news.

Today, I want to look at the question, why we can get away with treating all these tissues independently. Why does tissue number i only exchange inert gas with the blood stream or the breathing gas and not with all the other tissues?

In the Bühlmann model, each tissue exchanges inert gas directly with the blood stream.

Maybe, this assumption is unrealistic and we should better consider a model that looks like this:

Every tissue potentially exchanges gas with all the other tissues.

Maybe, we should allow tissues i and j to exchange gas directly. This would be described by a new diffusion constant cij (non-negative and possibly zero if the tissues are not connected) and the new differential equation would look like

\(\dot p_i = d_i (p_a - p_i)+\sum_{j\ne i} c_{ij}(p_j - p_i)\).

In order for no gas to get lost, we would need cij to equal cji. To simplify this, we could assemble the different partial pressure pi into a vector p, the diffusion constants di into a vector d and the new diffusion constants cij into a symmetric matrix C.

If you have only a little experience with ODEs, you realise that this equation is a linear inhomogeneous ODE and can be written in the form

\(\dot p = Ap + p_a d\)

where the matrix A has components

\(a_{ij} = c_{ij} - \delta_{ij}\sum_k c_{ik} - \delta_{ij} d_i\)

(setting \(c_{ii}=0\); the signs follow from collecting the terms of the equation above).

A quick inspection shows that A is symmetric and negative definite (if all di are positive).

The key observation is that we can diagonalise A as

\(A= U^{-1}DU\)

where D is diagonal with all negative eigenvalues. Then the above differential equation reads

\({d\over dt}(Up) = D (Up) + p_a Ud.\)

But thanks to D being diagonal, we can view this as a new vector Up of tissue partial pressures that obeys the original Bühlmann differential equation! So, up to taking linear combinations, the Bühlmann model already covers the case of tissues directly exchanging inert gases: this apparent generalisation of the model can be absorbed by a simple change of basis.
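A quick numerical check of this argument, with made-up diffusion constants and numpy’s eigendecomposition:

```python
# Couple the tissues with a random symmetric C, form A as above, and verify
# that it is negative definite, i.e. it diagonalises into decoupled decaying modes.
import numpy as np

rng = np.random.default_rng(0)
n = 4
d = rng.uniform(0.01, 0.2, n)          # per-tissue diffusion constants d_i
C = rng.uniform(0.0, 0.05, (n, n))
C = (C + C.T) / 2                      # symmetric inter-tissue constants c_ij
np.fill_diagonal(C, 0.0)

# a_ij = c_ij - delta_ij * (sum_k c_ik + d_i), from the differential equation
A = C - np.diag(C.sum(axis=1) + d)

eigenvalues, V = np.linalg.eigh(A)     # A symmetric, so V is orthogonal
print(eigenvalues)                     # all negative: independent decaying "tissues"
U = V.T                                # and A = U^{-1} D U with D = diag(eigenvalues)
```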

So, if we say that the modelled tissues do not really correspond to actual tissues (as we already did above) but allow the interpretation that they might be linear combinations of actual tissues, then the Bühlmann model already covers the case of inter-tissue gas exchange (as long as we cover all possible diffusion constants, which we do). It is not a special case in this wider class of models with interconnected tissues; it is already the generic case.

Let me close with a consistency check: at constant depth, pa is constant. In this case, Up saturates to

\((Up)_i \to -\frac{(Ud)_i}{\lambda_i}p_a\)

or in matrix notation

\(p\to -U^{-1}D^{-1}Ud\, p_a = -A^{-1}d\,p_a.\)

But a quick calculation shows that if v is the vector with ones in all components, then

\(Av = -d\)

and thus finally

\(p\to -A^{-1}d\,p_a = vp_a\)

and hence indeed all tissue partial pressures approach the ambient partial pressure.

Establishing safety is hard

Recently, I had a look at the SAUL decompression model, and I have a couple of comments on it. They are all inspired by what I have read but fall into several independent classes, so I will spread them over a number of posts in the upcoming days.

SAUL is a probabilistic model. This means that rather than telling you whether a certain dive is OK from the perspective of the model, or telling you the fastest way out of the water that is still considered safe by the model (like our traditional models Bühlmann or VPM-B do, possibly enriched by adjustable fudge factors like gradient factors or conservatism), a probabilistic model gives you a probability for a dive to result in decompression illness.

In the case of SAUL, these probabilities are based on series of empirical tests, where a (large) number of divers followed a given deco schedule and the number of DCS cases was counted. This is not any different from any other empirical study.

What I want to discuss in this post is how to design a study to establish that “the accident probability for this given profile is less than some probability p”. So, this post is about the statistics of setting up such a study. Please be aware that I am by no means an expert on statistics; this is all pretty much home-grown (I did some googling but could not easily find a good reference for the whole story), so chances are even higher than otherwise that I am completely wrong. But in that case, please teach me!

Note that I want to establish the safety of a plan (as a bound on the probability of injury), but not by simply doing a number N of dives, counting the number a of accidents, and then claiming that the probability is a/N. Rather, I want a confidence interval. To be specific, I want the probability of “the accident probability (or rate) is higher than one in a thousand” to be less than 5% (that seems to be a pretty common number, as 95% contains two standard deviations for a normal distribution).

So for example, we could do N dives and hope to find no accident. The probability that this happens by chance is

\((1-p)^N\)

if p is the accident probability. So this should be less than or equal to c=0.05. When we solve that for N, we find

\(N=\frac{\ln c}{\ln(1-p)}\)

and as we expect p to be pretty small, we can chop off the Maclaurin series after the first term and, thanks to ln(0.05) being almost -3, write

\(N\approx 3/p.\)
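A quick sanity check of this estimate (my code, not from the Mathematica notebook mentioned below):

```python
# Smallest N such that a study with zero observed accidents establishes the
# bound p at the 95% confidence level, i.e. (1-p)^N <= 0.05.
from math import ceil, log

def dives_needed(p, confidence=0.95):
    return ceil(log(1 - confidence) / log(1 - p))

print(dives_needed(0.001))   # 2995, i.e. roughly 3/p = 3000 dives
```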

So, to establish a probability of 1/1000 we would have to conduct 3000 dives in our study. So, let’s apply for funding.

The problem is only that we won’t get that funding: we will be able to establish our hypothesis only in the case of no accident, but we just computed that the chance of that is only 5% if the true probability is right at 1/1000. So in 95% of studies conducted like this, the result will be inconclusive. We have to do better.

Doing better might mean not being that ambitious and aiming for a weaker hypothesis. We could try to show that the probability is only smaller than 1/500. Then we would do 1500 dives, with an expected number of accidents (as we believe the true probability is still 1/1000) of 1.5.

Or we actually believe that the true probability is more like 1/2000. Then we could do more dives, such that the allowed number of accidents can be higher while still leading to a 1/1000 bound on the probability:

As a function of N, how many DCS cases are still compatible with p < 0.001 at the 95% confidence level (-1 meaning that confidence level cannot be achieved for N < 3000)

And for these numbers, we can then compute the chance of having a successful study (i.e. one where the number of DCS cases is indeed small enough) when the true probability of DCS for this dive is actually 1/2000:

Probability of a significant outcome as a function of N. Same colours indicate the same number of accepted DCS cases.
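The logic behind both figures can be sketched in a few lines (my Python translation, using scipy’s binomial distribution; the Mathematica notebook linked below does these calculations):

```python
# Largest accident count still compatible with p < 0.001 at 95% confidence,
# and the chance of a "successful" study when the true rate is 1/2000.
from scipy.stats import binom

def max_allowed_cases(N, p_bound=0.001, alpha=0.05):
    """Largest a with P(X <= a) <= alpha under p = p_bound, or -1 if none."""
    a = -1
    while binom.cdf(a + 1, N, p_bound) <= alpha:
        a += 1
    return a

def success_probability(N, p_true=0.0005, p_bound=0.001, alpha=0.05):
    a = max_allowed_cases(N, p_bound, alpha)
    return 0.0 if a < 0 else binom.cdf(a, N, p_true)

print(max_allowed_cases(3000))      # 0: not a single accident allowed
print(success_probability(10000))   # ~0.44, the rough coin flip mentioned below
```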

Two things I found remarkable: You actually need on the order of 10,000 dives to have a 50:50 chance of a significant study establishing an accident probability of at most 0.001. I have been using this probability as I think an accident rate of 1 in 1000 is what is generally accepted, at least in recreational diving (were it significantly higher, the whole industry would quickly go down the drain thanks to people suing each other for liability). But studies with 10,000 dives are totally unrealistic, as they are far too expensive. This is why you want to conduct studies with much higher accident rates, so you can establish those with far fewer dives. A famous example is the NEDU deep stop study that put its divers under severe stress (cold, exercise etc.) to drive up the accident rate. And they were criticised for these “totally irrelevant to actual diving” conditions, in particular by those in favour of deep stops. Only that with “realistic” dives, it would be very hard and expensive to see a significant effect.

The other lesson is more formal: in a range where the allowed number of DCS cases is constant, the chance of a successful study decreases with increasing N. This I did not expect, but in hindsight it is quite obvious: with every additional dive, you increase the chance of another accident that kills the study. So, if you design such a study, you should pick N just slightly bigger than one of the jumps in allowed DCS cases.

As always, I have made my Mathematica notebook for these calculations available.

Next big thing: Doping for Deco

Over at GUE’s InDepth blog, Jarrod Jablonski has published the first post in an upcoming series on the past and future of decompression protocols. A definite must-read, not least because it is one of the leaders of the industry laying out his point of view.

The part I find most interesting for future developments is the research into exercise before diving to reduce the number of micro bubbles in the blood, and the idea to even replace that by giving nitric oxide (NO), or substances that lead to the release of NO (like Viagra), prior to diving, which could cut short decompression times. It is probably far too early to know, but I remember recently seeing a report of a cave dive on Facebook that mentioned some medicine taken before the dive to shorten decompression, without giving any details. Unfortunately, I cannot find the link anymore; maybe one of my readers can supply it.


But I think this would be the right time to think through a possible scenario: Assume research shows that there is a substance X that divers can take before big dives that would significantly shorten decompression while still maintaining the same level of safety. But substance X has long-term side effects (NO has a toxic dose and is chemically related to NO2, which is at the heart of the diesel engine scandal). Would we then see this as the diver’s analogue of doping? Could this lead to a situation similar to body building and steroids? There would be general advice not to use such substances because of their side effects, but maybe by that time, for the next big dives, decompression obligations would be so big that explorers could justify the short-term benefits over the long-term risks. But for every explorer there are orders of magnitude more followers who are going to imitate their heroes and thus would also start using such drugs, although in their case there is no good justification, just because they want to do everything like the people they look up to (not that that would happen anywhere in the tech diving community…).


I would say now would be a good time to come up with an opinion on how such a situation should be handled and on the limits of what one could and should ethically do. Of course, everything would have to be adjusted once the actual thing with all its details is known. But it wouldn’t hurt to come up with a basic opinion whilst all this is still theoretical.

Note: I originally meant this to be a comment under the InDepth blog post, but I did not manage to create an account there that would allow me to post it. Does anybody know how (i.e. where) I could do that?

Update: The InDepth website now allows me to post.

Calculating Oxygen CNS toxicity

I have talked previously about oxygen toxicity, in particular about the influence on the pulmonary system measured in OTUs. But there is a second effect, the one on the brain (CNS). This effect is strongly dependent on the partial pressure of oxygen in the breathing gas and is conventionally expressed in terms of the “oxygen clock” or as %CNS.

At each pO2, there is a maximally allowed exposure time \(t(pO_2)\) and at each depth one can spend a certain percentage of that time, the percentages being added up over a dive.

Which leaves us with the question of how to obtain \(t(pO_2)\). It seems everybody is using essentially the same table of values published by NOAA, with the time being infinite for pO2 < 0.5 bar, then starting at 720 minutes and steeply decreasing to 45 minutes at a pO2 of 1.6 bar, where the table ends as you are not supposed to breathe a pO2 above that value.

As it turns out, the empirical basis for this table is hard to find as, in the words of Bill Hamilton, it “is not based on a specific set of experiments but rather on the accumulated wisdom of experts in this field.”

So far, in Subsurface, we have used the same table with linear interpolation for values of pO2 in between. At high enough partial pressures, the maximal exposure times get shorter quickly, so with linear extrapolation beyond 1.6 bar, they quickly become negative. As the CNS% is given by the time spent at a particular pO2 normalised by the maximal time, this leads to the clearly absurd result of negative CNS% values. We had to do something about this.

The first step is to plot the table values (note that in the internals of Subsurface, all times are expressed in seconds while all pressures are in mbar):

Hmm, nothing obvious. So, let’s use a log scale for the times:

Ah, except for the last two points, this looks pretty much like a straight line. Too bad that the last two points, at the highest pressures, are the most important ones, so we had better not just forget about them.

There was some discussion about how to deal with these points; a fourth-order polynomial fit (green) was considered, as well as using two straight lines, one for everything below 1.5 bar and another one for the last three points (orange):

It turns out the sum of squared deviations for the green fit is 0.06452 while the two lines give 0.0314796, i.e. the latter is about twice as good, so let’s use those.

As these lines are fits to \(\log t(pO_2)\), the resulting formula is

\(t(pO_2) = t_0 e^{-pO_2/p_c}\)

with t0 = 131300 s = 2188 min and pc = 516 mbar for pO2 between 0.5 bar and 1.5 bar, and \(t_0 = 1.83861\cdot 10^{10}\,s = 3.06436\cdot 10^8\,min\) as well as pc = 102 mbar above.
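Put together, the resulting “oxygen clock” can be sketched as follows (a minimal illustration in Python rather than the actual Subsurface C code):

```python
# The fitted maximal exposure time, two exponential branches joined at 1.5 bar,
# and the CNS% contribution of a time segment spent at a given pO2.
from math import exp

def max_exposure_min(po2_bar):
    if po2_bar < 0.5:
        return float("inf")                    # no CNS loading below 0.5 bar
    if po2_bar <= 1.5:
        return 2188 * exp(-po2_bar / 0.516)    # t0 = 2188 min, pc = 516 mbar
    return 3.06436e8 * exp(-po2_bar / 0.102)   # t0 = 3.06e8 min, pc = 102 mbar

def cns_percent(po2_bar, minutes):
    return 100.0 * minutes / max_exposure_min(po2_bar)

print(cns_percent(1.6, 45))   # ~95%, close to the table's 45 min limit at 1.6 bar
```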

There is a Mathematica notebook with these calculations in case you want to play with the numbers yourself.

Fraedrich follow-up

Here is a second guest post by Doug Fraedrich on how to select gradient factors:

For this next phase of the analysis, we will probe the GF-Hi vs PrT issue with suitable data sources other than the Van Liew and Flynn model, and look into the GF-Lo question without direct reliance on the 2011 NEDU Deep Stop experiment.

There are several suggested methodologies in the literature for the validation of models in general (Reference 1) and dive computers in particular (Reference 2). The method I used above is similar to one suggested in Reference 2, of comparing the results of a commercial dive computer to one of the US Navy probabilistic models (which are essentially least-squares fits to many DCS observations); in this case the Van Liew and Flynn “StandAir” model was used. Another possible approach suggested in Reference 2 is to compare the unit under test directly to well-validated Navy dive tables, i.e. VVAL-79 (note that VVAL-79 is an updated version of VVAL-18, which was necessitated by an unacceptably high incidence of DCS using VVAL-18). Denoble suggests a similar but more general approach of comparison to so-called “Primary Models”, which include VVAL-79 and also the Canadian DCIEM tables (Reference 3). Denoble defines “Primary Models” as those that have been extensively and methodically tested with man-trial data. For the remainder of this article, we will limit our analysis to using Primary Models. The validation of commercial dive computer algorithms is in its infancy, so it is instructive to use different sound methodologies and different “gold standards” to get a sense of the uncertainty in the results. As a community, we can make it our goal to continually refine our methods and use new data as it becomes available to reduce this uncertainty over time.

In this new cross-validation study, the ZHL-16C model was iteratively run with different values of GF-Lo and GF-Hi to match the results of VVAL-79 and the DCIEM tables.

The results are shown in Figure 1 below.

The diagonal nature of the VVAL-79 results reflects that both GF-Lo and GF-Hi are indicated to decrease with increasing PrT; this is not surprising, since the Van Liew and Flynn data mentioned above were also used in the Navy’s model development and validation process. Note that the results for the DCIEM tables do not exhibit this behavior.

In order to add more information to this analysis, we will consider a third model, SAUL, which is an update of the original Kidd-Stubbs DCIEM serial model and would certainly meet the Denoble criteria of a “Primary Model”. It has been tested against three datasets of man-dives with P-DCS statistics: a core dataset of 733 dives (Reference 4) and two secondary datasets, DSAT (1437 man-dives) and the NEDU 2011 deep-stop dataset (390 man-dives). A full dive planner is not openly available for this model, but an online version is available that handles no-decompression dives and estimates the P-DCS of a specified dive profile assuming a 3 minute safety stop.

For selected combinations of GF-Lo and GF-Hi, the ZHL-16C model is run iteratively with varying values of depth and bottom time to yield a 3 minute shallow stop. The SAUL dive planner is then run with those depth and bottom time values, and the estimated P-DCS is computed. This P-DCS is plotted as a function of GF-Lo and GF-Hi in Figure 2. Unfortunately, for a given value of GF-Hi, only a narrow range of GF-Lo meets the criterion of mandating a 3 minute shallow stop (clearly illustrated in the figure below). So these new results do NOT inform the GF-Lo debate nor the GF-Hi vs PrT issue. They do quantify P-DCS for a diagonal swath of GF values, which is a good start. So if your risk tolerance is, say, 0.25%, you would want to set GF-Hi between 80 and 85.

It is worth noting that these three Primary Models are not only well-tested with man-trial data, they are also based on different datasets. So any conclusion based on areas of common agreement amongst these models can be considered as having a fairly low uncertainty.

To try to summarize where I think we are based on these data sources, using the 3 Primary Models (VVAL-79, DCIEM and SAUL) and supplementing with references to the data-based statistical models from both Howie et al. and Van Liew and Flynn (both based on large datasets of man-trial DCS data), see the results below, which are partitioned into a 2×2 matrix:

| | Low PrT Dives / Non-Deco Dives | High PrT Dives / Deco Dives |
| --- | --- | --- |
| GF-Hi | All 5 sources agree on a range of 80-90 | VVAL-79 and Van Liew and Flynn indicate 60-70; DCIEM shows 80-85. SAUL and Howie are silent |
| GF-Lo | VVAL-79 and DCIEM agree on a range of 75-85, while the other 3 sources are “silent” or non-informative | VVAL-79 indicates 60-70; DCIEM shows 75-85. SAUL, Howie, and Van Liew and Flynn are silent |

For Low PrT dives, I would consider the recommendations for GF-Hi to be fairly conclusive. A little less so for GF-Lo, but there is an extensive man-dive database for these types of dives to back it up. For High PrT dives, we see that the two main Primary Models, VVAL-79 and DCIEM, result in recommended ranges that do not overlap; so this is an area of future focus. Recommendations for GF-Lo for High PrT dives are the most uncertain. Several potential ways ahead come to mind. The full SAUL model could be executed to estimate the P-DCS of dive profiles computed by ZHL-16C over the entire range of gradient factors. Or, potentially, the DAN Project Dive Exploration observational database (Reference 5) could be mined for high PrT dives, converting reported dive profiles to equivalent ZHL-16C results with whatever GFs are needed to match the profiles. The issue here may be to adequately address potential underreporting of dives where no DCS was present, and thus to ensure the resultant P-DCS statistics are unbiased. Clearly more work is needed here.

References:

  1. Fraedrich D and Goldberg A. A methodological framework for the validation of predictive simulations. European Journal of Operational Research, 124(1):55-62, July 2000.
  2. Doolette DJ, Gault KA, Gerth WA, Murphy FG. US Navy Dive Computer Validation. In: Blogg SL, Lang MA, Møllerløken A, editors Proceedings of Validation of Dive Computers Workshop 2011 Aug 24 Gdansk. Trondheim: Norwegian University of Science and Technology; 2012. p. 51–62.
  3. Denoble PJ. Conservative diving: calculating and mitigating the risk of DCS. Alert Diver 2013; 29(3): 46-9.
  4. Goldman S, A new class of biophysical models for predicting the probability of decompression sickness in SCUBA diving. Journal of Applied Physiology, 2007. On-line dive planner at: http://moderndecompression.com/?page_id=493

Setting Gradient Factors based on published probability of DCS

This is a contributed post by Doug Fraedrich in which he reports on a recent paper of his, which is also mentioned in Doolette’s blog post I just commented on.

Recently I published a methodology on how to set conservatism factors on commercial dive computers based on published probability of decompression sickness (DCS) data and statistical summaries/models thereof (Reference 1). That paper described the general methodology on how to adjust the conservatism of several different algorithms; this note will focus specifically on setting gradient factor levels for the Bühlmann ZHL-16C algorithm.

The basic methodology is to use the output of probabilistic models from the literature that were derived from many well-documented experimental man-trials of known dive profiles, then pick a probability-of-DCS isopleth curve and use it to set gradient factors (or conservatism factors) such that the algorithm outputs the no-stop time or total decompression time (TDT) specified by that isopleth. This was done for three metrics: no-stop times for short recreational dive profiles, total decompression time for longer/deeper decompression dives, and “first stop depth” for decompression dives. There were no probabilistic models for first stop depths/decompression profiles, so for this metric we used data from a NEDU controlled experiment, which compared P-DCS outcomes of dives of the same depth, bottom time and TDT but with different stop profiles (Reference 2). This NEDU study was performed specifically to assess the effect of deep versus shallow profiles as dictated by dual-phase vs tissue-loading algorithms.

So now to apply the methodology from that paper to the task of setting gradient factor levels for the Bühlmann ZHL-16C algorithm. From previous publications on the topic of gradient factors, we know that the value of GF-Hi mainly affects total decompression time, and GF-Lo affects the depth of the first stop. For GF-Hi, we will use the probabilistic model from a new study on no-stop times by Howie et al. (Reference 3). Using their criterion of 0.1% serious DCS, we pick several point-pairs of depth and no-stop time, iteratively run the Bühlmann ZHL-16C algorithm (specifically MultiDeco version 4.12) with different values of GF-Hi, and match at what level the algorithm allows a direct ascent for each individual depth-bottom time pair. The corresponding value of GF-Hi is 80.

As mentioned above, for GF-Lo we use the US Navy NEDU study on deep stops, Reference 2. In the NEDU study, the maximum depth was 170 fsw, the bottom time was 30 minutes, the ascent rate was 30 fsw/min, and the TDT for both profiles was 180 minutes. The group of divers who started their first stop at 40 fsw (for 9 minutes) had significantly lower P-DCS (P = 0.0489, one-sided Fisher Exact Test) than the group that stopped first at 70 fsw (for 12 minutes). Since there is not sufficient information to know exactly where between the two tested depths the optimum lies, this test case used the two depths as a maximum and minimum for the first stop criterion. For that dive profile, MultiDeco was iteratively run varying GF-Lo to see which levels of GF-Lo yield the two limiting first stop depths. This procedure indicates that GF-Lo should be higher than 55 (this was previously reported in Reference 1). If you pick the mid-point of the first stop depths between those two profiles (assuming the optimal point is somewhere in the middle and not at either extreme), this results in a GF-Lo on the order of 70.

Of course, the above setting for GF-Hi was based on no-decompression dives, which had a relatively low value of PrT of ~10-12 (Pressure Root Time is an indicator of the severity of the dive exposure, where P = pressure in bar and T = dive time in minutes, Reference 4). We know from Reference 4 that historically the incidence of DCS is significantly higher for dives with a PrT > 25, and we know from Reference 1 that the required value of GF-Hi for ZHL-16C needs to decrease as the PrT of the dives increases. So to set GF-Hi for higher PrT decompression dives, we re-visited methods and models presented by Van Liew and Flynn, where they were specifically assessing the suitability of the US Navy’s decompression schedule (used at that time) by fitting data on single-level, non-repetitive, nitrogen-oxygen dives from the US Navy Decompression Database to a logistic regression that resulted in P-DCS isopleths as a function of bottom time and TDT (Reference 5). We limited the domain of applicability of the current study to PrT < 40, so we used the “StandAir” model, which Van Liew and Flynn based on data from standard air dives with depths of less than 190 fsw and bottom times of less than 720 minutes. They assessed the StandAir statistical model to be reasonable except at the two depth extremes (nominally < 60 fsw and > 190 fsw). Van Liew and Flynn compared the TDT required by the algorithm-under-test to the P-DCS isopleths from their statistical model, found that it lay between the 2% and 3% P-DCS isopleths, and thus deemed the algorithm acceptable for US Navy use. In the current study, we selected the 3% P-DCS isopleth as an initial standard of comparison, as a compromise between managing DCS risk and not requiring excessive total decompression times. Note that in their analysis, they combined data from all dives that had the same depth and bottom time, thus their P-DCS results average over many different (non-optimal) decompression profiles.

The process here is similar to what was described above using the no-stop data, only with the deco dives it was a trio of data values: depth, bottom time and TDT (at the 3% P-DCS isopleth curve). MultiDeco was iteratively run to match the TDT for the input values of depth and bottom time. The value of GF-Hi was noted and plotted against the PrT of the dive profiles used. These results are summarized in Figure 1.

Suggested GF-Hi vs PrT of dive


Note the curve in Figure 1 is of sigmoid shape: it is flat at both low and high levels of PrT with a transition in between. These results are based on dive profiles with observed DCS symptoms, and there are many more dives at the lower end of the PrT scale than at the higher end; so generally speaking, the “error bars” start off small on the left side of the graph and get bigger as PrT increases. The point at which the curve levels out is considered somewhat uncertain.

To summarize: 

  • GF-Lo is recommended to be >= 55
  • The recommended level of GF-Hi depends on the PrT of the dive, but is never greater than 80. It decreases with increasing PrT of the dive (to a point)
  • While the uncertainty at the low end (say PrT < 25) is low, it increases with PrT (i.e. more research is needed in this part of the domain).

References:

  1. Fraedrich DS. Validation of algorithms used in commercial off-the-shelf dive computers. Diving and Hyperbaric Medicine, V48 No 4, 2018.
  2. Doolette DJ, Gerth WA, Gault KA. Redistribution of decompression stop time from shallow to deep stops increases incidence of decompression sickness in air decompression dives. NEDU TR 11-06, Panama City FL; 2011. 
  3. Howie LE, Weber PW, Hada E, Vann RD, Denoble PJ. The probability and severity of decompression sickness. PLoS ONE 12(3): e0172665, 2017.
  4. Balestra C. Dive computer use in recreational diving: Insights from the DAN-DSC database. In: Blogg SL, Lang MA, Møllerløken A, editors. Proceedings of validation of dive computers workshop. 2011 Aug 24 Gdansk. Trondheim: Norwegian University of Science and Technology; 2012. p. 99–102. 
  5. Van Liew HD, Flynn ET. A simple probabilistic model for standard air dives that is focused on total decompression time. Undersea Hyperb Med. 2005;32:199–213.