## Testing of No Fly Times in Commercial Off-the-Shelf Dive Computers

Here is another contribution by Doug Fraedrich. This time, the text would probably fit even better on theexperimentaldiver.org: he tests dive computers’ No-Fly-Time algorithms and compares them to DAN’s recommendations and to the respective user manuals. For some more theoretical considerations regarding diving and flying, see this older post.

## Introduction

It is well known that divers must wait a certain time after diving before they can safely fly. The current No Fly Time (NFT) “best practices” guidelines from Divers Alert Network (DAN)1 are listed below:

• For a single no-decompression dive, a minimum preflight surface interval of 12 hours is suggested.
• For multiple dives per day or multiple days of diving, a minimum preflight surface interval of 18 hours is suggested.
• For dives requiring decompression stops, there is little evidence on which to base a recommendation, and a preflight surface interval substantially longer than 18 hours appears prudent.
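As a minimal sketch, the three guideline categories above can be encoded as a simple lookup. The category names and the 24-hour stand-in for “substantially longer than 18 hours” are my own illustrative choices, not part of the DAN text:

```python
def dan_min_preflight_hours(dive_type: str) -> float:
    """Minimum suggested preflight surface interval (hours) per the DAN
    guidelines quoted above. 'deco' maps to 24 h here only as a placeholder
    for "substantially longer than 18 hours"."""
    table = {
        "single_no_deco": 12.0,  # single no-decompression dive
        "repetitive": 18.0,      # multiple dives per day / multiple days
        "deco": 24.0,            # dives requiring decompression stops
    }
    return table[dive_type]
```

A computer that simply displays 24 hours for every dive trivially satisfies all three rows of this table.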

These guidelines are primarily based on a paper by Richard D. Vann, et al.2 of Divers Alert Network, which reported an analysis of the risk of Decompression Sickness (DCS) in 802 dives of nine different profiles: four single non-deco dives and five sets of repetitive dives. DAN did not test decompression dives in Reference 2, so the US Navy No Fly Times were used for this category of dives.

Many commercial off-the-shelf dive computers compute and display this time after surfacing from a dive. The level of documentation on how exactly this is computed varies from dive computer to dive computer; most units state in their manual that they use the DAN guidelines.

The objective of this report is to test several representative commercial off-the-shelf dive computers in a pressure chamber that simulates different categories of dives and to assess the following: 1) does each computer conform to the DAN NFT guidelines? and 2) does each computer conform to what is described in the computer’s owner’s manual? The dive computers tested are shown in Figure 1 and listed in Table 1.

## Methods

The dive computers were subjected to simulated dives in a small pressure chamber, Figure 2. This chamber was a Pentair Pentek Big Blue 25cm chamber that had a maximum allowable pressure of 6.9 bar.

The units under test were placed in the chamber with the displays facing out (as illustrated in Figure 2), and the chamber was filled with fresh tap water. Air pressure was added from the top of the chamber via a standard bicycle pump (a Velowurks Element floor pump). Even though the relationship between pressure and indicated depth is well known, several “calibration runs” were performed before the testing to verify repeatability of the calibration. For these calibration runs, a Shearwater Perdix dive computer was used.

Pressure was released in a controlled way via an added vent valve (Figure 3). This ensured that the simulated ascent rates were not too fast.

Figure 4 shows an actual profile using the test chamber.

A run list was prepared based on the dives described in Reference 2; see run list in Table 2.

Some of the dive times needed to be shortened compared to the original dives in Reference 2; this is because the bottom times in the “old” no-deco dives exceeded the No Decompression Limits on some of the computers tested. Since decompression dives were not tested in the DAN study in Reference 2, two dive profiles were added to the test matrix: a “marginal” deco dive, Profile #10 (US Navy Repetitive Group H), and a deeper, more stressful deco dive, Profile #14 (US Navy Repetitive Group L).3 US Navy recommended No Fly Times are listed in the table. The specific bottom time for the marginal deco dive was chosen as what was required to make the most conservative of all the computers go into decompression mode. The last category of “Violation Dives” was added to test claims made in some of the manufacturers’ User Manuals. The dive profiles in Table 2 were simulated and the displayed No Fly Time from each computer was recorded. The battery charge level was monitored for each computer and kept at acceptable levels. With the exception of the repetitive dives with a 1-hour surface interval, the interval between consecutive runs was the maximum of: 24 hours, the desaturation time, and the longest displayed NFT from any computer.

## Results

A total of 26 runs were performed in this study; the raw results are shown in Table A1 in the Appendix. To start off the testing, Profile #2 was replicated several times to verify that the results were repeatable. For all of the runs, the water temperature varied from 17 °C to 23 °C; when repetitive dives were simulated, the water was not completely replenished with new tap water, so the second (and third) dives of repetitive sequences tended to have a water temperature about 1-2 °C warmer than the first. The ascent rate was monitored using several different dive computers, but only the SEAC Guru displayed numerical values, with units of ft/min; ascent rates were constantly monitored and were typically in the range of 5-15 ft/min (1.5-5 m/min).

The results from Appendix A1 are summarized in Table 3. Based on this data, each of the eight dive computers can be assessed as to whether they are compliant with the DAN guidelines and if they actually compute what is described in their manuals.

## Discussion

Single Dives:

All of the computers were compliant with the DAN guidelines for this type of dive. It was observed that both the Mares and the Suunto increased their NFT after the exact same single dive profile (#2) separated by a surface interval of ~44 hours. There is little documentation on the Mares RGBM variant, but the Suunto variants are known to factor the effects of multi-day diving into the bubble growth (and presumably desaturation) calculations. This multi-day factor, known as “F3”, persists for up to 100 hours4 (Figure 4). Multi-day diving is not covered in the DAN NFT guidelines.

Repetitive Dives:

All of the computers were compliant with the DAN guidelines for this type of dive. For the Suunto EON Steel, since this computer bases repetitive-dive NFTs on desaturation times, the reported NFT depended on the exact nature of the repetitive dives and presumably on the dives in the previous 100 hours.

Decompression Dives:

Two computers registered an NFT of less than 18 hours for the Group H deco dive: the Deepblu Cosmiq+ required only 12 hours of NFT, and the Suunto EON Steel required a range of NFTs from 17:35 to 21:18 (depending on previous dives and surface interval). Note that both User Manuals state that NFTs are based on internally computed desaturation times and not on the DAN guidelines. The Group L dive required significant total decompression time (~60 min): all of the computers registered the same NFT as for the Group H dive, except the Suunto EON Steel, which increased the required NFT to 26:13, above the DAN guideline. It should be noted that all computers registered NFTs that exceeded those suggested by the US Navy for Repetitive Group H3; however, the Deepblu Cosmiq+ would fall short of the US Navy recommendation if used for a deco dive with repetitive group I or higher.

Violation Dives:

Some computers mention in their User Manual that they add an extra NFT penalty for ascent rate violations. Different computers define “fast ascent violation” differently, so a target ascent rate of 50 ft/min was used in Profile #13, which qualified as a fast ascent for all computers. Profile #13 was executed and the actual ascent rate varied from 9-15 m/min during the ascent from 18 to 5 m, which took about one minute. A three-minute safety stop at 5 m was performed. None of the computers seemed to adjust the NFT, based on a comparison to similar dives with a nominal ascent rate (Profiles 2, 6a, 7a, 9a). Alarms did go off on several computers during the fast ascent. The safety stop may have been sufficient to satisfy the safety requirements of each computer’s dive algorithm. Similarly, none of the computers added an extra NFT penalty for dive Profile #11, which was a no-deco dive with a nominal ascent rate but a missed safety stop.

For omitted stops on a deco dive (Run #20, Profile 12), four computers added additional NFT, compared to the same-profile deco dives with all the required stops (Run #17, Profile 10); see Table 4.

User Manual Discrepancies:

It was noted that several computers did not conform with the NFT rules stated in their User Manual; see Table 5.

## Summary

Eight commercial dive computers were tested in a small pressure chamber to assess their computation and display of No Fly Times. All of the dive computers were found to be generally compliant with the standard DAN guidelines, with a few exceptions. Several computers used a very simple rule for NFT: Garmin, Aqualung, Oceanic and Cressi simply displayed a 24-hour NFT for all nominal dives (both no-deco and decompression). Some computers added features that handle cases not covered in the DAN guidelines, e.g. multi-day diving (Mares and Suunto) and certain decompression guidance violations such as omitted decompression stops (Garmin, Suunto, Cressi and Deepblu).

## References

1. https://www.diversalertnetwork.org/medical/faq/Flying_After_Diving
2. Richard D. Vann, et al., “Experimental trials to assess the risks of decompression sickness in flying after diving”, Journal of Undersea and Hyperbaric Medicine, 2004.
3. US Navy Diving Manual, Version 7, 2016, Tables 9-7 and 9-6.
4. Suunto Reduced Gradient Bubble Model. June 2003. Available from: http://www.dive-tech.co.uk/resources/suunto-rgbm.pdf. [cited 2017 August 29].

## Subsurface Goes Statistics

You wouldn’t be here if you didn’t enjoy quantitative analysis of anything diving. So here is a special treat for you: thanks mainly to Willem and Berthold of the Subsurface development team, you can now use Subsurface to do all kinds of statistics on your dive log. There are many cool graphical representations for looking at essentially any variable from your log (depth, duration, date, SAC, rating, visibility, gas use, oxygen or helium percentage, location, buddy, etc.) as functions of each other:

Go ahead and see how your SAC changes over time or which buddy you do the longest dives with or if you use less gas on dives with better visibility.

This is still in an alpha testing phase: crashes are still possible and things will change before the next release. But you can download a test version from our daily builds. There are also mobile versions available from TestFlight for iOS and the Google Play Store beta program. Please comment on the Subsurface mailing list or on GitHub if you find bugs or have ideas for improvement.

## A measure for decompression stress

I was thinking about what might be a good measure of decompression quality or stress. This also comes up as O’Dive, with their end-user Doppler device, claim to provide exactly that in their mobile phone application. As said before, I have some doubts about their approach, so let’s try to come up with something better.

As always here on TheTheoreticalDiver, everything is entirely theoretical, I have zero empirical data of my own. But with this disclaimer, here we go:

I think we all agree that decompression stress arises when tissue pressure exceeds ambient pressure, and that the duration of this excess pressure is also relevant.

But measuring the excess pressure in mbar is probably not very useful, as this is not a natural measure of decompression stress (and the relation might well depend on the tissue under consideration as well as on ambient pressure and a million other things). The gradient factor approach is to normalise this excess pressure by the maximally allowed excess pressure (which is the definition of the gradient factor), and this sounds like a good start.

But rather than comparing to plain vanilla 100/100 Bühlmann, I would like to be more flexible and compare to your decompression model of choice. So I would like to take

$$q(t) := \max\left(0, \frac{p_i(t) - p_{amb}(t)}{p_{i,max}(t) - p_{amb}(t)}\right)$$

as a measure of momentary stress where p_i,max is the maximal allowed inert gas pressure in tissue number i as given by your decompression model.

This could be plain Bühlmann in which case this expression is just your momentary gradient factor. Or you believe in excess pressure being worse at depth (as the produced bubbles will grow later on in shallower water) then you take p_i,max as given by Bühlmann corrected by your favourite settings of GF_low and GF_high. Or you insist on using VPM-B then you use the maximal momentary M-value as predicted by that model. Strictly speaking this is an extension of that model for real (as opposed to planned) dives, but at least in Subsurface, we have found a way to compute it (such that it pretty much agrees on planned dives).

Obviously, if you decompress exactly as prescribed by your model, q=1 during decompression (possibly a bit lower due to the fact that most people decompress in steps of 10 ft/3 m rather than continuously). But

$$q\le 1$$

is the definition of not violating the ceiling.

The ceiling being what it is, violating it should be bad and the more the worse whereas staying way below the ceiling gives your additional conservatism which you probably have to pay in terms of extended decompression time.

So, my proposal would be to compute the $L^\lambda$ norm of the function q(t) for somewhat large $\lambda$, as this punishes going above q=1:

$$\left(\int q^\lambda\, dt\right)^{1/\lambda}=\left(\int \max\left(0, \frac{p_i(t) - p_{amb}(t)}{p_{i,max}(t) - p_{amb}(t)}\right)^\lambda dt\right)^{1/\lambda}$$

To me, this seems to be a reasonable measure of deco stress. Maybe it’s worthwhile to compute this for a number of real profiles and compare it to the deco outcome (risk of DCS; I wish I had access to the DAN data… or at least to Doppler results).
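Numerically, the proposal is a one-liner once q(t) has been sampled from your model of choice. A minimal sketch, assuming q is given as a list of samples; the choice λ = 8 and the toy profiles are arbitrary illustrations, not a calibrated recommendation:

```python
def deco_stress(q_samples, dt, lam=8.0):
    """L^lambda norm of the sampled momentary stress q(t), with sample
    spacing dt (minutes). Large lam punishes excursions above q = 1,
    i.e. ceiling violations, as argued in the text."""
    integral = sum(max(q, 0.0) ** lam for q in q_samples) * dt
    return integral ** (1.0 / lam)

# Toy comparison: 30 minutes held exactly on the ceiling (q = 1) versus
# the same deco with one minute spiking to q = 1.5 (a ceiling violation).
on_ceiling = deco_stress([1.0] * 30, dt=1.0)
with_spike = deco_stress([1.0] * 29 + [1.5], dt=1.0)
```

As intended, the single-minute violation dominates the score even though the other 29 minutes are identical, because 1.5^8 ≈ 25 outweighs them.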

What do you think?

## In defence of bar l

In a recent article in GUE’s InDepth blog, Richard Walker argues that the correct unit to discuss the amount of (free) gas in a cylinder is l (litre) and not bar l (bar times litre) because that is the correct unit for a volume.

I would like to argue that although the latter is correct the former isn’t: The volume of the gas is not its amount. The amount is measured by counting the number of molecules or (if you don’t like big numbers) the number of moles of gas. Let’s look at the ideal gas law, the mother of all gas calculations (and of course including Boyle-Mariotte):

$$pV=nRT$$

It’s the n on the right-hand side (or nR, depending on whether you count molecules or moles). As during diving the temperature stays pretty much constant (or at least we don’t take changes into account in our calculations), you could include that in the constant as well. Then the RHS (and thus the LHS) describes the diver’s version of the amount of gas.

And that is not changed even when feeding it through a compressor (let’s still ignore the temperature change), when emptying the cylinder at the surface, or when breathing it at 30 m of depth at ambient pressure. This is the amount.

And you should measure it in the unit of this equation, which is pressure times volume, which in SI-derived units is conveniently expressed in bar l.

Of course you can compute which volume this amount of gas fills at some pressure (be it 232 bar or 1 bar or 4 bar in the above examples), simply divide by that pressure. But when gas planning, you should plan the amount of gas you need (you will need it at different pressures) and that is measured (invariantly) in bar l.
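In code, the invariance argument is almost embarrassingly short. A sketch under the same assumptions as the post (ideal gas, constant temperature, no real-gas compressibility corrections); the cylinder size and fill pressure are just example numbers:

```python
def amount_bar_l(pressure_bar, volume_l):
    """Amount of gas as pressure times physical volume (ideal gas at
    constant temperature). This product is what stays invariant under
    compression and expansion, which is the point of the post."""
    return pressure_bar * volume_l

# The same amount of gas, measured at cylinder pressure and at the surface:
in_cylinder = amount_bar_l(232.0, 12.0)  # a 12 l cylinder filled to 232 bar
at_surface = amount_bar_l(1.0, 2784.0)   # the same gas expanded to 1 bar
```

Dividing the invariant amount by any pressure of interest (1 bar at the surface, 4 bar at 30 m) recovers the volume it occupies there.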

I am sorry Rich, but I believe you are wrong here.

## Gold

Nothing profound to say these days (in which I guess more people practice the theoretical side of diving). But yesterday, I was watching the first few episodes of season 3 of Money Heist (La casa de Papel, Haus des Geldes) on Netflix. And not only the professor is complaining that he had too little time to work out the plan, it seems this also applies to the script writers.

I am willing to forgive questionable approaches to tracking satellite phones and hacking into mobile phones. But when it comes to theoretical diving, there is a limit!!1!

I don’t want to spoiler too much, let’s just say, there is a flooded vault and they are diving to extract bars of gold:

Wasn’t anybody aware that the density of gold is almost twice the density of lead? A good delivery bar weighs 12.4 kg, so the woman holding two of those on outstretched arms must have spent a lot of time in the gym. Also, when swimming two of those to the exit, it will be challenging to maintain neutral buoyancy…

When I was a student at DESY (the German particle accelerator lab in Hamburg), even in the theory building we used lead blocks the size of half bricks as door stops, monitor mounts etc. Those came from the experimentalists, who used them for radiation protection. The first time you tried to pick one up, you got the impression it was screwed to the floor, as your brain expected to pick up something the weight of a brick. But no: the density of concrete is about twice that of water, while lead is about 11 kg per litre and gold 19 kg per litre.

## Rating decompression

How good was the deco of my dive? For those of us who strive to improve their diving, this is a valid question in finding ways to optimise how we get out of the water. In Subsurface, for example, we show various information, including the individual tissues’ ceilings as well as the heat-map.

Recently, on ScubaBoard, I learned about a new product on the market: O’Dive, “the first connected sensor for personalised dives”. It consists of a Bluetooth-connected ultrasound Doppler sensor together with an app on your mobile phone. The user has to upload the profile data of the dive to the Subsurface cloud, from where the app downloads it and connects it with the Doppler data (for this, it asks for your Subsurface username and password?!?, the first place where you might ask if this is 100% thought through). Then it displays a rating (in percent) for your deco and offers ways to improve it.

That sounds interesting. The somewhat ambitious price tag (about 1000 Euros for the sensor plus 1.50 Euro per dive analysed), however, prevented me from just buying it to take it for a test. And since it’s a commercial device, they don’t exactly say what they are doing internally. But in their information material they give references to scientific publications, mainly of one research group in southern France.

A fraction of these are conference proceedings and articles in very specialised journals that neither my university’s library nor the Rubicon Archive nor SciHub have access to, but a few of the papers I could get hold of. One of those is “A new biophysical decompression model for estimating the risk of articular bends during and after decompression” by J. Hugon, J.-C. Rostain, and B. Gardette.

That one is clearly in the tradition of Yount’s VPM-B model (using several of the bubble formulas that I have talked about in previous posts). They use a two tissue model (fitting diffusion parameters from dive data) and find an exponential dependence of the risk of decompression on the free gas in one of the tissues:

Note, however, that the “risk of decompression sickness” is not directly measured but is simply calculated from the

$$PrT = P\sqrt t$$

parameter (P is the ambient pressure in bar during the dive while t is the dive time in minutes, i.e. a characterisation of a dive profile compared to which a spherical cow in a vacuum looks spot on; but this seems to be quite common, see also https://thetheoreticaldiver.org/wordpress/index.php/2019/06/16/setting-gradient-factors-based-on-published-probability-of-dcs/) using the expression

$$r_{DCS} = 4.07 PrT^{4.14}.$$

which apparently was found in some COMEX study. Hmm, I am not convinced Mr. President.

The second reference I could get hold of are slides from a conference presentation by Julien Hugon titled “A stress index to enhance DCS risk assessment for both air and mixed gas exposures”, which sounds exactly like what the O’Dive claims to do.

There, it is proposed to compute an index

$$I= \frac{\beta (PrT-PrT^*)}{T^{0.3}} .$$

Here $PrT^* = 12$ bar √min, T is the total ascent time, and β depends on the gas breathed: 1 for air, 0.8 for nitrox and 0.3 for trimix (hmm again, that sounds quite discontinuous). As you might notice, what does not go into this index is at which depths you spend your ascent time, while PrT keeps growing with time indefinitely (i.e. there is no saturation ever).
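To make the point about what the index does and does not see, here is a sketch of the formulas as I read them off the slides. The function and variable names are mine, and the example dive (5 bar ambient, 25 minutes, 30-minute ascent) is an arbitrary illustration:

```python
import math

def prt(p_ambient_bar, t_min):
    """PrT = P * sqrt(t): the dive-severity characterisation used in the
    paper. Note it grows without bound in t and ignores the depth profile."""
    return p_ambient_bar * math.sqrt(t_min)

def stress_index(p_bar, t_min, ascent_min, beta=1.0, prt_star=12.0):
    """Hugon's proposed index I = beta * (PrT - PrT*) / T^0.3, as read off
    the conference slides; beta = 1 for air, 0.8 nitrox, 0.3 trimix."""
    return beta * (prt(p_bar, t_min) - prt_star) / ascent_min ** 0.3

# Example: 40 m (about 5 bar ambient) for 25 min, 30 min total ascent, air:
i_air = stress_index(5.0, 25.0, 30.0)
```

Feeding the same profile with beta=0.3 (trimix) drops the index by a factor of more than three, which is exactly the discontinuity complained about above.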

This index is then corrected according to a Doppler count (which is measured by a grade from 0 to 5):

This corrected index is then supposed to be a good predictor for the probability of DCS.

I am not saying that this is really what is going on in the app and the device; these are only speculations. But compared to 1000 Euros plus 1.50 Euro per dive, looking at the ceilings and the heat map in Subsurface sounds like quite good value for money to me.

## Why are tissues independent?

This is the second instalment in a series of posts inspired by my reading about the SAUL deco model. Paradoxically, I want to report on something that I realised while reading the articles about that model, even though Saul Goldman, SAUL’s inventor, kind of comes to the opposite conclusion.

For the Bühlmann model, I have argued before that the different tissue half times are not really parameters of the model: if you have a tissue in contact with an environment (be it breathing gas or blood), it is pretty natural that gas diffuses between the environment and the tissue at a rate that is proportional to the difference of partial pressures

$$\dot p_i = d_i (p_a - p_i).$$

Here $p_i$ is the partial pressure in tissue number $i$, $p_a$ is the ambient partial pressure and $d_i$ is the diffusion constant (essentially ln(2) divided by the half-time) of that tissue. This is the differential equation that governs all on- and off-gassing of tissues. To complete the Bühlmann model, one only has to specify a maximal tissue partial pressure in relation to the ambient pressure (or better, a minimal ambient total pressure for a given tissue partial pressure), for which Bühlmann assumes an affine relation, i.e.

$$p_a \ge b_i (p_i – a_i).$$

It is important that the coefficients $a_i$ and $b_i$ for tissue number $i$ really are parameters of the model that need to be determined empirically (and for which there are tables). The diffusion constants (and thus tissue half-times), however, are not.

You only need to make sure you satisfy the inequality for all possible tissues, that is for all possible half-times. To do so, you try to do the calculation for all relevant time scales. In the case of (non-saturation) diving, the relevant times go from a few minutes (the shortest relevant time scale of ascents and descents) to a few hours.

Of course, you cannot calculate for all possible times. But you make sure, you cover this range of times with many representative times. And a little further thought reveals that you should spread those representative times geometrically (so also the range of diffusion constants is covered geometrically).
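The geometric spread of representative half-times is easy to write down. A small sketch; the endpoints (2 to 640 minutes) and the choice of 16 compartments are illustrative, picked because the classic ZH-L16 tables happen to span a similar range:

```python
def geometric_half_times(t_min=2.0, t_max=640.0, n=16):
    """Half-times covering [t_min, t_max] minutes geometrically, so that
    every decade of time scales (and thus of diffusion constants) gets
    the same number of representative tissues, as argued in the text."""
    ratio = (t_max / t_min) ** (1.0 / (n - 1))
    return [t_min * ratio ** i for i in range(n)]

half_times = geometric_half_times()
```

Consecutive entries then differ by a constant factor rather than a constant offset, which is exactly the "spread geometrically" requirement.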

To sum up, you need to know the a’s and the b’s, but for the d’s you essentially cover all possible values. It is important that for this result we had to know nothing about actual tissues like bones, nerves, muscles or whatever of the human body, since all that is relevant is that they have some diffusion constant in the relevant range.

So far, this was all old news.

Today, I want to look at the question, why we can get away with treating all these tissues independently. Why does tissue number i only exchange inert gas with the blood stream or the breathing gas and not with all the other tissues?

Maybe, this assumption is unrealistic and we should better consider a model that looks like this:

Maybe, we should allow tissues $i$ and $j$ to exchange gas directly. This would be described by a new diffusion constant $c_{ij}$ (non-negative and possibly zero if the tissues are not connected) and the new differential equation would look like

$$\dot p_i = d_i (p_a - p_i)+\sum_{j\ne i} c_{ij}(p_j - p_i).$$

In order for no gas to get lost, we would need $c_{ij}$ to equal $c_{ji}$. To simplify this, we could assemble the different partial pressures $p_i$ into a vector p, the diffusion constants $d_i$ into a vector d and the new diffusion constants $c_{ij}$ into a symmetric matrix C.

If you have only a little experience with ODE’s, you realise that this equation is a linear inhomogeneous ODE and can be written in the form

$$\dot p = Ap + p_a d$$

where the matrix A has components

$$a_{ij} = c_{ij} - \delta_{ij}\sum_k c_{ik} - \delta_{ij} d_i.$$

A quick inspection shows that A is symmetric and negative definite (if all $d_i$ are positive).

The key observation is that we can diagonalise A as

$$A= U^{-1}DU$$

where D is diagonal with all negative eigenvalues. Then the above differential equation reads

$${d\over dt}(Up) = D (Up) + p_a Ud.$$

But thanks to D being diagonal, we can view Up as a new vector of tissue partial pressures that obeys the original Bühlmann differential equation! So up to taking linear combinations, the Bühlmann model already covers the case of tissues directly exchanging inert gases. This apparent generalisation of the model can be absorbed by a simple change of basis.

So, if we say that the modelled tissues do not really correspond to actual tissues (as we already did above) but allow the interpretation that they might be linear combinations of actual tissues, then the Bühlmann model already covers (as long as we cover all possible diffusion constants, which we do) the case of inter-tissue gas exchange. It is not a special case in this wider class of models with interconnected tissues; it is already the generic case.

Let me close with a consistency check: at constant depth, $p_a$ is constant. In this case, Up saturates to

$$(Up)_i \to -\frac{(Ud)_i}{\lambda_i}p_a.$$

or in matrix notation

$$p\to -U^{-1}D^{-1}Ud\, p_a = -A^{-1}d\, p_a.$$

But a quick calculation shows that if v is the vector that has ones in all components,

$$Av = -d$$

and thus finally

$$p\to -A^{-1}d\, p_a = v\, p_a$$

and hence indeed all tissue partial pressures approach the ambient partial pressure.
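The saturation result can be checked numerically without any linear algebra library. A minimal two-tissue sanity check in plain Python; the rate constants and the 0.79 bar ambient nitrogen pressure are arbitrary example values:

```python
# Two coupled tissues: A = [[-d1-c, c], [c, -d2-c]] from the formula for
# a_ij above. The steady state of p' = A p + p_a d solves A p = -p_a d,
# and must come out as p = (p_a, p_a): full saturation at ambient pressure.
d1, d2, c, p_a = 0.10, 0.02, 0.05, 0.79  # per-minute rates; p_a in bar

a11, a12 = -d1 - c, c
a21, a22 = c, -d2 - c

# Solve A p = -p_a d by hand via the 2x2 inverse:
det = a11 * a22 - a12 * a21
p1 = -(a22 * d1 - a12 * d2) * p_a / det
p2 = -(-a21 * d1 + a11 * d2) * p_a / det
```

The row sums of A are -d1 and -d2 (i.e. Av = -d for v = (1, 1)), which is exactly why both components land on p_a regardless of the coupling c.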

## Establishing safety is hard

Recently, I had a look at the SAUL decompression model and I have a couple of comments on it. They are at best inspired by what I have read and fall into a few independent classes, so I will spread them over a number of posts in the upcoming days.

SAUL is a probabilistic model. This means that rather than telling you whether a certain dive is ok from the perspective of the model, or telling you the fastest way out of the water that is still considered safe by the model (as traditional models like Bühlmann or VPM-B do, possibly enriched by adjustable fudge factors like gradient factors or conservatism), a probabilistic model will give you a probability for a dive to result in decompression illness.

In the case of SAUL, these probabilities are based on series of empirical tests, where a (large) number of divers followed a given deco schedule and you count the number of DCS cases. This is not any different from any other empirical study.

What I want to discuss in this post is how to design a study to establish the fact that “the accident probability for this given profile is less than some probability p”. So, this post is about the statistics of setting up such a study. Please be aware that I am by no means an expert on statistics; this is all pretty much home-grown (I did some googling but could not easily find a good reference for the whole story), so chances are even higher than usual that I am completely wrong. But in that case, please teach me!

Note that I want to establish the safety of a plan (as a bound on the probability of injury), but not simply by doing a number N of dives, counting the number a of accidents and then claiming that the probability is a/N. Rather, I want a confidence interval. To be specific, I want that the probability of “the accident probability (or rate) is higher than one in a thousand” is less than 5% (that seems to be a pretty common number, as 95% contains two standard deviations for a normal distribution).

So for example, we could do N dives and hope to find no accident. The probability that happens by chance is

$$(1-p)^N$$

if p is the accident probability. So this should be less or equal than c=0.05. When we solve that for N we find

$$N=\frac{\ln c}{\ln(1-p)}$$

and as we expect p to be pretty small, we can chop off the Maclaurin series after the first term and, thanks to ln(0.05) being almost -3, write

$$N\approx 3/p.$$
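The approximation is easy to sanity-check numerically, using the same c = 0.05 and p = 0.001 as in the text:

```python
import math

# Exact N from (1 - p)^N <= c, versus the first-Maclaurin-term shortcut
# N ~ -ln(c)/p ~ 3/p discussed above:
c, p = 0.05, 0.001
n_exact = math.log(c) / math.log(1.0 - p)
n_approx = -math.log(c) / p  # -ln(0.05) is almost exactly 3
```

Both land within a fraction of a percent of 3000 dives, so the shortcut is harmless at these accident rates.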

So, to establish a probability of 1/1000 we would have to conduct 3000 dives in our study. So, let’s apply for funding.

The problem is only that we won’t get that funding. Because we will be able to show our hypothesis only in the case of no accidents. But we just computed that this chance is only 5%. So in 95% of studies conducted like this, the result will be inconclusive. We have to do better.

Doing better might mean not being that ambitious, but aiming for a weaker hypothesis. We could try to show that the probability is only smaller than 1/500. Then we would do 1500 dives with an expected number of accidents (as we believe the true probability is still 1/1000) of 1.5.

Or we actually believe that the true probability is 1/2000. And then we could do more dives, such that the allowed number of accidents can be higher, still leading to a 1/1000 bound on the probability.

And for these numbers, we can then compute the chance of having a successful study (i.e. one where the number of DCS cases is indeed small enough) when the true probability of DCS for this dive is actually 1/2000:
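The underlying computation is a plain binomial tail. A pure-Python sketch; the decision rule (conclude “rate below the bound” when observing a or fewer accidents, where a is the largest count with P(X ≤ a | p_bound) ≤ c) is my reading of the post, not a quote from it:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def max_allowed_accidents(n, p_bound=0.001, c=0.05):
    """Largest accident count a such that observing <= a accidents in n
    dives still rules out 'true rate >= p_bound' at confidence level c.
    Returns -1 if even zero accidents would be inconclusive."""
    a = -1
    while binom_cdf(a + 1, n, p_bound) <= c:
        a += 1
    return a

def success_chance(n, p_true=0.0005, p_bound=0.001, c=0.05):
    """Chance the study is conclusive if the true rate is p_true (1/2000)."""
    a = max_allowed_accidents(n, p_bound, c)
    return 0.0 if a < 0 else binom_cdf(a, n, p_true)

a_3000 = max_allowed_accidents(3000)  # zero accidents allowed, as derived above
chance_3000 = success_chance(3000)    # conclusive only about a fifth of the time
```

Pushing n to 10,000 raises the allowed accident count to 4 and the success chance towards 50%, which is the first remarkable observation below.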

Two things I found remarkable: you actually need on the order of 10,000 dives to have a 50:50 chance of a significant study establishing an accident probability of at most 0.001. I have been using this probability as I think an accident rate of 1 in 1000 is what is generally accepted, at least in recreational diving (were it significantly higher, the whole industry would quickly go down the drain thanks to people suing each other for liability). But studies with 10,000 dives are totally unrealistic, as they are far too expensive. This is why you want to conduct studies with much higher accident rates, so you can establish those with far fewer dives. A famous example is the NEDU deep stop study that put its divers under severe stress (cold, exercise etc.) to drive up the accident rate. And they were criticised for these “totally irrelevant to actual diving” conditions, in particular by those in favour of deep stops. Only that with “realistic” dives, it would be very hard and expensive to see a significant effect.

The other lesson is more formal: in a range where the allowed number of DCS cases is constant, the chance for a successful study decreases with increasing N. This I did not expect, but it is quite obvious: with every additional dive, you increase the chance of having another accident that kills the study. So, if you design such a study, you should pick N just slightly bigger than one of the jumps in the allowed number of DCS cases.

As always, I made my mathematica notebook for these calculations available.

## Next big thing: Doping for Deco

Over at GUE’s InDepth blog, Jarrod Jablonski has published a first post in an upcoming series on past and future of decompression protocols. A definite must-read, not only because it’s one of the leaders of the industry laying out his point of view.

The part I find most interesting for future developments is the research into exercise before diving to reduce the number of micro bubbles in the blood, and the idea to even replace that by giving nitric oxide (NO), or substances that lead to the release of NO (like Viagra), prior to diving, which could cut decompression times short. It is probably far too early to know, but I remember recently seeing a report of a cave dive on Facebook that mentioned some medicine taken before the dive to shorten decompression, without giving any details. Unfortunately, I cannot find the link anymore; maybe one of my readers can supply it.

But I think this would be the right time to think through a possible scenario: assume research shows that there is a substance X that divers can take before big dives that significantly shortens decompression while maintaining the same level of safety. But substance X has long-term side effects (NO has a toxic dose and is chemically related to NO2, which is at the heart of the Diesel engine scandal). Would we then see this as the diver’s analogue of doping? Could this lead to a situation similar to body building and steroids? There would be general advice not to use such substances because of their side effects, but maybe by that time, for the next big dives, decompression obligations would be so large that explorers could justify the short-term benefits over the long-term risks. But for every explorer there are orders of magnitude more followers who would imitate their heroes and thus also start using such drugs, although in their case there is no good justification, simply because they want to do everything like the people they look up to (not that this would ever happen in the tech diving community…).

I would say now is a good time to come up with an opinion on how such a situation should be handled and what the limits are of what one could and should ethically do. Of course, everything would have to be adjusted once the actual substance with all its details is known. But it wouldn’t hurt to form a basic opinion while all this is still theoretical.

Note: I originally meant this to be a comment under the InDepth blog post, but I did not manage to create an account there that would allow me to post it. Does anybody know how (i.e. where) I could do that?

Update: The InDepth website now allows me to post.

## Calculating Oxygen CNS toxicity

I have talked previously about oxygen toxicity, in particular about its effect on the pulmonary system, measured in OTUs. But there is a second effect, the one on the central nervous system (CNS). This effect depends strongly on the partial pressure of oxygen in the breathing gas and is conventionally expressed in terms of the “oxygen clock” or as CNS%.

At each pO2, there is a maximum allowed exposure time $$t(pO_2)$$, and at each depth one spends a certain percentage of that time; these percentages are added up over the dive.

Which leaves us with the question of how to obtain $$t(pO_2)$$. It seems everybody is using essentially the same table of values published by NOAA: the time is infinite for pO2 < 0.5 bar, then starts at 720 minutes and steeply decreases to 45 minutes at a pO2 of 1.6 bar, where the table ends, as you are not supposed to breathe a pO2 above that value.

As it turns out, the empirical basis for this table is hard to find, as it, in the words of Bill Hamilton, “is not based on a specific set of experiments but rather on the accumulated wisdom of experts in this field.”

So far, in Subsurface, we have used the same table with linear interpolation for values of pO2 in between. At high enough partial pressures, the maximal exposure times get shorter quickly, so with linear extrapolation beyond 1.6 bar they quickly become negative. As the CNS% is given by the time spent at a particular pO2 normalised by the maximal time, this leads to the clearly absurd result of negative CNS% values. We had to do something about this.
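To make the problem concrete, here is a minimal Python sketch of the old scheme (not Subsurface's actual C code), using the commonly quoted NOAA table values and extending the last segment linearly beyond 1.6 bar:

```python
# NOAA single-exposure limits (pO2 in mbar, limit in minutes),
# as commonly quoted; below 500 mbar the limit is infinite.
NOAA = [(600, 720), (700, 570), (800, 450), (900, 360), (1000, 300),
        (1100, 240), (1200, 210), (1300, 180), (1400, 150),
        (1500, 120), (1600, 45)]

def t_linear(po2):
    """Piecewise-linear interpolation of the table; beyond 1.6 bar the
    last segment is extended, which is where the times go negative."""
    for (p0, t0), (p1, t1) in zip(NOAA, NOAA[1:]):
        if po2 <= p1:
            return t0 + (po2 - p0) * (t1 - t0) / (p1 - p0)
    p0, t0 = NOAA[-2]
    p1, t1 = NOAA[-1]
    return t0 + (po2 - p0) * (t1 - t0) / (p1 - p0)
```

Already at a pO2 of 1.7 bar the extrapolated limit is -30 minutes, so any time spent there contributes a negative CNS%.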

The first step is to plot the table values (note that in the internals of Subsurface, all times are expressed in seconds while all pressures are in mbar):

Hmm, nothing obvious. So, let’s use a log scale for the times:

Ah, except for the last two points, this looks pretty much like a straight line. Too bad the last two points, at the highest pressures, are the most important ones, so we had better not just forget about them.

There was some discussion about how to deal with these points: a fourth-order polynomial fit (green) was discussed, as well as using two straight lines, one for everything below 1.5 bar and another for the last three points (orange):

It turns out the sum of squared deviations for the green fit is 0.06452, while the two lines give 0.0314796, i.e. the latter is twice as good, so let’s use those.

As these lines are fits to $$\log(t(pO_2))$$, the resulting formula is

$$t(pO_2) = t_0 e^{-pO_2/p_c}$$

with $$t_0 = 131300\,\mathrm{s} = 2188\,\mathrm{min}$$ and $$p_c = 516\,\mathrm{mbar}$$ for pO2 between 0.5 bar and 1.5 bar, and $$t_0 = 1.83861\cdot 10^{10}\,\mathrm{s} = 3.06436\cdot 10^8\,\mathrm{min}$$ with $$p_c = 102\,\mathrm{mbar}$$ above.
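In code, the two-branch formula might look as follows (in Subsurface's internal units of seconds and mbar, as mentioned above); the `cns_percent` helper and its segment format are my own illustration of how the percentages are added up over a dive:

```python
from math import exp

def max_exposure_s(po2_mbar):
    """Fitted maximum exposure time t(pO2) = t0 * exp(-pO2/pc)."""
    if po2_mbar < 500:
        return float("inf")                       # no CNS loading below 0.5 bar
    if po2_mbar <= 1500:
        return 131300.0 * exp(-po2_mbar / 516.0)  # low-pressure branch
    return 1.83861e10 * exp(-po2_mbar / 102.0)    # steep branch above 1.5 bar

def cns_percent(segments):
    """Total CNS% for a dive given as (duration_s, po2_mbar) segments."""
    return sum(100.0 * t / max_exposure_s(po2) for t, po2 in segments)
```

Note that the two branches meet only approximately at 1.5 bar (a mismatch of a few minutes, since they are independent fits), and that, unlike the linear extrapolation, the exponential stays positive for any pO2.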

There is a Mathematica notebook with these calculations in case you want to play with the numbers yourself.