Not a lot of new content here recently. Over the last few days, I attended 37c3, the annual hacker convention of the Chaos Computer Club, which took place in Hamburg again. So today we will have some engineering content rather than the usual physics.

You have to see the super entertaining talk by two guys who built their own submarine from scratch.

It has been very quiet here for a while. That is partly because there is a topic that I wanted to discuss here and since my take was going to be at least somewhat problematic, I felt for a while that I needed to do more reading to get the facts right. But now the time has come for me to talk about: Probabilistic decompression models.

The idea as such is very simple: Many models for decompression, including the ones I talk about a lot in this blog (Bühlmann, VPM-B and also DCIEM), are “deterministic” models: They give you a plan and if you stick to it (or dive more conservatively) you should be fine, but you must not violate the ceilings they predict at any given time. Probabilistic models, on the other hand, aim to compute a probability of getting DCS for any possible profile. This you can then turn around: you prescribe a maximal acceptable probability of DCS and then optimise your profile (typically for the shortest ascent time) to exactly meet that prescribed probability.

That sounds like a great advantage: Don’t we all know that there is no black and white in decompression but rather large grey areas? Even when you stick to the ceilings of your deterministic model there is a chance of getting bent, while on the other hand you do not immediately die when you stick your head above the ceiling; quite the opposite, chances are that even with a substantial violation of the decompression plan you will still be fine. That sounds pretty probabilistic to me.

Of course, things are more complicated. The deterministic models do not really aim at a black/white distinction either. Rather, they are also intended to be probabilistic, but with a fixed prescribed probability built in. For recreational diving (here, as usual, including technical diving but not commercial diving), the accepted rate of DCS is usually assumed to be one hit in a few thousand dives. That seems to be the sweet spot between not too often having to call the helicopter (with most recreational divers never experiencing a DCS hit) and being overly conservative (too short NDL times, too long decompression stops). So the only difference is that for the probabilistic models, this probability is an adjustable parameter.

You can turn this around: Why would you be interested in dive profiles that have vastly different probabilities of DCS than this conventional one in a few thousand? After all, even for one value of p(DCS) it is very hard to collect enough empirical data to pin down the parameters of a decompression model. Why on earth would you want to do that also for probabilities that you do not intend to encounter in your actual diving? Why would I be interested in computing an ascent plan that will bend me one dive in twenty, for example? Or in only one dive in a million? That sounds overly complicated for useless information.

The answer lies exactly in your restricted ability to conduct millions of supervised test dives: Let us assume you have a probabilistic model with a number of parameters that still need to be determined empirically, and that you have a priori knowledge that the general form of your model is correct (we will come back to this assumption below). Then you can do your test dives with depths, bottom times and ascents for which your model gives you DCS probabilities in the range of 5-50%, say, depending on the parameters. Much more aggressive than you intend to dive in the end. But such test dives will give you a lot of DCS cases, and you do not need too many dives to determine whether the true probability is 5%, 20% or 50% and adjust the model parameters accordingly. You already gain a lot of information with just 10-20 dives, while for dives with the 1/10000 rate of DCS you need many more dives to have enough DCS cases to establish the true probability.
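To put a (very rough, back-of-the-envelope) number on this, here is a little sketch of the binomial counting statistics involved; all the numbers are purely illustrative:

```python
import math

# Estimating a DCS probability p from n independent dives has a
# relative standard error of sqrt((1-p)/(n*p)) (binomial statistics).
def relative_error(p, n):
    return math.sqrt((1 - p) / (n * p))

# ~20 dives at p = 25% already pin the rate down reasonably well ...
err_high_risk = relative_error(0.25, 20)

# ... while achieving the same relative error at p = 1/10000
# would require this many dives:
n_low_risk = (1 - 1e-4) / (1e-4 * err_high_risk**2)
```

With these made-up numbers, 20 high-risk dives give roughly the same statistical information as tens of thousands of dives at the conventional risk level.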

Once you have established the parameters of your model with these high risk dives (where you have to use your chamber a lot to treat your guinea pig divers after bending them), you can then use your model for dives with a much healthier conservatism, where your model gives you for example p(DCS)=1/10000.

An example of doing your study in a regime where you expect to bend many participants is the famous NEDU deep stop study: the divers were in quite cold water with insufficient thermal protection and had to work out on ergometers while in the water, all to drive up the expected number of DCS cases (there, not necessarily with a probabilistic model in the background, but simply with the intent to see a difference between deeper and shallower decompression schedules, where of course 0 DCS cases for both ascents wouldn’t have been very informative).

But as so often, there is no such thing as a free lunch: You are extrapolating data! You do experiments in the high risk regime and then hope that the same model with the same parameters also holds in the low risk regime. You are extrapolating your data over several orders of magnitude of probability. This can only work if you can be sure your model is correct in its form and the only things to determine empirically are the few parameters. But in the real world, in particular in decompression science, such a priori knowledge is not at hand.

Let me illustrate this with a simplified example: Let’s assume there is only one parameter x and your model says that the logarithm of p(DCS) is a straight line (an affine function) of x:

\(\log(p_{DCS}) = m x +b\)

Then, in your experiments you do a number of test dives at various values of x, find the corresponding rates of DCS and finally fit the slope m and the intercept b by linear regression.

If, however, the a priori assumption of the model that a straight line does the job is not justified, because there is also a small quadratic contribution, say, you can fit your parameters as closely as you want and still end up very far off in the extrapolation to where you intend to use your model.
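Here is a quick numerical sketch of this failure mode, with entirely made-up coefficients: we fit a straight line to \(\log p\) in the high risk regime only and then extrapolate down to the low risk regime, while the “true” model has a small quadratic term:

```python
import numpy as np

# Hypothetical "true" model with a small quadratic term the fit ignores
def true_log_p(x):
    return -9.0 + 0.8 * x + 0.02 * x**2

# Linear fit only in the high-risk regime x = 8 .. 10
x_fit = np.linspace(8.0, 10.0, 5)
m, b = np.polyfit(x_fit, true_log_p(x_fit), 1)

# Extrapolate down to the low-risk regime at x = 0 and compare
# the fitted probability to the true one
ratio = np.exp(m * 0.0 + b) / np.exp(true_log_p(0.0))
```

Despite an essentially perfect fit to the high risk data, the extrapolated probability at x = 0 is off by a factor of about five.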

This is of course only an example and for more complicated models you will use a maximum likelihood method to find optimal values of your model parameters given the outcomes of various test dives. But what this will never be able to do is to verify the form of your model assumptions. You are always only optimising the model parameters but never the model itself. For that you need independent knowledge and let’s hope you have that.

VVAL18

To be specific, let us look at one probabilistic algorithm more concretely: VVAL18, which was developed by Edward Deforest Thalmann for the US Navy based on a database of a few thousand navy dives. For military divers, the accepted risk of DCS is much higher: if 2% of the dives result in symptoms that need to be treated, that is still considered fine, given that decompression chambers are typically available at the dive spot. This model is also known as the Thalmann algorithm and is briefly described, for example, in this technical report. (Similar things could be said about the SAUL decompression model, which is similar in spirit and also inspired by the Navy data.)

This model also uses compartments with half times but leaves open the possibility that the partial pressures do not follow the usual diffusive dynamics with the rate of change proportional to the pressure difference to the surroundings but for off-gassing also allow for the possibility of a constant rate (which leads to linear rather than exponential time dependence).

For each compartment i, there is a fixed threshold pressure \(p_{thi}\) which, as an excess pressure, is considered harmless; above that, a relative excess pressure is calculated:

\(e_i = \frac{p_i-p_{amb}-p_{thi}}{p_{amb}}\)

To compute the risk, for each second of the dive and each compartment, the probability of not getting bent in that second is assumed to be

\(e^{-a_ie_i}\qquad (*)\)

for some constants \(a_i\). Finally, all these individual risks are considered to be independent, so the “survival probability”, the probability of not developing DCS, is assumed to be simply the product of all the individual probabilities over compartments and seconds (since these sit in the exponent, you can integrate the \(e_i\) over time and sum over tissues there).
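To make this bookkeeping concrete, here is a toy implementation; the profile and the constants are made up for illustration and are not Thalmann's actual fitted values:

```python
import numpy as np

# Toy version of the VVAL18-style risk integral described above
def survival_probability(times, excess, a):
    """times: sample instants in seconds; excess: array of shape
    (tissues, instants) with relative excess pressures e_i;
    a: per-tissue risk constants a_i (per second)."""
    e = np.clip(excess, 0.0, None)   # only positive excess adds risk
    dt = np.diff(times)
    # trapezoidal integration of e_i over the dive, per tissue
    integrals = np.sum(0.5 * (e[:, 1:] + e[:, :-1]) * dt, axis=1)
    # independence: the product over tissues and seconds becomes
    # a sum in the exponent
    return np.exp(-np.sum(a * integrals))

# One tissue holding e = 0.1 for 600 seconds, a = 1/50000 per second
t = np.linspace(0.0, 600.0, 601)
e = np.full((1, t.size), 0.1)
p_no_dcs = survival_probability(t, e, np.array([1.0 / 50000]))
```

For this made-up ten-minute exposure the model predicts a survival probability just below 1, i.e. a DCS risk of roughly a tenth of a percent.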

These constitute the a priori assumptions of the model that I was talking about: the exponential dependence of risk on relative overpressure (*) and the statistical independence of tissues and instants of time. According to these assumptions, your risk of DCS grows exponentially with the time spent decompressing (assuming the excess pressure is kept constant) for deeper dives or longer bottom times. This is clearly at odds with the assumptions of, for example, the Bühlmann model, which allows you arbitrarily long decompression obligations as long as you do not violate a ceiling. Conversely, you are allowed arbitrarily large excess pressures as long as you keep the duration of the excess short enough (tell that to your soda bottle).

For these assumptions I could not find any justification in the literature, except “we came up with them”, i.e. lack of imagination. With them in place, the parameters of the model are optimised using maximum likelihood. In the Thalmann case, there are three tissues, and the parameters to be fitted are the half-times, the thresholds \(p_{th}\) and the constants \(a_i\).

The three half-times Thalmann ends up with are roughly one minute, one hour and ten hours (with large uncertainties), the thresholds are essentially 0, and the \(a_i\) are in the 1/50000 range (you can find the values in an appendix of the report cited above; note that time units of minutes are used there rather than the hypothetical seconds I used here).

I have serious doubts about the intrinsic assumptions of (*) and the statistical independence of time segments. But for Navy use, where you accept DCS risks of a few percent, these may be OK, since any model with sufficiently many fitted parameters will reproduce dives similar to those it was fitted to. But failed assumptions will bite you when you extrapolate the model from the high risk regime out to recreational diving, as failed model assumptions tend to blow up under extrapolation.

I want to mention that I am grateful to the LMU Statistics Lab for discussing some of the issues mentioned here with me. Of course, all mistakes are my own.

This post is a copy of an answer in the Subsurface support forum, but I post it here as well since this point comes up over and over. It is about a setting on some dive computers where you can set the density of water, or whether you are diving in sea or fresh water. There is a similar setting in Subsurface, but the default setting in the preferences is to hide it. For a good reason.

Let me mention once more why this option is turned off by default: There is a good chance that by fiddling with the density you make changes whose effects are not what you intend: The density is the relevant constant that controls the conversion between depth and ambient pressure. That you probably intend. But what is easy to forget: Your dive computer displays depth and reports it in the log that is transferred to Subsurface. But it does not really measure depth, rather it measures ambient pressure and it uses the density to convert that to depth.

It does this conversion because we humans are used to thinking in terms of depth, which is much like a length: we have an idea of how much 10 m is, much more so than 2 bar.

For most things diving, however, depth does not matter at all; ambient pressure does. This includes gas consumption (as your regulator regulates according to ambient pressure, not depth) and all deco calculations (because there, too, partial pressures in their relation to ambient pressure dictate what is happening in your body). Depth only matters when you think about breaking a new world record, or worry whether the mast of the wreck sticks out of the water or whether your buoy line is long enough. For gas usage estimates and deco calculations, it would be much more honest if your computer displayed ambient pressure in bar rather than depth in m. Only that for the average Joe that would be hard to digest, and we have to live with the fact that this conversion is done back and forth by the computer and by Subsurface all the time.

But what really makes zero sense is to use different values of density when translating back and forth. Also your dive computer does not measure the density, this is a setting that you have to make manually. Yes, you could change that setting on your dive computer every time you switch between sea and fresh water to get a more accurate depth display (which as I explained above most likely does not matter at all). But my guess would be you forget to change that setting for at least half of your dives. So my very strong recommendation would be to set it on your dive computer once and for all to any value (maybe according to where you do most of your diving) and set Subsurface accordingly and never ever change it again. The result can be that some of your depth readings are slightly off but at least your gas and deco calculations will be consistent.
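To see what is going on, here is a minimal sketch of the conversion a dive computer (and Subsurface) performs; the constants are my assumptions, not any particular firmware's:

```python
G = 9.81            # m/s^2
P_SURFACE = 1.013   # bar, assumed surface pressure

def pressure_from_depth(depth, density=1025.0):
    """Ambient pressure in bar at a given depth in metres."""
    return P_SURFACE + density * G * depth / 1e5

def depth_from_pressure(p_amb, density=1025.0):
    """Depth a computer set to 'density' displays for a measured pressure."""
    return (p_amb - P_SURFACE) * 1e5 / (density * G)

# The same measured ambient pressure, converted with two different
# density settings:
p = pressure_from_depth(30.0, density=1025.0)          # sea water
depth_fresh = depth_from_pressure(p, density=1000.0)   # fresh water setting
```

The same measured pressure comes out about 2.5% “deeper” with the fresh water density, which is exactly the kind of inconsistency you introduce when the two density settings disagree.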

As mentioned before, gases in diving cylinders are not sufficiently well approximated by the ideal gas equation, and the van der Waals equation, despite its prominence in thermodynamics teaching, is not doing much better.

Subsurface does better than this using a polynomial fit to table data for the compressibility of the three relevant gases. In a discussion at ScubaBoard, the question came up how to use this in gas blending. After an initial version using Mathematica, I sat down and implemented it as a perl script and hooked it up to this web page for everybody’s use and enjoyment. Here it is:

source code is on GitHub. Right now, it does only nitrox. But it computes instructions to top up partly filled cylinders with any pre-existing nitrox mix.

Let me know if you think extending it to trimix would be useful for actual use. I am not sure what the best user interface would be in that case: For nitrox, specifying the initial and target mix and pressure and two top up mixes, the blending problem generally has a solution. But with three gas components to get right, it is in general impossible with only two top up mixes. So you either have to use three (linearly independent) top up mixes or leave one thing unspecified. That could be either the oxygen or helium fraction of the final mix, or you have to leave open one of the gas fractions of the top up gases.
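For the nitrox case, the ideal gas version of the calculation reduces to a single linear equation for the intermediate pressure; the real script corrects for compressibility, so the following is only my simplified sketch:

```python
# Ideal-gas sketch of nitrox blending: starting from p0 bar of a mix
# with O2 fraction f0, top up with mix A (fraction fA) to an
# intermediate pressure p1, then with mix B (fraction fB) to the
# target pressure P and target fraction F.
def intermediate_pressure(p0, f0, P, F, fA, fB):
    # O2 balance: F*P = f0*p0 + fA*(p1 - p0) + fB*(P - p1)
    if fA == fB:
        raise ValueError("top-up mixes must be different")
    return (F * P - fB * P - (f0 - fA) * p0) / (fA - fB)

# 50 bar of EAN32 to 200 bar of EAN36, topping with O2 and then air:
p1 = intermediate_pressure(50.0, 0.32, 200.0, 0.36, 1.0, 0.21)
# fill with O2 up to p1 (~81 bar), then with air to 200 bar
```

With three components (trimix), you get one such balance equation per gas, which is why two top up mixes are in general not enough.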

So what do you do in practice, which component do you leave unspecified?

Update: I updated the script so it can now also handle blending trimix (starting from a partially filled cylinder, you can specify three top up gases and it will calculate the intermediate pressures you have to fill up to). To blend nitrox, specify the target mix as containing no helium and leave one of the top up mixes empty.

Update: I discovered an error in the calculation (I had calculated the mix according to pressures rather than volumes at 1 bar) that should be fixed now (April 10, 2022).

When the Subsurface mailing list recently received the request whether the planner could implement the DCIEM model as well, my first reaction was “Whut?”, as I had not heard of it before.

Turns out, DCIEM stands for “Defense and Civil Institute of Environmental Medicine”, which was a research institution of the Canadian military and is now part of DRDC Toronto, according to Wikipedia. The request to Subsurface was apparently prompted by the fact that Shearwater announced it would implement the model in its dive computer for commercial divers. The “for commercial divers” sounds to me like this is more about ticking some boxes in the health and safety requirements for these people rather than something that the diving world at large should adopt. But still, it might be interesting to see what this model actually does.

Some googling suggests that this model is mainly consumed not in the form of an algorithm in planning software or dive computers but in the form of tables. A 1992 version can, for example, be found here.

But Google also finds a review paper by Nishi and Tikuisis that explains that the DCIEM model is a non-linear model developed by Kidd and Stubbs. It shows a four compartment discretisation of a slab of tissue together with an ODE for the time evolution:

As written, I cannot make much sense of it but things clear up when the expression is expanded:

\(\dot p = ab \Delta p +a\Delta p^2\)

Here, p is the vector of tissue pressures and Δ is the discrete Laplacian, which for a sequence \((s_n)\) acts as

\( \Delta s_{n} = s_{n-1} -2 s_n + s_{n+1}.\)

If there were only the first term on the RHS, this would simply be a diffusion equation, which is also what I would have written down if somebody had asked me to model a slab of tissue. I have no clue where the second, non-linear term comes from, but clearly it can safely be ignored as long as p ≪ b. There is also a paper by Nishi and Kuehn that gives a FORTRAN implementation of the model (doesn’t this sound familiar…). In the source code, I find a value of b=274.5, supposedly in units of psi, which translates to about 19 bar, so the non-linear term will not be relevant for anyone with a depth limit short of 200m. The review paper states that the constants of the model were fitted to bubble measurements after trial dives, but one could wonder how this would be possible unless compartment pressures at least came near 20 bar…

The paper with the FORTRAN program also explains that the relation between tissue pressure and ambient pressure is pretty much a standard \(p_{amb}\ge c_1 p_{tissue} + c_2.\)

More interesting is that the discretised slab is an example of the interacting tissue models I talked about in the post about those being equivalent to independent tissues. For four tissues, the discretised Laplacian (there is of course also a closed form) has eigenvalues \(\frac{-3\pm \sqrt 5}{2}, \frac{-5\pm\sqrt 5}{2}\), which numerically range between -3.6 and -0.38.

So, after diagonalisation, the four independent tissues cover one decade of half-times, all proportional to 1/(ab).
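This is easy to check numerically, assuming the standard tridiagonal discretisation with absorbing boundaries (which is what reproduces the eigenvalues quoted above):

```python
import numpy as np

# Four-compartment discrete Laplacian of the tissue slab
L = np.array([[-2.0,  1.0,  0.0,  0.0],
              [ 1.0, -2.0,  1.0,  0.0],
              [ 0.0,  1.0, -2.0,  1.0],
              [ 0.0,  0.0,  1.0, -2.0]])

eigenvalues = np.sort(np.linalg.eigvalsh(L))
# (-5-sqrt(5))/2, (-3-sqrt(5))/2, (-5+sqrt(5))/2, (-3+sqrt(5))/2

# ratio of slowest to fastest rate: about 9.5, i.e. the
# corresponding half-times span roughly one decade
spread = eigenvalues[0] / eigenvalues[-1]
```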

So, taking everything into account, the DCIEM model is (equivalent to) a Bühlmann type model with four tissues covering a somewhat small range of half-times. So I would expect that, with an appropriate choice of constants, one can produce somewhat reasonable deco schedules at least for air dives (we have not discussed different gases) with not too long runtimes. But a full blown Bühlmann (possibly with gradient factors) is much more expressive.

I want to look into using depth information to improve automated color correction (inspired by some new features of some camera products on the market). Problem is, I don’t do underwater photography myself. Therefore I am looking for example pictures with depth information. They should be

free of any color correction (manual or automatic)

accompanied by a Subsurface file (use the export function, possibly with anonymization) in which they are linked to the dive profile (via the import media function)

As I am only interested in color, they don’t have to have high resolution (they could well be heavily jpeg compressed) or any artistic value. By sending them you agree that they may potentially be included as test files in Subsurface, where they would be distributed under a GPL2 or CC0 license.

Long gone are the times when every morning, I checked my RSS-Reader for new posts to my favourite blogs. These days, we need other ways to hear about new content, in particular for low volume blogs like this one.

For a while, I have announced new posts on Facebook and Twitter (using IfThisThenThat), but I have the impression that more and more people are leaving those platforms as well, so I need other ways to inform you, my dearest readers, once I get around to writing a new post. I am not going to record a TikTok video for each new post.

Rather, I have set up a newsletter. Subscribe to it, and you will get one mail per new post. This way you will get served all new content piping hot and never miss the latest in theoretical diving anymore!

Here is another contribution by Doug Fraedrich. This time, the text would probably fit better on theexperimentaldiver.org: He tests dive computers’ No-Fly-Time algorithms and compares them to DAN’s recommendations and the respective user manuals. For some more theoretical considerations regarding diving and flying, see this older post.

Introduction

It is well known that divers must wait a certain time after diving before they can safely fly. The current No Fly Time (NFT) “best practices” guidelines from Divers Alert Network (DAN)^{1} are listed below:

For a single no-decompression dive, a minimum preflight surface interval of 12 hours is suggested.

For multiple dives per day or multiple days of diving, a minimum preflight surface interval of 18 hours is suggested.

For dives requiring decompression stops, there is little evidence on which to base a recommendation and a preflight surface interval substantially longer than 18 hours appears prudent.

These guidelines are primarily based on a paper by Richard D. Vann, et al^{2} of Divers Alert Network, which reported an analysis of risk of Decompression Sickness (DCS) results of 802 dives of nine different profiles: four single non-deco dives and five sets of repetitive dives. DAN did not test decompression dives in Reference 2, so the US Navy No Fly Times were used for this category of dives.

Many commercial off-the-shelf dive computers compute and display this time after surfacing from a dive. The level of documentation on how exactly this is computed varies from dive computer to dive computer; most units state in their manual that they use the DAN guidelines.

The objective of this report is to test several representative commercial off-the-shelf dive computers in a pressure chamber which simulates different categories of dives, and to assess the following: 1) does each computer conform to the DAN NFT guidelines? and 2) does each computer conform to what is described in the computer’s owner’s manual? The dive computers tested are shown in Figure 1 and listed in Table 1.

| Computer | Main Dive Algorithm | No Fly Time computation (as stated in User Manual) |
| --- | --- | --- |
| Deepblu Cosmiq+ | Buhlmann ZHL-16c | The No-Fly Time is based on the calculations of your desaturation time according to your actual dive profile. The No-Fly Time will be counted downwards every half hour. |
| Suunto EON Steel | 15-tissue Suunto-Fused | No-fly time is always at least 12 hours and equals desaturation time when it is more than 12 hours. If decompression is omitted during a dive so that Suunto EON Steel enters permanent error mode, the no-fly time is always 48 hours. |
| Oceanic VT4.0 | Z+/DSAT | FLT Time is a count down timer that begins counting down 10 minutes after surfacing from a dive from 23:50 to 0:00 (hr:min) |
| Mares Icon | 10-tissue RGBM | Icon HD employs, as recommended by NOAA, DAN and other agencies, a standard 12-hour (no-deco non-repetitive dives) or 24-hour (deco and repetitive dives) countdown. |
| Cressi Cartesio | “Cressi” 9-tissue RGBM | The no-fly times are as follows: 12 hours after a single dive within the safety curve (without decompression). 24 hours after a dive outside the safety curve (with decompression) or repeated or multi-day dives performed correctly. 48 hours if severe errors have been made during the dive. |
| SEAC Guru | Buhlmann ZHL-16B | For single dives that did not require mandatory deco stops, wait a minimum interval of 12 hours on the surface. In the event of multiple dives in a single day, or multiple consecutive days with dives, wait a minimum interval of 18 hours. For dives that required mandatory deco stops, wait a minimum interval of 24 hours. |
| AquaLung i300C | Z+ | The Time to Fly countdown shall begin counting from 23:50 to 0:00 (hr:min), 10 minutes after surfacing from a dive. |
| Garmin Descent MK1 | Buhlmann ZHL-16c | After a dive that violates the decompression plan, the NFT is set to 48 hours. (Note: no mention of NFT after a nominal dive.) |

Table 1. Dive Computers Tested

Methods

The dive computers were subjected to simulated dives in a small pressure chamber (Figure 2). This chamber was a Pentair Pentek Big Blue 25 cm chamber with a maximum allowable pressure of 6.9 bar.

The units-under-test are placed in the chamber with the display facing out (as illustrated in Figure 2), and the chamber is filled with fresh tap water. Air pressure is added from the top of the chamber via a standard bicycle pump (Velowurks Element floor pump). Even though the relationship between pressure and indicated depth is well known, several “calibration runs” were performed before the testing to verify repeatability of the calibration. For these calibration runs, a Shearwater Perdix dive computer was used.

Pressure was released in a controllable way via an added vent valve (Figure 3). This ensured that the simulated ascent rates were not too fast.

Figure 4 shows an actual profile using the test chamber.

A run list was prepared based on the dives described in Reference 2; see run list in Table 2.

#10: 24 m / 30 min with all required deco stops: Group H (11:04)

#14: 37 m / 30 min with all required deco stops: Group L (16:50)

“Violation” Dives:

#11: 18 m / 30 min no-deco dive with missed safety stop

#12: 24 m / 30 min with all deco stops omitted

#13: 18 m / 30 min with fast ascent (~15 m/min) and 3 min safety stop

Table 2. Run List

Some of the dive times needed to be shortened compared to the original dives in Reference 2; this is because the bottom times in the “old” no-deco dives exceeded the No Decompression Limits on some of the computers tested. Since decompression dives were not tested in the DAN study in Reference 2, two dive profiles were added to the test matrix: a “marginal” deco dive, Profile #10 (US Navy Repetitive Group H), and a deeper, more stressing deco dive, Profile #14 (US Navy Repetitive Group L).^{3} US Navy recommended No Fly Times are listed in the table. The specific bottom time for the marginal deco dive was chosen as what was required to make the most conservative of all the computers go into decompression mode. The last category of “Violation Dives” was added to test claims made in some of the manufacturers’ User Manuals. The dive profiles in Table 2 were simulated and the displayed No Fly Time from each computer was recorded. The battery charge level was monitored for each computer and kept at acceptable levels. With the exception of the repetitive dives with a 1 hour surface interval, the interval between consecutive runs was the maximum of 24 hours, the desaturation time and the longest displayed NFT from any computer.

Results

A total of 26 runs were performed in this study; the raw results are shown in Table A1 in the Appendix. To start off the testing, Profile #2 was replicated several times to verify that the results were repeatable. For all of the runs, the water temperature varied from 17 °C to 23 °C; when repetitive dives were simulated, the water was not completely replenished with new tap water, so the second (and third) dives of repetitive sequences tended to have a water temperature about 1-2 °C warmer than the first. The ascent rate was monitored using several different dive computers, of which the SEAC Guru displayed numerical values with units of ft/min; ascent rates were constantly monitored and were typically in the range of 5-15 ft/min (1.5-5 m/min).

The results from Appendix A1 are summarized in Table 3. Based on this data, each of the eight dive computers can be assessed as to whether they are compliant with the DAN guidelines and if they actually compute what is described in their manuals.

Table 3. NFT Results Summary (hr:min) (Violations in Red, marginal discrepancies in Orange)

Discussion

Single Dives:

All of the computers were compliant with the DAN guidelines for this type of dive. It was observed that both Mares and Suunto increased their NFT after the exact same single dive profile (#2) separated by a surface interval of ~44 hours. There is little documentation on the Mares RGBM variant, but the Suunto variants are known to factor the effects of multi-day diving into bubble growth (and presumably desaturation) calculations. This multi-day factor, known as “F3”, persists for up to 100 hours^{4}, Figure 4. Multi-day diving is not covered in the DAN NFT guidelines.

Repetitive Dives:

All of the computers were compliant with the DAN guidelines for this type of dive. For the Suunto EON Steel, since this computer bases repetitive NFTs on desaturation times, the reported NFT depended on the exact nature of the repetitive dives and presumably the dives in the previous 100 hours.

Decompression Dives:

Two computers registered an NFT of less than 18 hours for the Group H deco dive: the Deepblu Cosmiq+ required only 12 hours of NFT, and the Suunto EON Steel required a range of NFTs from 17:35 to 21:18 (depending on previous dives and surface interval). Note that both User Manuals state that NFTs are based on internally computed desaturation times and not the DAN guidelines. The Group L dive required significant total decompression time (~60 mins): all of the computers registered the same NFT as for the Group H dive, except the Suunto EON Steel, which increased the required NFT to 26:13, above the DAN guideline. It should be noted that all computers registered NFTs that exceeded those suggested by the US Navy for Repetitive Group H^{3}; however, the Deepblu Cosmiq+ would fall short of the US Navy recommendation if used for a deco dive having repetitive group I or higher.

Violation Dives:

Some computers mentioned in their User Manual that they added an extra NFT penalty for ascent rate violations. Different computers defined “fast ascent violation” differently, so a target ascent rate of 50ft/min was used in Profile #13, which qualified as a fast ascent for all computers. Profile #13 was executed and the actual ascent rate varied from 9-15 m/min during the ascent from 18 to 5m, which took about one minute. A three minute safety stop at 5m was performed. None of the computers seemed to adjust the NFT, based on a comparison to similar dives with a nominal ascent rate, Profiles 2, 6a, 7a, 9a. Alarms did go off during the fast ascent from several computers. The Safety Stop may have been sufficient to satisfy all of each computers’ Dive Algorithms’ safety requirements. Similarly, none of the computers added an extra NFT penalty for dive Profile #11, which was a no-deco dive, with nominal ascent rate but missed safety stop.

For omitted stops on a deco dive (Run # 20, Profile 12), four computers added additional NFT time, compared to the same profile deco dives with all the required stops (Run # 17 profile 10.) See Table 4.

It was noted that several computers did not conform with the NFT rules stated in their User Manual; see Table 5.

| Dive Computer | User Manual Discrepancy |
| --- | --- |
| Mares Icon | For single dives, it displays the greater of 12 hours or the De-SAT time |
| Cressi Cartesio | Displays 24 hour NFT for all dives including no-deco dives with safety stop |
| SEAC Guru | Displays 24 hour NFT for repetitive dives |
| Garmin Descent MK1 | Displays 24 hour NFT for all no-deco dives (omitted in manual) |

Table 5. Observed discrepancies with User Manuals

Summary

Eight commercial dive computers were tested in a small pressure chamber to assess their computation and display of No Fly Times. All of the dive computers were found to be generally compliant with the standard DAN guidelines with a few exceptions. Several computers used a very simple rule for NFT: Garmin, Aqualung, Oceanic and Cressi simply displayed 24 hours NFT for all nominal dives (both no-deco and decompression). Some computers added features that handled cases not covered in the DAN guidelines, e.g. multi-day diving (Mares and Suunto) and certain decompression guidance violations such as omitted decompression stops (Garmin, Suunto, Cressi and DeepBlu).

You wouldn’t be here if you didn’t enjoy quantitative analysis of anything diving. There is a special treat for you: thanks mainly to Willem and Berthold of the Subsurface development team, you can now use Subsurface to do all kinds of statistics on your dive log. There are many cool graphical ways of looking at essentially any variable from your log (depth, duration, date, SAC, rating, visibility, gas use, oxygen or helium percentage, location, buddy, etc.) as a function of any other:

Go ahead and see how your SAC changes over time or which buddy you do the longest dives with or if you use less gas on dives with better visibility.

This is still in an alpha testing phase: crashes are still possible and things will change before the next release. But you can download a test version from our daily builds. There are also mobile versions available from TestFlight for iOS and the Google Play Store beta program. Please comment on the Subsurface mailing list or on GitHub if you find bugs or have ideas for improvement.

I was thinking about what might be a good measure of decompression quality or stress. This comes up as O’Dive, with their end-user Doppler device, claim to provide exactly that in their mobile phone application. As said before, I have some doubts about their approach, so let’s try to come up with something better.

As always here on TheTheoreticalDiver, everything is entirely theoretical, I have zero empirical data of my own. But with this disclaimer, here we go:

I think we all agree that decompression stress arises when tissue pressure exceeds ambient pressure, and that the duration of this excess pressure is also relevant.

But measuring the excess pressure in mbar is probably not very useful, as this is not a natural measure of decompression stress (and the relation may well depend on the tissue under consideration, as well as on ambient pressure and a million other things). The gradient factor approach is to normalise this excess pressure by the maximally allowed excess pressure (which is the definition of the gradient factor), and this sounds like a good start.

But rather than comparing to plain vanilla 100/100 Bühlmann, I would like to be more flexible and compare to your decompression model of choice. So I would like to take

\( q(t) = \max_i \frac{p_i(t) - p_{amb}(t)}{p_{i,max}(t) - p_{amb}(t)} \)

as a measure of momentary stress, where \(p_{i,max}\) is the maximal allowed inert gas pressure in tissue number \(i\) as given by your decompression model.

This could be plain Bühlmann, in which case this expression is just your momentary gradient factor. Or, if you believe that excess pressure is worse at depth (as the produced bubbles will grow later on in shallower water), you take \(p_{i,max}\) as given by Bühlmann corrected by your favourite settings of GF_low and GF_high. Or, if you insist on using VPM-B, you use the maximal momentary M-value as predicted by that model. Strictly speaking, this is an extension of that model to real (as opposed to planned) dives, but at least in Subsurface we have found a way to compute it (such that it pretty much agrees on planned dives).
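As a sketch of this definition (entirely my own illustration; the a and b coefficients and the function names below are placeholders, not taken from any actual dive computer or from Subsurface), the momentary q for a set of Bühlmann-style tissue compartments could be computed like this:

```python
def momentary_q(p_tissue, p_amb, a, b, gf=1.0):
    """Momentary decompression stress q for one tissue compartment.

    p_tissue: inert gas pressure in the tissue (bar)
    p_amb:    ambient pressure (bar)
    a, b:     Buehlmann coefficients of this compartment (illustrative values)
    gf:       gradient factor scaling the allowed excess pressure
    """
    # Plain Buehlmann tolerated tissue pressure (M-value) at this depth
    p_tol = p_amb / b + a
    # Gradient-factor-reduced maximal allowed pressure p_max
    p_max = p_amb + gf * (p_tol - p_amb)
    # Normalised excess pressure; q = 1 means "exactly at the allowed limit"
    return (p_tissue - p_amb) / (p_max - p_amb)

def overall_q(tissues, p_amb, gf=1.0):
    """q of the dive at one instant: the leading (largest) compartment value.

    tissues: iterable of (p_tissue, a, b) triples
    """
    return max(momentary_q(p, p_amb, a, b, gf) for (p, a, b) in tissues)
```

With gf=1 and the tissue pressure exactly at the Bühlmann limit, this returns q = 1; a tissue at ambient pressure gives q = 0, so negative q simply means the tissue is still on-gassing.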

Obviously, if you decompress exactly as prescribed by your model, q = 1 during decompression (possibly a bit lower, due to the fact that most people decompress in steps of 10 ft/3 m rather than continuously). But

\(q\le 1\)

is the definition of not violating the ceiling.

The ceiling being what it is, violating it should be bad, and the more so the worse, whereas staying well below the ceiling gives you additional conservatism, which you probably have to pay for in terms of extended decompression time.

So, my proposal would be to compute the \(L^\lambda\) norm of the function \(q(t)\) for somewhat large \(\lambda\), as this punishes going above \(q=1\):

\( S = \left( \int q(t)^\lambda \, dt \right)^{1/\lambda} \)

To me, this seems to be a reasonable measure of deco stress. Maybe it’s worthwhile to compute this for a number of real profiles and compare it to the deco outcome (risk of DCS; I wish I had access to the DAN data… or at least Doppler results).
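Numerically, with q(t) sampled along a dive, the \(L^\lambda\) norm is just a discrete sum. Here is a toy sketch (my own, with made-up sample profiles; clipping negative q to zero is my choice, since a tissue below ambient pressure should contribute no stress):

```python
def deco_stress(q_samples, dt, lam=8.0):
    """Discrete L^lambda norm of q(t): (sum of max(q, 0)^lam * dt)^(1/lam).

    q_samples: momentary stress values sampled every dt seconds
    lam:       the lambda exponent; large values punish q > 1 heavily
    """
    total = sum(max(q, 0.0) ** lam * dt for q in q_samples)
    return total ** (1.0 / lam)

# Two toy "decompressions": one that rides the ceiling exactly (q = 1
# throughout) and one that briefly overshoots it.  Because lam is large,
# the one-minute excursion to q = 1.3 dominates the norm.
at_ceiling = [1.0] * 600                 # 10 min at q = 1
overshoot = [1.0] * 540 + [1.3] * 60     # 9 min at q = 1, 1 min at q = 1.3
print(deco_stress(at_ceiling, dt=1.0))
print(deco_stress(overshoot, dt=1.0))
```

Note that with this definition a longer decompression at q = 1 also scores higher than a shorter one, which matches the intuition that the duration of excess pressure matters, not just its peak.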

What do you think?
