Others argued that it must be GFhigh as that applies, by definition, at the surface.

And indeed, in this limit (where the first stop is at the surface) the idea of gradient factors degenerates: the rate of change of the effective gradient factor as a function of depth diverges. So, like in the last post, there is an interesting point that involves taking limits.

What I had in mind is the implementation in Subsurface: As you can see, as long as there has not been a stop yet (because the ceiling is still above the surface), the effective gradient factor is GFlow. So, in a no decompression dive, you would think, you never see anything else.

But showing that in the recreational mode of the planner turned out to be quite hard: I had to play a lot with the parameters to find a dive where GFlow has any influence on the total dive time of a recreational dive (defined as a dive without mandatory stops and without running out of gas). Eventually I found one: For an air dive to 20m (with an ascent rate of 20m/min for the last segment, see below why this is important), you get a maximal run time of 49min with GF settings 20/100 while you can stay for 50min with GF settings of 100/100. But changing GFhigh has a much stronger influence:

What did I miss? In the end, it’s the fine print of the definition of “first stop depth” that I already talked about in an earlier post: The problem is that for real world dives there is no clear distinction between ascent and stop. So you need to come up with some definition of which depth to use to actually anchor GFlow. Subsurface uses the lowest ceiling encountered during the dive so far. But in particular for dives with very little (or no) deco obligation, that is not exactly what others might consider the first stop depth. The difference is that the diver first needs to get to the depth of the ceiling before the ceiling actually becomes a stop depth. And during that ascent, there is already off-gassing going on, which can eliminate the ceiling during the time it takes the diver to get there.

As an example, you could have a first ceiling (which, as I explained above, is determined by GFlow) at, say, 1m of depth. But then, in this last meter of water, the effective gradient factor has to vary from GFlow to GFhigh. Given that we are talking about dives that are only marginally deco dives, it is likely that this first ceiling comes from a very fast tissue, so much of it probably goes away during the short time of the ascent to that depth. Then, to find the NDL, the remaining question is whether there is any ceiling left below the surface. But the GFlow is already anchored at 1m, so for the surface it’s really GFhigh that matters (and GFlow is no longer relevant as there is no ceiling left at 1m where it applies).
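
To make the interpolation concrete, here is a minimal Python sketch (my own, not the actual Subsurface code) of the common way an effective gradient factor is computed by linear interpolation between GFlow, anchored at the first stop depth, and GFhigh at the surface:

```python
def effective_gf(depth, first_stop_depth, gf_low, gf_high):
    """Allowed gradient factor at `depth`, interpolating linearly between
    gf_low (anchored at the first stop depth) and gf_high (at the surface)."""
    if first_stop_depth <= 0:
        # Degenerate case discussed above: with the first stop at the
        # surface there is no depth interval left to interpolate over.
        return gf_high
    depth = min(depth, first_stop_depth)  # at or below the first stop, gf_low applies
    return gf_high + (gf_low - gf_high) * depth / first_stop_depth
```

Note how the slope of this interpolation diverges as the first stop depth goes to zero, which is exactly the degeneracy of the limit described above.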

So the challenge in finding a dive where GFlow makes any difference at all for the NDL was to produce a dive where something is left of the initial ceiling by the time the diver gets there, in the marginal case of staying just short enough not to incur any stops at all. So the dive must not be too deep (otherwise the ascent takes too long and there is a lot of off-gassing on the way). That’s why I had to increase the ascent rate.

So the upshot is: It is almost entirely GFhigh that sets the difference between a non-stop dive and a decompression stop dive. But if you stay a little bit longer the depth of your first stop (and also the duration) depends a lot on GFlow.

I should not end without pointing out that once more this discussion is quite academic: Gradient factors were invented for dives with significant deco obligation, to force deeper stops. Here, we are in the limit of recreational no-stop diving, so we are really not in the realm of gradient factors. And this manifests itself in the degeneracy of the model in the case that the ceiling determining the NDL is exactly at the surface. But it was interesting anyway.

For OTU, there is something to calculate. The empirical basis seems to be a measurement of the reduction of effective lung capacity after breathing O2 at higher partial pressure for an extended period of time. As far as we could tell, the current theory goes back to the thesis of Clark and his supervisor Lambertsen.

They found that there is no effect as long as the partial pressure of O2 stays below 0.5bar and any effect is proportional to the excess of partial pressure over 0.5bar and, according to their fit, it is also proportional to the exposure time to the power 6/5, i.e.

\((p-0.5bar) t^{6/5}\)

The power slightly bigger than 1 sounds somewhat believable since a lung that already suffers could be more susceptible to further damage. This value was obtained by plotting their data on a log-log-scale, fitting a straight line and reading off the slope of that straight line as the exponent thanks to

\(\log((p-0.5bar) t^{6/5}) = \log(p-0.5bar) + \frac 65 \log(t).\)

You can see this in their figure 18A on page 164. There you will also spot that the linear regression is not really good. This was also pointed out later in a more comprehensive study of the Naval Medical Research Institute, which found that later data, but also the original data of Clark and Lambertsen, were more compatible with an exponent of 1. But the 6/5 stuck in the literature, and all that follows below is non-trivial only because the power is different from 1.

What people then did was to take the 5/6-th root of this expression as the definition of the oxygen toxicity unit (normalised to a partial pressure of 1bar), i.e.

\(OTU = \left(\frac{p-0.5bar}{0.5bar}\right)^{5/6}t.\)
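For a segment at constant partial pressure, this definition is a one-liner in code. A Python sketch (the function name is my own), returning zero below the 0.5 bar threshold:

```python
def otu_constant(p_o2_bar, minutes):
    """OTU accumulated over `minutes` at constant O2 partial pressure (bar)."""
    if p_o2_bar <= 0.5:
        return 0.0  # no measurable effect below 0.5 bar
    return ((p_o2_bar - 0.5) / 0.5) ** (5.0 / 6.0) * minutes
```

By construction (the normalisation to 1 bar), one minute at 1 bar of O2 is exactly one OTU.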

For time varying O2 partial pressure this can then be integrated over time, as for example proposed by Erik Baker (yes, the Erik Baker of VPM-B). Again, he published an implementation in his favourite programming language, FORTRAN. Well, except that he uses the reciprocal of that fraction and raises it to the power -5/6, with the additional benefit of risking a division by zero at the boundary case of partial pressure 0.5bar.

And of course, integrating this 5/6-th root makes no sense: You still get something linear in time whereas originally it was found that the damage progresses faster than time!

As you are integrating, you can also write a closed formula for a time segment where the partial pressure changes linearly in time (like during an open circuit dive during an ascent or descent). You need to compute

\(\frac 1{p_b-p_a}\int_{p_a}^{p_b} ((p-0.5bar)/0.5bar)^{5/6}dp = \frac 3{11(p_b-p_a)} \left.\left(\frac{p-0.5bar}{0.5bar}\right)^{11/6}\right|_{p_a}^{p_b}\)

This is Baker’s equation 2. Computationally, this formula has the disadvantage that a constant partial pressure now is a special case: one encounters a 0/0 floating point exception. Of course, taking a proper limit yields the above equation for the OTU, but this is not convenient in a computer program.

So, what we did was to introduce better variables \(p_m=(p_a+p_b)/2\) and \(\delta = (p_b-p_a)/p_m\) in the integral expression above and then expand in powers of \(\delta\). By symmetry, only even powers appear, so a two-term quadratic expression is good up to \(O(\delta^4)\), by far good enough for our purposes. This yields the improved expression

\(OTU = \left(\frac{p_m-0.5bar}{0.5bar}\right)^{5/6}t\left(1- \frac{5(p_b-p_a)^2}{216((p_m-0.5bar)/0.5bar)^2}\right)+O(\delta^4).\)

that can be calculated easily without treating the case \(p_a=p_b\) separately.
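
Here is a Python sketch of this expansion (names my own), together with Baker’s exact closed form for comparison; the expanded version needs no special treatment of \(p_a=p_b\):

```python
def otu_segment(p_a, p_b, minutes):
    """OTU for a segment where the O2 partial pressure (bar) changes linearly
    from p_a to p_b, using the quadratic expansion derived above.
    Caveat: assumes the mean pressure stays above the 0.5 bar threshold."""
    p_m = 0.5 * (p_a + p_b)
    u = (p_m - 0.5) / 0.5
    if u <= 0.0:
        return 0.0
    return u ** (5.0 / 6.0) * minutes * (1.0 - 5.0 * (p_b - p_a) ** 2 / (216.0 * u * u))

def otu_segment_exact(p_a, p_b, minutes):
    """Baker's exact integral, valid only for p_a != p_b."""
    u_a, u_b = (p_a - 0.5) / 0.5, (p_b - 0.5) / 0.5
    return minutes * (3.0 / 11.0) * (u_b ** (11.0 / 6.0) - u_a ** (11.0 / 6.0)) / (p_b - p_a)
```

For typical open circuit pressure changes the two agree to a fraction of a second, confirming that the quartic remainder is negligible.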

Once more, all this is very likely purely academic as it is not so easy to do dives that get into the regime where OTU matters at all. That is probably also why the empirical data is so poor.

Even more recently, there was a report on new results in this area on rebreather.org. Their study produced data summarised in these plots:

Obviously, this data clearly suggests to fit it by some convoluted formula that yields a line that I manually erased from the graphs. Guess what it is and then check out the original link.

But one would think that it should be possible to use a decompression model that works well under water to compute such a time. So let’s do this in this post, or at least compute a conservative estimate. To be specific, we will use the Bühlmann model. We do that calculation compartment by compartment and assume that when leaving the water, the tissue has the maximally allowed partial pressure for surfacing (i.e. this tissue was the guiding tissue for the final part of the ascent). Clearly, this is a conservative assumption. Then, according to the model

\(p_s = (p_i -a)b.\)

Here, \(p_s\) is the surface pressure and \(a\) and \(b\) are the usual Bühlmann coefficients.

Then we do a surface interval (whose length we wish to determine in the end) during which the partial pressure decays exponentially:

\(p_i(t) = f_{N_2} p_s+ (p_i(0) - f_{N_2}p_s)e^{-\gamma t}\)

where \(f_{N_2}\) is the N2 fraction in the breathing gas (0.79 for air, which we are probably breathing while waiting for the plane to board). Finally, we don’t want the cabin pressure to be less than the minimal ambient pressure that Dr. Bühlmann recommends. We parametrize the cabin pressure using the barometric formula, which asserts (assuming constant temperature) that the pressure drops exponentially with height, with a decay constant such that the pressure has fallen by a factor 1/e at about \(h_s\)=8500m above sea level. So, we set

\(e^{-h/hs} p_s= (p_i(t) -a)b.\)

This we can then solve for t. Actually, being conservative, we want to throw in gradient factors. Again, being conservative, we don’t further linearly extrapolate gradient factors, but will use GFhi everywhere on the surface. Plugging everything in (with the help of Mathematica and some manual massaging) we find

\(e^{\gamma t} = \frac{a GF/p_s + 1-f_{N_2} + GF(1/b -1)}{a GF/p_s - f_{N_2} +e^{-h/h_s}(1+GF(1/b-1))}.\)
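
As a sketch, this formula can be evaluated directly in Python; the Bühlmann coefficients \(a\) and \(b\) in the example call below are illustrative placeholders, not the actual ZH-L16 values of any particular compartment:

```python
import math

def no_fly_hours(half_time_min, a, b, gf, h=2400.0, h_s=8500.0,
                 f_n2=0.79, p_s=1.013):
    """Conservative wait time (hours) before flying, from the formula above.
    Assumes the tissue surfaced exactly at its GF-scaled M-value."""
    gamma = math.log(2.0) / half_time_min
    cabin = math.exp(-h / h_s)  # cabin pressure relative to sea level
    num = a * gf / p_s + 1.0 - f_n2 + gf * (1.0 / b - 1.0)
    den = a * gf / p_s - f_n2 + cabin * (1.0 + gf * (1.0 / b - 1.0))
    if den <= 0.0:
        # the cabin altitude is not tolerable even fully off-gassed
        return float("inf")
    return math.log(num / den) / gamma / 60.0
```

Note that the wait time is simply proportional to the half-time of the compartment, since the half-time only enters through \(\gamma\).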

Now, we have to plug in numbers. For the cabin pressure, according to Wikipedia, we will assume h=2400m for older aircraft. The wait times (in hours) for the different compartments are then shown in this plot:

You are supposed to see a number of things:

- The waiting time depends strongly on which tissue we are dealing with. For reasonably large gradient factors, only the tissues with half-times of several hours contribute significant waiting times. Remember, for this calculation, we assumed the loading is at its maximal value when you get out of the water. For realistic sports/tec-diving scenarios (as opposed to saturation diving) that should be quite hard even on a weeklong liveaboard with at least five dives a day. If slightly faster tissues are leading, the inferred no-fly times are much shorter, probably shorter than the queue at check-in. I looked at some data from real dive trips where people got everything out of their booked liveaboard, but they got nowhere close to exciting the slow tissues. In the Subsurface planner I had to do five consecutive 2-hour dives with less than two hours of surface interval to see at least some ceiling for the 239 minute compartment. In the plots, this is the blue line, and it leads to a no-fly time of much less than 5 hours.
- The plot ends on the left (small gradient factors) with diverging values. These divergences move to higher gradient factors when you increase the cabin pressurisation equivalent altitude (for example by assuming a loss of cabin pressure; remember, this is when the oxygen masks are supposed to drop from the ceiling). This comes about as the no-fly time becomes infinite or even complex: according to the Bühlmann-with-gradient-factors limit, you are not allowed to be at the ambient pressure of that altitude even when only saturated with the nitrogen that you experience at sea level. We can compute this limiting altitude by solving

\(f_{N_2} p_s = e^{-h/h_s} p_s (GF/b-GF +1) + a\, GF\)

for the altitude via

\(e^{-h/h_s} = \frac{f_{N_2}- aGF/p_s}{GF/b -GF +1}.\)

This is shown here, the maximum altitude after waiting an infinite amount of time:

- All these calculations are for air (or nitrox underwater, since all we used was the assumption that the nitrogen saturation is at the limit). In particular, in view of an earlier post (N2 vs. He, what’s the difference?) there should not be large differences.
- You could try to repeat the same argument for VPM-B but according to that model, if you followed it during the ascent, the no fly time would always be infinite: The ascent is determined such that when you surface and stay at ambient pressure, you will just create the maximum amount of free gas that is barely allowed. So going to any altitude and lowering the ambient pressure further would release more free gas than allowed, no matter how long you waited. The only way out would be to produce fewer bubbles on the earlier ascent while still in the water, then you would have some reserves to go to altitude.
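
The limiting altitude formula can also be evaluated directly; again, the Bühlmann coefficients in the example call are illustrative placeholders rather than actual ZH-L16 values:

```python
import math

def max_altitude_m(a, b, gf, f_n2=0.79, p_s=1.013, h_s=8500.0):
    """Altitude (m) that is tolerable after an infinite surface interval,
    i.e. with the tissue saturated at sea level, per the formula above."""
    ratio = (f_n2 - a * gf / p_s) / (gf / b - gf + 1.0)
    if ratio <= 0.0:
        return float("inf")  # within this model, any altitude is fine
    return -h_s * math.log(ratio)
```

With these placeholder coefficients, smaller gradient factors yield lower tolerable altitudes, mirroring the divergences on the left of the plot.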

What are we supposed to conclude from this? One takeaway message is that waiting for the recommended 24 hours is not totally off, in particular if there is a chance that your very slow tissues have loaded a significant amount of nitrogen.

On the other hand, for realistic dives, 24 hours is likely on the very conservative side, at least from the perspective of decompression theory. From this perspective, it is a total mystery to me what kind of reasoning dive computers use to determine a no-fly time.

Apart from these model considerations, DCS symptoms often enough do not show up immediately after a dive but up to several hours later. And if you find yourself with DCS symptoms (even those that would have occurred irrespective of flying), your chances for immediate proper treatment are probably much higher if you are not confined to an aircraft above the middle of the ocean. So even from that perspective it makes sense to wait a bit longer to make sure you will not need a chamber in the next few hours.

PS: If you want to play around with the formulas, here is my Mathematica notebook.

\(\Delta p_{He}<0,\quad \Delta p_{N_2}>0,\quad \Delta p_{He} + \Delta p_{N_2} >0\)

This sounds reasonable: If this is true for the leading tissue, the decompression is ineffective as the total inert gas pressure goes up.
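
The check itself is trivial to state in code; here is a sketch of the per-tissue condition (names my own):

```python
def icd_warning(dp_he, dp_n2):
    """True if this tissue is off-gassing He, on-gassing N2, and the
    net inert gas pressure is nevertheless rising."""
    return dp_he < 0.0 and dp_n2 > 0.0 and dp_he + dp_n2 > 0.0
```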

So I implemented it in Subsurface. Turns out, this does not trigger where you expect it. For example for a 60m dive with 20min of bottom time on 18/45, forcing a gas change to air (i.e. high N2) at 45m does not trigger it: Yes, He in the tissue goes down and N2 goes up but He is so much faster that the net effect is still off-gassing.

But for the same dive, it triggers at a different, unexpected place: At the beginning of the ascent (at 58m to 54m). How does this come about? At the end of the bottom time (and also during the start of the ascent), the leading tissue is the second (with 8min of N2 half-time). After 20min bottom time, He is almost saturated but N2 still has a way to go. Thus, at the start of the ascent, pretty much off the bottom, He starts off-gassing while for N2, the depth difference is not really noticeable, so it is still on-gassing with positive net. So there actually is counter diffusion even without a gas change!

I guess, nobody would suggest that leaving the bottom at 60m would be dangerous. But this seems to be the only place where counter diffusion actually happens!

I tend to believe more and more that this whole ICD story is either not explained at all by a diffusion model (maybe because it is only relevant in the inner ear that does not follow this simple tissue+environment model) or it is totally bogus.

So I would like to hear from you, dear readers, do you have any experiences with ICD or could you suggest a dive profile where it should be relevant?

But first a bit of background to make sure we are all on the same page: ICD is the phenomenon that occurs when you switch from a mix with a lot of helium to a mix that contains less helium but more nitrogen. Then it can happen (depending on the tissue loading) that some tissue is off-gassing He but on-gassing N2, i.e. that the two inert gases move in opposite directions. And historically this is considered bad for your deco.

As the He atoms are much smaller than the N2 molecules, the former diffuse faster than the latter, so, in typical situations, the off-gassing in total should happen on a shorter timescale than the on-gassing, and thus the net effect will be that the tissue’s inert gas loading goes down. Anyway, this effect should be covered by the usual diffusion based deco algorithm (as this is exactly what it simulates) and no additional care would need to be taken as long as the diver stays within the boundaries of the deco algorithm.

There is, however, an argument due to Steve Burton that suggests taking the solubility of the gases (in typical tissue) into account to compute the absolute amount of inert gas in the body. Since that solubility is about 5 times higher for N2 than for He, he argues that in order to have a net unloading of the absolute amount of gas, one has to limit the increase in N2 percentage in the breathing gas to 1/5 of the decrease in He percentage. And at least one technical diving school of teaching seems to have adopted this criterion.

We are currently discussing whether to implement this check in Subsurface and warn the user if a gas switch violates this “rule of fifth”. I am not sure, though, if I buy into this line of thought. After all, everything we do in deco planning is based on partial pressures; we never consider the absolute amount of gas. It is the partial pressure that plays the role of the fugacity determining whether a particle moves across the diffusion border (in or out of the tissue, say) and the rate is proportional to the differences (this is the mantra of diffusion based decompression models). So the solubility, and with it the total amount of gas, should play no direct role. I wonder if this rule of fifth (as it seems to come out of a theoretical consideration with questionable initial assumptions) has received any empirical evaluation. Note for example that it forbids changing from Tx18/45 directly to EAN50: He goes down by 45 percentage points, of which a fifth is 9, while the N2 fraction increases from 37% to 50%, i.e. by 13 percentage points, more than 9. Yet I understand this switch is rather commonly done.
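
For concreteness, here is a sketch of the rule as a gas-switch check (my own formulation of Burton’s criterion, with gas fractions in percent):

```python
def violates_rule_of_fifth(he_old, n2_old, he_new, n2_new):
    """True if the increase in N2 fraction exceeds one fifth of the
    decrease in He fraction across the gas switch."""
    return (n2_new - n2_old) > (he_old - he_new) / 5.0

# Tx18/45 (18% O2, 45% He, 37% N2) directly to EAN50 (50% O2, 50% N2):
switch_forbidden = violates_rule_of_fifth(45.0, 37.0, 0.0, 50.0)
```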

There is another ICD theory, investigated by Doolette and Mitchell, that focusses on inner ear DCS. They argue that in the inner ear the common diffusion model assumptions are violated, since there is a relatively large amount of inner ear liquid that is not in direct (diffusive) contact with the ambient pressure (blood in practice) but only indirectly via inner ear tissue. So all off-gassing of the liquid goes first into the tissue. What happens in an ICD caused inner ear DCS accident is that while that tissue is already on-gassing N2 from the environment (=blood), it is still receiving the He that comes out of the liquid and therefore experiences an overall uptake in inert gas, which eventually causes DCS.

But I have not seen any hard “you should avoid doing the following” criterion derived from this line of thought, beyond a general “shouldn’t really be a problem when using a CCR, and otherwise be careful and always maximise the O2 fraction within the boundaries set by MOD”.

So, for the deeper trimix divers here: How do you decide which gas switches are ok, and do you want your dive planning software to warn you about those?

PS: A last warning: It makes no sense to think about ICD in a way like “stuff moves in different directions, so the in-moving particles clog the out-going ones”, as this little thought experiment shows: There are two stable nitrogen isotopes, N-14 and N-15 (the former much more common in nature), that are chemically indistinguishable (they can be told apart only, for example, in a mass spectrometer). Imagine you are breathing a nitrox mix with only N-14. Then you switch to the same mix but with all the N-14 replaced by N-15. There will be a counter diffusion of N-14 vs N-15. But of course, since both are chemically equivalent, there are absolutely zero physiological consequences, so the argument that the in-moving N-15 clogs up the out-moving N-14 cannot hold (and so it cannot hold in the case above either).

But we are lucky: There is a reference implementation in terms of a FORTRAN program. So, even though the VPM-B code in Subsurface is a complete rewrite (and solves many things algorithmically differently than the original code), we did a lot of testing to make sure that the plans we produce are identical to the plans computed using the FORTRAN program, even in places where we thought it makes little sense. One example: the FORTRAN code starts the ascent to the next stop when the ceiling depth equals the depth of the next stop, rather than starting a bit earlier and making sure never to violate the ceiling during the ascent (which is different, since the ceiling rises during the time it takes to get to the next stop). The latter is, at least to me, better motivated physically (“simply never violate the ceiling”) than basing the stop time on what happens at a different depth (namely the next stop depth), and we use the latter when computing Bühlmann schedules. But for VPM-B we thought that, given we do not really understand the physics, we should not modify the model based on physics.

I have talked in the past about the Bühlmann model not being well defined to make all implementations come up with identical plans. In a sense, we are in a better situation with VPM-B, since there is the FORTRAN program which (at least for us) **defines** the model. But the problem is: This definition works only for the situation that you can compute with the FORTRAN program: You specify the bottom part of the dive and then let the program work out the ascent. Strictly speaking there is no definition for real dives: Dives where the distinction between bottom part and deco is blurred. But this is exactly what you have if you were to implement it in a dive computer or, as for Subsurface: Implement it in a dive log to show a ceiling for a logged dive (after all, Subsurface is mainly a divelog). For real dives, you cannot tell the exact point where bottom time ends and deco begins.

Unfortunately, the model depends on this: As explained in the VPM-B derivation post, one parameter that goes into the computation is the total deco time \(t_D\) (clearly that ends when you reach the surface, but where does it start?) but there are other parameters that you are supposed to evaluate at the beginning of the ascent like \(p_{1st ceiling}\) and the initial gradient for the Boyle compensation. All this depends on the time you call the end of bottom time and thus the computed ceilings also depend on that.

For Subsurface, we decided to take the point of time with the deepest ceiling (which, strictly speaking, is a circular definition but in practice is irrelevant) as the end of bottom time and base the ceiling computation in logbook mode on that. But that, to some degree, is arbitrary. And even for dives that we planned using VPM-B (and thus agree with the schedules computed by the FORTRAN program), applying this logic yields a slightly different ceiling. So it can appear that a VPM-B planned dive violates the VPM-B ceiling. But this is only due to the fact that the model is not really defined outside the planning situation.

Or put differently: You need to make arbitrary assumptions (not based on the depth profile and the gas you are breathing) to come up with a VPM-B ceiling. Which, at least for me, doesn’t strengthen my belief in this model.

PS: I am planning to be at the Boot dive show in Düsseldorf on Friday January 26th. If you happen to be there as well, please let me know and we can shake hands!

This is motivated by the discussions about Ratio Deco and Deco On The Fly, for a discussion see this post by Rick. These approaches to computing deco are not really attempting to model anything that is supposed to go on in the diver’s body but simply to take a very pragmatic approach and interpolate deco schedules from known ones. If you like, they are (or attempt to be) mnemonics to learn deco plans by heart. The idea is simply that you memorise a reference plan and also how you have to modify deco if you modify the bottom part of your dive.

With Subsurface, you can now do exactly this: You can take any dive that you planned in the planner as reference dive and it will tell you how your total deco time depends on the bottom part. Take for example this trimix dive to 60m with 30min bottom time:

The runtime table says

The important line is the second:

Runtime: 118min, Stop times: + 3:05 /m + 3:59 /min

This tells you how to adjust the total deco time when changing the bottom part of the dive (the 30min segment at 60m): For each meter that you go deeper, you have to add about three minutes (for example, if you went to 63m for half an hour, you would have to add about 9 minutes, or if you only went to 58m you could shorten your deco by 6 minutes). Similarly, if you change the bottom time, you have to pay for every extra minute on the bottom with about four minutes of deco.

It does not tell you how to distribute this time over the different stops, but a good rule of thumb would be to do it roughly proportionally to the time you already spend at the stops. So, for example, if of the roughly 90min of total deco time you spend about half at 6m, you would also add half of the extra time at this last stop.

These numbers are supposed to be something like the partial derivatives of the total deco time with respect to depth and duration of the bottom time element (actually: the last manually entered part of the dive). You are supposed to multiply these derivatives by the actual variation of the bottom segment that you make.

How do we compute these numbers? When this calculation is enabled, Subsurface actually computes five extra plans: It first computes the original plan, but now with second resolution of the stops instead of minute resolution. Then it shortens the bottom segment by one minute and computes the plan (again to seconds), and then computes a plan for a bottom time extended by one minute. Then it computes two more plans, for one meter shallower and one meter deeper.

This gives us two time differences each for the depth and duration variations. What is displayed is the average of those. But a bit of experimentation shows that the difference is typically only a few seconds (and what are seconds for 90min of deco?). This is also a measure of how good this linear approximation is (as this is the quadratic correction): It gives me confidence to say this approximation isn’t too bad for a few minutes or a few meters.
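
The averaging of the two one-sided differences amounts to a central difference. A sketch of the idea, with `plan` standing in for a hypothetical planner that returns the total deco time in seconds:

```python
def deco_sensitivities(plan, depth_m, bottom_min):
    """Estimate d(deco time)/d(depth) and d(deco time)/d(bottom time) by
    averaging the forward and backward one-step differences."""
    t0 = plan(depth_m, bottom_min)
    d_depth = ((plan(depth_m + 1, bottom_min) - t0)
               + (t0 - plan(depth_m - 1, bottom_min))) / 2.0
    d_time = ((plan(depth_m, bottom_min + 1) - t0)
              + (t0 - plan(depth_m, bottom_min - 1))) / 2.0
    return d_depth, d_time
```

On a toy “planner” that is quadratic in depth, this recovers the exact derivative, since the quadratic error of the two one-sided differences cancels in the average.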

So having these two numbers also on the runtime table should give the diver much more confidence in how to react to unforeseen changes of plan (both shallower and deeper and shorter and longer). But of course, always remember: Plan the dive, dive the wreck, wreck the plan!


Let’s start by figuring out the inert gas ambient pressure at a given depth. We call \(p_d\) the diluent partial pressure, \(p_{O2}\) the partial pressure supplied from the oxygen cylinder (note that this is not the oxygen partial pressure since there is going to be some oxygen in the diluent as well), \(d\) is the depth, \(f_{O2}\) is the fraction of oxygen in the diluent while \(f_i\) is the fraction of the inert gas we are computing in the diluent.

There are two obvious equations: The ambient pressure is equal to the diluent pressure plus the oxygen pressure in the loop:

$$ p_d + p_{O2} = p_a = p_{surf} + dg\rho$$

Here, \(p_{surf}\) is the surface pressure, and \(\rho\) is the density of water and thus \(g\rho\) the conversion factor between depth and pressure. The second equation says that the total partial pressure of oxygen (from diluent and oxygen cylinder combined) has to equal the set point pressure \(p_s\):

$$f_{O2} p_d + p_{O2} = p_s.$$

We can subtract both equations to eliminate \(p_{O2}\) and find

$$ p_d = \frac{p_{surf} + g\rho d -p_s}{1-f_{O2}}.$$
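
Before continuing, a quick sanity check of this expression in Python (assuming the usual 0.1 bar per metre of water; names my own):

```python
def ccr_pressures(depth_m, setpoint_bar, f_o2_dil, p_surf=1.013, g_rho=0.1):
    """Diluent pressure and O2-cylinder-supplied pressure in the loop."""
    p_amb = p_surf + g_rho * depth_m
    p_d = (p_amb - setpoint_bar) / (1.0 - f_o2_dil)
    p_o2 = p_amb - p_d
    return p_d, p_o2
```

One can verify that both defining equations hold: the two pressures add up to the ambient pressure, and the total O2 partial pressure hits the set point.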

This diluent pressure then goes into the usual diffusion equation for the inert gas loading in the tissue \(p_t\) that we are ultimately interested in:

$$\dot p_t = -\gamma( p_t -f_i p_d).$$

\(\gamma\) is the usual diffusion constant that is related to the inverse of the tissue half-time by a factor of \(\ln(2)\). Finally, we want the depth to change linearly in time, so \(d=d_0+vt\), where my sign convention is that \(v\) is the descent velocity; make it negative for ascents.

So we have to solve a linear inhomogeneous ODE with constant coefficients which we could tackle by “varying the constants”, but of course we are lazy and let Mathematica do the job. After a bit of massaging, we find

$$p_t = \frac{f_i}{1-f_{O2}} \left( e^{-\gamma t} -1\right) \left( p_s + \frac{vg\rho}\gamma\right) + e^{-\gamma t}\left( p_0 - \frac{f_i}{1-f_{O2}} (p_{surf} + d_0g\rho)\right) + \frac{f_i}{1-f_{O2}}(p_{surf} + dg\rho)$$

where \(p_0\) is the initial tissue loading. So, here you go this is the CCR Schreiner equation. Or, if you want to follow the tradition to attaching names to simple solutions of ODEs, I allow you to call it the Helling equation.
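
A direct transcription into Python (again a sketch; variable names follow the derivation, with 0.1 bar per metre assumed):

```python
import math

def ccr_schreiner(p0, t_min, half_time_min, d0, v, setpoint, f_i, f_o2_dil,
                  p_surf=1.013, g_rho=0.1):
    """Tissue inert gas loading after t_min minutes at constant set point,
    with depth changing linearly as d = d0 + v*t."""
    gamma = math.log(2.0) / half_time_min
    A = f_i / (1.0 - f_o2_dil)  # the recurring factor f_i/(1-f_O2)
    d = d0 + v * t_min
    e = math.exp(-gamma * t_min)
    return (A * (e - 1.0) * (setpoint + v * g_rho / gamma)
            + e * (p0 - A * (p_surf + d0 * g_rho))
            + A * (p_surf + d * g_rho))
```

As a check, at constant depth (v=0) the tissue saturates towards \(f_i p_d\), the inert gas partial pressure in the loop.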

So here we are now. And since WordPress does not really like to be moved to a new domain (yes, they store absolute paths in their database, no clue why) and backing up the old site was also difficult given how old the WP version was (which excluded most plug-ins), I moved all posts essentially by cut&paste. I tried to update all the links as well. But please let me know if you find something outdated.

What I couldn’t move are the comments. Which is a pity.

So, once more, welcome to the new site! I hope you feel at home.

But let’s look at this a bit more quantitatively: You can parametrize the error of the ideal gas law by a “compressibility factor” Z so that it becomes

\(pV = ZnRT\)

and then tabulate Z, as for example done here. In the table, for a realistic temperature of 300K, you read off Z = 1.0326 at 200 bar but only 1.0074 at 150 bar. So, at 200 bar, you overestimate the amount of gas in your cylinder by about 3%, or put differently, the amount of gas is that of an ideal gas at only (1-3%)·200 bar ≈ 194 bar, while the amount calculated at 150 bar is almost correct.

What is 3% amongst friends, I hear you complain; that is likely less than the accuracy of your pressure gauge. That is of course correct, but let’s see how this relative error multiplies as soon as you take differences: Let’s say you want to compute your surface air consumption (SAC) for a dive in which you breathe your cylinder down from 200 bar to 150 bar. Wrongly assuming the ideal gas law to hold lets you compute the amount of gas as 50 bar times the volume of your cylinder. But as we saw, due to the compressibility factor, we should rather use 194 bar - 150 bar, which is only 44 bar. Compared to the 50 bar of the ideal gas, we now have a 12% error, something that I would already consider significant, in particular when I use this value to extrapolate the gas use to other dives.
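
Reproducing the arithmetic (the Z values are those quoted above for air at 300 K; rounding explains small differences to the numbers in the text):

```python
# The amount of gas at pressure p with compressibility Z equals that of an
# ideal gas at the "ideal-equivalent" pressure p/Z.
z_200, z_150 = 1.0326, 1.0074

used_ideal = 200.0 - 150.0                  # naive difference: 50 bar
used_real = 200.0 / z_200 - 150.0 / z_150   # about 193.7 - 148.9 = 44.8 bar

relative_error = (used_ideal - used_real) / used_ideal  # roughly 10-12%
```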

We see that suddenly the relative error multiplied by a factor of four and for the momentary gas use one needs to look at \(\partial Z/\partial p\) as well.

The upshot is that even at 200 bar one should better use a real gas replacement for the ideal gas law. Anybody who has been in an undergraduate physics class would now probably reach for the van der Waals equation, but as it turns out, for typical diving gases in the commonly used pressure ranges it gets 1-Z rather poorly. So for Subsurface we had to use a more ad hoc approximation. But that is the topic of a future post.
