Monday, November 28, 2022

In defense of the classic D&D saving throws

In a recent reddit thread from a self-described "trolly" poster asking for people to sell them on trying old-school D&D in addition to the more "rules-light" OSR games they've recently been enjoying, I offered to explain what I saw as the design principles behind the old-school saving throws*. As I started fleshing out my thoughts, I realized they would end up far too long for a reddit comment - and figured I might as well drop 'em here!

Fair warning, I'm probably not going to be saying a whole lot that's actually new in this post; most of my explanations are going to be familiar to anyone who has read a lot of blogs or other material focused on old-school D&D. There is nothing new under the sun, etc. (read Jon Peterson's The Elusive Shift if you want to see just how true this is in the RPG space). That said, I do hope some of what comes out of me putting these concepts in my own words ends up being helpful to you the reader (whether you be /u/vmoth or someone else who stumbled across this).

The question at hand: what are the design principles behind the old-school saving throws? I'm going to take a descriptive and/or revisionist approach to answering this question, rather than a historical approach. AKA I am not going to attempt to explain what Gygax, Arneson, et al were thinking when they wrote D&D. Rather, I'm going to look at the saving throw mechanic as it exists in old-school D&D and explain what I think the old-school idiosyncrasies actually add to the experience, as well as why I (as an amateur game designer - for all GMs are) choose to keep them in my game.

Throat-clearing and introduction behind us, let's dive in.

What are the classic D&D saving throws? 

Quick recap for anyone who's not familiar: a saving throw is a game mechanic whereby a character about to suffer some horrible fate has a chance to avoid said fate by rolling a die (usually a d20) and attempting to hit a specific target number. Oftentimes this target number is also itself referred to as a saving throw (or just a "save").

That's the basic idea behind saving throws. Old-school D&D saving throws then have a few defining characteristics that distinguish them from the saving throw mechanic as it appears in newer versions of D&D and in many newer OSR games and retroclones. My answer to our question of "what are the design principles behind the old-school saving throws?" then lies in a closer examination of these distinctive features that set the old-school saving throws apart from their successors.

In roughly descending order of how controversial these are to a "modern" RPG sensibility, the features that make old-school saves unique are that they:

  1. Reference specific dangers
  2. Are not tied to ability scores
  3. Use static targets 
  4. Vary between the classes

Each of these characteristics is, to a greater or lesser degree, often maligned by folks like /u/vmoth as "clunky" or outmoded in favor of the (assumed better) newer/modern/simpler way of doing saving throws. I disagree. I think the old-school saves are great! And while I'm not knocking other systems, I'd like to offer an explanation as to how each of these old-school D&D-specific saving throw characteristics acts to reinforce theme and to balance the game - and why removing any of them in an attempt to "streamline" the mechanics sacrifices something distinctive about the original game. 

My defense of the old-school saves doesn't mean I think they should never be tweaked, of course - but I do maintain that it's generally helpful to understand the reasoning behind a rule before dismissing it out of hand (see: my first two blog posts). 

Characteristic #1: old-school saves reference specific dangers

Perhaps the least liked aspect of the old-school D&D saving throw mechanic is that each of the saving throws is named after a specific danger characters can expect to encounter in their adventures. With some minor variations between editions, the old-school saving throws generally are broken out into the broad categories of Poison/Death, Wands, Paralysis/Petrification, Dragon Breath, and Spells.

A new player will first encounter these saving throw categories while creating their initial character. Think for a moment about the psychological effect of this. Somewhere between rolling up ability scores and choosing a character name, a new player is going to find out how likely their would-be hero or heroine is to survive encounters with 5 specific dangers, writing down those target numbers on their character sheet right alongside starting equipment and hit points. This is a fantastic way to set the scene for the type of game you're playing. Before even setting foot in a dungeon, we've established that old-school D&D takes place in a world in which one wrong turn might lead you to be dodging fiery dragon breath, avoiding dastardly rays from magic wands, or trying to survive the venomous fangs of a giant spider (crab spiders being a particular "favorite" of my group). 

"Crab Spider Attack" by Jacob Fleming (courtesy Gelatinous Cubism Press)

This scene-setting aspect of the old-school saves can also come in handy when running games not set in the proverbial "fantasy dungeon" setting. If I tell you to create a character for a one-shot and you're writing down targets for saving throws against Toxic Waste, Lasers, Panic, Grenades, and Gamma Rays, you already have a pretty good idea of what kind of world this is just from the dangers you have to save against.

One thing I'll note - just because the saving throws reference specific dangers does not mean those are the only dangers the saves can handle. One erroneous charge often leveled at old-school saves is that they are inflexible due to their specificity. Not so! There is ample precedent for using the classic saving throws for dangers not specifically falling into their nominal categories - including in the text of the Moldvay Basic D&D set itself, which references a falling ceiling block trap that requires a save vs petrify to avoid (B52). This seems very odd at first glance; dodging a falling ceiling block is not the same thing as a medusa's petrifying gaze. There is a logic to this, though...

The save vs petrify/paralysis, far from being only used for resisting petrification and paralysis, is generally used for any effect which requires a character to move quickly to avoid danger or which might restrict a character's free movement. This is not limited to Moldvay; AD&D 2e provides another example of this more liberal application of the petrify/paralysis saving throw by using it for resisting disarm attempts.

These (admittedly mostly unwritten/implied) rules for applying old-school saves to "off-label" uses exist for the other saving throw categories as well. If you wanted to rename the old-school save categories in a less evocative (but perhaps more descriptive) way that incorporates these implied uses a bit more explicitly, you might rewrite them as saves vs Instant Death/Poison/Generic Danger, Aimed Devices, Inhibition of Bodily Autonomy, Area of Effect, and Spells/Generic Magic. 

Rather than elaborate further here, I'll direct the interested reader to the best reference I'm aware of for this: LLBlumire's Which Saving Throw Should I Use?. I cannot recommend that post enough to anyone interested in learning more about how to apply the old-school saves. It's concise, too (unlike me)!

Characteristic #2: old-school saves are not tied to ability scores

The second idiosyncratic characteristic of old-school saves when compared to saving throw mechanics in similar and/or successor games is that they aren't primarily tied to a character's ability scores**. For players coming from newer editions of D&D (or rules-light OSR games like Into the Odd where saving against ability scores is one of the core mechanics), this seems quite odd. Why wouldn't you just use the modern convention of tying the saving throws directly to ability scores? Or perhaps the slightly older practice of splitting the saves into three general categories (e.g. Fortitude, Reflex, and Will) that are then each influenced by specific ability scores? Aren't both of those cleaner and more elegant than the clunky old-school saves that are disassociated from ability scores for no good reason?

Well, no. There are actually a few really good reasons to divorce saving throws from ability scores: namely, limiting the importance of ability scores and allowing flexibility for the in-fiction explanations of successful saving throws.

First, limiting ability score importance. Old-school games often feature character generation methods that generate ability scores with a significant element of randomness. There are many advantages to this approach (faster character creation, more varied characters, challenge of making the most of non-ideal characters, increased rarity of "optimized" characters making them more special, verisimilitude, etc.), but it also comes with downsides - namely, that some characters will have higher ability scores than others, which is unfair.

The degree to which this unfairness will be tolerated by players depends mainly on how invested they are expected to get in their characters (often correlated to campaign length) and on how important the ability scores are to the game mechanics. In a one-shot or very short campaign, players are a lot less concerned about having to play a character who's objectively worse at everything than the other characters... but if you're talking about a 20+ session campaign, people will start to get annoyed. Limiting the use of ability scores to providing a specific enumerated set of bonuses, but not having them define absolutely everything about a character's capabilities, is one of the core reasons why old-school D&D can get away with using highly random ability score generation methods.

This is also why 5e (a game where the core mechanic used for literally everything is the ability check) has point-buy or standard array as the default methods for generating ability scores, and it's one reason why some people describe Into the Odd (a game where ability scores are highly random but also have a huge amount of influence on almost every roll the players make) as great for one-shots or short campaigns, but not well-suited for long campaigns. But I digress...

Second, flexibility for in-fiction description. The other reason not to base saving throws on ability scores is that tying them to ability scores causes the saves to specify, in-fiction, how the character is avoiding the danger in question. Reflex saves are for dodging out of the way of things or diving for cover. Constitution saves are for "toughing it out". There's nothing inherently wrong with this of course - but consider the alternative. When a character passes a save vs poison, all I know is that the poison did not kill them. This could be because of his natural dwarven resistance to poison - it could be because the cleric's deity protected her - or it could be because the thief was just quick enough to suck the poison out of the wound before it entered the bloodstream. 

There's a significant amount of GM freedom that comes with this more "hands-off" approach to saving throws. For example, it is possible to have a villain wielding a spell that in-fiction always kills its target on a successful cast, yet still let the PCs go up against that foe without it being a guaranteed instant-kill, simply by allowing a saving throw (perhaps at a penalty). Any avoidance of the spell by PCs can be described as a truly superhuman/heroic exploit or a dazzling stroke of luck, without the mechanics insisting that the spell is resisted via a Will save - and thus that anyone strong-willed enough can simply survive it.

Characteristic #3: old-school saves use static targets

The next characteristic of old-school saves I'll discuss is that they use (nominally) static target numbers, rather than variable targets based on (e.g.) spellcaster level***, which is to say, a character's chance to save vs any particular danger does not change with the severity of said danger. This isn't quite as controversial, I don't think, so I won't spend as much time here. 

The main advantage of this approach is simplicity - fixed save targets are easier and quicker to use at the table than variable targets. You don't have to factor in a bunch of info as the GM when a character makes a saving throw - just decide which save to apply and go for it. There's no need to stat out every enemy spellcaster to determine their save DC. It's one less thing for players to deal with when rolling saves. One could certainly protest that it would be more realistic for various dangers to vary in how easily they can be evaded - and this isn't wrong... but when adding any bit of additional complexity to a system, it's always worth asking whether the benefit is worth the cost. 

Characteristic #4: old-school saves vary between the classes

Lastly - while the old-school saving throw targets are usually static with respect to the specific danger faced, they do vary between the classes. The main advantage of this approach is that it reinforces one of the central pillars of D&D, which is class-based play. In addition to varying abilities, hit points, and equipment training, different classes are just better at avoiding certain kinds of dangers. In a system that relies on differentiation between classes for a good deal of the implicit worldbuilding and archetype reinforcement, having one more knob to turn to add class differentiation is helpful. 

While this does add complexity, it's a front-loaded complexity. You write your saving throw targets down when you create a character or level up - they aren't going to change in the middle of an adventure, so it won't slow you down at the table.

Wrapping up...

That's probably the longest reply to a reddit comment I've ever written. Thanks, /u/vmoth, for the inspiration for my first blog post in a long while! I hope this was helpful.

I'll leave you all with this: I'm not knocking the saving throw systems in 5e, or WWN, or Into the Odd, or Swords and Wizardry, or any other RPG that does things differently than old-school D&D. All I'm saying is that the old-school saving throws should be viewed not as a primitive version of a mechanic that later evolved into a more fully realized, more streamlined descendant... but as a game mechanic designed with specific features to achieve specific goals.

May your saves vs death always succeed.

Midjourney AI's interpretation of "old-school D&D saving throw". I find this image oddly evocative, baffling though it is.

Further reading

I refrained from looking up similar blog posts while writing this, in an attempt to avoid any inadvertent rote repetition. All the same, here's some proof that there is nothing new under the sun:

*The astute reader will notice I also promised /u/vmoth I'd explain the design principles behind THAC0, thief skills, and x-in-6 mechanics... we'll see.

**Yes, Wisdom does grant a bonus to saves vs magic in old-school D&D, but it isn't the primary source of saving throw advancement and that's basically all Wisdom does for most characters so I'm inclined to still say the saves are not primarily tied to ability scores. It's the exception that proves the rule, if you will.

***There are some exceptions to this, of course - especially surrounding poison (which sometimes varies based on monster strength) and spells (which often apply a save penalty when AoE spells are used on single targets). It's worth asking if the old-school saving throws might actually be better off just committing to fully static targets, as Chris McDowall advises here.

Monday, January 24, 2022

Save or Die! - a single-roll "death and dismemberment" system for old-school D&D

If there's one thing about old-school D&D people loooove to hack and houserule, it's the Thief.

...

But if there's two things about old-school D&D people loooove to hack and houserule, it's the Thief... and death/injury rules.

By default in most versions of old-school D&D (including B/X), a character is dead at 0 HP. That's it. They're done. Generally speaking, this works just fine at the table - but GMs generally being the rules-tinkering folks we are, many of us like to tweak this mechanic for a variety of reasons. Some find death at 0 HP a bit too punishing for the games they want to run, and introduce things like negative HP, death saves (in the 5e sense), that sort of thing. Others find it boring and/or overly simplistic, and want permanent, character-altering injuries to exist as a mechanical possibility for reasons of novelty or verisimilitude.

I've always had a soft spot for so-called "death and dismemberment" house rules, and I think the incredibly evocative picture below perfectly illustrates why. The party is beaten down, clearly barely escaped, and in 2 out of 3 cases has been permanently scarred... but in the end, they were victorious - and it shows! Together they overcame the (clearly significant) dangers in front of them and accomplished their goal. There's something about this picture that speaks precisely to the kind of feeling I really want my players to have after a really tough adventure.

"A successful adventure" by Jason Rainville (courtesy LotFP

My first ever foray into hacking my own RPG mechanics was building a comprehensive injury system for my Lost Mines of Phandelver (5e) campaign - the first RPG campaign I ever ran. To a large extent that's how I discovered the OSR... while I was searching for and reading up on houserules for permanent injuries, I found that most of the blogs I was reading were written by people playing older versions of D&D in a style I quickly came to admire. Taking inspiration from across the internet, I eventually wrote my own death and injury ruleset for 5e and implemented it in my campaign. My first ever PC death occurred under these rules, in which my wife's dwarf paladin was brought to the brink of death by a particularly grievous blow from an evil wizard's fireball... but survived just long enough to cleave him in twain with her battleaxe before expiring. She still talks about that session.

In any case, if it's not apparent by now, I'm a big fan of injury rules for RPGs. I still think my 5e rules hold up really well, and I'd use 'em in a heartbeat... in a 5e campaign. Not so for a B/X campaign though - they're way too complicated for that. Still, I want the possibility of serious injury to exist as a mechanical middle ground between "perfectly fine" and "dead" - so I developed the system I call simply Save or Die!

Save or Die!

There's no denying that there are a lot of house-ruled death/injury rulesets out there for old-school D&D (and similar games). Rather than recapitulate the full list, I'll just link Lloyd Neill's "Death and Dismemberment" blog, which started out precisely as a series of deep-dives into the world of death/injury rules for RPGs, and includes a plethora of links to many of the most prominent ones. When it comes to death and injury rules, I certainly didn't come up with the concept, and I'll freely admit it's extremely well-trod ground.

As far as I know though, my particular take on these sorts of rules is unique (at least I haven't seen it before - if someone's done it first, let me know!). Save or Die! combines the straightforward "make a save vs death" approach often favored by people looking for a simple way to inject some uncertainty (and a little extra PC durability) into the dying process with the "roll on a table of permanent injuries" approach favored by many others, while avoiding some of the pitfalls of each: for the former, the fact that a high level Dwarf can save vs death on a 2 and thus becomes functionally immortal; for the latter, the tendency to build overcomplicated, multi-table, token-tracking minigames (man, if there's one place people just randomly decide they don't care about "rules light" any more, it's death and injury rules).

My system is thus: when you reach 0 HP (or take damage while already at 0 HP), make a save vs death. If you fail, you die. If you succeed, take the number you rolled on the d20 to pass the save and look it up on the permanent injury table below to find out what effect (usually a permanent injury) you suffered while avoiding death. And... that's it, basically. 

Save or Die! injury table

In my view, the major innovation here is linking the save vs death result directly to the injury table. Because lower level characters only succeed on saves vs death on certain (high) numbers, many of the results on the table are locked out for them. Thus your lvl 1 fighter isn't going to lose an arm his first time down in the dungeon; he's just going to die or suffer some minor-ish injury that reduces an ability score by a few points. I quite like this. Very few lvl 1 characters are important enough for their player to have any compunctions about dropping them after a significant injury, so in a system where that can happen to a lvl 1 character, the character might as well have just died. Placing the less severe injuries at the top of the table and the more severe injuries (or weirder results) further down, however, ensures that a character who loses an eye or an arm is, by virtue of their higher level, already interesting enough for that to be a memorable story - and potentially a tough decision re: whether to retire or soldier on. This system also mitigates the "high level Dwarf never dies" issue: while a high level Dwarf may almost always make his death save, he's going to be taking attribute damage, losing limbs, etc. every single time he does so. There's a significant enough cost to dropping to 0 HP that only an extreme risk-taker would court it willingly.
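To make the moving parts concrete, here's a minimal sketch of the procedure in Python. The injury entries are hypothetical placeholders (the real table is the image above); the point is just how the save target gates which rows a given character can actually reach.

```python
import random

# Hypothetical placeholder entries - the actual table is the image above.
# Thresholds are the minimum d20 result for each band: the high results
# (the only ones a low-level character can pass a save on) carry the milder
# injuries, while the low results carry the nastier ones.
INJURY_TABLE = [
    (17, "minor injury: lose 1d8 points of an ability score"),
    (12, "nasty wound: lose 1d8 CON and spend weeks recovering"),
    (7,  "lose an eye, an ear, or a hand"),
    (2,  "lose a limb, or something weirder"),
]

def save_or_die(save_vs_death: int) -> str:
    """At 0 HP: fail the save vs death and die, or pass it and suffer the
    injury keyed to the number actually rolled."""
    roll = random.randint(1, 20)
    if roll < save_vs_death:
        return f"rolled {roll} vs save {save_vs_death}: dead"
    for threshold, injury in INJURY_TABLE:
        if roll >= threshold:
            return f"rolled {roll} vs save {save_vs_death}: survived - {injury}"

print(save_or_die(save_vs_death=12))  # a fresh character with a poor save
print(save_or_die(save_vs_death=6))   # a veteran with a much better save
```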

A few extra rules

The above single-paragraph rule works just fine on its own, but for those who are a little more crunch-tolerant, here are a few extra wrinkles I add to the mechanic at my own table:

  • Recovering attribute loss: a character who loses 1d8 points in an attribute from one of the results at the top of the table can spend 2 weeks recuperating to regain 1d4 points in the damaged attribute (yes, potentially making it higher than it originally was). 
    • This changes the spread of attribute loss on injury to 0-7 (with a small chance of gaining 1-3 points; since the 1d8 lost averages 4.5 and the 1d4 regained averages 2.5, the overall average is a loss of 2 points). I do this simply because I like the idea of characters' attributes changing somewhat over the course of their careers, both up and down. It also imposes a slight time tax for an injured character to fully recover (thus encouraging players to have multiple characters in the "stable").
    • If I wasn't playing with this recovery rule, I'd probably reduce the attribute loss on those particular results from 1d8 to 1d4 points.
  • Restoring loss of limb: loss of limb can be recovered with life-restoring magic such as Raise Dead, but doing so incurs a permanent loss of CON. I use the "OSE: Advanced Fantasy" chance of raising the dead table - roll on the table, lose a point of CON after each roll, continue rolling until a success is achieved. 
    • I like this because it imposes a limit to powerful recovery magic. It stands to reason that in a world where people can be magically brought back to life, permanent injuries like missing limbs can also be magically healed - but I want permanent injury (and death) to still mean something. For that reason, the same loss of CON that applies to healing permanent injuries also applies to raising the dead at my table. At the same time, this isn't quite as harsh as (for example) the AD&D rules, where there's a chance for the resurrection to simply fail. With these rules it can't fail - it just always results in a loss of at least 1 point of CON.
  • Massive damage: massive damage can still kill a character outright with no chance for a save. If the excess damage after reducing a character to 0 HP exceeds the character's max HP, the character dies instantly with no save vs death allowed.
    • I cribbed this directly from 5e. It's a good rule.
Aaand... that's it! This is my personal contribution to the rich world of death/injury AKA "death and dismemberment" house rules for old-school D&D. Thanks for reading!

Saturday, January 15, 2022

Variations on the Usage Die

One of the mechanical darlings of the OSR is the usage die, invented by David Black for his rules-light take on fantasy roleplaying The Black Hack. It's a simple and elegant mechanic - rather than tracking the exact number of rations, arrows, or any other vaguely consumable resource remaining, these resources are assigned a "usage die" which is then rolled whenever the resource is used (e.g. when resting for rations, or after combat for arrows). On a roll of 1-2, the usage die moves down one step (d20->d12->d10->d8->d6->d4). On any other roll, the usage die remains as-is. When the d4 rolls a 1-2, the usage die is consumed and the resource is depleted.
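As a concrete sketch of the procedure (my own minimal rendering of the rule described above, not text from The Black Hack):

```python
import random

CHAIN = [4, 6, 8, 10, 12, 20]  # the usage die chain, smallest to largest

def roll_usage_die(current: int) -> int | None:
    """Roll the current usage die; return the (possibly stepped-down) die
    size, or None once the resource is depleted."""
    if random.randint(1, current) <= 2:       # a 1-2 steps the die down
        i = CHAIN.index(current) - 1
        return CHAIN[i] if i >= 0 else None   # stepping down off the d4 = depleted
    return current                            # any other roll: unchanged

# e.g. a d8 rations die, rolled after each night's rest until it runs dry
die, nights = 8, 0
while die is not None:
    die = roll_usage_die(die)
    nights += 1
print(f"the rations lasted {nights} nights")
```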

Opinions on the original Black Hack version of the mechanic vary - some people enjoy the simplicity and insert the usage die as-written into their games at every opportunity, while others find the randomization of tangible resources such as rations and arrows to be a bit overly abstract compared to just counting them. Regardless of their opinions about the specific Black Hack implementation though, most people I've seen agree that it's an elegant mechanic with some serious potential. Personally, I don't use it for rations and arrows, but I think it works very well for resources that may be inherently a bit fuzzy and hard to precisely define - such as magical power, sanity, or fame.

In this post, I'd like to dig into the usage die mechanic a bit, then present a few variants that change the "feel" of the mechanic, allowing for some different use cases. I'll also compare the expected value (aka average number of uses) resulting from each option, and give a brief overview of the method I used to calculate it (discrete Markov chain analysis)*.

These are dice

OG (Black Hack) usage die: down 1 step on 1-X 

The original implementation of the usage die (henceforth Ud for short) uses a die chain that moves in only one direction - down, with a 2-in-(die type) chance of moving down a step each time the usage die is rolled. This produces a nice spread of average total uses ranging from 2 (for d4) to 30 (for d20), allowing for the modeling of resources with a wide range of "charges," but not so wide as to make the Ud irrelevant at the table.

One of the simplest tweaks that can be made to the usage die is changing the target number that results in a step down. Dropping it to 1 rather than 1-2 doubles the expected number of uses to 4-60, while increasing it to 1-3 drops the expected number of uses to 1.3-20.

Expected uses for variations of the OG usage die
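If you'd like to check these numbers (or try other trigger ranges), they follow from the fact that a die with N sides that steps down on a roll of 1 to t gets rolled N/t times on average before stepping down - so the expected total is just a running sum along the chain. A quick sketch:

```python
def expected_uses(step_down_on: int, chain=(4, 6, 8, 10, 12, 20)) -> dict:
    """Expected total rolls before depletion for a one-way usage die that
    steps down whenever it rolls step_down_on or lower."""
    out, running = {}, 0.0
    for sides in chain:
        running += sides / step_down_on   # average rolls spent at this die size
        out[f"d{sides}"] = round(running, 1)
    return out

print(expected_uses(2))  # OG rule (1-2 steps down): d4 -> 2.0 ... d20 -> 30.0
print(expected_uses(1))  # step down on a 1 only:    d4 -> 4.0 ... d20 -> 60.0
print(expected_uses(3))  # step down on 1-3:         d4 -> 1.3 ... d20 -> 20.0
```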

The OG usage die mechanic has a number of distinctive features that make it particularly well suited for modeling the gradual depletion of an adventuring resource:

  • It moves in only one direction: the inexorable march down the chain means that even if you get a lucky streak resources will eventually run out, necessitating a return to safety for rest and resupply - this puts a built-in "clock" on any expedition, which is often preferable for a dungeon-crawling or "expedition" focused game.
  • Total depletion is predictable: you will have some warning before you completely run out of a Ud resource, because it only moves down a single step at a time - this helps to limit frustration by allowing the players to feel like they have a measure of control and that their decisions matter, despite the randomness inherent in the mechanic. 
  • Depletion events are unpredictable: while a player can safely count on having some advance warning before completely running out of a Ud resource, any given roll is still an unknown - this creates some dramatic tension each time the die is rolled, which is generally pleasing to human brains (there's a reason gambling is so addictive to many people and why we RPG players love tossing our shiny math rocks).
  • Accelerating depletion rate: as your resources deplete, the chance of moving down on the next roll increases - this helps to build tension and fosters somewhat of a push-your-luck feel, as the players start out feeling relatively well-supplied but will start feeling more and more at risk as they continue to adventure.

Of the above features of the OG usage die, the unpredictability of depletion events and the accelerating depletion rate are fairly baked-in - the whole beauty of the mechanic is that it injects drama/tension into the depletion of resources while not requiring the players or GM to do any complicated math at the table or track anything other than the current die type.

I'd argue any tweak that changes either the unpredictability of depletion events or the steadily accelerating depletion rate would fundamentally alter the mechanic such that it's not really a variation on the usage die any more, but something else entirely. That's not necessarily a bad thing, but it does provide a convenient dividing line for the purposes of this post. For now, I'll be sticking to the basic "die chain w/ consistent target number ranges" structure for the mechanic.

Unidirectionality and predictable depletion, however... those are relatively easy to change while preserving the basic structure of the mechanic - and changing either (or both) of them can greatly alter how the mechanic feels at the table. Let's dive in.

Bidirectional usage die: down one step on 1-X, up one step on highest value(s)  

The first variation I'll explore is the bidirectional usage die - that is, a Ud that steps down on low values, but steps up on high values. This version of the Ud models resources that usually tend towards depletion, but occasionally go the other direction.

We need to be somewhat careful here. Making the chance of a step down equal to the chance of a step up fairly quickly results in a situation where, barring a series of particularly unlucky rolls, the number of expected uses climbs very high, very fast. This is undesirable for a few reasons - not only is it extremely inconsistent (and thus difficult to use to create any sort of predictable gameplay experience at the table), but the high average number of uses means that unless the Ud is being rolled almost constantly, it will almost never be depleted in a typical adventure (and thus would be somewhat pointless as a resource tracking mechanic).

All the same, the idea of a usage die that steps both up and down has promise, especially if the step up frequency is limited. Infrequent but very good events feel really compelling at the table from a psychological perspective - there's a reason critical hit mechanics are so often hacked into old school D&D despite being (mostly) absent from the original rulesets. People also tend to disproportionately remember these unlikely good events; a gaming group will often talk for years about the time their character dealt 50 damage to one-shot the boss due to exploding damage dice in Savage Worlds, or saved vs the necromancer's death ray 3 rounds in a row before putting him down.

My preferred tuning of this variant is probably "3 down, 1 up" - that is, on a 1-3 the Ud steps down and on its highest value the Ud steps up. This results in a spread of average total uses of 2.5 (for d4) to 28.8 (for d20), which is surprisingly close to the 2-30 spread of the OG usage die. Note that despite the averages being similar, this is still a good deal swingier than the OG usage die - particularly if you're starting at a d4, where you'll either deplete entirely or vault up to a considerably higher expected number of uses on the first roll. I probably wouldn't use this variant of the mechanic for a resource that could start at d4 - it works better starting around d6 or d8. The average number of uses for a few versions of the bidirectional Ud are presented below.

Expected uses for various bidirectional usage die variations
*(3 down 2 up is 2 down 1 up on the d4)
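If you want to reproduce these figures (or try other tunings), a quick Monte Carlo simulation is the easiest check. The sketch below assumes the die simply stays put when it rolls its "step up" value while already at a d20; under that assumption the "3 down, 1 up" tuning lands right around the 2.5 (d4) and 28.8 (d20) averages quoted above.

```python
import random

CHAIN = [4, 6, 8, 10, 12, 20]

def average_uses(start: int, down_on: int = 3, up_faces: int = 1,
                 trials: int = 100_000) -> float:
    """Monte Carlo estimate of expected rolls before depletion for a
    bidirectional usage die: step down on 1..down_on, step up on the
    top up_faces face(s), capped at d20."""
    total_rolls = 0
    for _ in range(trials):
        i = CHAIN.index(start)
        while True:
            total_rolls += 1
            sides = CHAIN[i]
            roll = random.randint(1, sides)
            if roll <= down_on:
                i -= 1
                if i < 0:                    # stepped down off the d4: depleted
                    break
            elif roll > sides - up_faces and i < len(CHAIN) - 1:
                i += 1                       # stepped up one die size
    return total_rolls / trials

for start in CHAIN:
    print(f"d{start}: ~{average_uses(start):.1f} expected uses")
```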

In my opinion the bidirectional Ud works really well for modeling things like reserves of magical power (as a replacement for spell slots or mana) - particularly for something like a wild mage. It could also work well with a sanity mechanic, or (in a reversal of the resource depletion paradigm) for the severity of a wound or disease that gradually heals over time (but with the chance of suddenly worsening).

Jumpy usage die: down multiple steps on 1, down one step on 2-X

The other major feature of the usage die we can mess with is its (relative) predictability. By default, you're not in danger of fully depleting the Ud until it has been reduced to a d4. This is easy enough to change; simply rule that rolling the lowest value moves the die down multiple steps - either 2 steps (for a little more unpredictability) or straight to full depletion (for a lot more unpredictability). 

This change has a few effects. The most obvious is that the number of expected "charges" is significantly reduced. This can be helpful if you're trying to model something with few charges, but still want to make use of a wide range of the dice chain. The other effect is that, from a psychological perspective, the Ud is less "safe" - it is more prone to run out unexpectedly. This heightens the feeling of tension (or for a higher-stakes resource, dread) promoted by the usage die. Obviously, this will be more appropriate for some applications than others - but it's a nifty tool to have in the toolbox. The expected number of uses for both of the options mentioned above is shown below.

Jumpy usage die expected number of uses
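If you want to put rough numbers on this variant yourself, the same running-expected-value trick as before still works; you just have to account for where a roll of 1 sends you. A sketch, under the assumption that a 1 means a two-step drop (or total depletion) and a 2 means the usual single step:

```python
def expected_uses_jumpy(chain=(4, 6, 8, 10, 12, 20), crash_to_zero=False):
    """Expected rolls before depletion when a 1 drops the die two steps
    (or depletes it outright if crash_to_zero) and a 2 drops it one step."""
    expected = []  # expected[i] = expected uses starting from chain[i]
    for i, sides in enumerate(chain):
        one_step = expected[i - 1] if i >= 1 else 0.0  # where a roll of 2 lands you
        two_step = 0.0 if crash_to_zero else (expected[i - 2] if i >= 2 else 0.0)
        # "something happens" is still a 2-in-N event, split evenly between the
        # two destinations: sides/2 rolls spent here, plus the average of the
        # two destinations' expectations
        expected.append(sides / 2 + (one_step + two_step) / 2)
    return {f"d{s}": round(e, 1) for s, e in zip(chain, expected)}

print(expected_uses_jumpy())                    # a 1 drops two steps
print(expected_uses_jumpy(crash_to_zero=True))  # a 1 depletes immediately
```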

This variant of the Ud is especially helpful for modeling situations where there's a steady decay, but always with a chance of everything immediately going off the rails. I'd use it sparingly, but it serves well in cases where you don't want the players to feel completely secure at any point. Example applications include magic spells that are unraveling in an unpredictably chaotic manner, or perhaps the attitude of a king whose appetite for extended conversation with the party is rapidly running out. 

Wrapping up

These variations can be combined, of course - one can imagine (for example) a usage die that depletes 2 steps on a 1, 1 step on a 2, and increases 1 step on the highest value. Below is a summary table of the various options discussed in this post, a few new combos, and (bonus) 2 versions of a reverse Ud - aka a usage die that steps up instead of down, and "finishes" after d20, creating a decelerating (rather than accelerating) depletion rate.
Summary of all discussed usage die variations + a few
*(3 down 2 up is 2 down 1 up on the d4)

I said at the start of the post that I'd present a brief overview of the analysis method I used to derive the expected number of uses for each of these. This post has already run fairly long, so I'll keep it brief. Each of the usage die rulesets I examine in this post can be represented mathematically as discrete Markov chains with one absorbing state. That is to say - they are state machines in which the probability of the next state depends only on the current state. As it turns out, with a little matrix math (using your preferred programming language - mine is Excel :P), it's really easy to calculate all sorts of information about these state machines - including the expected number of cycles before "absorption" (aka depletion). 
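For anyone who'd rather skip the spreadsheet, here's roughly what that matrix math looks like as a Python (numpy) sketch for the OG usage die; the same structure handles any of the variants above - you just fill in Q with different transition probabilities.

```python
import numpy as np

# Transient states of the OG usage die, smallest to largest
# (the absorbing "depleted" state is left implicit).
dice = [4, 6, 8, 10, 12, 20]
n = len(dice)

# Q[i, j] = probability of moving from transient state i to transient state j.
Q = np.zeros((n, n))
for i, sides in enumerate(dice):
    p_down = 2 / sides          # chance of rolling a 1-2 and stepping down
    Q[i, i] = 1 - p_down        # otherwise the die stays where it is
    if i > 0:
        Q[i, i - 1] = p_down    # step down to the next smaller die
    # from the d4 the step-down leads to the absorbing state, so it
    # simply doesn't appear in Q

# Fundamental matrix (I - Q)^-1: its row sums are the expected number of
# rolls made before absorption (i.e. depletion), starting from each state.
fundamental = np.linalg.inv(np.eye(n) - Q)
for sides, uses in zip(dice, fundamental.sum(axis=1)):
    print(f"d{sides}: {uses:.1f} expected uses")
# -> d4: 2.0, d6: 5.0, d8: 9.0, d10: 14.0, d12: 20.0, d20: 30.0
```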

For a more in-depth explanation of this analysis method, see my past post walking step-by-step through a Markov chain analysis of the "clock puzzle" from Xanadu. If anyone really wants a walkthrough of the method as applied to a usage die, though, feel free to leave a comment. It's hard to overstate how useful Markov chains are for analyzing RPG mechanics - I expect to be returning to them not infrequently. 

Well - that's it! I hope this sparks some ideas regarding new ways to implement the usage die in your games! David Black did the OSR a major favor in popularizing the mechanic, and I think with some tweaking it becomes an extremely versatile mechanic appropriate for all kinds of applications. Please comment if you've got other ideas for innovative modifications to the Ud mechanic! 

*You don't really need Markov chain analysis to calculate expected uses w/ the "one-way" versions of the usage die, but it becomes much more difficult to do so ad-hoc as the mechanics get more complicated - the Markov chain analysis on the other hand makes this very, very easy with any arbitrary set of usage die rules as long as they actually do form a Markov chain.