Amplifier "class" distinctions are useless
There's a lot of confusion over the term "Class A" as applied to guitar amplifiers. The term has a precise technical definition, created in the days when engineers were concerned with linear signal reproduction. Class distinctions (A, AB, B) assume an operating condition that guitar amplifiers aren't expected to meet: that the amplifier's intended job is to minimize the distortion it generates.
The problem is, any guitar amp that pushes the output stage hard enough can't be Class A no matter how hot you run the output tubes. All it takes is for one side's grid to go far enough negative that it cuts off, and it is no longer - by definition - running in Class A. If you disallow power-stage clipping, then you're really splitting hairs: you're just looking for a name to give to your hot-biased output. If that's your intent, then "Class A" is as good as anything. Plus, it sounds good: Class A, Grade A... that means it's better than something that's Not A, right?
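To make that cutoff condition concrete, here's a toy Python sketch. The tube is idealized as conducting whenever its grid stays above some cutoff voltage; the bias, cutoff, and signal amplitudes are invented for illustration, not taken from any real amp.

```python
import math

def conduction_angle_deg(v_bias, v_cutoff, v_peak, n=10_000):
    """Degrees of a sine cycle during which an idealized tube conducts,
    i.e. its grid stays above cutoff. v_bias and v_cutoff are negative
    grid voltages; v_peak is the signal amplitude at the grid."""
    conducting = 0
    for i in range(n):
        v_grid = v_bias + v_peak * math.sin(2 * math.pi * i / n)
        if v_grid > v_cutoff:
            conducting += 1
    return 360.0 * conducting / n

# Made-up numbers: bias at -20 V, cutoff at -45 V.
# Small signal: the grid never reaches cutoff -> 360.0 deg, Class A.
print(conduction_angle_deg(-20.0, -45.0, 20.0))
# Push the stage harder: the grid dips below cutoff on the negative
# half-cycle and the tube stops conducting for part of it -> Class AB.
print(conduction_angle_deg(-20.0, -45.0, 40.0))
```

The point of the sketch is the second call: no matter how hot the (hypothetical) bias, some signal amplitude exists that drives the grid past cutoff, and at that moment the stage is no longer Class A by definition.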
On top of that, cathode-biased amps all exhibit bias shift: the harder you push them, the colder the tubes run. Which means they're all effectively running Class AB at normal playing volumes.
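Here's a rough sketch of that shift, assuming a crude 3/2-power tube law and made-up component values. The bypassed cathode resistor sees the *average* cathode current, so as the drive grows, the average current grows, the cathode voltage rises, and the effective grid bias goes colder.

```python
import math

def settled_bias(v_peak, r_k=250.0, k=2e-3, v_co=-45.0):
    """Steady-state cathode voltage for a sine drive of amplitude v_peak.
    Tube modeled (crudely) as i = k*(v_gk - v_co)**1.5 above cutoff,
    zero below. All values are illustrative, not from a real amp."""
    v_k = 10.0                      # initial guess for cathode voltage
    for _ in range(200):            # relax toward the fixed point
        n = 1000
        i_avg = 0.0
        for j in range(n):
            v_g = v_peak * math.sin(2 * math.pi * j / n)   # grid signal
            v_gk = v_g - v_k
            if v_gk > v_co:
                i_avg += k * (v_gk - v_co) ** 1.5
        i_avg /= n
        v_k += 0.1 * (i_avg * r_k - v_k)   # move toward I_avg * R_k
    return v_k

quiescent = settled_bias(0.0)
driven = settled_bias(40.0)
# The driven cathode voltage settles higher than the quiescent one,
# i.e. the effective grid bias has shifted colder under drive.
print(quiescent, driven)
```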
That's not to say that you might as well go with a fixed-bias Class AB amp. IMO, cathode-biased amps do sound better than their higher-powered brethren as the note decays. I'm guessing that part of it is the shifting operating point, and part of it is that you're staying well away from crossover distortion on the softer portion of the notes.
It seems that in the world of guitar amplification, class distinctions have come to be associated with the amount of quiescent bias on the output tubes. Amps that are biased hot are labelled as Class A.
Looking at a couple of textbooks from the '40s and '50s, it seems clear (to me at least) that the class distinction refers to the operating point as seen over the full expected range of the signal at the grids. Of course, engineers were interested in accurately reproducing waveforms back when those books were written. That's not really the intent of a guitar amplifier.
In an attempt to fit class distinctions to guitar amplifiers, modern amp builders - even the ones who recognize that "cathode biased" and "Class A" are not equivalent - ignore the behavior of an amplifier when the output stage is overdriven, and assert a class designation based only on the operating conditions that don't force the amp out of that class. In other words, an amplifier gets labelled Class A if, at full output, both tubes of a PP pair are conducting for the entire 360-degree cycle of the output waveform. This kind of reasoning is wrong, or at least confusing.
The way I look at it, an amplifier's class really depends upon the intended performance of the amplifier. If that intent is accurate reproduction, as in a hi-fi amplifier, then the class distinction is useful and unambiguous. But in a guitar amp, in the case where the expected use includes the distortion created by overdriving the output stage (thereby pushing the opposing tube of a PP pair into cutoff), I'd argue that Class A operation is always a fiction.
I guess you could argue that a "Class A" guitar amp is actually running in Class A if you don't push it hard enough to drive the output grid(s) into cutoff. But that seems a bit disingenuous, at least when the amp is used by a rock or blues guitarist. The problem with defining a guitar amp as Class A while ignoring saturation and cutoff is that there's no clear line between operation in the "normal" region (no grid cutoff) and the abnormal region. It's a slippery slope: why not then take a hot Class AB amp and call it Class A based upon the fact that most of the time (except perhaps for initial note attacks or big slammed power chords) it's in the region where both tubes of a PP pair are conducting for 360 degrees? (Oh, wait... that's exactly what guitar amp manufacturers are doing.)
This is not to say that the quiescent operating point of an amplifier is irrelevant. The quiescent bias affects the distortion signature. Part of that comes from the shift in operating point and its effect upon linearity in the conduction region, and part is the relative presence or absence of crossover distortion.
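That crossover point can be illustrated with a toy push-pull model (idealized 3/2-power tubes, invented numbers): measure the stage's small-signal gain right at the zero crossing, once with a hot bias and once with the bias sitting just above cutoff. With cold bias, neither tube has much transconductance at the crossing, so the output goes through a flat spot - the crossover notch.

```python
def pp_output(v_sig, v_bias, k=2e-3, v_co=-45.0):
    """Idealized push-pull pair: the two tubes see the signal in
    antiphase on top of the same bias; output is the difference of
    their plate currents. All numbers are illustrative."""
    def f(v_gk):
        return k * (v_gk - v_co) ** 1.5 if v_gk > v_co else 0.0
    return f(v_bias + v_sig) - f(v_bias - v_sig)

def gain_at_crossing(v_bias, dv=0.01):
    """Small-signal gain of the pair right at the zero crossing."""
    return (pp_output(dv, v_bias) - pp_output(-dv, v_bias)) / (2 * dv)

print(gain_at_crossing(-20.0))   # hot bias: healthy gain through zero
print(gain_at_crossing(-44.9))   # just above cutoff: gain collapses
```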
However, I really believe that operating class distinctions aren't useful for guitar amps. We're trying to take a taxonomy developed specifically for faithful linear signal reproduction and apply it to an intentionally nonlinear system. That taxonomy just doesn't hold up without applying qualifications that don't reflect the intended use of a guitar amplifier.
Guitar amplifier transient behavior is a lot more dynamic than that of a hi-fi amp. The second-order effects that hi-fi designers try to minimize are exactly the things that make a guitar amp sound interesting.
Cathode-biased amps are especially rich in these second-order effects precisely because the operating point changes with signal. Depending upon the magnitude of the present and recent signal and the RC time constant of the bias network, the tubes of a cathode-biased push-pull pair may be conducting for 360 degrees (nominally Class A), more than 180 but less than 360 degrees (Class AB), exactly 180 degrees (Class B) or even - under certain transient conditions - less than 180 degrees (Class C). Do we call this Variable-Class or Dynamic-Class behavior? Hey, it's no less confusing than arbitrarily calling it Class A - which has a very distinct technical definition - and really meaning that it's cathode biased.
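A sketch of that dynamic behavior, under toy assumptions (a crude 3/2-power tube law, invented component values): hit a cathode-biased stage with a sudden loud sine and watch the conduction angle of each successive cycle shrink as the bypass cap charges and the bias goes colder.

```python
import math

R_K, C = 250.0, 470e-6           # cathode resistor, bypass cap (made up)
K, V_CO = 2e-3, -45.0            # crude tube law parameters (made up)
FS, F = 100_000, 100.0           # sample rate, signal frequency

def tube_current(v_gk):
    """Idealized tube: i = K*(v_gk - V_CO)**1.5 above cutoff, else 0."""
    return K * (v_gk - V_CO) ** 1.5 if v_gk > V_CO else 0.0

def classify(angle):
    """Map a conduction angle in degrees onto the textbook classes."""
    if angle >= 359.5: return "A"   # tolerance for sampling error
    if angle > 180.0:  return "AB"
    if angle > 178.0:  return "B"
    return "C"

def burst_classes(v_peak=40.0, cycles=8):
    """Drive the stage with a sudden loud sine; report the conduction
    angle and class of each cycle as the bypass cap charges."""
    v_k = 30.0                       # start near the quiescent bias point
    spc = int(FS / F)                # samples per cycle
    out = []
    for _ in range(cycles):
        conducting = 0
        for s in range(spc):
            v_g = v_peak * math.sin(2 * math.pi * s / spc)
            i = tube_current(v_g - v_k)
            if i > 0:
                conducting += 1
            v_k += (i - v_k / R_K) / C / FS   # cap integrates net current
        angle = 360.0 * conducting / spc
        out.append((angle, classify(angle)))
    return out

for n, (angle, cls) in enumerate(burst_classes()):
    print(f"cycle {n}: {angle:5.1f} deg  Class {cls}")
```

In this toy run the conduction angle falls cycle by cycle toward 180 degrees as the cathode voltage climbs - the "Dynamic-Class" behavior described above, in miniature.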