Laboratory Values and Psychiatric Symptoms: What to Measure, What Not to Measure, and What to Do With The Results
Not everything that can be counted counts, and not everything that counts can be counted.
▢ Know which psychotropic medications do and do not have established therapeutic plasma drug level ranges
▢ Recognize which oral medications show differences in drug absorption, bioavailability, and plasma drug levels when taken with versus without food
▢ Understand evidence-based reasons for when to order a plasma drug level
▢ Know when and how to order and interpret plasma drug levels relative to drug half-life and to draws obtained at trough, random, or other times based on convention
Psychiatrists probably are not so unusual among health care professionals in their desire to measure things. But compared with practitioners in most other areas of medicine, they may be the newest entrants to the world of the quantitative versus the qualitative. Measurement-based care (MBC) and laboratory testing have become increasingly prominent focal points of clinical practice. Perhaps this comes in response to decades (if not centuries) of an often impressionistic and sometimes sluggishly qualitative way of recording clinical observations; perhaps it is a backlash against a psychoanalytic heritage that for too long eschewed quantitative measures and formal outcome tracking; it also reflects the promulgation of research tools (semi-structured interviews, questionnaires, rating scales) into nonresearch clinical settings; and, no doubt, MBC has arisen in response to a health care system that has come to link service reimbursement with quantifiable parameters.
“Therapeutic drug monitoring” (TDM) is a term used to describe the practice of regularly measuring serum drug levels to ensure that concentrations are sufficient for therapeutic efficacy. It assumes that serum levels correlate with clinical effects.
In this chapter, we will focus on the rationale and relevance (or irrelevance) of laboratory-based measures, end-organ monitoring for drug safety and efficacy, and the role of quantitative symptom tracking and MBC as an adaptation and outgrowth from the research world of RCTs. Let us once again begin with a clinical example (Clinical Vignette 7.1).
Mark is a 34-year-old man with bipolar I disorder that has stably been in remission for over three years. His psychiatrist, Dr. Abbott, prescribes him lithium carbonate 600 mg/day and divalproex sodium 1000 mg/day. His most recent quarterly serum drug levels were [Li+] = 0.48 mEq/L and [valproic acid] = 42 μg/mL. His serum creatinine at that time was 0.89 mg/dL and his complete blood count (CBC), hepatic enzymes, and thyroid-stimulating hormone (TSH) all were well within their normal ranges. Should his dosages be changed?
Mark has been stable on a fairly simple drug regimen of modestly dosed, diagnosis-appropriate pharmacotherapies for a meaningfully long period of time. If his symptoms have been well controlled for several years, his regimen and dosages have been constant, and his biochemistry markers have been unremarkable, does he really need lab work every three months? The answer largely depends on whom we ask. Some practice guidelines do, for example, quite conservatively advocate indefinite monitoring of serum lithium levels as frequently as every three months. Our perspective is that such blanket recommendations must be interpreted within a particular clinical context. Obvious differences exist among patients who are psychiatrically more or less symptomatic, younger versus older adults, patients with renal disease, those taking thiazide diuretics or ACE inhibitors, those with erratic adherence, those with versus without GI or neurological adverse effects, and those with poorly controlled symptoms. Laboratory monitoring should occur for an intended purpose based on the patient’s clinical status.
In the absence of unstable clinical signs or symptoms, there is probably room for more latitude and discretion in deciding on the most appropriate frequency and relevance of Mark’s laboratory monitoring. Importantly, from a safety standpoint, we should probably be more concerned with the continued normalcy of Mark’s renal and thyroid function than his lithium level (absent symptom or dosing changes), and more with his hepatic function than his valproic acid level. Semi-annual assessment of end-organ laboratory monitoring in this case is likely more than adequate.
Mark’s serum lithium and valproate levels are of secondary importance to his clinical status. One could alter his doses to try to make his numbers conform more closely to the therapeutic ranges described in Table 7.1, but doing that would ignore the established evidence of the clinical stability afforded by his existing regimen. Furthermore, unlike an idealized hypothetical monotherapy patient, Mark takes two medications that may well be clinically synergistic. There is no evidence base from which to presume the necessity of optimized dosing (or serum drug levels) from such a combination therapy regimen. Once again, the patient’s own sustained lack of signs or symptoms is irrefutable evidence of his clinical stability. We only impose further evidence-based strategies that could improve his condition when his condition is in need of improvement.
| Medication | Empirical findings regarding serum levels | Relevance of serum drug levels | Caveats |
|---|---|---|---|
| Carbamazepine | 4–12 μg/mL in epilepsy | No demonstrated correlation between serum [carbamazepine] and clinical response in acute or prophylactic treatment of bipolar disorder (Simhandl et al., 1993; Vasudev et al., 2000) | Autoinduction of carbamazepine often leads to lowered levels several weeks after initiation |
| Divalproex (valproate) | Optimal antimanic efficacy when [valproate] >71 μg/mL; highest effect size in mania for [valproate] >94 μg/mL (Allen et al., 2006). No data in bipolar depression. | | Free (unbound) [valproate] should be measured (usual range = 6–22 μg/mL) when plasma protein levels may be low (e.g., as in malabsorption or malnutrition/anorexia, or significant hepatic or renal disease) |
| Lamotrigine | 2.5–15.0 μg/mL in epilepsy | There is no established therapeutic serum reference range for purposes other than seizure prevention. One small (n = 34) study combining unipolar and bipolar TRD found significantly greater response when serum [lamotrigine] >2.3 μg/mL (Kagawa et al., 2014) | Estrogen-containing oral contraceptives, as well as pregnancy, can reduce serum lamotrigine levels by up to 50% |
| Lithium | “Therapeutic” levels in acute mania tend to be higher (closer to 1.0 mEq/L) than during maintenance therapy | Optimum serum lithium levels produce adequate therapeutic benefit at the lowest possible dose, in order to minimize end-organ adverse effects. Also, “established” therapeutic ranges are themselves subject to moderators (e.g., absence of chronicity (Gelenberg et al., 1989)) and mediators (e.g., avoiding abrupt drops in Li+ levels (Perlis et al., 2002)) | |
Abbreviations: TRD = treatment-resistant depression
In general, no medical test should be ordered without a reason. In the case of psychotropic drug levels, we can think of several, as summarized in Box 7.1.
Medication adherence: when pharmacotherapy nonadherence is suspected, an undetectable serum drug level lends credence to that suspicion.
Frank toxicity states: when there are clinical signs suggestive of possible toxicity (coarse tremor, cognitive dulling or sedation, ataxia), it can be useful to determine whether a serum drug level is at or above the upper end of the laboratory reference range. (Note: some drugs, such as lithium, might cause tremor or gastrointestinal upset without such signs necessarily indicating excessive dosing or supratherapeutic serum levels.)
In a still-symptomatic patient, when an established therapeutic reference range for a particular drug does exist, one might check a serum level to gauge if there is room to increase the dose safely toward the upper end of the reference range.
When a pharmacokinetic interaction may be pertinent, measuring a level (especially before and after exposure to a potential inducing or inhibiting co-drug) may help to determine the magnitude of the interaction.
In treatment-resistant cases where dosing can be guided by plasma drug levels to the point of either tolerability or futility.
As a rule of thumb, one might generally be interested in the extreme values of a drug’s laboratory reference range, with especially low levels signaling either poor adherence or ultrarapid metabolism, and unduly high levels suggesting toxicity/overdose, poor metabolism (causing drug build-up), or else the perhaps inadvertent capture of a peak (Cmax) level rather than a trough or steady-state level, as when a specimen is obtained too soon after a last dose. Though clinicians often speak interchangeably about “plasma” and “serum” drug levels, at least in the case of tricyclic antidepressants, plasma blood levels may be significantly higher than corresponding serum levels (Coccaro et al., 1987). Certain specific clinical conditions also are known to alter drug metabolism – for example, as noted in Chapter 12, rising estrogen levels during pregnancy can induce CYP450 enzymes, potentially resulting in reduced serum levels of SSRIs or other P450 substrates; however, routinely measuring serum drug levels for the sake of documenting this event during pregnancy may simply become an academic exercise that merely affirms clinical suspicion.
In TDM, pay attention mainly to the extreme ends of a laboratory reference range; minimal-to-absent levels reflect nonadherence or ultra-rapid metabolism, while levels near or above the upper end of the reference range are consistent with possible toxicity states (depending on a drug’s therapeutic index).
Serum: the cell-free fluid that remains after blood clots; it lacks clotting factors (e.g., fibrinogen) as well as red and white blood cells
Plasma: the cell-free fluid obtained when anticoagulated blood is centrifuged; it retains clotting factors and other proteins in suspension
In general, if and when measuring serum drug levels, it makes sense to do so after the passage of five half-lives as a reflection of steady-state pharmacokinetics. However, clinically therapeutic effects of a drug may be evident before five half-lives have elapsed, as in the case of very long-half-life drugs with long half-life active metabolites (e.g., fluoxetine, aripiprazole, cariprazine). Unless convention dictates otherwise, meaningfully interpretable levels are usually drawn at trough concentrations (i.e., just before the next dose).
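The five-half-lives convention follows directly from first-order accumulation arithmetic: after n half-lives of constant dosing, plasma concentration reaches 1 − 0.5ⁿ of its eventual steady-state value. A minimal sketch of that arithmetic (the 24-hour half-life in the example is purely illustrative, not a value for any particular drug):

```python
def fraction_of_steady_state(n_half_lives: float) -> float:
    """Fraction of the eventual steady-state concentration reached after
    n elimination half-lives of constant dosing (first-order kinetics)."""
    return 1 - 0.5 ** n_half_lives

def time_to_steady_state(half_life_hours: float, n_half_lives: int = 5) -> float:
    """Conventional time (hours) to 'effective' steady state (~97% of Css)."""
    return half_life_hours * n_half_lives

for n in range(1, 6):
    print(f"after {n} half-lives: {fraction_of_steady_state(n):.1%} of steady state")

# A hypothetical drug with a 24-hour half-life reaches ~97% of steady
# state after about 5 days:
print(time_to_steady_state(24.0) / 24, "days")
```

The same arithmetic explains why a level drawn before five half-lives have elapsed will understate the eventual steady-state concentration.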
For lithium, the manufacturer’s package insert advises that serum lithium levels be drawn “immediately prior to the next dose when lithium concentrations are relatively stable (i.e., 12 hours after the previous dose).” This, however, presupposes that lithium dosing occurs twice a day, which was the original dosing recommendation made by the manufacturer decades ago when lithium first came to market. Subsequent studies have shown that once-daily rather than multiple-daily dosing of lithium appears more protective against nephrotoxicity (Castro et al., 2016). By convention, meaningfully interpretable lithium levels are usually drawn 10–14 hours after the previous dose, although a true “trough” level would strictly refer to levels measured immediately before the next dose (i.e., at Cmin).
Among antidepressants, body weight has not been shown to influence blood levels (Unterecker et al., 2011).
Bioavailability and serum levels are affected by administration with food for some, but not most, psychotropic drugs. Medications that are better absorbed with food (possibly leading to higher serum levels) include deutetrabenazine, lithium, lumateperone, lurasidone, nefazodone, paliperidone, quetiapine (modest increase), sertraline (modest increase), vilazodone, and ziprasidone. Those that are better absorbed without food (possibly leading to higher serum levels) include asenapine, valbenazine, and zolpidem (modest).
There are a handful of clinical situations in which serum drug levels are decidedly uninformative for gauging therapeutic effects. Examples include:
Tachyphylaxis: after prolonged exposure to drugs that can cause physiological tolerance (e.g., benzodiazepines, amphetamines, opiates), a given drug concentration may yield a lesser effect over time;
“Hit and run” drugs that produce an all-or-none effect regardless of dosage. Examples would include antibiotics or irreversible noncompetitive enzyme inhibitors such as MAOIs.
Let us review some basic information about pharmacokinetics, as depicted in Boxes 7.2 and 7.3.
Therapeutic Window: refers to the range of dosages of a drug that can produce a therapeutic effect without causing significant adverse effects. Operationally, this is defined as the dosages that fall between the minimum effective concentration and the minimum toxic concentration, illustrated by the following graphic:
Therapeutic Range: when a defined clinical population (e.g., epilepsy patients, migraine sufferers, panic disorder patients) is exposed to a drug, the range of dosages (or serum levels) that correspond to empirically observed beneficial effects without producing toxicity defines a drug’s therapeutic range. Note that the therapeutic range of drug dosages or serum levels for one condition may have no relevance (other than avoiding toxicity) if the same drug is used for a different condition.
Therapeutic Index: refers to the ratio of a drug’s median lethal dose (LD50) to its median effective dose (ED50), i.e., Therapeutic Index = LD50 / ED50.
A large therapeutic index means a safer drug, while a narrow one means there is little room for dosing error (e.g., lithium). Drugs with a large therapeutic index sometimes also can produce a more sustained effect that permits once- or twice-daily dosing even if their elimination half-life is relatively short (e.g., beta-blockers).
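As a worked illustration of the ratio above (all numbers here are arbitrary, not real LD50 or ED50 values for any drug):

```python
def therapeutic_index(ld50: float, ed50: float) -> float:
    """Therapeutic index = median lethal dose / median effective dose."""
    if ed50 <= 0:
        raise ValueError("ED50 must be positive")
    return ld50 / ed50

# Hypothetical drug A: lethal dose is 100x the effective dose -> wide margin
print(therapeutic_index(ld50=1000.0, ed50=10.0))  # 100.0

# Hypothetical drug B: lethal dose only 2x the effective dose -> narrow
# margin, little room for dosing error (the lithium-like scenario)
print(therapeutic_index(ld50=20.0, ed50=10.0))    # 2.0
```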
Linear dose–response curve: increases in serum drug levels are directly proportional to drug dosing.
Curvilinear response curve: an optimal effect comes from a dose (or level) that is neither too high nor too low. Notable examples among psychotropic drugs include nortriptyline, imipramine, and amitriptyline.
Logarithmic curve: involves a varying slope that initially shows a steep ascent followed by a plateau or ceiling effect. Examples include inhaled glucocorticoids in asthma.
Sigmoidal curve: follows a roughly linear contour between about 10–20% and 80–90% of maximal concentrations, bounded by flatter slopes at the lowest and highest dosing levels. Small changes in concentration or blood levels along the steep linear portion of the curve may produce substantial changes in effect. Examples include phenobarbital, lithium, amphetamine, and morphine.
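The sigmoidal shape can be made concrete with a standard Hill (Emax) model; the EC50 and Hill-coefficient values below are arbitrary illustrations, not parameters for any real drug:

```python
def hill_effect(conc: float, ec50: float, emax: float = 1.0, n: float = 2.0) -> float:
    """Fractional effect under a sigmoidal Emax (Hill) dose-response model."""
    return emax * conc ** n / (ec50 ** n + conc ** n)

# At the midpoint (conc == EC50) the effect is exactly half-maximal:
print(hill_effect(10.0, ec50=10.0))  # 0.5

# A 5-unit concentration change near the steep middle of the curve...
mid_change = hill_effect(15.0, ec50=10.0) - hill_effect(10.0, ec50=10.0)
# ...moves the effect far more than the same change out on the upper plateau:
top_change = hill_effect(95.0, ec50=10.0) - hill_effect(90.0, ec50=10.0)
print(round(mid_change, 3), round(top_change, 4))
```

The flat regions at the extremes are why dose increases at the top of such a curve often add adverse-effect burden without adding benefit.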
Zero-order (nonlinear) kinetics (aka saturation kinetics): a constant amount of a drug is eliminated per unit time, independent of its plasma levels. Apparent half-life lengthens (and clearance falls) as serum drug concentrations rise, because the eliminating enzymes are saturated. Few drugs follow purely zero-order kinetics. Notable examples include ethanol, phenytoin, gabapentin, and high-dose salicylates.
First-order (linear) kinetics: elimination of a drug occurs at a constant fractional rate per unit time (i.e., along an exponential decay curve), proportional to plasma drug levels. Drug clearance and half-life remain constant. Most psychotropic drugs follow first-order elimination, which in general is a much more common phenomenon than zero-order kinetics.
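The contrast between the two elimination regimes can be sketched numerically (the starting amount, fractional rate, and fixed rate below are arbitrary):

```python
def first_order_step(amount: float, fraction_per_hour: float = 0.1) -> float:
    """First-order: a constant FRACTION of the drug is eliminated per hour."""
    return amount * (1 - fraction_per_hour)

def zero_order_step(amount: float, units_per_hour: float = 10.0) -> float:
    """Zero-order: a constant AMOUNT of drug is eliminated per hour."""
    return max(amount - units_per_hour, 0.0)

a_first = a_zero = 100.0
for _ in range(5):  # simulate 5 hours of elimination
    a_first = first_order_step(a_first)
    a_zero = zero_order_step(a_zero)

print(round(a_first, 2))  # 59.05 -> exponential decay; half-life stays constant
print(a_zero)             # 50.0  -> linear decline, regardless of drug level
```

Under first-order kinetics, doubling the dose roughly doubles steady-state levels; under zero-order (saturation) kinetics, small dose increases can produce disproportionate jumps in serum concentration.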