2008 Sodium Bicarbonate vs Sodium Chloride for the Prevention of Contrast Medium-Induced Nephropathy in Patients Undergoing Coronary Angiography - A Randomized Trial

Somjot S. Brar, MD; Albert Yuh-Jer Shen, MD; Michael B. Jorgensen, MD; Adam Kotlewski, MD; Vicken J. Aharonian, MD; Natasha Desai, BS; Michael Ree, BS; Ahmed Ijaz Shah, MD; Raoul J. Burchette, MS
JAMA. 2008;300(9):1038-1046.

ABSTRACT

Context Sodium bicarbonate has been suggested as a possible strategy for prevention of contrast medium-induced nephropathy, a common cause of renal failure associated with prolonged hospitalization, increased health care costs, and substantial morbidity and mortality.

Objective To determine if sodium bicarbonate is superior to sodium chloride for preventing contrast medium-induced nephropathy in patients with moderate to severe chronic kidney dysfunction who are undergoing coronary angiography.

Design, Setting, and Patients Randomized, controlled, single-blind study conducted between January 2, 2006, and January 31, 2007, and enrolling 353 patients with stable renal disease who were undergoing coronary angiography at a single US center. Included patients were 18 years or older and had an estimated glomerular filtration rate of 60 mL/min per 1.73 m2 or less and 1 or more of diabetes mellitus, history of congestive heart failure, hypertension, or age older than 75 years.

Interventions Patients were randomized to receive either sodium chloride (n = 178) or sodium bicarbonate (n = 175) administered at the same rate (3 mL/kg for 1 hour before coronary angiography, decreased to 1.5 mL/kg per hour during the procedure and for 4 hours after the completion of the procedure).

Main Outcome Measure The primary end point was a 25% or greater decrease in the estimated glomerular filtration rate on days 1 through 4 after contrast exposure.

Results Median patient age was 71 (interquartile range, 65-76) years, and 45% had diabetes mellitus. The groups were well matched for baseline characteristics. The primary end point was met in 13.3% of the sodium bicarbonate group and 14.6% of the sodium chloride group (relative risk, 0.94; 95% confidence interval, 0.55-1.60; P = .82). In patients randomized to receive sodium bicarbonate vs sodium chloride, the rates of death, dialysis, myocardial infarction, and cerebrovascular events did not differ significantly at 30 days (1.7% vs 1.7%, 0.6% vs 1.1%, 0.6% vs 0%, and 0% vs 2.2%, respectively) or at 30 days to 6 months (0.6% vs 2.3%, 0.6% vs 1.1%, 0.6% vs 2.3%, and 0.6% vs 1.7%, respectively) (P > .10 for all).
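A note on the headline statistic: the unadjusted relative risk and its Wald 95% confidence interval can be approximately reconstructed from the reported percentages. The sketch below is illustrative only — the event counts (23/175 and 26/178) are back-calculated from the reported rates of 13.3% and 14.6%, so the result differs slightly from the published estimate of 0.94.

```python
import math

# Illustrative event counts, back-calculated from the reported percentages:
# 13.3% of 175 bicarbonate patients ~ 23 events; 14.6% of 178 chloride ~ 26.
a, n1 = 23, 175   # events / total, sodium bicarbonate
b, n2 = 26, 178   # events / total, sodium chloride

rr = (a / n1) / (b / n2)                        # unadjusted relative risk
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)  # lower 95% confidence limit
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)  # upper 95% confidence limit
# rr ≈ 0.90 with 95% CI ≈ (0.53, 1.51); the interval spans 1, consistent
# with the null finding (published RR 0.94, 95% CI 0.55-1.60).
```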

Conclusion The results of this study do not suggest that hydration with sodium bicarbonate is superior to hydration with sodium chloride for the prevention of contrast medium-induced nephropathy in patients with moderate to severe chronic kidney disease who are undergoing coronary angiography.

Trial Registration clinicaltrials.gov Identifier: NCT00312117

2008 Regional Variation in Out-of-Hospital Cardiac Arrest Incidence and Outcome

Graham Nichol, MD, MPH; Elizabeth Thomas, MSc; Clifton W. Callaway, MD, PhD; Jerris Hedges, MD, MS; Judy L. Powell, BSN; Tom P. Aufderheide, MD; Tom Rea, MD; Robert Lowe, MD, MPH; Todd Brown, MD; John Dreyer, MD; Dan Davis, MD; Ahamed Idris, MD; Ian Stiell, MD, MSc
JAMA. 2008;300(12):1423-1431.

ABSTRACT

Context The health and policy implications of regional variation in incidence and outcome of out-of-hospital cardiac arrest remain to be determined.

Objective To evaluate whether cardiac arrest incidence and outcome differ across geographic regions.

Design, Setting, and Patients Prospective observational study (the Resuscitation Outcomes Consortium) of all out-of-hospital cardiac arrests in 10 North American sites (8 US and 2 Canadian) from May 1, 2006, to April 30, 2007, followed up to hospital discharge, and including data available as of June 28, 2008. Cases (aged 0-108 years) were assessed by organized emergency medical services (EMS) personnel, did not have traumatic injury, and received attempts at external defibrillation or chest compressions or resuscitation was not attempted. Census data were used to determine rates adjusted for age and sex.

Main Outcome Measures Incidence rate, mortality rate, case-fatality rate, and survival to discharge for patients assessed or treated by EMS personnel or with an initial rhythm of ventricular fibrillation.

Results Among the 10 sites, the total catchment population was 21.4 million, and there were 20 520 cardiac arrests. A total of 11 898 (58.0%) had resuscitation attempted; 2729 (22.9% of treated) had initial rhythm of ventricular fibrillation or ventricular tachycardia or rhythms that were shockable by an automated external defibrillator; and 954 (4.6% of total) were discharged alive. The median incidence of EMS-treated cardiac arrest across sites was 52.1 (interquartile range [IQR], 48.0-70.1) per 100 000 population; survival ranged from 3.0% to 16.3%, with a median of 8.4% (IQR, 5.4%-10.4%). Median ventricular fibrillation incidence was 12.6 (IQR, 10.6-15.2) per 100 000 population; survival ranged from 7.7% to 39.9%, with a median of 22.0% (IQR, 15.0%-24.4%), with significant differences across sites for incidence and survival (P<.001).

Conclusion In this study involving 10 geographic regions in North America, there were significant and important regional differences in out-of-hospital cardiac arrest incidence and outcome.

2008 Hospital-wide Code Rates and Mortality Before and After Implementation of a Rapid Response Team

Paul S. Chan, MD, MSc; Adnan Khalid, MD; Lance S. Longmore, DO; Robert A. Berg, MD; Mikhail Kosiborod, MD; John A. Spertus, MD, MPH
JAMA. 2008;300(21):2506-2513.

ABSTRACT

Context Rapid response teams have been shown in adult inpatients to decrease cardiopulmonary arrest (code) rates outside of the intensive care unit (ICU). Because a primary action of rapid response teams is to transfer patients to the ICU, their ability to reduce hospital-wide code rates and mortality remains unknown.

Objective To determine rates of hospital-wide codes and mortality before and after implementation of a long-term rapid response team intervention.

Design, Setting, and Patients A prospective cohort design of adult inpatients admitted between January 1, 2004, and August 31, 2007, at Saint Luke's Hospital, a 404-bed tertiary care academic hospital in Kansas City, Missouri. Rapid response team education and program rollout occurred from September 1 to December 31, 2005. A total of 24 193 patient admissions were evaluated prior to the intervention (January 1, 2004, to August 31, 2005), and 24 978 admissions were evaluated after the intervention (January 1, 2006, to August 31, 2007).

Intervention Using standard activation criteria, a 3-member rapid response team composed of experienced ICU staff and a respiratory therapist performed the evaluation, treatment, and triage of inpatients with evidence of acute physiological decline.

Main Outcome Measures Hospital-wide code rates and mortality, adjusted for preintervention trends.

Results There were a total of 376 rapid response team activations. After rapid response team implementation, mean hospital-wide code rates decreased from 11.2 to 7.5 per 1000 admissions. This was not associated with a reduction in the primary end point of hospital-wide code rates (adjusted odds ratio [AOR], 0.76 [95% confidence interval {CI}, 0.57-1.01]; P = .06), although lower rates of non-ICU codes were observed (non-ICU AOR, 0.59 [95% CI, 0.40-0.89] vs ICU AOR, 0.95 [95% CI, 0.64-1.43]; P = .03 for interaction). Similarly, hospital-wide mortality did not differ between the preintervention and postintervention periods (3.22 vs 3.09 per 100 admissions; AOR, 0.95 [95% CI, 0.81-1.11]; P = .52). Secondary analyses revealed few instances of rapid response team undertreatment or underuse that may have affected the mortality findings.

Conclusion In this large single-institution study, rapid response team implementation was not associated with reductions in hospital-wide code rates or mortality.

2006 First Documented Rhythm and Clinical Outcome From In-Hospital Cardiac Arrest Among Children and Adults

Vinay M. Nadkarni, MD; Gregory Luke Larkin, MD; Mary Ann Peberdy, MD; Scott M. Carey; William Kaye, MD; Mary E. Mancini, PhD; Graham Nichol, MD; Tanya Lane-Truitt, RN; Jerry Potts, PhD; Joseph P. Ornato, MD; Robert A. Berg, MD; for the National Registry of Cardiopulmonary Resuscitation Investigators
JAMA. 2006;295:50-57.

Context Cardiac arrests in adults are often due to ventricular fibrillation (VF) or pulseless ventricular tachycardia (VT), which are associated with better outcomes than asystole or pulseless electrical activity (PEA). Cardiac arrests in children are typically asystole or PEA.

Objective To test the hypothesis that children have relatively fewer in-hospital cardiac arrests associated with VF or pulseless VT compared with adults and, therefore, worse survival outcomes.

Design, Setting, and Patients A prospective observational study from a multicenter registry (National Registry of Cardiopulmonary Resuscitation) of cardiac arrests in 253 US and Canadian hospitals between January 1, 2000, and March 30, 2004. A total of 36 902 adults (≥18 years) and 880 children (<18 years) with pulseless cardiac arrests requiring chest compressions, defibrillation, or both were assessed. Cardiac arrests occurring in the delivery department, neonatal intensive care unit, and in the out-of-hospital setting were excluded.

Main Outcome Measure Survival to hospital discharge.

Results The rate of survival to hospital discharge following pulseless cardiac arrest was higher in children than adults (27% [236/880] vs 18% [6485/36 902]; adjusted odds ratio [OR], 2.29; 95% confidence interval [CI], 1.95-2.68). Of these survivors, 65% (154/236) of children and 73% (4737/6485) of adults had good neurological outcome. The prevalence of VF or pulseless VT as the first documented pulseless rhythm was 14% (120/880) in children and 23% (8361/36 902) in adults (OR, 0.54; 95% CI, 0.44-0.65; P<.001). The prevalence of asystole was 40% (350) in children and 35% (13 024) in adults (OR, 1.20; 95% CI, 1.10-1.40; P = .006), whereas the prevalence of PEA was 24% (213) in children and 32% (11 963) in adults (OR, 0.67; 95% CI, 0.57-0.78; P<.001). After adjustment for differences in preexisting conditions, interventions in place at time of arrest, witnessed and/or monitored status, time to defibrillation of VF or pulseless VT, intensive care unit location of arrest, and duration of cardiopulmonary resuscitation, only first documented pulseless arrest rhythm remained significantly associated with differential survival to discharge (24% [135/563] in children vs 11% [2719/24 987] in adults with asystole and PEA; adjusted OR, 2.73; 95% CI, 2.23-3.32).
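The reported odds ratio for VF/pulseless VT as the first documented rhythm follows directly from the counts given above; a minimal check:

```python
# First documented rhythm VF/pulseless VT: 120/880 children vs 8361/36 902 adults
vf_child, n_child = 120, 880
vf_adult, n_adult = 8361, 36902

odds_child = vf_child / (n_child - vf_child)  # 120/760
odds_adult = vf_adult / (n_adult - vf_adult)  # 8361/28541
or_vf = odds_child / odds_adult               # ≈ 0.54, matching the reported OR
```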

Conclusions In this multicenter registry of in-hospital cardiac arrest, the first documented pulseless arrest rhythm was typically asystole or PEA in both children and adults. Because of better survival after asystole and PEA, children had better outcomes than adults despite fewer cardiac arrests due to VF or pulseless VT.

2008 Minimally Interrupted Cardiac Resuscitation by Emergency Medical Services for Out-of-Hospital Cardiac Arrest

Bentley J. Bobrow, MD; Lani L. Clark, BS; Gordon A. Ewy, MD; Vatsal Chikani, MPH; Arthur B. Sanders, MD; Robert A. Berg, MD; Peter B. Richman, MD; Karl B. Kern, MD
JAMA. 2008;299(10):1158-1165.

Context Out-of-hospital cardiac arrest is a major public health problem.

Objective To investigate whether the survival of patients with out-of-hospital cardiac arrest would improve with minimally interrupted cardiac resuscitation (MICR), an alternate emergency medical services (EMS) protocol.

Design, Setting, and Patients A prospective study of survival-to-hospital discharge between January 1, 2005, and November 22, 2007. Patients with out-of-hospital cardiac arrests in 2 metropolitan cities in Arizona before and after MICR training of fire department emergency medical personnel were assessed. In a second analysis of protocol compliance, patients from the 2 metropolitan cities and 60 additional fire departments in Arizona who actually received MICR were compared with patients who did not receive MICR but received standard advanced life support.

Intervention Instruction for EMS personnel in MICR, an approach that includes an initial series of 200 uninterrupted chest compressions, rhythm analysis with a single shock, 200 immediate postshock chest compressions before pulse check or rhythm reanalysis, early administration of epinephrine, and delayed endotracheal intubation.

Main Outcome Measure Survival-to-hospital discharge.

Results Among the 886 patients in the 2 metropolitan cities, survival-to-hospital discharge increased from 1.8% (4/218) before MICR training to 5.4% (36/668) after MICR training (odds ratio [OR], 3.0; 95% confidence interval [CI], 1.1-8.9). In the subgroup of 174 patients with witnessed cardiac arrest and ventricular fibrillation, survival increased from 4.7% (2/43) before MICR training to 17.6% (23/131) after MICR training (OR, 8.6; 95% CI, 1.8-42.0). In the analysis of MICR protocol compliance involving 2460 patients with cardiac arrest, survival was significantly better among patients who received MICR than those who did not (9.1% [60/661] vs 3.8% [69/1799]; OR, 2.7; 95% CI, 1.9-4.1), as well as patients with witnessed ventricular fibrillation (28.4% [40/141] vs 11.9% [46/387]; OR, 3.4; 95% CI, 2.0-5.8).

Conclusions Survival-to-hospital discharge of patients with out-of-hospital cardiac arrest increased after implementation of MICR as an alternate EMS protocol. These results need to be confirmed in a randomized trial.

2008 Survival From In-Hospital Cardiac Arrest During Nights and Weekends

Mary Ann Peberdy, MD; Joseph P. Ornato, MD; G. Luke Larkin, MD, MSPH, MS; R. Scott Braithwaite, MD; T. Michael Kashner, PhD, JD; Scott M. Carey; Peter A. Meaney, MD, MPH; Liyi Cen, MS; Vinay M. Nadkarni, MD, MS; Amy H. Praestgaard, MS; Robert A. Berg, MD; for the National Registry of Cardiopulmonary Resuscitation Investigators
JAMA. 2008;299(7):785-792.

Context Occurrence of in-hospital cardiac arrest and survival patterns have not been characterized by time of day or day of week. Patient physiology and process of care for in-hospital cardiac arrest may be different at night and on weekends because of hospital factors unrelated to patient, event, or location variables.

Objective To determine whether outcomes after in-hospital cardiac arrest differ during nights and weekends compared with days/evenings and weekdays.

Design and Setting We examined survival from cardiac arrest in hourly time segments, defining day/evening as 7:00 AM to 10:59 PM, night as 11:00 PM to 6:59 AM, and weekend as 11:00 PM on Friday to 6:59 AM on Monday, in 86 748 adult, consecutive in-hospital cardiac arrest events in the National Registry of Cardiopulmonary Resuscitation obtained from 507 medical/surgical participating hospitals from January 1, 2000, through February 1, 2007.

Main Outcome Measures The primary outcome of survival to discharge and secondary outcomes of survival of the event, 24-hour survival, and favorable neurological outcome were compared using odds ratios and multivariable logistic regression analysis. Point estimates of survival outcomes are reported as percentages with 95% confidence intervals (95% CIs).

Results A total of 58 593 cases of in-hospital cardiac arrest occurred during day/evening hours (including 43 483 on weekdays and 15 110 on weekends), and 28 155 cases occurred during night hours (including 20 365 on weekdays and 7790 on weekends). Rates of survival to discharge (14.7% [95% CI, 14.3%-15.1%] vs 19.8% [95% CI, 19.5%-20.1%]), return of spontaneous circulation for longer than 20 minutes (44.7% [95% CI, 44.1%-45.3%] vs 51.1% [95% CI, 50.7%-51.5%]), survival at 24 hours (28.9% [95% CI, 28.4%-29.4%] vs 35.4% [95% CI, 35.0%-35.8%]), and favorable neurological outcomes (11.0% [95% CI, 10.6%-11.4%] vs 15.2% [95% CI, 14.9%-15.5%]) were substantially lower during the night compared with day/evening (all P values < .001). The first documented rhythm at night was more frequently asystole (39.6% [95% CI, 39.0%-40.2%] vs 33.5% [95% CI, 33.2%-33.9%], P < .001) and less frequently ventricular fibrillation (19.8% [95% CI, 19.3%-20.2%] vs 22.9% [95% CI, 22.6%-23.2%], P < .001). Among in-hospital cardiac arrests occurring during day/evening hours, survival was higher on weekdays (20.6% [95% CI, 20.3%-21.0%]) than on weekends (17.4% [95% CI, 16.8%-18.0%]; odds ratio, 1.15 [95% CI, 1.09-1.22]), whereas among in-hospital cardiac arrests occurring during night hours, survival to discharge was similar on weekdays (14.6% [95% CI, 14.1%-15.2%]) and on weekends (14.8% [95% CI, 14.1%-15.2%]; odds ratio, 1.02 [95% CI, 0.94-1.11]).

Conclusion Survival rates from in-hospital cardiac arrest are lower during nights and weekends, even when adjusted for potentially confounding patient, event, and hospital characteristics.

2008 Prehospital Termination of Resuscitation in Cases of Refractory Out-of-Hospital Cardiac Arrest

Comilla Sasson, MD, MS; A. J. Hegg, MD; Michelle Macy, MD; Allison Park, MPH; Arthur Kellermann, MD, MPH; Bryan McNally, MD, MPH; for the CARES Surveillance Group

JAMA. 2008;300(12):1432-1438.

Context Identifying patients in the out-of-hospital setting who have no realistic hope of surviving an out-of-hospital cardiac arrest could enhance utilization of scarce health care resources.

Objective To validate 2 out-of-hospital termination-of-resuscitation rules developed by the Ontario Prehospital Life Support (OPALS) study group, one for use by responders providing basic life support (BLS) and the other for those providing advanced life support (ALS).

Design, Setting, and Patients Retrospective cohort study using surveillance data prospectively submitted by emergency medical systems and hospitals in 8 US cities to the Cardiac Arrest Registry to Enhance Survival (CARES) between October 1, 2005, and April 30, 2008. Case patients were 7235 adults with out-of-hospital cardiac arrest; of these, 5505 met inclusion criteria.

Main Outcome Measures Specificity and positive predictive value of each termination-of-resuscitation rule for identifying patients who likely will not survive to hospital discharge.

Results The overall rate of survival to hospital discharge was 7.1% (n = 392). Of 2592 patients (47.1%) who met BLS criteria for termination of resuscitation efforts, only 5 (0.2%) patients survived to hospital discharge. Of 1192 patients (21.7%) who met ALS criteria, none survived to hospital discharge. The BLS rule had a specificity of 0.987 (95% confidence interval [CI], 0.970-0.996) and a positive predictive value of 0.998 (95% CI, 0.996-0.999) for predicting lack of survival. The ALS rule had a specificity of 1.000 (95% CI, 0.991-1.000) and positive predictive value of 1.000 (95% CI, 0.997-1.000) for predicting lack of survival.
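Both reported performance measures follow directly from the counts in the abstract. Here a "positive" is the rule recommending termination (i.e., predicting death), so the sketch below recovers the published specificity and positive predictive value for the BLS rule:

```python
# Of 5505 included patients, 392 survived to hospital discharge.
# The BLS rule flagged 2592 patients for termination; 5 of those flagged survived.
survivors_total = 392
flagged = 2592
flagged_survivors = 5

tp = flagged - flagged_survivors          # flagged and died: 2587
fp = flagged_survivors                    # flagged but survived: 5
tn = survivors_total - flagged_survivors  # survivors correctly not flagged: 387

ppv = tp / (tp + fp)          # 2587/2592 ≈ 0.998, as reported
specificity = tn / (tn + fp)  # 387/392 ≈ 0.987, as reported
```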

Conclusion In this validation study, the BLS and ALS termination-of-resuscitation rules performed well in identifying patients with out-of-hospital cardiac arrest who have little or no chance of survival.

2005 Quality of Cardiopulmonary Resuscitation During In-Hospital Cardiac Arrest

Benjamin S. Abella, MD, MPhil; Jason P. Alvarado, BA; Helge Myklebust, BEng; Dana P. Edelson, MD; Anne Barry, RN, MBA; Nicholas O'Hearn, RN, MSN; Terry L. Vanden Hoek, MD; Lance B. Becker, MD

JAMA. 2005;293:305-310.

Context The survival benefit of well-performed cardiopulmonary resuscitation (CPR) is well-documented, but little objective data exist regarding actual CPR quality during cardiac arrest. Recent studies have challenged the notion that CPR is uniformly performed according to established international guidelines.

Objectives To measure multiple parameters of in-hospital CPR quality and to determine compliance with published American Heart Association and international guidelines.

Design and Setting A prospective observational study of 67 patients who experienced in-hospital cardiac arrest at the University of Chicago Hospitals, Chicago, Ill, between December 11, 2002, and April 5, 2004. Using a monitor/defibrillator with novel additional sensing capabilities, the parameters of CPR quality including chest compression rate, compression depth, ventilation rate, and the fraction of arrest time without chest compressions (no-flow fraction) were recorded.

Main Outcome Measure Adherence to American Heart Association and international CPR guidelines.

Results Analysis of the first 5 minutes of each resuscitation by 30-second segments revealed that chest compression rates were less than 90/min in 28.1% of segments. Compression depth was too shallow (defined as <38 mm) for 37.4% of compressions. Ventilation rates were high, with 60.9% of segments containing a rate of more than 20/min. Additionally, the mean (SD) no-flow fraction was 0.24 (0.18). A 10-second pause each minute of arrest would yield a no-flow fraction of 0.17. A total of 27 patients (40.3%) achieved return of spontaneous circulation and 7 (10.4%) were discharged from the hospital.
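The benchmark no-flow fraction cited above is simple arithmetic: a 10-second pause in each 60-second minute gives 10/60 ≈ 0.17, against the observed mean of 0.24.

```python
pause_s, minute_s = 10, 60
benchmark_nff = pause_s / minute_s     # ≈ 0.167, i.e., the 0.17 cited
observed_nff = 0.24                    # mean no-flow fraction reported
excess = observed_nff - benchmark_nff  # ≈ 0.07 more of arrest time without compressions
```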

Conclusions In this study of in-hospital cardiac arrest, the quality of multiple parameters of CPR was inconsistent and often did not meet published guideline recommendations, even when performed by well-trained hospital staff. The importance of high-quality CPR suggests the need for rescuer feedback and monitoring of CPR quality during resuscitation efforts.

2006 Use of an Automated, Load-Distributing Band Chest Compression Device for Out-of-Hospital Cardiac Arrest Resuscitation

Marcus Eng Hock Ong, MD, MPH; Joseph P. Ornato, MD; David P. Edwards, MBA, EMT-P; Harinder S. Dhindsa, MD, MPH; Al M. Best, PhD; Caesar S. Ines, MD, MS; Scott Hickey, MD; Bryan Clark, DO; Dean C. Williams, MD; Robert G. Powell, MD; Jerry L. Overton, MPA; Mary Ann Peberdy, MD

JAMA. 2006;295:2629-2637.

Context Only 1% to 8% of adults with out-of-hospital cardiac arrest survive to hospital discharge.

Objective To compare resuscitation outcomes before and after an urban emergency medical services (EMS) system switched from manual cardiopulmonary resuscitation (CPR) to load-distributing band (LDB) CPR.

Design, Setting, and Patients A phased, observational cohort evaluation with intention-to-treat analysis of 783 adults with out-of-hospital, nontraumatic cardiac arrest. A total of 499 patients were included in the manual CPR phase (January 1, 2001, to March 31, 2003) and 284 patients in the LDB-CPR phase (December 20, 2003, to March 31, 2005); of these patients, the LDB device was applied in 210 patients.

Intervention Urban EMS system change from manual CPR to LDB-CPR.

Main Outcome Measures Return of spontaneous circulation (ROSC), with secondary outcome measures of survival to hospital admission and hospital discharge, and neurological outcome at discharge.

Results Patients in the manual CPR and LDB-CPR phases were comparable except for a faster response time interval (mean difference, 26 seconds) and more EMS-witnessed arrests (18.7% vs 12.6%) with LDB. Rates for ROSC and survival were increased with LDB-CPR compared with manual CPR (for ROSC, 34.5%; 95% confidence interval [CI], 29.2%-40.3% vs 20.2%; 95% CI, 16.9%-24.0%; adjusted odds ratio [OR], 1.94; 95% CI, 1.38-2.72; for survival to hospital admission, 20.9%; 95% CI, 16.6%-26.1% vs 11.1%; 95% CI, 8.6%-14.2%; adjusted OR, 1.88; 95% CI, 1.23-2.86; and for survival to hospital discharge, 9.7%; 95% CI, 6.7%-13.8% vs 2.9%; 95% CI, 1.7%-4.8%; adjusted OR, 2.27; 95% CI, 1.11-4.77). In secondary analysis of the 210 patients in whom the LDB device was applied, 38 patients (18.1%) survived to hospital admission (95% CI, 13.4%-23.9%) and 12 patients (5.7%) survived to hospital discharge (95% CI, 3.0%-9.3%). Among patients in the manual CPR and LDB-CPR groups who survived to hospital discharge, there was no significant difference between groups in Cerebral Performance Category (P = .36) or Overall Performance Category (P = .40). The number needed to treat for the adjusted outcome survival to discharge was 15 (95% CI, 9-33).

Conclusion Compared with resuscitation using manual CPR, a resuscitation strategy using LDB-CPR on EMS ambulances is associated with improved survival to hospital discharge in adults with out-of-hospital nontraumatic cardiac arrest.

2004 Delirium as a Predictor of Mortality in Mechanically Ventilated Patients in the Intensive Care Unit

E. Wesley Ely, MD, MPH; Ayumi Shintani, PhD, MPH; Brenda Truman, RN, MSN; Theodore Speroff, PhD; Sharon M. Gordon, PsyD; Frank E. Harrell, Jr, PhD; Sharon K. Inouye, MD, MPH; Gordon R. Bernard, MD; Robert S. Dittus, MD, MPH

JAMA. 2004;291:1753-1762.

Context In the intensive care unit (ICU), delirium is a common yet underdiagnosed form of organ dysfunction, and its contribution to patient outcomes is unclear.

Objective To determine if delirium is an independent predictor of clinical outcomes, including 6-month mortality and length of stay among ICU patients receiving mechanical ventilation.

Design, Setting, and Participants Prospective cohort study enrolling 275 consecutive mechanically ventilated patients admitted to adult medical and coronary ICUs of a US university-based medical center between February 2000 and May 2001. Patients were followed up for development of delirium over 2158 ICU days using the Confusion Assessment Method for the ICU and the Richmond Agitation-Sedation Scale.

Main Outcome Measures Primary outcomes included 6-month mortality, overall hospital length of stay, and length of stay in the post-ICU period. Secondary outcomes were ventilator-free days and cognitive impairment at hospital discharge.

Results Of 275 patients, 51 (18.5%) had persistent coma and died in the hospital. Among the remaining 224 patients, 183 (81.7%) developed delirium at some point during the ICU stay. Baseline demographics including age, comorbidity scores, dementia scores, activities of daily living, severity of illness, and admission diagnoses were similar between those with and without delirium (P>.05 for all). Patients who developed delirium had higher 6-month mortality rates (34% vs 15%, P= .03) and spent 10 days longer in the hospital than those who never developed delirium (P<.001). After adjusting for covariates (including age, severity of illness, comorbid conditions, coma, and use of sedatives or analgesic medications), delirium was independently associated with higher 6-month mortality (adjusted hazard ratio [HR], 3.2; 95% confidence interval [CI], 1.4-7.7; P = .008), and longer hospital stay (adjusted HR, 2.0; 95% CI, 1.4-3.0; P<.001). Delirium in the ICU was also independently associated with a longer post-ICU stay (adjusted HR, 1.6; 95% CI, 1.2-2.3; P = .009), fewer median days alive and without mechanical ventilation (19 [interquartile range, 4-23] vs 24 [19-26]; adjusted P = .03), and a higher incidence of cognitive impairment at hospital discharge (adjusted HR, 9.1; 95% CI, 2.3-35.3; P = .002).

Conclusion Delirium was an independent predictor of higher 6-month mortality and longer hospital stay even after adjusting for relevant covariates including coma, sedatives, and analgesics in patients receiving mechanical ventilation.

2003 End-of-Life Practices in European Intensive Care Units - The Ethicus Study

Charles L. Sprung, MD; Simon L. Cohen, MD; Peter Sjokvist, MD; Mario Baras, PhD; Hans-Henrik Bulow, MD; Seppo Hovilehto, MD; Didier Ledoux, MD; Anne Lippert, MD; Paulo Maia, MD; Dermot Phelan, MD; Wolfgang Schobersberger, MD; Elisabet Wennberg, MD, PhD; Tom Woodcock, MB, BS; for the Ethicus Study Group

JAMA. 2003;290:790-797.

Context While the adoption of practice guidelines is standardizing many aspects of patient care, ethical dilemmas are occurring because of forgoing life-sustaining therapies in intensive care and are dealt with in diverse ways between different countries and cultures.

Objectives To determine the frequency and types of actual end-of-life practices in European intensive care units (ICUs) and to analyze the similarities and differences.

Design and Setting A prospective, observational study of European ICUs.

Participants Consecutive patients who died or had any limitation of therapy.

Intervention Prospectively defined end-of-life practices in 37 ICUs in 17 European countries were studied from January 1, 1999, to June 30, 2000.

Main Outcome Measures Comparison and analysis of the frequencies and patterns of end-of-life care by geographic regions and different patients and professionals.

Results Of 31 417 patients admitted to ICUs, 4248 patients (13.5%) died or had a limitation of life-sustaining therapy. Of these, 3086 patients (72.6%) had limitations of treatments (10% of admissions). Substantial intercountry variability was found in the limitations and the manner of dying: unsuccessful cardiopulmonary resuscitation in 20% (range, 5%-48%), brain death in 8% (range, 0%-15%), withholding therapy in 38% (range, 16%-70%), withdrawing therapy in 33% (range, 5%-69%), and active shortening of the dying process in 2% (range, 0%-19%). Shortening of the dying process was reported in 7 countries. Doses of opioids and benzodiazepines reported for shortening of the dying process were in the same range as those used for symptom relief in previous studies. Limitation of therapy vs continuation of life-sustaining therapy was associated with patient age, acute and chronic diagnoses, number of days in ICU, region, and religion (P<.001).

Conclusion The limiting of life-sustaining treatment in European ICUs is common and variable. Limitations were associated with patient age, diagnoses, ICU stay, and geographic and religious factors. Although shortening of the dying process is rare, clarity between withdrawing therapies and shortening of the dying process and between therapies intended to relieve pain and suffering and those intended to shorten the dying process may be lacking.

2002 Paresis Acquired in the Intensive Care Unit - A Prospective Multicenter Study

Bernard De Jonghe, MD; Tarek Sharshar, MD; Jean-Pascal Lefaucheur, MD, PhD; François-Jérome Authier, MD; Isabelle Durand-Zaleski, MD, PhD; Mohamed Boussarsar, MD; Charles Cerf, MD; Estelle Renaud, MD; Francine Mesrati, MD; Jean Carlet, MD; Jean-Claude Raphaël, MD; Hervé Outin, MD; Sylvie Bastuji-Garin, MD, PhD; for the Groupe de Réflexion et d'Etude des Neuromyopathies en Réanimation


JAMA. 2002;288:2859-2867.

Context Although electrophysiologic and histologic neuromuscular abnormalities are common in intensive care unit (ICU) patients, the clinical incidence of ICU-acquired neuromuscular disorders in patients recovering from severe illness remains unknown.

Objectives To assess the clinical incidence, risk factors, and outcomes of ICU-acquired paresis (ICUAP) during recovery from critical illness in the ICU and to determine the electrophysiologic and histologic patterns in patients with ICUAP.

Design Prospective cohort study conducted from March 1999 to June 2000.

Setting Three medical and 2 surgical ICUs in 4 hospitals in France.

Participants All consecutive ICU patients without preexisting neuromuscular disease who underwent mechanical ventilation for 7 or more days were screened daily for awakening. The first day a patient was considered awake was day 1. Patients with severe muscle weakness on day 7 were considered to have ICUAP.

Main Outcome Measures Incidence and duration of ICUAP, risk factors for ICUAP, and comparative duration of mechanical ventilation between ICUAP and control patients.

Results Among the 95 patients who achieved satisfactory awakening, the incidence of ICUAP was 25.3% (95% confidence interval [CI], 16.9%-35.2%). All ICUAP patients had a sensorimotor axonopathy, and all patients who underwent a muscle biopsy had specific muscle involvement not related to nerve involvement. The median duration of ICUAP after day 1 was 21 days. Mean (SD) duration of mechanical ventilation after day 1 was significantly longer in patients with ICUAP compared with those without (18.2 [36.3] vs 7.6 [19.2] days; P = .03). Independent predictors of ICUAP were female sex (odds ratio [OR], 4.66; 95% CI, 1.19-18.30), the number of days with dysfunction of 2 or more organs (OR, 1.28; 95% CI, 1.11-1.49), duration of mechanical ventilation (OR, 1.10; 95% CI, 1.00-1.22), and administration of corticosteroids (OR, 14.90; 95% CI, 3.20-69.80) before day 1.

Conclusions Identified using simple bedside clinical criteria, ICUAP was frequent during recovery from critical illness and was associated with a prolonged duration of mechanical ventilation. Our findings suggest an important role of corticosteroids in the development of ICUAP.

2001 Delirium in Mechanically Ventilated Patients - Validity and Reliability of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU)

E. Wesley Ely, MD, MPH; Sharon K. Inouye, MD, MPH; Gordon R. Bernard, MD; Sharon Gordon, PsyD; Joseph Francis, MD, MPH; Lisa May, RN, BSN; Brenda Truman, RN, MSN; Theodore Speroff, PhD; Shiva Gautam, PhD; Richard Margolin, MD; Robert P. Hart, PhD; Robert Dittus, MD, MPH


JAMA. 2001;286:2703-2710.

Context Delirium is a common problem in the intensive care unit (ICU). Accurate diagnosis is limited by the difficulty of communicating with mechanically ventilated patients and by lack of a validated delirium instrument for use in the ICU.

Objectives To validate a delirium assessment instrument that uses standardized nonverbal assessments for mechanically ventilated patients and to determine the occurrence rate of delirium in such patients.

Design and Setting Prospective cohort study testing the Confusion Assessment Method for ICU Patients (CAM-ICU) in the adult medical and coronary ICUs of a US university-based medical center.

Participants A total of 111 consecutive patients who were mechanically ventilated were enrolled from February 1, 2000, to July 15, 2000, of whom 96 (86.5%) were evaluable for the development of delirium and 15 (13.5%) were excluded because they remained comatose throughout the investigation.

Main Outcome Measures Occurrence rate of delirium and sensitivity, specificity, and interrater reliability of delirium assessments using the CAM-ICU, made daily by 2 critical care study nurses, compared with assessments by delirium experts using Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria.

Results A total of 471 daily paired evaluations were completed. Compared with the reference standard for diagnosing delirium, 2 study nurses using the CAM-ICU had sensitivities of 100% and 93%, specificities of 98% and 100%, and high interrater reliability (κ = 0.96; 95% confidence interval, 0.92-0.99). Interrater reliability measures across subgroup comparisons showed κ values of 0.92 for those aged 65 years or older, 0.99 for those with suspected dementia, and 0.94 for those with Acute Physiology and Chronic Health Evaluation II scores at or above the median value of 23 (all P<.001). Comparing sensitivity and specificity between patient subgroups according to age, suspected dementia, or severity of illness showed no significant differences. The mean (SD) CAM-ICU administration time was 2 (1) minutes. Reference standard diagnoses of delirium, stupor, and coma occurred in 25.2%, 21.3%, and 28.5% of all observations, respectively. Delirium occurred in 80 (83.3%) patients during their ICU stay for a mean (SD) of 2.4 (1.6) days. Delirium was even present in 39.5% of alert or easily aroused patient observations by the reference standard and persisted in 10.4% of patients at hospital discharge.

Conclusions Delirium, a complication not currently monitored in the ICU setting, is extremely common in mechanically ventilated patients. The CAM-ICU appears to be rapid, valid, and reliable for diagnosing delirium in the ICU setting and may be a useful instrument for both clinical and research purposes.
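The interrater reliability (κ) statistic reported above can be illustrated with a small worked example. The 2×2 agreement counts below are hypothetical, chosen only to show the arithmetic; they are not the study's data.

```python
# Cohen's kappa for two raters making a binary (delirium present/absent) call.
# Agreement counts are hypothetical, for illustration only.
n11, n10 = 40, 2    # both rate present / only rater 1 rates present
n01, n00 = 1, 57    # only rater 2 rates present / both rate absent

n = n11 + n10 + n01 + n00
po = (n11 + n00) / n                      # observed agreement
p1 = (n11 + n10) / n                      # rater 1 "present" rate
p2 = (n11 + n01) / n                      # rater 2 "present" rate
pe = p1 * p2 + (1 - p1) * (1 - p2)        # agreement expected by chance
kappa = (po - pe) / (1 - pe)
print(f"kappa = {kappa:.2f}")
```

Values near 1 indicate near-perfect agreement beyond chance, which is how the reported κ of 0.96 should be read.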

2002 Efficacy of Recombinant Human Erythropoietin in Critically Ill Patients -A Randomized Controlled Trial

Howard L. Corwin, MD; Andrew Gettinger, MD; Ronald G. Pearl, MD, PhD; Mitchell P. Fink, MD; Mitchell M. Levy, MD; Marc J. Shapiro, MD; Michael J. Corwin, MD; Theodore Colton, ScD; for the EPO Critical Care Trials Group


JAMA. 2002;288:2827-2835.

Context Anemia is common in critically ill patients and results in a large number of red blood cell (RBC) transfusions. Recent data have raised the concern that RBC transfusions may be associated with worse clinical outcomes in some patients.

Objective To assess the efficacy in critically ill patients of a weekly dosing schedule of recombinant human erythropoietin (rHuEPO) to decrease the occurrence of RBC transfusion.

Design A prospective, randomized, double-blind, placebo-controlled, multicenter trial conducted between December 1998 and June 2001.

Setting A medical, surgical, or a medical/surgical intensive care unit (ICU) in each of 65 participating institutions in the United States.

Patients A total of 1302 patients who had been in the ICU for 2 days and were expected to be in the ICU at least 2 more days and who met eligibility criteria were enrolled in the study; 650 patients were randomized to rHuEPO and 652 to placebo.

Intervention Study drug (40 000 units of rHuEPO) or placebo was administered by subcutaneous injection on ICU day 3 and continued weekly for patients who remained in the hospital, for a total of 3 doses. Patients in the ICU on study day 21 received a fourth dose.

Main Outcome Measures The primary efficacy end point was transfusion independence, assessed by comparing the percentage of patients in each treatment group who received any RBC transfusion between study days 1 and 28. Secondary efficacy end points identified prospectively included cumulative RBC units transfused per patient through study day 28; cumulative mortality through study day 28; change in hemoglobin from baseline; and time to first transfusion or death.

Results Patients receiving rHuEPO were less likely to undergo transfusion (60.4% placebo vs 50.5% rHuEPO; P<.001; odds ratio, 0.67; 95% confidence interval [CI], 0.54-0.83). There was a 19% reduction in the total units of RBCs transfused in the rHuEPO group (1963 units for placebo vs 1590 units for rHuEPO) and a reduction in RBC units transfused per day alive (ratio of transfusion rates, 0.81; 95% CI, 0.79-0.83; P = .04). Increase in hemoglobin from baseline to study end was greater in the rHuEPO group (mean [SD], 1.32 [2] g/dL vs 0.94 [1.9] g/dL; P<.001). Mortality (14% for rHuEPO and 15% for placebo) and adverse clinical events were not significantly different.

Conclusions In critically ill patients, weekly administration of 40 000 units of rHuEPO reduces allogeneic RBC transfusion and increases hemoglobin. Further study is needed to determine whether this reduction in RBC transfusion results in improved clinical outcomes.
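The reported odds ratio and confidence interval can be approximately reproduced from the group sizes and transfusion rates given above. The cell counts below are reconstructed by rounding (50.5% of 650 and 60.4% of 652), so this is a sketch, not the trial's exact computation.

```python
import math

# 2x2 table reconstructed from the reported rates (rounded, hence approximate)
a, b = 328, 322   # rHuEPO: transfused / not transfused (50.5% of 650)
c, d = 394, 258   # placebo: transfused / not transfused (60.4% of 652)

or_hat = (a / b) / (c / d)                 # odds ratio for transfusion
se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # Woolf SE of log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se)
hi = math.exp(math.log(or_hat) + 1.96 * se)
print(f"OR = {or_hat:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # ≈ 0.67 (0.54-0.83)
```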

2002 Anemia and Blood Transfusion in Critically Ill Patients

Jean Louis Vincent, MD, PhD, FCCP; Jean-François Baron, MD; Konrad Reinhart, MD; Luciano Gattinoni, MD; Lambert Thijs, MD, PhD; Andrew Webb, MD; Andreas Meier-Hellmann, MD; Guy Nollet, MD; Daliana Peres-Bota, MD; for the ABC Investigators


JAMA. 2002;288:1499-1507.

Context Anemia is a common problem in critically ill patients admitted to intensive care units (ICUs), but the consequences of anemia on morbidity and mortality in the critically ill are poorly defined.

Objectives To prospectively define the incidence of anemia and use of red blood cell (RBC) transfusions in critically ill patients and to explore the potential benefits and risks associated with transfusion in the ICU.

Design Prospective observational study conducted in November 1999, with 2 components: a blood sampling study and an anemia and blood transfusion study.

Setting and Patients The blood sampling study included 1136 patients from 145 western European ICUs, and the anemia and blood transfusion study included 3534 patients from 146 western European ICUs. Patients were followed up for 28 days or until hospital discharge, interinstitutional transfer, or death.

Main Outcome Measures Frequency of blood drawing and associated volume of blood drawn, collected over a 24-hour period; hemoglobin levels, transfusion rate, organ dysfunction (assessed using the Sequential Organ Failure Assessment score), and mortality, collected throughout a 2-week period.

Results The mean (SD) volume per blood draw was 10.3 (6.6) mL, with an average total volume of 41.1 (39.7) mL during the 24-hour period. There was a positive correlation between organ dysfunction and the number of blood draws (r = 0.34; P<.001) and total volume drawn (r = 0.28; P<.001). The mean hemoglobin concentration at ICU admission was 11.3 (2.3) g/dL, with 29% (963/3295) having a concentration of less than 10 g/dL. The transfusion rate during the ICU period was 37.0% (1307/3534). Older patients and those with a longer ICU length of stay were more commonly transfused. Both ICU and overall mortality rates were significantly higher in patients who had vs had not received a transfusion (ICU rates: 18.5% vs 10.1%, respectively; χ² = 50.1; P<.001; overall rates: 29.0% vs 14.9%, respectively; χ² = 88.1; P<.001). For similar degrees of organ dysfunction, patients who had a transfusion had a higher mortality rate. For matched patients in the propensity analysis, the 28-day mortality was 22.7% among patients with transfusions and 17.1% among those without (P = .02); the Kaplan-Meier log-rank test confirmed this difference.

Conclusions This multicenter observational study reveals the common occurrence of anemia and the large use of blood transfusion in critically ill patients. Additionally, this epidemiologic study provides evidence of an association between transfusions and diminished organ function as well as between transfusions and mortality.

2005 Impact of the Pulmonary Artery Catheter in Critically Ill Patients - Meta-analysis of Randomized Clinical Trials

Monica R. Shah, MD, MHS, MSJ; Vic Hasselblad, PhD; Lynne W. Stevenson, MD; Cynthia Binanay, RN, BSN; Christopher M. O'Connor, MD; George Sopko, MD, MPH; Robert M. Califf, MD

JAMA. 2005;294:1664-1670.


Context Randomized clinical trials (RCTs) evaluating the pulmonary artery catheter (PAC) have been limited by small sample size. Some nonrandomized studies suggest that PAC use is associated with increased morbidity and mortality.

Objective To estimate the impact of the PAC device in critically ill patients.

Data Sources MEDLINE (1985-2005), the Cochrane Controlled Trials Registry (1988-2005), the National Institutes of Health ClinicalTrials.gov database, and the US Food and Drug Administration Web site were searched for RCTs in which patients were randomly assigned to PAC or no PAC. Results from the ESCAPE trial of patients with severe heart failure were also included. Search terms included pulmonary artery catheter, right heart catheter, catheter, and Swan-Ganz.

Study Selection Eligible studies included patients who were undergoing surgery, in the intensive care unit (ICU), admitted with advanced heart failure, or diagnosed with acute respiratory distress syndrome and/or sepsis; and studies that reported death and the number of days hospitalized or the number of days in the ICU as outcome measures.

Data Extraction Information on eligibility criteria, baseline characteristics, interventions, outcomes, and methodological quality was extracted by 2 reviewers. Disagreements were resolved by consensus.

Data Synthesis In 13 RCTs, 5051 patients were randomized. Hemodynamic goals and treatment strategies varied among trials. A random-effects model was used to estimate the odds ratios (ORs) for death, number of days hospitalized, and use of inotropes and intravenous vasodilators. The combined OR for mortality was 1.04 (95% confidence interval [CI], 0.90-1.20; P = .59). The difference in the mean number of days hospitalized for PAC minus the mean for no PAC was 0.11 (95% CI, -0.51 to 0.74; P = .73). Use of the PAC was associated with a higher use of inotropes (OR, 1.58; 95% CI, 1.19-2.12; P = .002) and intravenous vasodilators (OR, 2.35; 95% CI, 1.75-3.15; P<.001).

Conclusions In critically ill patients, use of the PAC neither increased overall mortality nor days in hospital, nor conferred benefit. Despite almost 20 years of RCTs, a clear strategy leading to improved survival with the PAC has not been devised. The neutrality of the PAC for clinical outcomes may result from the absence of effective evidence-based treatments to use in combination with PAC information across the spectrum of critically ill patients.
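The random-effects pooling used in this meta-analysis can be sketched with the DerSimonian-Laird estimator. The per-trial 2×2 counts below are invented for illustration; they are not the 13 trials actually analyzed.

```python
import math

# Hypothetical trials: (deaths_pac, n_pac, deaths_nopac, n_nopac)
trials = [(30, 200, 28, 200), (55, 400, 60, 410), (12, 100, 10, 95)]

# Per-trial log odds ratios and their variances
ys, vs = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    ys.append(math.log((a / b) / (c / d)))
    vs.append(1/a + 1/b + 1/c + 1/d)

# DerSimonian-Laird between-trial variance tau^2
w = [1 / v for v in vs]
y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)    # fixed-effect mean
q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))  # Cochran's Q
c_dl = sum(w) - sum(wi * wi for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(trials) - 1)) / c_dl)

# Random-effects pooled OR and 95% CI
w_re = [1 / (v + tau2) for v in vs]
y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(f"pooled OR = {math.exp(y_re):.2f} "
      f"(95% CI, {math.exp(y_re - 1.96 * se_re):.2f}-"
      f"{math.exp(y_re + 1.96 * se_re):.2f})")
```

When the trials are homogeneous, Q falls below its degrees of freedom, τ² truncates to 0, and the estimate reduces to the fixed-effect (inverse-variance) pooled OR.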

2008 Cytomegalovirus Reactivation in Critically Ill Immunocompetent Patients

Ajit P. Limaye, MD; Katharine A. Kirby, MSc; Gordon D. Rubenfeld, MD; Wendy M. Leisenring, ScD; Eileen M. Bulger, MD; Margaret J. Neff, MD; Nicole S. Gibran, MD; Meei-Li Huang, PhD; Tracy K. Santo Hayes, BSc; Lawrence Corey, MD; Michael Boeckh, MD

JAMA. 2008;300(4):413-422.

Context Cytomegalovirus (CMV) infection is associated with adverse clinical outcomes in immunosuppressed persons, but the incidence and association of CMV reactivation with adverse outcomes in critically ill persons lacking evidence of immunosuppression have not been well defined.

Objective To determine the association of CMV reactivation with intensive care unit (ICU) and hospital length of stay in critically ill immunocompetent persons.

Design, Setting, and Participants We prospectively assessed CMV plasma DNAemia by thrice-weekly real-time polymerase chain reaction (PCR) and clinical outcomes in a cohort of 120 CMV-seropositive, immunocompetent adults admitted to 1 of 6 ICUs at 2 separate hospitals at a large US tertiary care academic medical center between 2004 and 2006. Clinical measurements were assessed by personnel blinded to CMV PCR results. Risk factors for CMV reactivation and association with hospital and ICU length of stay were assessed by multivariable logistic regression and proportional odds models.

Main Outcome Measures Association of CMV reactivation with prolonged hospital length of stay or death.

Results The primary composite end point of continued hospitalization (n = 35) or death (n = 10) by 30 days occurred in 45 (35%) of the 120 patients. Cytomegalovirus viremia at any level occurred in 33% (39/120; 95% confidence interval [CI], 24%-41%) at a median of 12 days (range, 3-57 days) and CMV viremia greater than 1000 copies/mL occurred in 20% (24/120; 95% CI, 13%-28%) at a median of 26 days (range, 9-56 days). By logistic regression, CMV infection at any level (adjusted odds ratio [OR], 4.3; 95% CI, 1.6-11.9; P = .005) and at greater than 1000 copies/mL (adjusted OR, 13.9; 95% CI, 3.2-60; P < .001) and the average CMV area under the curve (AUC) in log10 copies per milliliter (adjusted OR, 2.1; 95% CI, 1.3-3.2; P < .001) were independently associated with hospitalization or death by 30 days. In multivariable partial proportional odds models, both CMV 7-day moving average (OR, 5.1; 95% CI, 2.9-9.1; P < .001) and CMV AUC (OR, 3.2; 95% CI, 2.1-4.7; P < .001) were independently associated with a hospital length of stay of at least 14 days.

Conclusions These preliminary findings suggest that reactivation of CMV occurs frequently in critically ill immunocompetent patients and is associated with prolonged hospitalization or death. A controlled trial of CMV prophylaxis in this setting is warranted.

2001 Should Immunonutrition Become Routine in Critically Ill Patients? - A Systematic Review of the Evidence

Daren K. Heyland, MD, FRCPC, MSc; Frantisek Novak, MD; John W. Drover, MD, FRCSC; Minto Jain, MD, FRCSC; Xiangyao Su, PhD; Ulrich Suchner, MD

JAMA. 2001;286:944-953.

ABSTRACT


Context Several nutrients have been shown to influence immunologic and inflammatory responses in humans. Whether these effects translate into an improvement in clinical outcomes in critically ill patients remains unclear.

Objective To examine the relationship between enteral nutrition supplemented with immune-enhancing nutrients and infectious complications and mortality rates in critically ill patients.

Data Sources The databases of MEDLINE, EMBASE, Biosis, and CINAHL were searched for articles published from 1990 to 2000. Additional data sources included the Cochrane Controlled Trials Register from 1990 to 2000, personal files, abstract proceedings, and relevant reference lists of articles identified by database review.

Study Selection A total of 326 titles, abstracts, and articles were reviewed. Primary studies were included if they were randomized trials of critically ill or surgical patients that evaluated the effect of enteral nutrition supplemented with some combination of arginine, glutamine, nucleotides, and omega-3 fatty acids on infectious complication and mortality rates compared with standard enteral nutrition, and included clinically important outcomes, such as mortality.

Data Extraction Methodological quality of individual studies was scored and necessary data were abstracted in duplicate and independently.

Data Synthesis Twenty-two randomized trials with a total of 2419 patients compared the use of immunonutrition with standard enteral nutrition in surgical and critically ill patients. With respect to mortality, immunonutrition was associated with a pooled risk ratio (RR) of 1.10 (95% confidence interval [CI], 0.93-1.31). Immunonutrition was associated with lower infectious complications (RR, 0.66; 95% CI, 0.54-0.80). Since there was significant heterogeneity across studies, we examined several a priori subgroup analyses. We found that studies using commercial formulas with high arginine content were associated with a significant reduction in infectious complications and a trend toward a lower mortality rate compared with other immune-enhancing diets. Studies of surgical patients were associated with a significant reduction in infectious complication rates compared with studies of critically ill patients. In studies of critically ill patients, studies with a high-quality score were associated with increased mortality and a significant reduction in infectious complication rates compared with studies with a low-quality score.

Conclusion Immunonutrition may decrease infectious complication rates but it is not associated with an overall mortality advantage. However, the treatment effect varies depending on the intervention, the patient population, and the methodological quality of the study.

2005 Acute Renal Failure in Critically Ill Patients - A Multinational, Multicenter Study

Shigehiko Uchino, MD; John A. Kellum, MD; Rinaldo Bellomo, MD; Gordon S. Doig, PhD; Hiroshi Morimatsu, MD; Stanislao Morgera, MD; Miet Schetz, MD; Ian Tan, MD; Catherine Bouman, MD; Ettiene Macedo, MD; Noel Gibney, MD; Ashita Tolwani, MD; Claudio Ronco, MD; for the Beginning and Ending Supportive Therapy for the Kidney (BEST Kidney) Investigators

JAMA. 2005;294:813-818.

ABSTRACT


Context Although acute renal failure (ARF) is believed to be common in the setting of critical illness and is associated with a high risk of death, little is known about its epidemiology and outcome or how these vary in different regions of the world.

Objectives To determine the period prevalence of ARF in intensive care unit (ICU) patients in multiple countries; to characterize differences in etiology, illness severity, and clinical practice; and to determine the impact of these differences on patient outcomes.

Design, Setting, and Patients Prospective observational study of ICU patients who either were treated with renal replacement therapy (RRT) or fulfilled at least 1 of the predefined criteria for ARF from September 2000 to December 2001 at 54 hospitals in 23 countries.

Main Outcome Measures Occurrence of ARF, factors contributing to etiology, illness severity, treatment, need for renal support after hospital discharge, and hospital mortality.

Results Of 29 269 critically ill patients admitted during the study period, 1738 (5.7%; 95% confidence interval [CI], 5.5%-6.0%) had ARF during their ICU stay, including 1260 who were treated with RRT. The most common contributing factor to ARF was septic shock (47.5%; 95% CI, 45.2%-49.5%). Approximately 30% of patients had preadmission renal dysfunction. Overall hospital mortality was 60.3% (95% CI, 58.0%-62.6%). Dialysis dependence at hospital discharge was 13.8% (95% CI, 11.2%-16.3%) for survivors. Independent risk factors for hospital mortality included use of vasopressors (odds ratio [OR], 1.95; 95% CI, 1.50-2.55; P<.001), mechanical ventilation (OR, 2.11; 95% CI, 1.58-2.82; P<.001), septic shock (OR, 1.36; 95% CI, 1.03-1.79; P = .03), cardiogenic shock (OR, 1.41; 95% CI, 1.05-1.90; P = .02), and hepatorenal syndrome (OR, 1.87; 95% CI, 1.07-3.28; P = .03).

Conclusion In this multinational study, the period prevalence of ARF requiring RRT in the ICU was between 5% and 6% and was associated with a high hospital mortality rate.