Hughes RG, editor. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Apr.
Ronda G. Hughes, Ph.D., M.H.S., R.N., senior health scientist administrator, Agency for Healthcare Research and Quality. E-mail: Ronda.Hughes@ahrq.hhs.gov
The necessity for quality and safety improvement initiatives permeates health care. 1 , 2 Quality health care is defined as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” 3 (p. 1161). According to the Institute of Medicine (IOM) report, To Err Is Human, 4 the majority of medical errors result from faulty systems and processes, not individuals. Processes that are inefficient and variable, changing case mix of patients, health insurance, differences in provider education and experience, and numerous other factors contribute to the complexity of health care. With this in mind, the IOM also asserted that today’s health care industry functions at a lower level than it can and should, and it put forth the following six aims of health care: effective, safe, patient-centered, timely, efficient, and equitable. 2 The aims of effectiveness and safety are targeted through process-of-care measures, assessing whether providers of health care perform processes that have been demonstrated to achieve the desired aims and avoid those processes that are predisposed toward harm. The goals of measuring health care quality are to determine the effects of health care on desired outcomes and to assess the degree to which health care adheres to processes based on scientific evidence or agreed to by professional consensus and is consistent with patient preferences.
Because errors are caused by system or process failures, 5 it is important to adopt various process-improvement techniques to identify inefficiencies, ineffective care, and preventable errors to then influence changes associated with systems. Each of these techniques involves assessing performance and using findings to inform change. This chapter will discuss strategies and tools for quality improvement—including failure modes and effects analysis, Plan-Do-Study-Act, Six Sigma, Lean, and root-cause analysis—that have been used to improve the quality and safety of health care.
Efforts to improve quality need to be measured to demonstrate “whether improvement efforts (1) lead to change in the primary end point in the desired direction, (2) contribute to unintended results in different parts of the system, and (3) require additional efforts to bring a process back into acceptable ranges” 6 (p. 735). The rationale for measuring quality improvement is the belief that good performance reflects good-quality practice, and that comparing performance among providers and organizations will encourage better performance. In the past few years, there has been a surge in measuring and reporting the performance of health care systems and processes. 1 , 7–9 While public reporting of quality performance can be used to identify areas needing improvement and to set benchmarks at the national, State, or other levels, 10 , 11 some providers have been sensitive to comparative performance data being published. 12 Another audience for public reporting, consumers, has had problems interpreting the data in reports and has consequently not used the reports to the extent hoped to make informed decisions for higher-quality care. 13–15
The complexity of health care systems and delivery of services, the unpredictable nature of health care, and the occupational differentiation and interdependence among clinicians and systems 16–19 make measuring quality difficult. One of the challenges in using measures in health care is the attribution variability associated with high-level cognitive reasoning, discretionary decisionmaking, problem-solving, and experiential knowledge. 20–22 Another measurement challenge is whether a near miss could have resulted in harm or whether an adverse event was a rare aberration or likely to recur. 23
The Agency for Healthcare Research and Quality (AHRQ), the National Quality Forum, the Joint Commission, and many other national organizations endorse the use of valid and reliable measures of quality and patient safety to improve health care. Many of these useful measures that can be applied to the different settings of care and care processes can be found at AHRQ’s National Quality Measures Clearinghouse (http://www.qualitymeasures.ahrq.gov) and the National Quality Forum’s Web site (http://www.qualityforum.org). These measures are generally developed through a process including an assessment of the scientific strength of the evidence found in peer-reviewed literature, evaluating the validity and reliability of the measures and sources of data, determining how best to use the measure (e.g., determine if and how risk adjustment is needed), and actually testing the measure. 24 , 25
Measures of quality and safety can track the progress of quality improvement initiatives using external benchmarks. Benchmarking in health care is defined as the continual and collaborative discipline of measuring and comparing the results of key work processes with those of the best performers 26 in evaluating organizational performance. There are two types of benchmarking that can be used to evaluate patient safety and quality performance. Internal benchmarking is used to identify best practices within an organization, to compare best practices within the organization, and to compare current practice over time. The information and data can be plotted on a control chart with statistically derived upper and lower control limits. However, using only internal benchmarking does not necessarily represent the best practices elsewhere. Competitive or external benchmarking involves using comparative data between organizations to judge performance and identify improvements that have proven to be successful in other organizations. Comparative data are available from national organizations, such as AHRQ’s annual National Health Care Quality Report 1 and National Healthcare Disparities Report, 9 as well as several proprietary benchmarking companies or groups (e.g., the American Nurses Association’s National Database of Nursing Quality Indicators).
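To make the control-chart idea concrete, the sketch below computes statistically derived upper and lower control limits for a run of internal performance data. It is a minimal illustration assuming the common Shewhart convention of three-standard-deviation limits; the function name, the multiplier, and the sample medication-error rates are all hypothetical.

```python
import statistics

def control_limits(samples, sigma_multiplier=3.0):
    """Compute the center line and control limits for an individuals chart.

    Assumes the common Shewhart convention of setting limits at
    +/- 3 standard deviations around the mean; real charts often
    estimate dispersion from moving ranges instead.
    """
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return {
        "center": mean,
        "upper_control_limit": mean + sigma_multiplier * sd,
        "lower_control_limit": mean - sigma_multiplier * sd,
    }

# Hypothetical example: monthly medication-error rates per 1,000 doses.
rates = [3.1, 2.8, 3.4, 2.9, 3.0, 3.6, 2.7, 3.2]
limits = control_limits(rates)
# Points outside the limits signal special-cause variation worth investigating.
outliers = [r for r in rates
            if not limits["lower_control_limit"] <= r <= limits["upper_control_limit"]]
print(limits, outliers)
```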
More than 40 years ago, Donabedian 27 proposed measuring the quality of health care by observing its structure, processes, and outcomes. Structure measures assess the accessibility, availability, and quality of resources, such as health insurance, bed capacity of a hospital, and number of nurses with advanced training. Process measures assess the delivery of health care services by clinicians and providers, such as using guidelines for care of diabetic patients. Outcome measures indicate the final result of health care and can be influenced by environmental and behavioral factors. Examples include mortality, patient satisfaction, and improved health status.
Twenty years later, health care leaders borrowed techniques from the work of Deming 28 in rebuilding the manufacturing businesses of post-World War II Japan. Deming, the father of Total Quality Management (TQM), promoted “constancy of purpose” and systematic analysis and measurement of process steps in relation to capacity or outcomes. The TQM model is an organizational approach involving organizational management, teamwork, defined processes, systems thinking, and change to create an environment for improvement. This approach incorporated the view that the entire organization must be committed to quality and improvement to achieve the best results. 29
In health care, continuous quality improvement (CQI) is used interchangeably with TQM. CQI has been used as a means to develop clinical practice 30 and is based on the principle that there is an opportunity for improvement in every process and on every occasion. 31 Many in-hospital quality assurance (QA) programs generally focus on issues identified by regulatory or accreditation organizations, such as checking documentation, reviewing the work of oversight committees, and studying credentialing processes. 32 There are several other strategies that have been proposed for improving clinical practice. For example, Horn and colleagues discussed clinical practice improvement (CPI) as a “multidimensional outcomes methodology that has direct application to the clinical management of individual patients” 33 (p. 160). CPI, an approach led by clinicians that attempts a comprehensive understanding of the complexity of health care delivery, uses a team, determines a purpose, collects data, assesses findings, and then translates those findings into practice changes. From these models, management and clinician commitment and involvement have been found to be essential for the successful implementation of change. 34–36 From other quality improvement strategies, there has been particular emphasis on the need for management to have faith in the project, communicate the purpose, and empower staff. 37
In the past 20 years, quality improvement methods have “generally emphasize[d] the importance of identifying a process with less-than-ideal outcomes, measuring the key performance attributes, using careful analysis to devise a new approach, integrating the redesigned approach with the process, and reassessing performance to determine if the change in process is successful” 38 (p. 9). Besides TQM, other quality improvement strategies have come forth, including the International Organization for Standardization ISO 9000, Zero Defects, Six Sigma, Baldrige, and Toyota Production System/Lean Production. 6 , 39 , 40
Quality improvement is defined “as systematic, data-guided activities designed to bring about immediate improvement in health care delivery in particular settings” 41 (p. 667). A quality improvement strategy is defined as “any intervention aimed at reducing the quality gap for a group of patients representative of those encountered in routine practice” 38 (p. 13). Shojania and colleagues 38 developed a taxonomy of quality improvement strategies (see Table 1), which infers that the choice of the quality improvement strategy and methodology is dependent upon the nature of the quality improvement project. Many other strategies and tools for quality improvement can be accessed at AHRQ’s quality tools Web site (www.qualitytools.ahrq.gov) and patient safety Web site (www.patientsafety.gov).
Table 1. Taxonomy of Quality Improvement Strategies With Examples of Substrategies
Quality improvement projects and strategies differ from research: while research attempts to assess and address problems that will produce generalizable results, quality improvement projects can include small samples, frequent changes in interventions, and adoption of new strategies that appear to be effective. 6 In a review of the literature on the differences between quality improvement and research, Reinhardt and Ray 42 proposed four criteria that distinguish the two: (1) quality improvement applies research into practice, while research develops new interventions; (2) risk to participants is not present in quality improvement, while research could pose risk to participants; (3) the primary audience for quality improvement is the organization, and the information from analyses may be applicable only to that organization, while research is intended to be generalizable to all similar organizations; and (4) data from quality improvement are organization-specific, while research data are derived from multiple organizations.
The lack of scientific health services literature has inhibited the acceptance of quality improvement methods in health care, 43 , 44 but new rigorous studies are emerging. It has been asserted that a quality improvement project can be considered more like research when it involves a change in practice, affects patients and assesses their outcomes, employs randomization or blinding, and exposes patients to additional risks or burdens—all in an effort towards generalizability. 45–47 Regardless of whether the project is considered research, human subjects need to be protected by ensuring respect for participants, securing informed consent, and ensuring scientific value. 41 , 46 , 48
Quality improvement projects and studies aimed at making positive changes in health care processes to effect favorable outcomes can use the Plan-Do-Study-Act (PDSA) model. This is a method that has been widely used by the Institute for Healthcare Improvement for rapid cycle improvement. 31 , 49 One of the unique features of this model is the cyclical nature of impacting and assessing change, most effectively accomplished through small and frequent PDSAs rather than big and slow ones, 50 before changes are made systemwide. 31 , 51
The purpose of PDSA quality improvement efforts is to establish a functional or causal relationship between changes in processes (specifically behaviors and capabilities) and outcomes. Langley and colleagues 51 proposed three questions before using the PDSA cycles: (1) What is the goal of the project? (2) How will it be known whether the goal was reached? and (3) What will be done to reach the goal? The PDSA cycle starts with determining the nature and scope of the problem, what changes can and should be made, a plan for a specific change, who should be involved, what should be measured to understand the impact of change, and where the strategy will be targeted. Change is then implemented and data and information are collected. Results from the implementation study are assessed and interpreted by reviewing several key measurements that indicate success or failure. Lastly, action is taken on the results by implementing the change or beginning the process again. 51
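As a rough illustration of the cyclical logic described above, the sketch below frames rapid-cycle PDSA as a small iteration loop: plan by taking a baseline measurement, do by applying a small-scale change, study by re-measuring, and act by adopting the change or cycling again. The function, the hand-hygiene example, and the fixed per-cycle gain are hypothetical, not part of the Langley model itself.

```python
def pdsa(goal, measure, apply_change, max_cycles=5):
    """Run small, frequent PDSA cycles until the key measurement
    reaches the goal or the cycle budget is exhausted."""
    for cycle in range(1, max_cycles + 1):
        baseline = measure()   # Plan: establish the current value
        apply_change()         # Do: implement the change on a small scale
        result = measure()     # Study: interpret the key measurement
        print(f"cycle {cycle}: {baseline:.2f} -> {result:.2f}")
        if result >= goal:     # Act: adopt systemwide, or begin again
            return True
    return False

# Toy usage: each small-scale change nudges hand-hygiene compliance upward.
state = {"compliance": 0.70}
pdsa(
    goal=0.90,
    measure=lambda: state["compliance"],
    apply_change=lambda: state.update(compliance=state["compliance"] + 0.05),
)
```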
Six Sigma, originally designed as a business strategy, involves improving, designing, and monitoring processes to minimize or eliminate waste while optimizing satisfaction and increasing financial stability. 52 The performance of a process—or the process capability—is used to measure improvement by comparing the baseline process capability (before improvement) with the process capability after piloting potential solutions for quality improvement. 53 There are two primary methods used with Six Sigma. One method inspects process outcome and counts the defects, calculates a defect rate per million, and uses a statistical table to convert defect rate per million to a σ (sigma) metric. This method is applicable to preanalytic and postanalytic processes (a.k.a. pretest and post-test studies). The second method uses estimates of process variation to predict process performance by calculating a σ metric from the defined tolerance limits and the variation observed for the process. This method is suitable for analytic processes in which the precision and accuracy can be determined by experimental procedures.
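Both calculations can be expressed directly. The sketch below assumes the widely used 1.5σ long-term shift convention for converting a defect rate per million into a sigma metric (the published Six Sigma tables embed the same assumption), and a simple tolerance-over-variation ratio for the analytic-process method; the function names and example numbers are illustrative only.

```python
from statistics import NormalDist

def sigma_from_dpmo(dpmo, shift=1.5):
    """Convert defects per million opportunities to a sigma metric.

    Applies the conventional 1.5-sigma long-term shift that standard
    Six Sigma conversion tables assume.
    """
    yield_fraction = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

def sigma_from_variation(tolerance, bias, sd):
    """Sigma metric for an analytic process: how many standard deviations
    fit between observed performance and the allowable tolerance limit."""
    return (tolerance - abs(bias)) / sd

# Hypothetical examples:
print(round(sigma_from_dpmo(3.4), 2))                 # ~6.0 by convention
print(round(sigma_from_variation(10.0, 1.0, 2.0), 2))  # 4.5
```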
One component of Six Sigma uses a five-phased process that is structured, disciplined, and rigorous, known as the define, measure, analyze, improve, and control (DMAIC) approach. 53 , 54 To begin, the project is identified, historical data are reviewed, and the scope of expectations is defined. Next, continuous total quality performance standards are selected, performance objectives are defined, and sources of variability are identified. As the new project is implemented, data are collected to assess how well changes improved the process. To support this analysis, validated measures are developed to determine the capability of the new process.
Six Sigma and PDSA are interrelated. The DMAIC methodology builds on Shewhart’s plan, do, check, and act cycle. 55 The key elements of Six Sigma relate to PDSA as follows: the plan phase of PDSA corresponds to Six Sigma’s define core processes, key customers, and customer requirements; the do phase corresponds to measure performance; the study phase corresponds to analyze; and the act phase corresponds to improve and integrate. 56
Application of the Toyota Production System—used in the manufacturing process of Toyota cars 57 —resulted in what has become known as the Lean Production System or Lean methodology. This methodology overlaps with the Six Sigma methodology, but differs in that Lean is driven by the identification of customer needs and aims to improve processes by removing activities that are non-value-added (a.k.a. waste). Steps in the Lean methodology involve maximizing value-added activities in the best possible sequence to enable continuous operations. 58 This methodology depends on root-cause analysis to investigate errors and then to improve quality and prevent similar errors.
Physicians, nurses, technicians, and managers are increasing the effectiveness of patient care and decreasing costs in pathology laboratories, pharmacies, 59–61 and blood banks 61 by applying the same principles used in the Toyota Production System. Two reviews of projects using Toyota Production System methods reported that health care organizations improved patient safety and the quality of health care by systematically defining the problem; using root-cause analysis; then setting goals, removing ambiguity and workarounds, and clarifying responsibilities. When it came to processes, team members in these projects developed action plans that improved, simplified, and redesigned work processes. 59 , 60 According to Spear, the Toyota Production System method was used to make the “following crystal clear: which patient gets which procedure (output); who does which aspect of the job (responsibility); exactly which signals are used to indicate that the work should begin (connection); and precisely how each step is carried out” 60 (p. 84).
Factors involved in the successful application of the Toyota Production System in health care are eliminating unnecessary daily activities associated with “overcomplicated processes, workarounds, and rework” 59 (p. 234), involving front-line staff throughout the process, and rigorously tracking problems as solutions are tested throughout the problem-solving process.
Root cause analysis (RCA), used extensively in engineering 62 and similar to critical incident technique, 63 is a formalized investigation and problem-solving approach focused on identifying and understanding the underlying causes of an event as well as potential events that were intercepted. The Joint Commission requires RCA to be performed in response to all sentinel events and expects, based on the results of the RCA, the organization to develop and implement an action plan consisting of improvements designed to reduce future risk of events and to monitor the effectiveness of those improvements. 64
RCA is a technique used to identify trends and assess risk that can be used whenever human error is suspected, 65 with the understanding that system factors, rather than individual factors, are likely the root cause of most problems. 2 , 4 A similar procedure is the critical incident technique, in which information is collected after an event occurs on the causes and actions that led to the event. 63
An RCA is a reactive assessment that begins after an event, retrospectively outlining the sequence of events leading to that identified event, charting causal factors, and identifying root causes to completely examine the event. 66 Because RCA is a labor-intensive process, it is ideally conducted by a multidisciplinary team trained in RCA, which can triangulate or corroborate major findings and increase the validity of findings. 67 Taken one step further, the notion of aggregate RCA (used by the Veterans Affairs (VA) Health System) is purported to use staff time efficiently and involves several simultaneous RCAs that focus on assessing trends, rather than an in-depth case assessment. 68
Using a qualitative process, the aim of RCA is to uncover the underlying cause(s) of an error by looking at enabling factors (e.g., lack of education), including latent conditions (e.g., not checking the patient’s ID band) and situational factors (e.g., two patients in the hospital with the same last name) that contributed to or enabled the adverse event (e.g., an adverse drug event). Those involved in the investigation ask a series of key questions, including what happened, why it happened, what were the most proximate factors causing it to happen, why those factors occurred, and what systems and processes underlie those proximate factors. Answers to these questions help identify ineffective safety barriers and causes of problems so similar problems can be prevented in the future. Often, it is important to also consider events that occurred immediately prior to the event in question because other remote factors may have contributed. 68
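One way to picture this questioning sequence is as a walk down a chain of recorded answers, in the spirit of a “five whys” inquiry. Everything in the sketch below (the event, the factors, and the recorded answers) is a hypothetical example for illustration, not a validated RCA instrument.

```python
# A minimal sketch of the RCA questioning sequence as a data walk.
rca = {
    "what_happened": "Patient received another patient's medication",
    "proximate_factors": [
        "ID band not checked before administration",
        "Two patients with the same last name on the unit",
    ],
    "underlying_systems": [
        "No barcode check at the bedside",
        "Look-alike name alerting not enabled in the pharmacy system",
    ],
}

def ask_why(factor, answers, depth=5):
    """Follow a chain of 'why did this occur?' answers (five-whys style)
    until no further answer is recorded or the depth limit is reached."""
    chain = [factor]
    while len(chain) <= depth and chain[-1] in answers:
        chain.append(answers[chain[-1]])
    return chain

# Hypothetical recorded answers to successive 'why?' questions.
whys = {
    "ID band not checked before administration": "Nurse interrupted mid-task",
    "Nurse interrupted mid-task": "No protected time for medication passes",
}
print(ask_why(rca["proximate_factors"][0], whys))
```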
The final step of a traditional RCA is developing recommendations for system and process improvement(s), based on the findings of the investigation. 68 The importance of this step is supported by a review of the literature on root-cause analysis, where the authors conclude that there is little evidence that RCA can improve patient safety by itself. 69 A nontraditional strategy, used by the VA, is aggregate RCA processes, where several simultaneous RCAs are used to examine multiple cases in a single review for certain categories of events. 68 , 70
Due to the breadth of types of adverse events and the large number of root causes of errors, consideration should be given to how to differentiate system from process factors, without focusing on individual blame. The notion has been put forth that it is a truly rare event for errors to be associated with irresponsibility, personal neglect, or intention, 71 a notion supported by the IOM. 4 , 72 Yet efforts to categorize individual errors—such as the Taxonomy of Error Root Cause Analysis of Practice Responsibility (TERCAP), which focuses on “lack of attentiveness, lack of agency/fiduciary concern, inappropriate judgment, lack of intervention on the patient’s behalf, lack of prevention, missed or mistaken MD/healthcare provider’s orders, and documentation error” 73 (p. 512)—may distract the team from investigating systems and process factors that can be modified through subsequent interventions. Even the majority of individual factors can be addressed through education, training, and installing forcing functions that make errors difficult to commit.
Errors will inevitably occur, and the times when errors occur cannot be predicted. Failure modes and effects analysis (FMEA) is an evaluation technique used to identify and eliminate known and/or potential failures, problems, and errors from a system, design, process, and/or service before they actually occur. 74–76 FMEA was developed for use by the U.S. military and has been used by the National Aeronautics and Space Administration (NASA) to predict and evaluate potential failures and unrecognized hazards (e.g., probabilistic occurrences) and to proactively identify steps in a process that could reduce or eliminate future failures. 77 The goal of FMEA is to prevent errors by attempting to identify all the ways a process could fail, estimate the probability and consequences of each failure, and then take action to prevent the potential failures from occurring. In health care, FMEA focuses on the system of care and uses a multidisciplinary team to evaluate a process from a quality improvement perspective.
This method can be used to evaluate alternative processes or procedures as well as to monitor change over time. To monitor change over time, well-defined measures are needed that can provide objective information of the effectiveness of a process. In 2001, the Joint Commission mandated that accredited health care providers conduct proactive risk management activities that identify and predict system weaknesses and adopt changes to minimize patient harm on one or two high-priority topics a year. 78
Developed by the VA’s National Center for Patient Safety, the health failure modes and effects analysis (HFMEA) tool is used for risk assessment. There are five steps in HFMEA: (1) define the topic; (2) assemble the team; (3) develop a process map for the topic, and consecutively number each step and substep of that process; (4) conduct a hazard analysis (e.g., identify cause of failure modes, score each failure mode using the hazard scoring matrix, and work through the decision tree analysis); 79 and (5) develop actions and desired outcomes. In conducting a hazard analysis, it is important to list all possible and potential failure modes for each of the processes, to determine whether the failure modes warrant further action, and to list all causes for each failure mode when the decision is to proceed further. After the hazard analysis, it is important to consider the actions needed to be taken and outcome measures to assess, including describing what will be eliminated or controlled and who will have responsibility for each new action. 79
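The hazard analysis step can be made concrete with a small sketch. The code below assumes the commonly described HFMEA convention of 1-to-4 severity and probability scales whose product is a hazard score, with 8 often cited as the action threshold; the scale labels, threshold, and failure modes shown are illustrative, and a full HFMEA would also work each flagged mode through the decision tree (criticality, existing controls, detectability) before acting.

```python
from dataclasses import dataclass

# Hypothetical 1-4 scales modeled on the HFMEA hazard scoring matrix,
# where the hazard score is severity x probability (maximum 16).
SEVERITY = {"minor": 1, "moderate": 2, "major": 3, "catastrophic": 4}
PROBABILITY = {"remote": 1, "uncommon": 2, "occasional": 3, "frequent": 4}

@dataclass
class FailureMode:
    step: str          # numbered process step or substep from the process map
    description: str
    severity: str
    probability: str

    def hazard_score(self) -> int:
        return SEVERITY[self.severity] * PROBABILITY[self.probability]

def needs_action(fm: FailureMode, threshold: int = 8) -> bool:
    """Flag failure modes whose hazard score meets the action threshold;
    8 is the commonly cited HFMEA cutoff, used here as an assumption."""
    return fm.hazard_score() >= threshold

# Hypothetical failure modes from a specimen-labeling process.
modes = [
    FailureMode("2a", "Wrong patient ID on specimen label", "major", "occasional"),
    FailureMode("3b", "Label printer out of stock", "minor", "frequent"),
]
for fm in modes:
    print(fm.step, fm.hazard_score(), needs_action(fm))
```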
Fifty studies and quality improvement projects were included in this analysis. The findings were categorized by type of quality method employed, including FMEA, RCA, Six Sigma, Lean, and PDSA. Several common themes emerged: (1) what was needed to implement quality improvement strategies, (2) what was learned from evaluating the impact of change interventions, and (3) what is known about using quality improvement tools in health care.
Substantial and strong leadership support, 80–83 involvement, 81 , 84 consistent commitment to continuous quality improvement, 85 , 86 and visibility, 87 both in writing and physically, 86 were important in making significant changes. Substantial commitment from hospital boards was also found to be necessary. 86 , 88 The inevitability of resource demands associated with changing processes required senior leadership to (1) ensure adequate financial resources 87–89 by identifying sources of funds for training and purchasing and testing innovative technologies 90 and equipment; 91 (2) facilitate and enable key players to have the needed time to be actively involved in the change processes, 85 , 88 , 89 providing administrative support; 90 (3) support a time-consuming project by granting enough time for it to work; 86 , 92 and (4) emphasize safety as an organizational priority and reinforce expectations, especially when the process was delayed or results were periodically not realized. 87 It was also asserted that senior leaders needed to understand the impact of high-level decisions on work processes and staff time, 88 especially when efforts were underway to change practice, and that quality improvement needed to be incorporated into systemwide leadership development. 88 Leadership was needed to make patient safety a key aspect of all meetings and strategies, 85 , 86 to create a formal process for identifying annual patient safety goals for the organization, and to hold themselves accountable for patient safety outcomes. 85
Even with strong and committed leadership, some people within the organization may be hesitant to participate in quality improvement efforts because previous attempts to create change were hindered by various system factors, 93 a lack of organization-wide commitment, 94 poor organizational relationships, and ineffective communication. 89 However, the impact of these barriers was found to be lessened if the organization embraced the need for change, 95 changed the culture to enable change, 90 and actively pursued institutionalizing a culture of safety and quality improvement. Yet adopting a nonpunitive culture of change took time, 61 , 90 even to the extent that the legal department in one hospital was engaged in the process to turn the focus to systems, not individual-specific issues. 96 Also, staff members involved in the process felt more at ease with improving processes, particularly when cost savings were realized and when no-layoff policies were put in place to protect job security even as efficiencies were realized. 84
The improvement process needed to engage 97 and involve all stakeholders and gain their understanding that the investment of resources in quality improvement could be recouped with efficiency gains and fewer adverse events. 86 Stakeholders were used to (1) prioritize which safe practices to target by developing a consensus process among stakeholders 86 , 98 around issues that were clinically important, i.e., hazards encountered in everyday practice that would make a substantial impact on patient safety; (2) develop solutions to the problems that required addressing fundamental issues of interdisciplinary communication and teamwork, which were recognized as crucial aspects of a culture of safety; and (3) build upon the success of other hospitals. 86 In an initiative involving a number of rapid-cycle collaboratives, successful collaboratives were found to have used stakeholders to determine the choice of subject, define objectives, define roles and expectations, motivate teams, and use results from data analyses. 86 Additionally, it was important to take into account the different perspectives of stakeholders. 97 Because variation in opinion among stakeholders and team members was expected 99 and buy-in from all stakeholders could have been difficult to achieve, efforts were made to involve stakeholders early in the process, solicit feedback, 100 and gain support for critical changes in the process. 101
Communication and sharing information with stakeholders and staff was critical to specifying the purpose and strategy of the quality initiative; 101 developing open channels of communication across all disciplines and at all levels of leadership/staff, permitting the voicing of concerns and observations throughout the process of creating change; 88 ensuring that patients and families were appropriately included in the dialogue; ensuring that everyone involved felt that he or she was an integral part of the health care team and was responsible for patient safety; sharing lessons learned from root-cause analysis; and capturing attention and soliciting buy-in by sharing patient safety stories with staff and celebrating successes, no matter how small. 85 Yet in trying to keep everyone informed of the process and the data behind decisions, some staff had difficulty accepting system changes made in response to the data. 89
The successful work of these strategies was dependent upon having motivated 80 and empowered teams. There were many advantages to basing the work of the quality improvement strategies on the teamwork of multidisciplinary teams that would review data and lead change. 91 These teams needed to be composed of the right staff, 91 , 92 include peers, 102 engage all of the right stakeholders (ranging from senior managers to staff), and be supported by senior-level management/leadership. 85 , 86 Specific stakeholders (e.g., nurses and physicians) had to be involved 81 and supported to actually make the change, and to be the champions 103 and problem-solvers within departments 59 for the interventions to succeed. Because implementing the quality initiatives required substantial changes in the clinician’s daily work, 86 consideration of the attitude and willingness of front-line staff for making the specific improvements 59 , 88 , 104 was needed.
Other key factors to improvement success were implementing protocols that could be adapted to the patient’s needs 93 and to each unit, based on experience, training, and culture. 88 It was also important to define and test different approaches; different approaches can converge and arrive at the same point. 81 Mechanisms that facilitated staff buy-in included putting the types and causes of errors in the forefront of providers’ minds, making errors visible, 102 involving staff in the process of assessing work and looking for waste, 59 providing insight as to whether the improvement project would be feasible and its impact measurable, 105 and presenting evidence-based changes. 100 Physicians were singled out as the one group of clinicians that needed to lead 106 or be actively involved in changes, 86 especially when physician behaviors could create inefficiencies. 84 In one project, physicians were recruited as champions to help spread the word to other physicians about the critical role of patient safety, to make patient safety a key aspect of all leadership and medical management meetings and strategies. 85
Team leaders and the composition of the team were also important. Team leaders who emphasized efforts offline to help build and improve relationships were found to be necessary for team success. 83 , 93 These teams needed a dedicated team leader who would have a significant amount of time to put into the project. 84 While the leader was not identified in the majority of reports reviewed for this paper, the team on one project was co-chaired by a physician and an administrator. 83 Not only did the type and ability of team leaders affect outcomes, the visibility of the initiative throughout the organization was dependent upon having visible champions. 100 Multidisciplinary teams needed to understand the numerous steps involved in quality improvement and that there were many opportunities for error, which essentially enabled teams to prioritize the critical items to improve within a complex process and took out some of the subjectivity from the analysis. The multidisciplinary structure of teams allowed members to identify each step from their own professional practice perspective, to anticipate and overcome potential barriers, to generate diverse ideas, and to engage in good discussion and deliberations, which together ultimately promoted team building. 100 , 107 In two of the studies, FMEA/HFMEA was found to minimize group biases by benefiting from the diversity within the multidisciplinary composition of the team and enabling the team to focus on a structured outline of the goals that needed to be accomplished. 107 , 108
Teams needed to be prepared and enabled to meet the demands of the quality initiatives with ongoing education, weekly debriefings, review of problems solved and principles applied, 84 and ongoing monitoring and feedback opportunities. 92 , 95 Education and training of staff 80 , 95 , 101 , 104 and leadership 80 about the current problem, quality improvement tools, the planned change in practice intervention, and updates as the project progressed were key strategies. 92 Training was an ongoing process 91 that needed to focus on skill deficits 82 and needed to be revised as lessons were learned and data were analyzed during the implementation of the project. 109 The assumption could not be made that senior staff or leadership would not need training. 105 Furthermore, if the team had no experience with the quality tools or with successfully creating change, an additional resource could have been a consultant or someone to facilitate the advanced knowledge involved in quality improvement techniques. 106 Another consideration was using a model that intervened at the hospital-community interface, coupled with an education program. 97
The influence of teamwork processes enabled those within the team to improve relationships across departments. 89 Particular attention needed to be given to effective team building, 110 actively following the impact of using the rapid-cycle (PDSA) model, meeting frequently, and monitoring progress using outcome data analysis at least on a monthly basis. 86 Effective teamwork and communication, information transfer, coordination among multiple hospital departments and caregivers, and changes to hospital organization culture were considered essential elements of team effectiveness. 86 Yet a team’s impact was dampened when members had difficulty fully engaging in teamwork because of competing workloads (e.g., working double shifts). 97 Better understanding of each other’s role is an important project outcome and provides a basis for continuing the development of other practices to improve outcomes. 97 The work of teams was motivated through continual sharing of progress and success and celebration of achievements. 87
Teamwork can have many advantages, but only a few were discussed in the reports reviewed. Teams were seen as being able to increase the scope of knowledge, improve communication across disciplines, and facilitate learning about the problem. 111 Teams were also found to be proactive, 91 to integrate tools that improve both technical processes and organizational relationships, 83 and to work together to understand the current situation, define the problem, pathways, tasks, and connections, as well as to develop a multidisciplinary action plan. 59 But teamwork was not necessarily an easy process. Group work was seen as difficult for some and time consuming, 111 and problems arose when everyone wanted their way, 97 which delayed convergence toward a consensus on actions. Team members needed to learn how to work with a group and deal with group dynamics, confronting peers, resolving conflict, and addressing detrimental behaviors. 111
As suggested by Berwick, 112 the leaders of the quality improvement initiatives in this review found that successful initiatives needed to simplify; 96 , 104 standardize; 104 stratify to determine effects; improve auditory communication patterns; support communication against the authority gradient; 96 use defaults properly; automate cautiously; 96 use affordance and natural mapping (e.g., design processes and equipment so that the easiest thing to do is the right thing to do); respect limits of vigilance and attention; 96 and encourage reporting of near hits, errors, and hazardous conditions. 96 Through the revision and standardization of policies and procedures, many of these initiatives were able to effectively realize the benefit of making the new process easier than the old and decrease the effect of human error associated with limited vigilance and attention. 78 , 80–82 , 90–92 , 94 , 96 , 102 , 103 , 113 , 114
Simplification and standardization were found to be effective as a forcing function by decreasing reliance on individualized decisionmaking. Several initiatives standardized medication ordering and administration protocols, 78 , 87 , 101 , 103 , 106–108 , 109 , 114–116 realizing improvements in patient outcomes, nurse efficiency, and effectiveness. 103 , 106 , 108 , 109 , 114–116 One initiative used a standardized form for blood product ordering. 94 Four initiatives improved pain assessment and management by using standardized metrics and assessment tools. 80 , 93 , 100 , 117 In all of these initiatives, simplification and standardization were effective strategies.
Related to simplification and standardization is the potential benefit of using information technology to implement checks, defaults, and automation to improve quality and reduce errors, in large part by embedding forcing functions to remove the possibility of errors. 96 , 106 The effects of human error could be mitigated by using necessary redundancy, such as double-checking for certain types of errors; this was seen as engaging the knowledge and abilities of two skilled practitioners 61 , 101 and was used successfully to reduce errors associated with dosing. 78 Information technology was successfully used to (1) decrease the opportunity for human error through automation; 61 (2) standardize medication concentrations 78 and dosing using computer-enabled calculations, 115 , 116 standardized protocols, 101 and order clarity; 116 (3) assist caregivers in providing quality care using alerts and reminders; (4) improve medication safety (e.g., implementing bar coding and computerized provider order entry); and (5) track performance through database integration and indicator monitoring. Often workflow and procedures needed to be revised to keep pace with technology. 78 Using technology implied that organizations were committed to investing in technology to enable improvement, 85 but for two initiatives, the lack of adequate resources for data collection impacted analysis and evaluation of the initiative. 93 , 97
Data and information were needed to understand the root causes of errors and near errors, 99 to understand the magnitude of adverse events, 106 to track and monitor performance, 84 , 118 and to assess the impact of the initiatives. 61 Reporting of near misses, errors, and hazardous conditions needs to be encouraged. 96 In part, this is because error reporting is generally low, is associated with organizational culture, 106 and can be biased, which will taint results. 102 Organizations not prioritizing reporting or not strongly emphasizing a culture of safety may have the tendency to not report errors that harm patients or near misses (see Chapter 35, “Evidence Reporting and Disclosure”). Using and analyzing data was viewed as critical, yet some team members and staff may have benefited from education on how to effectively analyze and display findings. 106 Giving staff feedback through a transparent process 39 of reporting findings 82 was viewed as a useful trigger that brought patient safety to the forefront of the hospital. 107 It follows that not having data, whether because they were not reported or not collected, made statistical analysis of the impact of the initiative 115 or assessment of its cost-benefit ratio impossible. 108 As such, multi-organizational collaborations should use a common database. 98
The meaning of data can be better understood by using measures and benchmarks. Repeated measurements were found to be useful for monitoring progress, 118 but only when there was a clear metric for measuring the degree of success. 83 Measures could also be used as a strategy to involve more clinicians and deepen their interest, especially physicians. Using objective, broader, and better measures was viewed as being important for marking progress, and provided a basis for “a call to action” and celebration. 106 When measures of care processes were used, it was asserted that there was a need to demonstrate the relationship between specific changes to care processes and outcomes. 61
When multiple measures were used, along with better documentation of care, it was easier to assess the impact of the initiative on patient outcomes. 93 Investigators from one initiative put forth the notion that hospital administrators should encourage more evaluations of initiatives and that the evaluations should focus on comprehensive models that assess patient outcomes, patient satisfaction, and cost effectiveness. 114 The assessment of outcomes can be enhanced by setting realistic goals, not unrealistic goals such as 100 percent change, 119 and by comparing organizational results to recognized State, regional, and national benchmarks. 61 , 88
The cost of the initiative was viewed as an important factor in the potential for improvement, even when the adverse effects of current processes were considered as necessitating rapid change. 106 Because of this, it is important to implement changes that are readily feasible 106 and can be implemented with minimal disruption of practice activities. 99 It is also important to consider the potential of replicating the initiative in other units or at other sites. 99 One strategy to improve the chances of replication is to standardize processes, which will most likely incur some cost. 106 In some respects, the faster small problems were resolved, the faster improvements could be replicated throughout the entire system. 84 , 106 Recommendations that did not incur costs or had low costs and could be demonstrated to be effective were implemented expeditiously. 93 , 107 A couple of investigators stated that their interventions decreased costs and patients’ length of stay, 103 but did not present any data to verify those statements. It was also purported that the costs associated with change will be recouped either in return on investment or in reduced patient risk (and thus reduced liability costs). 61
Ensuring that those implementing the initiative receive education is critical. There were several examples of this. Two initiatives that targeted pain management found that educating staff on pain management guidelines and protocols for improving chronic pain assessment and management improved staff understanding, assessment and documentation, patient and family satisfaction, and pain management. 80 , 93 Another initiative educated all staff nurses on intravenous (IV) site care and assessment, as well as assessment of central lines, and realized improved patient satisfaction and reduced complications and costs. 109
Despite the benefits afforded by the initiatives, there were many challenges that were identified in implementing the various initiatives:
- Lack of time and resources made it difficult to implement the initiative well. 82
- Some physicians would not accept the new protocol and thwarted implementation until they had confidence in the tool. 103