
Evaluating Reliability of Assessments in Nursing Documentation

by

Karen A. Monsen, PhD, RN;
Amy B. Lytton, MS, RN;
Starr Ferrari, MS, RN, CNM;
Katie M. Halder, MS, RN, FNP;
David M. Radosevich, PhD, RN;
Madeleine J. Kerr, PhD, RN;
Susan M. Mitchell, MPH;
and Joan K. Brandt, PhD, RN

This article was made possible by an educational grant from

Chamberlain College of Nursing

CITATION

Monsen, K., Lytton, A., Ferrari, S., Halder, K., Radosevich, D., Kerr, M., Mitchell, S., & Brandt, J. (October 2011). Evaluating reliability of assessments in nursing documentation. Online Journal of Nursing Informatics (OJNI), 15(3). Available at http://ojni.org/issues/?p=899

ABSTRACT

Clinical documentation data are increasingly being used for program evaluation and research, and methods for verifying inter-rater reliability are needed. The purpose of this study was to test a panel-of-experts approach for verifying public health nurse (PHN) knowledge, behavior, and status scores for the Income, Mental health, and Substance use problems within a convenience sample of 100 PHN client files. The number of instances of agreement between raters across all problems and outcomes averaged 42.0 (expert pairs), 21.3 (all three experts), and 7.8 (three experts and the agency). Intraclass correlation coefficients ranged from 0.35 to 0.63, indicating that inter-rater reliability was not acceptable, even among the experts. Post-processing analysis suggested that insufficient information was available in the files to substantiate scores. It is possible that this method of verifying data reliability could be successful if implemented with procedures specifying that assessments must be substantiated by free text or structured data. There is a continued need for efficient and effective methods to document clinical data reliability.

Introduction

Nursing must be able to describe practice through documentation of interventions, and demonstrate how nursing interventions affect client outcomes (Westra, Delaney, Konicek, & Keenan, 2008). Large data sets from nursing documentation in electronic health records (EHRs) are becoming available for such intervention effectiveness research (Monsen et al., 2010; Monsen, Radosevich, Kerr, & Fulkerson, 2011). However, as with all observational data, data quality issues must be addressed. Reliable use of a standardized interface terminology assures accuracy and consistency of the data (Martin, 2005; McDaniel, 1994; Minnesota Omaha System Users Group, 2011). Additionally, cost and time constraints may prohibit practice agencies from verifying data reliability (Monsen & Martin, 2002). New methods are needed to efficiently verify the reliability of practice-generated nursing data for purposes of research and program evaluation. The purpose of this study was to test a proposed panel-of-experts approach for verifying the reliability of nursing assessments using a de-identified convenience sample of client records from a Midwest public health nursing (PHN) agency.

Background

As administrators and researchers increasingly turn to standardized clinical data to help inform health care quality questions, the importance of data quality becomes paramount (McDaniel, 1994; Westra et al., 2008). Few reports of the reliability of clinical nursing documentation are available in the literature (Bjorvell, Thorell-Ekstrand, & Wredling, 2000; Keenan et al., 2003; Muller-Staub et al., 2008; Nies Albrecht, 1991). In a seminal study of inter-rater reliability using standardized terminologies, investigators noted challenges with user subjectivity and misapplication of assessment criteria. Suggestions for addressing these challenges included increasing item clarity, dropping unreliable items, and improving user skills (McDaniel, 1994). In research studies, the most common and rigorous form of inter-rater reliability verification was to use trained research assistants to conduct joint visits with nurses, so that the nurse and the research assistant observed the same scenario (Martin & Scheet, 1992). Researchers used this method extensively during the development of the Omaha System Problem Rating Scale for Outcomes knowledge, behavior, and status scales. A specially trained research assistant accompanied nurses on 97 visits and compared independent knowledge, behavior, and status scores following the visits. The research assistant and nurse scores were analyzed for agreement using a coefficient gamma test and were found to agree significantly.

Interface terminologies are standardized terminologies that enable practitioners to document assessments and services within the EHR. Nursing scholars are leaders in the development of interface terminologies. The American Nurses Association recognizes 12 standardized nursing terminologies and minimum data sets to support documentation in the EHR (ANA, 2010). Of these, the Omaha System is an interface terminology frequently used in community care settings in the United States and internationally (Omaha System, 2011; Martin, 2005).

The Omaha System is a comprehensive interface terminology that broadly describes health. It consists of three relational components: Problem Classification Scheme, Intervention Scheme, and Problem Rating Scale for Outcomes (Martin, 2005). Omaha System data have been employed extensively in health services research, including home visiting evaluation (Monsen et al., 2006; Monsen et al., 2010; Monsen et al., 2011).

The Problem Classification Scheme is a comprehensive, holistic assessment tool. It consists of 42 concepts (problems) organized in four Domains: Environmental, Psychological, Physiological, and Health related behaviors. Each problem has a definition and unique signs and symptoms that describe client health concerns (Martin, 2005).

The Intervention Scheme is a multi-axial, hierarchical, relational classification that describes problem-specific interventions. It consists of terms at four levels: problem, category, target, and care description. Problems (n=42) are the concepts of the Problem Classification Scheme. Categories (n=4) are the action terms: teaching, guidance, and counseling; treatments and procedures; case management; and surveillance. Targets (n=75) are defined terms that provide additional information about the intervention. Care descriptions are not defined, but the suggested terms may be used to report typical practice interventions and may be customized to describe unique aspects of care. There are 12,600 possible problem-category-target intervention combinations (42 problems × 4 categories × 75 targets) (Martin, 2005).
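The combination count given above is straightforward arithmetic; a minimal Python illustration, using only the counts stated in the preceding paragraph:

```python
# Possible Omaha System problem-category-target intervention combinations,
# from the counts in the text: 42 problems, 4 categories, 75 targets.
n_problems, n_categories, n_targets = 42, 4, 75
print(n_problems * n_categories * n_targets)  # 12600
```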

The Problem Rating Scale for Outcomes is a problem-specific outcome measurement instrument. It consists of three five-point Likert-type ordinal rating scales, one each for the concepts of knowledge, behavior, and status (KBS). Similar to the Intervention Scheme, the Problem Rating Scale for Outcomes is used with the Problem Classification Scheme, permitting the assessment of client knowledge, behavior, and status for every Omaha System problem addressed with a client. Scores on each scale range from 1 (most negative) to 5 (most positive) (Martin, 2005). This study focuses specifically on the Problem Rating Scale for Outcomes, a comprehensive, systematic evaluation framework designed to measure client progress relative to each problem in the three domains of knowledge, behavior, and status (Martin, 2005). Definitions of the Problem Rating Scale for Outcomes are provided in Table 1.

TABLE 1 Omaha System Problem Rating Scale for Outcomes (Omaha System, 2011)

Knowledge (ability of the client to remember and interpret information):
1 = No knowledge; 2 = Minimal knowledge; 3 = Basic knowledge; 4 = Adequate knowledge; 5 = Superior knowledge

Behavior (observable responses, actions, or activities of the client fitting the occasion or purpose):
1 = Not appropriate behavior; 2 = Rarely appropriate behavior; 3 = Inconsistently appropriate behavior; 4 = Usually appropriate behavior; 5 = Consistently appropriate behavior

Status (condition of the client in relation to objective and subjective defining characteristics):
1 = Extreme signs/symptoms; 2 = Severe signs/symptoms; 3 = Moderate signs/symptoms; 4 = Minimal signs/symptoms; 5 = No signs/symptoms

The Omaha System web site maintains an up-to-date list of standards organizations that have officially recognized, integrated, been mapped with, or have other formal relationships with the Omaha System, including the American Nurses Association (ANA); Healthcare Information Technology Standards Panel (HITSP); the National Library of Medicine's Metathesaurus; CINAHL; ABC Codes; NIDSEC; Logical Observation Identifiers, Names, and Codes (LOINC®); SNOMED CT®; Health Level Seven (HL7®); the International Organization for Standardization (ISO); and the International Classification of Nursing Practice (ICNP®) (Omaha System, 2011).

The Omaha System was developed through four federally funded research projects between 1975 and 1993. These extensive studies documented the validity, reliability, and usability of the Omaha System as an instrument for clinical documentation (Martin & Scheet, 1992; Martin, Norris, & Leak, 1999; Martin, 2005). These characteristics establish the Omaha System as a robust instrument that can be used with confidence. However, use of a valid, reliable instrument is not sufficient to ensure data quality; it is also critical to verify that Omaha System documentation is reliable, both with respect to the original instrument and between documenters (Martin & Scheet, 1992; Monsen & Martin, 2002; Monsen et al., 2006; Monsen et al., 2009; Monsen et al., 2010).

The Omaha System community of practice has actively pursued data quality for many years through the use of shared inter-rater reliability materials, evidence-based pathways, user group meetings, and other activities (Monsen & Martin, 2002; Monsen et al., 2006; Monsen et al., 2010; Minnesota Omaha System Users Group, 2011). In a data- and practice-quality project for high-risk maternal clients, eight problems were chosen as a structured assessment tool across agencies (Monsen et al., 2006). Table 2 provides the names and definitions of these problems. Participants in this project developed a supplemental rating guide to support Problem Rating Scale for Outcomes rating reliability, augmenting the definitions and examples provided in the Omaha System book. Table 3 is an example of a rating guide supplement for the Mental health problem (Minnesota Omaha System Users Group, 2011). The guide is used by PHNs and other practitioners in the United States and internationally to support Omaha System outcome data reliability for purposes of program evaluation and research (Monsen et al., 2006; Monsen et al., 2010). Agencies have used the rating guides together with case studies as inter-rater reliability exercises in team meetings (Minnesota Omaha System Users Group, 2011). However, none of the methods currently being used in practice settings have been evaluated, and therefore the effectiveness of these methods is not known.

TABLE 2 Selected Omaha System Problems Used As Maternal Health Indicators In a Standardized Admission Assessment (Monsen et al., 2006)

Problem

Definition

Abuse Child or adult subjected to non-accidental physical, emotional, or sexual violence or injury (Martin, 2005, p. 219)
Antepartum/postpartum Before or after parturition (Martin & Scheet, 1992, p. 212)
Caretaking/parenting Providing support, nurturance, stimulation, and physical care for dependent child or adult (Martin, 2005, p. 208)
Family planning Practices designed to plan and space pregnancy within the context of values, attitudes, and beliefs (Martin, 2005, p. 343)
Income Money from wages, pensions, subsidies, interest, dividends, or other sources available for living and health care expenses (Martin, 2005, p. 169)
Mental health Development and use of mental/emotional abilities to adjust to situations, interact with others, and engage in activities (Martin, 2005, p. 212)
Residence Living area (Martin, 2005, p. 175)
Substance use Consumption of medicines, recreational drugs, or other materials likely to cause mood changes and/or psychological/physical dependence, illness, and disease (Martin, 2005, p. 337)

TABLE 3 KBS Rating Guide Supplement for the Mental Health Problem


The purpose of this study was to test a proposed panel-of-experts approach for verifying the reliability of nursing assessments. Specific aims were to: (1) evaluate agreement between experts for nine outcomes (Omaha System knowledge, behavior, and status scores) at discharge for three common problems (Income, Mental health, and Substance use); and (2) compare the three experts’ scores to the scores in the agency record.

Methods

This reliability validation study employed a retrospective cohort design to verify inter-rater agreement of scores in agency records using a panel-of-experts approach. Approval was obtained from the University of Minnesota Institutional Review Board and the local public health department director. The study employed a convenience sample of records from clients served in a family home visiting program by PHNs. Inclusion criteria were: (a) admitted and received at least three visits between December 2003 and November 2008; (b) documentation for the Income, Mental health, and Substance use problems; and (c) baseline ratings of four or less for all Income, Mental health, and Substance use problems. The rationale for requiring at least three visits was to increase the likelihood of client change following nursing intervention. The rationale for requiring all three problems in each record was to streamline the data collection process by minimizing the number of records reviewed by the experts. The rationale for selecting the Income, Mental health, and Substance use problems was that either structured or free text data were likely to be available regarding these problems, due to their quantifiable and observable aspects. That is, data related to requirements for supplemental income programs, use of mental health services, and/or amount and type of substance use may have been documented by PHNs. For example, quantifiable structured data might include a pick-list value for number of cigarettes per day, and free text documentation might include nurse observations of client behaviors during the visit. The rationale for requiring baseline status ratings of four or less was to increase the likelihood of documented assessment related to client status; a status rating of five indicates no signs/symptoms, and such ratings are common among new parents receiving prevention services. Of 3,794 total clients in the database, 100 clients met the inclusion criteria and were included in the analysis (a selection sketch follows Table 4). All clients were female. Additional characteristics of the sample and the services they received are reported in Table 4.

TABLE 4 Characteristics of the Sample and Services Received by Clients

                                   Minimum   Maximum   Mean   SD
Age at admission in years             15        35     20.9    5.1
Number of visits                       3        60     11.6   10.1
Length of care episode in months       1        48     13.1    9.1

Race/ethnicity                     N and %
Caucasian                             40
African American                      29
Native American                       11
Hispanic/Latino                        9
Asian                                  6
Other/Unknown                          5
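As an illustration of the inclusion-criteria screening described above, here is a minimal pandas sketch. The client-level table and its column names are hypothetical; the study's actual extraction procedure is not described at this level of detail.

```python
import pandas as pd

# Hypothetical client-level export of the agency database; all column
# names below are invented for illustration only.
clients = pd.read_csv("clients.csv")

eligible = clients[
    (clients["visit_count"] >= 3)                      # (a) at least three visits
                                                       #     (date range omitted here)
    & clients["has_income_problem"]                    # (b) all three problems
    & clients["has_mental_health_problem"]             #     documented in the record
    & clients["has_substance_use_problem"]
    & (clients["income_baseline_rating"] <= 4)         # (c) baseline ratings of four
    & (clients["mental_health_baseline_rating"] <= 4)  #     or less for all three
    & (clients["substance_use_baseline_rating"] <= 4)  #     problems
]

print(len(eligible))  # in the study, 100 of 3,794 clients were eligible
```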

The following sections of the client record were included in the file reviewed by the experts: visit reports (up to 10 per client; if a client had more than 10, the 9 most recent plus the admission visit report); the discharge summary; and the assessment history report. One portable document format (PDF) file was created for each client. Redaction of identifying information, staff names, and KBS ratings was completed automatically using Adobe Acrobat's redaction feature. After redaction, the files were visually inspected to verify that the automatic redaction was successful. All redacted files were stored in a secure conference room at the agency on password-protected agency computers.

The panel of experts consisted of three nurses (one faculty researcher, Z, and two graduate students, X and Y), all with advanced training in use of the Problem Rating Scale for Outcomes and two with extensive experience using it in practice and research (Y, 8 years; Z, 12 years). The experts were given access to the agency computers for the data collection period. Based on the structured and free-text documentation in the redacted records, the experts independently assigned knowledge, behavior, and status scores for the Income, Mental health, and Substance use problems at the time of client discharge. The panel used the Omaha System book (Martin, 2005) and the Omaha System supplemental rating guide (Minnesota Omaha System Users Group, 2011) to support rating decisions. These same decision supports were available to the local public health agency PHNs who generated the data for the study.

Instances of agreement between raters for nine outcomes (Omaha System knowledge, behavior, and status scores at discharge for the Income, Mental health, and Substance use problems) were calculated for expert pairs, for all three experts, and for the experts together with the agency. The inter-rater reliability of the scores was evaluated using intraclass correlation coefficients (ICC). The ICC measures the degree of agreement between multiple observations of the same problems and reflects the percentage of score variance attributable to different sources. ICC values greater than 0.7 are considered to represent acceptable inter-rater agreement (Streiner & Norman, 2008). The data collection period was three consecutive days. At the end of the third day, a post-processing meeting was convened to review the data collection process and challenges.
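The article does not specify the exact computational procedure; the Python sketch below shows one common way to obtain agreement counts and ICC values from long-format rating data, here using the pingouin library. The miniature data set and variable names are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format discharge scores for one outcome
# (e.g., Income knowledge); the study had 100 clients per outcome.
scores = pd.DataFrame({
    "client": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rater":  ["X", "Y", "Z"] * 3,
    "score":  [3, 3, 4, 2, 3, 2, 4, 4, 4],
})

# Instances of exact agreement for one expert pair.
wide = scores.pivot(index="client", columns="rater", values="score")
print("X-Y agreements:", (wide["X"] == wide["Y"]).sum())

# Intraclass correlation coefficients across all raters; values
# above 0.7 are conventionally considered acceptable agreement.
icc = pg.intraclass_corr(data=scores, targets="client",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```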

Results

For Aim 1, instances of agreement among expert pairs ranged from 24 (Mental health status, experts X and Y) to 58 (Income knowledge, experts X and Z). Agreement among all three experts ranged from 6 (Mental health status) to 32 (Income knowledge). Measures of inter-rater reliability (ICC) ranged from 0.40 (Mental health knowledge) to 0.63 (Income behavior) (see Table 6). For Aim 2, instances of agreement ranged from 0 (Mental health behavior and status) to 14 (Income knowledge) (see Table 5). Measures of inter-rater reliability (ICC) ranged from 0.35 (Income and Substance use knowledge) to 0.56 (Income behavior) (see Table 6). In the post-processing meeting, the three experts reported unanimously that for most of the client records, there was very little information (free text or structured data) to substantiate knowledge, behavior, and status scores.

TABLE 5 Number of Instances of Agreement between Raters

                     2 agree (N = 100)      3 agree (N = 100)   4 agree (N = 97*)
Outcome              X-Y    X-Z    Y-Z      X, Y, Z             X, Y, Z & Agency
Income K              48     58     54        32                  14
Income B              43     53     50        28                   9
Income S              42     46     44        23                   7
Mental health K       37     55     40        22                  10
Mental health B       31     36     35        14                   0
Mental health S       24     16     49         6                   0
Substance use K       47     39     38        18                   9
Substance use B       45     40     47        26                   8
Substance use S       43     38     37        23                  13
Mean                  40    42.3   43.8      21.3                 7.8

K = knowledge, B = behavior, S = status
X = expert rater 1, Y = expert rater 2, Z = expert rater 3, PHN = public health nurse
* N = 97 because 3 records had missing PHN ratings

TABLE 6 Measure of Inter-Rater Reliability using the Intraclass Correlation Coefficient (ICC)

                          Experts   Experts and Agency
Income
  Knowledge                0.44       0.35
  Behavior                 0.63       0.56
  Status                   0.41       0.42
Mental health
  Knowledge                0.40       0.38
  Behavior                 0.49       0.53
  Status                   0.51       0.44
Substance use
  Knowledge                0.44       0.35
  Behavior                 0.58       0.45
  Status                   0.49       0.49

Discussion

In this study, a panel-of-experts method was investigated for verifying the reliability of public health nursing data. Results showed very low instances of agreement among expert and agency scores for all outcomes, and all ICC values for all problems and outcomes failed to meet the threshold of 0.7. These results indicate that inter-rater reliability was not acceptable, even among the experts, suggesting that the panel-of-experts method may not be useful as applied in this study.

There were several limitations of the study design. First, the design assumed that PHNs documented additional free text or structured data to support their assessments. However, the experts' comments revealed that insufficient information was available in the chart to enable the experts to substantiate the scores in agency records. This absence of data related to the scores in the agency records was unexpected, and appears to be a major factor in the low inter-rater reliability findings. In essence, the experts guessed at ratings, and the guesses resulted in poor reliability. In addition, the study design assumed that the nursing assessments were accurate due to the documentation quality supports that were in place for PHNs (Monsen et al., 2006). Finally, the design assumed that it would not be necessary to triangulate the findings of the expert panel with the PHN documentation. This assumption was based on the consistent use of the same documentation supports by PHNs and the expert panel, and was also necessitated by the goal of using a method that could realistically be employed by any agency. Triangulation methods such as joint visits with a PHN would greatly add to the costs of verifying reliability for agencies.

The fact that knowledge, behavior, and status scores were not substantiated by other documentation indicates that the scores served as the PHN's language for documenting client acuity and risk, and were an integral part of the client record. This finding demonstrates the tension between comprehensive charting and efficient use of PHN time. These findings were shared with agency administrators, who agreed that use of structured knowledge, behavior, and status ratings as client discharge documentation may be reasonable for low-risk clients. For example, if the baseline Mental health problem is rated 4 in knowledge, 4 in behavior, and 5 in status, these ratings document that the client has adequate knowledge, usually appropriate behavior, and no signs/symptoms related to the Mental health problem. For quality assurance purposes, the agency has verification that the PHN evaluated the Mental health problem and that the client is low risk in this area. However, discharge documentation for high-risk client situations may require more information to support knowledge, behavior, and status scores. For example, if a PHN assessed the Mental health problem as 2 in knowledge (minimal knowledge), 2 in behavior (rarely appropriate behavior), and 2 in status (severe signs/symptoms), additional free-text or structured documentation regarding the client's signs/symptoms, support system, access to appropriate care, and safety plan would be critical for the safety of the client and for the legal protection of the PHN and the agency.

The goal of verifying inter-rater reliability of scores in agency records using a panel-of-experts approach was not reached. However, valuable information was gained on the use of knowledge, behavior, and status scores as an essential part of PHN documentation, and on the ongoing need for an efficient, effective method of verifying inter-rater reliability. It is possible that this method of verifying data reliability could be successful if implemented with procedures specifying that assessments must be substantiated by free text or structured data. Agency documentation standards for supplemental documentation with knowledge, behavior, and status ratings could be developed and implemented, and PHN adherence to these standards could be monitored. However, these recommendations are problematic from an agency perspective because fiscal constraints and demanding caseloads are drivers for minimizing documentation time and maximizing service efficiency. Demands for documentation quality compete with demands for fiscally efficient care delivery systems, and the tension between these demands is at the crux of this issue. Solutions are needed that leverage technology within the EHR to support documentation quality. For example, inter-rater reliability could be supported by incorporating KBS rating guidelines within the electronic health record (Minnesota Omaha System Users Group, 2011). In addition, it may be possible to compute KBS ratings using algorithms based on other data entered into the EHR, as sketched below. It is imperative that agencies, software vendors, and practitioners work together to create solutions that improve both data and practice quality.
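As a purely speculative illustration of the algorithmic idea above, the Python sketch below derives a status rating from a count of documented signs/symptoms. The mapping, thresholds, and field name are invented for illustration and are not part of the Omaha System or any agency's EHR.

```python
# Speculative sketch: derive an Omaha System status rating (1-5) from a
# hypothetical structured count of documented signs/symptoms. The scale
# direction follows Table 1 (5 = no signs/symptoms, 1 = extreme); the
# thresholds below are invented for illustration only.
def suggested_status_rating(num_signs_symptoms: int) -> int:
    if num_signs_symptoms == 0:
        return 5   # no signs/symptoms
    if num_signs_symptoms == 1:
        return 4   # minimal signs/symptoms
    if num_signs_symptoms <= 3:
        return 3   # moderate signs/symptoms
    if num_signs_symptoms <= 5:
        return 2   # severe signs/symptoms
    return 1       # extreme signs/symptoms


print(suggested_status_rating(2))  # 3 (moderate signs/symptoms)
```

A computed suggestion like this would serve as decision support for the PHN, not as a replacement for clinical judgment; the documented rating would still be reviewed and confirmed by the nurse.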

Conclusion

In this study, a panel-of-experts approach for verifying KBS rating reliability was tested. Three experts conducted a blinded record review to verify reliability of clinical documentation ratings based on supporting information available within the records. Very low agreement was found between expert and agency ratings, and thus, this method is not recommended for verifying reliability of practice-generated data unless implemented with clear procedures and expectations. This study highlights the challenges of verifying data quality, and supports the need for further efforts to develop efficient, effective inter-rater reliability verification processes.

Author Note

We would like to thank the public health nurses of St. Paul-Ramsey County Minnesota Public Health Department and acknowledge their commitment to excellence in family home visiting care.

References

Bjorvell, C., Thorell-Ekstrand, I., & Wredling, R. (2000). Development of an audit instrument for nursing care plans in the patient record. Quality in Health Care, 9, 6-13.

Keenan, G., Stocker, J., Barkauskas, V., Johnson, M., Maas, M., Moorhead, S., & Reed, D.  (2003). Assessing the reliability, validity, and sensitivity of nursing outcome classification in home care settings.  Journal of Nursing Measurement, 11(2), 135-155.

Martin, K. S. (2005). The Omaha System: A key to practice, documentation, and information management (Reprinted 2nd ed.). Omaha, NE: Health Connections Press.

Martin, K. S., & Scheet, N. J. (1992). Omaha System: A pocket guide for community health nursing. Philadelphia PA: Saunders.

Martin, K. S., & Norris, J. (1996).  The Omaha System: A model for describing practice.  Holistic Nursing Practice, 11(1), 75-83.

Martin, K. S., Norris, J., & Leak, G. K. (1999).  Psychometric analysis of the problem rating scale for outcomes.  Outcomes Management for Nursing Practice, 3(1), 20-25.

McDaniel, A. M. (1994).  Using generalizability theory for the estimation of reliability of a patient classification system.  Journal of Nursing Measurement, 2(1), 49-62.

Minnesota Department of Health, Public Health Nursing Section. (2001). Public health interventions: Applications for public health nursing practice. St. Paul, MN: Minnesota Department of Health.

Minnesota Omaha System Users Group. (2011). Minnesota Omaha System Users Group. Retrieved from www.omahasystemmn.org

Monsen, K. A., Fulkerson, J. A., Lytton, A. B., Taft, L. L., Schwichtenberg, L. D., & Martin, K. S. (2010). Comparing maternal child health problems and outcomes across public health nursing agencies. Maternal and Child Health Journal, 14, 412-421.

Monsen, K. A., Radosevich, D. M., Kerr, M. J., & Fulkerson, J. A. (2011). Public health nurses tailor home visiting interventions. Public Health Nursing, 28, 119–128. doi: 10.1111/j.1525-1446.2010.00911.x

Monsen, K. A., Fitzsimmons, L. L., Lescenski, B. A., Lytton, A. B., Schwichtenberg, L. D., & Martin, K.S. (2006).  A public health nursing informatics data-and-practice quality project.  Computers, Informatics, Nursing, 24(3), 152-158.

Monsen, K. A., & Martin, K. S. (2002). Developing an outcomes management program in a public health department.  Outcomes Management, 6(2), 62-66.

Muller-Staub, M., Lunney, M., Lavin, M. A., Needham, I., Odenbreit, M., & van Achterberg, T. (2008). Testing the Q-DIO as instrument in measuring the documented quality of nursing diagnoses, interventions and outcomes. International Journal of Nursing Terminologies and Classifications, 19(1), 20-27.

Nies Albrecht, M. (1991). Home health care: Reliability and validity testing of a patient-classification instrument.  Public Health Nursing, 8, 124-131.

Omaha System (2011). The Omaha System: Solving the clinical data-information puzzle. Retrieved from www.omahasystem.org

Streiner, D., & Norman, G. (2008). Health measurement scales: A practical guide to their development and use (4th ed.).  New York: Oxford Medical Publications.

Westra, B. L., Delaney, C. W., Konicek, D., & Keenan, G. (2008). Nursing standards to support the electronic health record.  Nursing Outlook, 56, 258-266.

Author Bios

Karen A. Monsen, PhD, RN

Karen A. Monsen, PhD, RN, earned her PhD in Nursing at the University of Minnesota School of Nursing in 2006, where she is an assistant professor and the Director of the Omaha System Partnership for Knowledge Discovery and Health Care Quality.

Amy B. Lytton, MS, RN

Amy B. Lytton, MS, RN, is a nurse informaticist at St. Paul-Ramsey County Minnesota Public Health Department. She earned her Master of Science degree in Nursing/Public Health Nursing from the University of Minnesota.

Starr Ferrari, MS, RN, CNM

Starr Ferrari, MS, RN, CNM, earned her Bachelor of Science degree in Nursing from the Baker University School of Nursing in 2005 and her Master of Science degree in Nursing/Midwifery from the University of Minnesota School of Nursing.

Katie M. Halder, MS, RN, FNP

Katie M. Halder, MS, RN, FNP, earned her Bachelor of Science degree in Nursing from the College of St. Catherine and her Master of Science degree in Nursing/Family Nurse Practitioner from the University of Minnesota School of Nursing.

David M. Radosevich, PhD, RN

David M. Radosevich, PhD, RN, earned his PhD in Epidemiology from the University of Minnesota Division of Epidemiology, School of Public Health, Minneapolis, Minnesota, in 1992. He is the Director of Transplant Information Services, Deputy Director of the Clinical Outcomes Research Center, and Assistant Professor in the Departments of Surgery and Health Services Research and Policy at the University of Minnesota.

Madeleine J. Kerr, PhD, RN

Madeleine J. Kerr, PhD, RN, earned her PhD in Nursing from the University of Michigan in 2004. Specialty areas include worker health promotion and protection, intervention effectiveness research, prevention of noise-induced hearing loss, and the Pender Health Promotion Model. Dr. Kerr's current research focus is developing and testing theory-based health promotion interventions with special populations of workers.

Susan M. Mitchell, MPH

Susan M. Mitchell, MPH, supervises program evaluation and Child and Teen Checkups programs at St. Paul-Ramsey County Minnesota Public Health Department.

Joan K. Brandt, PhD, RN

Joan K. Brandt, PhD, RN, received her PhD in Nursing from the University of Minnesota School of Nursing in 2007. She is program manager for maternal-child health programs at St. Paul-Ramsey County Minnesota Public Health Department.
