Veronica A. Thurmond, RN, PhD, and Sue Popkess-Vawter, RN, PhD
The primary aim of this paper is to describe the background and structure of Astin's Input-Environment-Outcome (I-E-O) model. Further, there are three secondary aims of the study. First, the model will be examined as middle range theory. Although Astin has labeled the I-E-O a model, it will be reviewed as a middle range theory, specifically delimiting the application of the model to assessments of Web-based educational courses. By applying the model to only Web-based courses in higher education, it qualifies as a middle range theory of limited scope and less abstraction that focuses on a specific phenomenon (Meleis, 1997). Second, this paper will identify and clarify the concepts, statements, and some empirical referents of the I-E-O as a theory. Third, a research study using the model in examining Web-based courses will be described.
Examination of A Middle Range Theory: Applying the Input-Environment-Outcome (I-E-O) Model to Web-Based Education
Assessment in higher education is important to enhance learning and to provide
feedback to both teachers and students (Cross, 1999). Assessment in higher
education is defined as gathering information about how students, staff, and
institutions function (Astin, 1993). Information gathered during assessments
can be used by teachers and students about learning that occurs in a particular
classroom (Cross, 1999). Effective assessment findings should provide greater
understanding of causal connections between the practice and outcomes of education
(Astin, 1993). To assist in research endeavors of educational assessment, Astin
(1993) developed the Input-Environment-Outcome (I-E-O) model. He used this
model as a framework for developing assessment and evaluation activities in
the traditional classroom setting.
The primary aim of this paper is to examine the background and structure of Astin's Input-Environment-Outcome (I-E-O) model, providing a detailed description of the model so that readers will understand its intended use. Three secondary aims focus on the feasibility of applying the I-E-O model in assessing Web-based education. First, the model will be examined as a middle range theory. Although Astin labeled the I-E-O a model, it will be reviewed as middle range theory, specifically delimiting the application of the model to assessments of Web-based education courses. By applying the model to only Web-based courses in higher education, it qualifies as a middle range theory of limited scope and less abstraction that focuses on a specific phenomenon (Meleis, 1997). Second, this paper will identify and clarify the concepts, statements, and some empirical referents of the I-E-O as a theory.
Third, application of the model will be illustrated by examining a previously published study evaluating students’ satisfaction in Web-based courses. Although the study has been published elsewhere, it is included in this examination of the I-E-O model to exemplify how the model can effectively guide research and to highlight the value of a strong theoretical underpinning in conducting research studies. Additionally, the previously published study is the only one to date that has used the I-E-O model in examining Web-based courses.
The Input-Environment-Outcome (I-E-O) Model
Educational assessments should provide some understanding of causal connections between the practice and outcomes of education. The key to accurate assessments is to minimize error associated with causal inferences (Astin, 1993). One effective way to minimize this error is by controlling for input characteristics, i.e. characteristics of students at the outset of learning experiences. Most educational research occurs in natural settings; consequently, the “I-E-O model was designed to address the basic methodological problem with all nonexperimental studies in social sciences, namely random assignment of people (inputs) to programs (environments)” (Astin & Sax, 1998, p. 252).
Unfortunately, many studies conducted in distance education lacked the rigor necessary to make strong causal inferences regarding the learning environment. One problem in these studies was the lack of consideration regarding student characteristics prior to participating in educational courses. Student performance and satisfaction [outcome] with Web-based courses may have been due to prior computer skills or advanced knowledge of course content [inputs], rather than a direct result of what they learned in Web-based courses [environment]. Without controlling for student characteristics, accurate assessment can be limited when evaluating Web-based courses in the virtual environment. The I-E-O model helps in providing consideration of and statistical control for input characteristics.
Components of the I-E-O Model
The Input-Environment-Outcome (I-E-O) model was developed by Alexander W. Astin (1993) as a guiding framework for assessments in higher education (Figure 1). The premise of this model is that educational assessments are not complete unless the evaluation includes information on student inputs (I), the educational environment (E), and student outcomes (O) (Astin, 1993).
Figure 1. Astin’s Input-Environment-Outcome (I-E-O) Model
Note. Assessment for Excellence (p. 18), by Alexander W. Astin, 1993, Phoenix: The Oryx Press. Copyright 1993 by The Oryx Press. Reproduced with permission from Greenwood Publishing Group, Inc., R&P, 88 Post Rd West, Westport, CT 06881-5007.
The primary purpose of the model is to control for input differences, resulting in less biased and more accurate estimates of how environmental variables affect student outcomes. Application of the I-E-O model results in more accurate assessment of the effects of the learning environment. Use of this model “forces” researchers to address not only outcomes, but also inputs and environmental variables when evaluating human performance.
The three constructs of this model are inputs, environment, and outcomes.
Inputs "refers to those personal qualities the student brings initially to the education program (including the student's initial level of developed talent at the time of entry)" (Astin, 1993, p. 18). Inputs also can be such things as antecedent conditions or performance pretests that function as control variables in research. Examples of student inputs might include demographic information, educational background, political orientation, behavior pattern, degree aspiration, reason for selecting an institution, financial status, disability status, career choice, major field of study, life goals, and reason for attending college (Astin, 1993). Inclusion of input data when using the I-E-O model is imperative because inputs directly influence both the environment and outputs, thus having a “double” influence on outputs—one that is direct and one that indirectly influences through environment (see Figure 1). Input data also can be used to examine influences that student inputs have on the environment; these input data could include gender, age, ethnic background, ability, and socioeconomic level.
Environment "refers to the student's actual experiences during the educational program" (Astin, 1993, p. 18). The environment includes everything and anything that happens during the program course that might impact the student, and therefore the outcomes measured. Environmental items can include such things as educational experiences, practices, programs, or interventions. Additionally, some environmental factors may be antecedents (e.g., exposure to institution policies may occur before joining a college organization). Environmental factors may include the program, personnel, curricula, instructor, facilities, institutional climate, courses, teaching style, friends, roommates, extra-curricular activities, and organizational affiliation (Astin, 1993). When doing evaluative research, there are instances when environmental variables could be considered intervening outcome variables, depending on how researchers use data in the analysis (e.g., moderator variables). Defining and assessing environmental variables can be an extremely challenging endeavor.
Outputs "refer to the 'talents' we are trying to develop in our educational program" (Astin, 1993, p. 18). Outputs are outcome variables that may include posttests, consequences, or end results. In education, outcome measures have included indicators such as grade point average, exam scores, course performance, degree completion, and overall course satisfaction.
Origins of the Model
Astin's earlier work as a clinical and counseling psychologist provided him a developmental framework from which to view human behavior. Consequently, when he transitioned to conducting research in educational psychology, he brought with him the clinical psychologist’s perspective (Astin, 1993). During his first research project in assessing doctoral productivity, Astin (1993, p. 18) became convinced that "any educational assessment project is incomplete unless it includes data on student inputs, student outcomes, and the education environment to which the student is exposed. . . ". The findings from these earlier studies led him to develop the I-E-O model.
The model was developed for use in natural settings. The advantages of research conducted in natural settings, compared to true experiments, are that it avoids artificial conditions and makes possible the simultaneous study of multiple environmental variables (Astin, 1993). Data gathered from natural experiments allow contrasting of data gathered from a variety of educational environments. Unfortunately, lack of randomization in environmental settings can impose limitations since student input variables are not controlled. However, the I-E-O model, through multivariate analyses, can control for initial student input (Astin, 1993). The statistical control for initial student characteristics provides some additional rigor to studies when randomization of subjects is not possible. Using the model to design evaluation research studies can help determine assessment activities to explain student outcomes.
The I-E-O model could be considered a grand theory because of its wide scope and abstract assessment constructs (Fawcett, 1993b). The model could be used in almost any social or behavioral science field (e.g., history, anthropology, economics, sociology, psychology, or political science) that studies human beings and the environmental influences on their development (Astin, 1993). Despite the origins of the model focusing specifically on education, applications of the model need not be limited to the educational arena. Narrowing the application of the model to assessment of Web-based courses, however, also narrows the scope and delimits concepts of the model as a middle range theory in online distance learning.
The goals of a theory are to describe, explain, predict (Fawcett, 1993a, 1993b; Meleis, 1997), and prescribe (Meleis, 1997). The I-E-O model was developed to conduct complete assessments in higher education using three essential components (descriptive level). Because the goal of the model was expanded beyond description to obtain information about how outcomes are influenced by educational policies and practices, it could also be classified as explanatory (Fawcett, 1993a; Meleis, 1997). Additionally, when pretesting and self-prediction questions are added as inputs of the I-E-O model, the purpose becomes predictive. Predictive theories not only explain relationships among concepts of a phenomenon, but they also predict outcomes resulting from these relationships (Fawcett, 1993a).
Research and Empirical Referents
Testing of a theory has been equated with evaluation and considered
significant when developing, accepting, or using theories
(Meleis, 1997). Testing
of theory is "a systematic process of subjecting theoretical propositions to the rigor
of research in all its forms and approaches, and consequently the use of the
results to modify or refine the research propositions" (Meleis, 1997, p.
269). All theoretical models must be testable to some degree; however, this does
not mean that all propositions must be testable (Dubin, 1978), only that they "should
be potentially testable" (Fawcett, 1993b, p. 42). The I-E-O
model is easily subjected to testing; however, the constructs
must be clearly
operationally defined for measurement.
A review of educational literature indicated no articles that specifically addressed the use of the I-E-O model to assess Web-based courses. In a personal communication, Astin verified that he knew of no researchers using his model to study Web-based courses (A. W. Astin, personal communication, February 17, 2001). Research studies based on Astin's I-E-O model as the guiding framework tended to be exploratory (Knight, 1994b) or descriptive (Kelly, 1996). Although using some similar variables in their assessment, the reviewed research focused on different issues in education, thus having different empirical indicators. Empirical indicators are "the actual instruments, experimental conditions, and procedures that are used to observe or measure the concepts of a middle-range theory" (Fawcett, 1993c, p. 23).
Empirical Testing of the Input-Environment-Outcome Model
Since its conception by Astin in 1968, the I-E-O model has been used by many researchers to evaluate relationships among student inputs, environmental factors, and student outcomes (Astin, 1968; Astin & Sax, 1998; Campbell & Blakey, 1996; House, 1999; Kelly, 1996; Knight, 1994a, 1994b; Long, 1993; Pace, 1976).
Astin (1968) operationalized the constructs of inputs, environment, and outputs with 669 students to test the assumption that attending a high-quality institution enhanced student development. Some of the input empirical indicators for this descriptive, longitudinal study included results on the National Merit Scholarship Qualifying Test, gender, size of high school class, and intended field of study. Some environmental measures, or institutional quality measures, were number of library books, faculty-student ratio, percentage of faculty with doctoral degrees, and type of college town. Astin (1968) hypothesized that institutional excellence [environment] positively affected student intellectual achievement [outcome], measured by GRE scores. The findings did not support the hypothesis that institutional quality [environment] had an impact on student achievement [output] when the input variables were controlled. The contribution of this study was in highlighting the importance of considering all three components in assessment activities. Although the results lacked confirming evidence, the model served as a prototype for future studies, several of which in the 1990s provided stronger evidence to support the model.
Time Required to Completion
Knight (1994b) used Astin's (1993) I-E-O model as a guide in an exploratory examination of student enrollment data to explain and predict the amount of time required for degree completion [output]. Degree completion was obtained from enrollment data of 868 students at a U.S. southeastern university, based on whether or not students earned a degree within a specified time. Influences on the time it took to obtain the degree [outcome] represented model inputs (student background) and environmental factors (student enrollment behaviors, such as the number of courses taken).
The best predictors of time to degree were enrollment behaviors and academic ability. The results indicated that academic eligibility, cumulative credit hours earned, and courses dropped had the strongest effect on when students obtained their bachelor's degree. Student variables [input] such as age and gender, and environmental variables such as being a university resident and enrolling in an orientation course, had a substantial impact on the amount of time it took students to complete their bachelor's degree [outcome]. Findings suggested that changes in institutional policies [environmental changes] could help decrease time to degree completion [outcomes]. Knight's (1994b) study demonstrated support for relationships among inputs, environment, and outputs. Furthermore, omission of one or more of these operationalized constructs could have made findings difficult to interpret.
Astin and Sax (1998) examined the influence of participating in service programs on undergraduate student development. Astin and Sax tested the model using Cooperative Institutional Research Program data from 3,450 students as the empirical indicator of service participation. The dependent variables were grouped into three broad categories that included civic responsibility, educational attainment, and life skills. Input variables included characteristics such as race, gender, and pretest scores on selected outcome measures. Environmental variables included students’ major, characteristics of the school, and service participation information, such as the sponsorship and locations of service involvement. Hierarchical regression analysis was used, and student characteristics (inputs) were entered first, as directed by the model.
The longitudinal, descriptive study controlled for individual student characteristics at college entry [input] and found strong support that participation in service activities as an undergraduate [environment] had a positive impact on students’ academic and life skill development [outcomes] and enhanced awareness of civic responsibility [outcomes]. Furthermore, students who participated in service programs significantly improved their academic performance. Astin and Sax pointed out that although the improvement in academic performance was statistically significant, the change was only 0.1 grade points for the typical student. The researchers stated that despite the additional time required of volunteer service, these same students spent more time in academic study than students who did not participate in volunteer activities.
Student Retention and Persistence
Kelly (1996) conducted a longitudinal study of persistence to graduation at the United States Coast Guard Academy in Connecticut by looking at the process of retention. This descriptive study focused on three areas: (1) the relationship between input and persistence outcomes, (2) the relationship between measures of academic and social involvement with persistence outcomes, and (3) the relationship between input and measures of academic and social involvement. Kelly's findings indicated that input variables had no significant impact on measures of student persistence [output]; however, they were significantly related to involvement measures [environment]. Kelly (1996) concluded that measuring the effects of academic performance and early social integration helped to determine predictors of long-term persistence. This research investigated the effects of input variables and measures of involvement [environment] over time and how both impact persistence [outcome].
Early Remediation and Persistence
Campbell and Blakely (1996) wanted to determine if early remediation [environment] influenced persistence and/or performance [output] of those students who were underprepared for school. The sample for this longitudinal, descriptive study was 3,282 community college students. The results indicated that cumulative grade point average (GPA) [input] and number of remedial courses [environment] had an impact on students' persistence with staying in school [output]. By using Astin's I-E-O model, Campbell and Blakely found that input and environmental variables helped predict the outcome variable of persistence.
Student Satisfaction and Degree Completion
House (1999) used the I-E-O model to investigate students’ satisfaction and degree completion. The input variables in the study included high school GPA, self-ratings of overall academic ability, and expectations of graduating with honors. The environmental variables were hours spent studying, participation in class group projects, changes in major area of study, satisfaction with quality of instruction, job status, and commute time. House (1999) used stepwise multiple regression analyses to show that students’ satisfaction was positively influenced by high GPA in high school, satisfaction with course instruction, work on group projects, and less commute time. Likewise, significant predictors of degree completion were GPA, satisfaction with course instruction, changes in majors, time spent commuting, and work on group projects. The findings indicated that high school GPA [input] significantly predicted satisfaction [outcome]. Additionally, after accounting for the effects of student inputs, when the environmental variables were entered into the model, three were found to be significant predictors of satisfaction: satisfaction with course quality, working on group projects, and commute time.
Summary of the I-E-O Studies
In summary, the studies provided some support for the I-E-O model. Conceptually, the model is parsimonious and the constructs make sense; the complexity lies in accurately operationalizing the constructs as testable variables. An example of such complexity is when student outcomes might be interpreted as inputs, such as high school GPA scores. Similarly, a risk of omission arises when attempting to capture fully what environmental variables influence outcomes, particularly if a narrow scope of the environment is used. Many extraneous variables affecting outcomes can escape measurement, which may be largely attributed as a limitation of the research design rather than the theoretical model.
Researchers who use this model must be very clear in contextually defining each model construct and supporting why particular variables were used as measures. Although some researchers used large databases, generalizability of findings is limited by the lack of randomization of subjects. Despite these weaknesses, the model has been shown to be useful and testable.
Empirical studies reviewed were descriptive in design and used only quantitative methodology. Each study used the I-E-O model to test hypotheses and highlighted the importance of all three constructs when conducting assessment activities. Although findings in Astin's (1968) study of institutional excellence did not support that educational environments impact student outcomes, the merit of the model endured and stimulated a growing body of supporting evidence. The five studies reviewed here supported the contributing effects of student input characteristics, lending credence to the importance of examining all three constructs when assessing educational programs. Finally, no published studies were found that used the I-E-O model in the evaluation of distance education courses. The primary author’s (Thurmond, Wambach, Connors, & Frey, 2002) work is the first known to use the model in examining Web-based learning environments.
Assessment of the Web-Based Environment
Use of the Internet has burgeoned as a pedagogical medium. Many Web-based education studies have been unable to link causal inferences between virtual environments and student outcomes. Unlike traditional classrooms, Web-based courses lack face-to-face interaction (Aase, 2000) and often follow an asynchronous format, which allows students and instructors to participate at their convenience rather than gathering at the same time. This convenience and flexibility in schedule is usually viewed as one of the strengths of online learning, contributing to its popularity as a learning format. The environmental structures of online courses differ from those of typical classrooms, which changes methods of course delivery. Although there are greater similarities than differences between teaching in traditional classrooms and a Web-based environment, educators should be systematic and purposeful when adapting courses for the distance educational setting (Billings, 1996). Simply placing traditional classroom lectures online will most likely not make for effective online courses. Furthermore, differences in environmental settings also present unique challenges to pedagogical presentations.
An Application of the I-E-O Model to Web-Based Courses
The I-E-O model provided a strong framework for the examination of the Web-based learning environment and its impact on students’ satisfaction.
Theoretical underpinnings of research studies are important to properly align
variables from a framework and to view findings in light of the chosen theoretical
perspective. “This framework allows readers to understand the perspective
of the researcher, and provides a clearer path from which to carry the research
forward” (Thurmond, 2002, p. 23).
A previously published study is described to provide readers with an idea of how the middle range theory can link practice with research. This Web-based educational study was conducted by Thurmond, Wambach, Connors, and Frey (2002) and is reported in its entirety elsewhere. It is included here to illustrate how the I-E-O model can be effectively used as a middle range theory in the evaluation of Web-based courses. The next sections describe the study design and findings.
Purpose of the Research Study
The purpose of the study was to determine which environmental variables predicted student outcomes of satisfaction while controlling for specified student characteristics [inputs]. Using hierarchical regression analysis, the primary aim was to answer the research question, “How well do the environmental variables predict a student’s level of satisfaction, when controlling for student characteristics?” Astin’s model guided the overall study and Chickering and Gamson’s (1987) educational practice principles specified the classroom environment.
The study was a secondary analysis using data from student evaluations of Web-based nursing courses called Evaluating Educational Uses of the Web in Nursing (EEUWIN) (pronounced "you-win"). Subjects were nursing students enrolled in Web-based courses at a U.S. Midwestern university. A descriptive correlational design examined the relationships among student characteristics [inputs], environmental variables [environment], and student satisfaction in Web-based courses [outputs]. During the fall 2000 semester, 120 students from seven different nursing courses completed evaluation and satisfaction questionnaires at the conclusion of Web-based courses.
The EEUWIN instrument assessed students' perceptions of the Web-based course environment and their use of technology. Ten demographic items and suggestions for improvement also were collected (Billings, Connors, & Skiba, 2001). The reported internal consistency for the total instrument was .85 (Billings, Connors, & Skiba, 2001).
Content and construct validity were established through nursing literature regarding Web courses and a national consensus panel of distance learning experts. Further, items were reviewed by a panel of nursing faculty from the three schools participating in the survey. For this study, the researcher categorized the 55 Likert-type items as either an input, environment, or outcome variable according to Astin's (1993) model. As a result, 13 items were identified as input variables, 33 environment variables, and nine output variables. Subsequently, criterion and predictor variables were selected from each category to answer the research question.
Input Predictor Variables
Five input variables, selected a priori based on an extensive literature review in distance education, included: perceptions of computer skills; knowledge of electronic communications; number of Web-based courses taken; distance living from main campus; and age. The literature had conflicting views on the impact of these variables on student satisfaction.
Environmental Predictor Variables
Six environmental predictors, selected based on Chickering and Gamson’s (1987) Seven Principles For Good Practice In Undergraduate Education, included: encouraging faculty/student contact; developing reciprocity and cooperation; engaging in active learning; providing quick feedback; emphasizing the amount of time dedicated to a task; and respecting diversity. These principles were based on fifty years of research and supported by the experiences of students and teachers (Chickering & Gamson, 1987). Other authors (Howland & Wedman, 2003; Koeckeritz, Malkiewicz, & Henderson, 2002; Billings et al., 2001; Chickering & Ehrmann, 1996; Muirhead, 2001a, 2001b) have supported the credibility of these principles in technology-based education. Six questions from the EEUWIN instrument, selected to represent each of the six principles, represented the environment of the Web-based course. The study examined whether the same principles of good practice implemented in the Web-based environment contributed to student satisfaction.
Student satisfaction was selected as the outcome variable because the researchers believed that students' satisfaction influenced whether they would elect to take additional Web-based courses (Arbaugh, 2000; Lim, 2001). The question chosen to assess student satisfaction was, “Rate your satisfaction with this course".
Bivariate correlations and hierarchical multiple regression analysis were performed to examine the data. Hierarchical regression analyses were used to assess the influence of several predictor variables on the criterion (Knapp, 1998). The hierarchical analysis consisted of multiple regression using a block method. Hierarchical (blockwise) entry was used, as directed by the I-E-O model, with input variables entered before environmental variables. Initial entry of student characteristics (characteristics present before the start of the Web-based course) helped to control for the influence of these predictors on the outcome variables and allowed for more accurate interpretation of causal inferences regarding environmental variables (Astin, 1993). Once the influence that student characteristics had on student outcomes was removed (covariance), environmental predictor variables (principles of good practice) were entered as a group in the second block. If environmental predictors entered in the second step yielded statistically significant contributions, then the results could be interpreted as the environmental variables having a significant influence on student satisfaction [outcome]. Analyses used the statistical level of p < .05 for significance.
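The blockwise-entry logic described above can be sketched in a few lines of code. The following is a minimal illustration using simulated data, not the EEUWIN data; the variable names, block sizes, and coefficient values are assumptions chosen only to mirror the study's structure (five input variables, six environmental variables).

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - (residuals @ residuals) / ((y - y.mean()) ** 2).sum()

# Simulated stand-ins for the study's variables (n = 120 students;
# all values are illustrative).
rng = np.random.default_rng(0)
n = 120
inputs = rng.normal(size=(n, 5))   # block 1: five student characteristics
env = rng.normal(size=(n, 6))      # block 2: six principles of good practice
# Satisfaction is simulated to depend mainly on the environment block,
# mirroring the pattern the study reports.
satisfaction = (env @ np.array([0.8, -0.5, 0.5, 0.3, 0.3, 0.3])
                + rng.normal(scale=1.0, size=n))

r2_block1 = r_squared(inputs, satisfaction)                  # inputs only
r2_full = r_squared(np.hstack([inputs, env]), satisfaction)  # inputs + environment
delta_r2 = r2_full - r2_block1  # variance uniquely attributable to environment
```

Because the input block is entered first, `delta_r2` isolates the environmental contribution after student characteristics are controlled, which is the interpretive step the I-E-O model directs.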
Based on the Pearson product moment correlation coefficients, students who were more satisfied felt they knew the instructor (r = .59, p < .001); believed the course offered a variety of ways to assess their learning (r = .68, p < .001); and reported receiving prompt feedback (r = .51, p < .001). Additionally, students who felt they knew the instructor also believed that they received timely feedback (r = .53, p < .001); had a variety of ways to assess their learning (r = .68, p < .001); and actively participated more in discussions (r = .50, p < .001). Findings regarding knowing the instructor may be related to instructors’ fostering a sense of “connectedness” through contact with their students. The absence of face-to-face meetings in Web-based courses makes connecting with students more difficult.
In the first step of the regression analysis, the five student characteristic variables [inputs] were entered first, as indicated by Astin’s I-E-O model. Results suggested that having knowledge about student characteristics (computer skills, number of Web-based courses taken, knowledge on use of electronic communications technology, distance from main campus, and age) did not help predict students’ levels of satisfaction. Student characteristics explained only 6.5% of the variance in student satisfaction, which was not statistically significant [R2 = .065, F(5,109) = 1.513, p = .192].
In the next step of the analysis, the six environmental variables were entered. The results strongly suggested that the selected environmental factors representing the principles of good practice in education were highly predictive of whether or not students were satisfied with a Web-based course. Environmental variables explained an additional 52% of the variance, and this change was statistically significant [R2 = .52, F(6,103) = 21.503, p < .001]. The entire model, including student input characteristics and environmental variables, accounted for 58.5% of the variance in student satisfaction (R2 = .585, adjusted R2 = .541).
There were three specific environmental variables that were statistically significant in predicting student satisfaction. The strongest variable in explaining student satisfaction was students’ perceptions that there were a variety of ways to assess their learning [b = .412 (t = 4.65, p < .001)]. The next best predictor of students’ satisfaction was their likelihood of working in teams/groups [b = –.242 (t = –2.74, p = .007)]. The final significant predictor of student satisfaction was students’ perceptions regarding receiving timely comments [b = .198 (t = 2.34, p = .021)].
The central research question asked, “How well do the environmental variables predict a student’s level of satisfaction, when controlling for student characteristics?” The overall findings suggested that 52% of student satisfaction was attributable to the influence of the Web-based environment. The 52% is a large effect size and suggested that the environmental variables, not the student characteristics [inputs], could successfully help predict students’ overall satisfaction with the Web-based course.
Results of the regression analysis indicated that the strongest predictor of students’ satisfaction was their perceptions regarding having a variety of ways to assess their learning. Those students who believed that there were a variety of ways to assess their learning tended to be more satisfied with the Web-based course. The second strongest predictor of student satisfaction was working in teams/groups. The negative relationship between satisfaction and working in teams indicated that students who were more likely to participate in team/group work also tended to be less satisfied. This relationship could be due to the increased difficulty of participating in group work through an electronic medium. The absence of face-to-face meetings during team projects may have proved challenging.
Finally, students who tended to believe that they received timely feedback from the instructors reported higher levels of satisfaction. This finding regarding timely feedback is consistent with other research (Leong, Ho, & Saromines-Ganne, 2002). Timely feedback is important because instructor comments give students an idea of how they are progressing in the course.
Overall study findings suggested that student satisfaction can be attributed to what happened in the virtual classroom [environment], and not to student characteristics [input]. The study findings provided additional support regarding the importance of implementing the principles of good practice for education in a Web-based environment. The use of the I-E-O model as a guiding framework assured consideration of what happened in the Web-based classroom and of students’ characteristics prior to taking the course. Attention to, and controlling for, student input variables allowed for stronger statements regarding causal inferences about the Web-based environment and its impact on students’ satisfaction.
Interrelationships Among Theory, Research, and Educational Practice
Astin's (1993) Input-Environment-Outcome model has great potential for use in assessing Web-based courses; using this model could encourage researchers to address all three constructs. These constructs become especially important when attempting to link positive student performance to the Web-based classroom environment. Without accounting for student input information, inferences about learning environments may be inaccurate and misinterpreted. Knowing what student inputs might have contributed to positive outcomes may be more significant or enlightening than what happened in the Web-based environment. Using the I-E-O model would, at a minimum, require researchers to address the lack of student inputs as a study limitation. The model can provide educators with a comprehensive perspective when planning assessment activities.
Implications for the Use of the I-E-O Model in Web-Based Course Evaluations
Student characteristics [inputs] can be vital when evaluating Web-based courses. The major emphasis on technology in virtual classrooms dictates that students’ previous exposure to such learning environments be assessed. Regression analysis techniques, which can control for input variables, can provide more complete assessments of the learning environment's impact on student outcomes. Future qualitative research studies can address the three constructs using individual interviews, observations, or focus group interviews.
Astin’s (1993) Input-Environment-Outcome (I-E-O) model promises a valuable alternative view of evaluating distance education through collection of input and environmental information to more fully explain traditional unitary assessments of educational outcomes. Simply measuring student satisfaction and performance in courses [outputs] is not necessarily an appropriate indicator of course effectiveness [environment]. Student satisfaction and performance outcomes could be due to students’ knowledge and preferences [inputs] as predisposing factors before beginning the courses. The key to evaluating Web-based courses as effective learning environments is to design evaluation studies that account for all three components of the I-E-O model—inputs, environment, and outputs. Use of this model to guide future assessments of Web-based courses could positively contribute to the existing body of knowledge regarding the effectiveness of online courses. Most importantly, this model provides a strong framework to stimulate assessments that enhance learning and provide feedback information to both teachers and students. This article described the I-E-O model in detail and illustrated how the model can successfully be used as middle range theory in educational assessments of Web-based courses.
Disclaimer: The views expressed in this article are those of the authors and do not reflect the official policy or position of the Department of the Army, the Department of Defense, or the U.S. Government.
Aase, S. (2000). Higher learning goes the distance. Computer User, 19(10), 16-18.
Arbaugh, J. B. (2000). How classroom environment and student engagement affect learning in Internet-based MBA courses. Business Communication Quarterly, 63(4), 9-26.
Astin, A. W. (1968). Undergraduate achievement and institutional "excellence". Science, 161(842), 661-668.
Astin, A. W. (1993). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. Phoenix: The Oryx Press.
Astin, A. W., & Sax, L. J. (1998). How undergraduates are affected by service participation. Journal of College Student Development, 39(3), 251-263.
Billings, D. M. (1996). Distance education in nursing: Adapting courses for distance education. Computers in Nursing, 14(5), 262-263, 266.
Billings, D. M., Connors, H. R., & Skiba, D. J. (2001). Benchmarking best practices in web-based nursing courses. Advances in Nursing Science, 23(3), 41-52.
Campbell, J. W., & Blakey, L. S. (1996, May 5-8). Assessing the impact of early remediation in the persistence and performance of underprepared community college students. Paper presented at the 36th Annual Forum of the Association for Institutional Research, Albuquerque, NM. (ERIC Document Reproduction Service No. ED 397 749)
Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles: Technology as lever. Retrieved May 11, 2003, from the World Wide Web: http://www.tltgroup.org/programs/seven.html
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 3-6.
Cross, K. P. (1999). Assessment to improve college instruction. In S. J. Messick (Ed.), Assessment in higher education (pp. 35-45). Mahwah: Lawrence Erlbaum Associates, Publishers.
Dubin, R. (1978). Hypotheses. In Theory Building (pp. 205-213). London: Collier Macmillian Publishers.
Fawcett, J. (1993a). Analysis and evaluation of conceptual models of nursing. Philadelphia: F. A. Davis Company.
Fawcett, J. (1993b). Analysis and evaluation of nursing theories. Philadelphia: F. A. Davis Company.
Fawcett, J. (1993c). The structure of contemporary nursing knowledge. In Analysis and evaluation of nursing theories (pp. 1-21). Philadelphia: F. A. Davis Company.
House, J. D. (1999). The effects of entering characteristics and instructional experiences and student satisfaction and degree completion: An application of the input-environment-outcome assessment model. International Journal of Media, 26(4), 423-434.
Howland, J. L., & Wedman, J. (2003). Technology use and values of teachers and faculty: PT3 results. Society for Information Technology and Teacher Education International Conference, 2003(1), 3603-3607.
Kelly, L. J. (1996, May 5-8). Implementing Astin's I-E-O model in the study of student retention: A multivariate time dependent approach. Paper presented at the 36th Annual Forum of the Association for Institutional Research, Albuquerque, NM. (ERIC Document Reproduction Service No. ED 397 732)
Knapp, T. R. (1998). Quantitative nursing research. Thousand Oaks: Sage Publications.
Knight, W. E. (1994a, May 29 - June 1). Influences on the academic, career, and personal gains and satisfaction of community college students. Paper presented at the 34th Annual Forum of the Association for Institutional Research, New Orleans, LA. (ERIC Document Reproduction Service No. ED 373 6544)
Knight, W. E. (1994b, May 29 - June 1). Why the five-year (or longer) bachelors degree? An exploratory study of time to degree attainment. Paper presented at the 34th Annual Forum of the Association for Institutional Research, New Orleans, LA. (ERIC Document Reproduction Service No. ED 373 645)
Koeckeritz, J., Malkiewicz, J., & Henderson, A. (2002). The seven principles of good practice: Applications for online education in nursing. Nurse Educator, 27, 283-287.
Leong, P., Ho, C. P., & Saromines-Ganne, B. (2002). An empirical investigation of student satisfaction with Web-based courses. World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education, 2002(1), 1792-1795.
Lim, C. K. (2001). Computer self-efficacy, academic self-concept, and other predictors of satisfaction and future participation of adult distance learners. The American Journal of Distance Education, 15(2), 41-51.
Long, P. N. (1993, November 4-10). A study of underprepared students at one community college: Assessing the impact of student and institutional input, environmental, and output variables on student success. Paper presented at the 18th Annual Meeting of the Association for the Study of Higher Education, Pittsburgh, PA. (ERIC Document Reproduction Service No. ED 365 177)
Meleis, A. I. (1997). Theoretical nursing: Development and progress (3rd ed.). Philadelphia: Lippincott.
Muirhead, B. (2001a). Enhancing social interaction in computer-mediated distance education. USDLA Journal, 15(4). Retrieved May 11, 2003, from the World Wide Web: http://www.usdla.org/html/journal/APR01_Issue/article02.html
Muirhead, B. (2001b). Interactivity research studies. Educational Technology & Society, 4(3). Retrieved May 11, 2003, from the World Wide Web: http://ifets.ieee.org/periodical/vol_3_2001/muirhead.html
Pace, C. R. (1976). Evaluating higher education (Topical Paper No. 1). Tucson: Arizona University. (ERIC Document Reproduction Service No. ED 131 737)
Thurmond, V. (2002). Considering theory in assessing quality of Web-based courses. Nurse Educator, 27, 20-24.
Thurmond, V., Wambach, K., Connors, H. R., & Frey, B. B. (2002). Evaluation of student satisfaction: Determining the impact of a Web-based environment by controlling for student characteristics. The American Journal of Distance Education, 16, 169-190.
Veronica A. Thurmond RN, PhD
Veronica A. Thurmond, RN, PhD, CNOR, is a Major in the Army Nurse Corps. During her 17 years in the Army, she has held various positions as a medical-surgical nurse and as a perioperative nurse. She completed her master's degree in 1995 at the University of Colorado Health Sciences Center, earning a clinical nurse specialist designation. She obtained her PhD in nursing from the University of Kansas. Her dissertation study focused on examining the effects of interaction activities on students' satisfaction and likelihood of enrolling in future Web-based courses. Dr. Thurmond's primary area of interest is informatics, and she is very interested in the area of distance education.
Sue Popkess-Vawter, RN, Ph.D
Dr. Sue Popkess-Vawter graduated from the University of Kansas School of Nursing with a BS in nursing in 1970 and a master's degree in nursing in 1972. She received a PhD in nursing from The University of Texas at Austin in 1978. Her area of clinical expertise and research was cardiovascular critical care nursing. She was an active member of the board of directors in the early days of the American Association of Critical-Care Nurses and the North American Nursing Diagnosis Association. Her area of research and practice shifted to a wellness focus emphasizing reduction of cardiac risk through weight management.
In faculty practice,
Dr. Popkess-Vawter is a weight management personal coach who analyzes, designs,
and adjusts lifestyle habits to reach a healthy weight to match daily schedules
and personal choices. Her three-pronged individual approach, Holistic Self-Care
for Long-term Weight Management, is different from most professional and
commercial weight loss programs. Popkess-Vawter helps clients learn to structure
their days to include eating for hunger, regular exercise, solitude, and
relationships they need to develop balance in their lives. She offers lifestyle counseling for
individuals and small groups and consultation and continuing education in corporate programs.