FINAL REPORT AND RECOMMENDATIONS

Task Group B2
Online Assessment

Jim Andris
Marj Baier
Kay Mueggenburg
David Sill
Kay Werner

The B2 Committee was assigned four questions: How do faculty assess student performance? What are the desired outcomes? How do we measure outcomes? How do we build quality improvement cycles? Our members were Jim Andris, Marj Baier, Kay Mueggenburg, David Sill, and Kay Werner. We met five times, clarifying our objectives and developing a plan of action. We collected and reviewed research relevant to online assessment of instruction, surveyed new types of instructional technology with a view to new ways that assessment can be accomplished, conducted an online campus-wide survey, heard a report from an expert on assessment, and held a focus group with three knowledgeable SIUE faculty members. In addition, our committee members' own experience was a valuable source of information.

Assessment/Evaluation

A presentation by Douglas Eder, "Assessment of Technology-Assisted Learning," helped the Committee to distinguish between assessment and evaluation. Assessment of instruction deals with determining whether the components of instruction achieve the student learning goals they were intended to produce. Assessment is cyclic, and the results of one cycle can lead to the improvement of instruction. The Office of Assessment has many well-developed techniques for improving instruction, including primary trait analysis, minute papers, muddiest-point responses, and one-thing-learned responses. Assessment of student learning deals with determining which learning goals students achieved, particularly in relation to specified course objectives. Finally, one can evaluate both student learning and the effectiveness of instruction, i.e., assign a value, perhaps a grade, to either process.


One committee member suggested that the distinction between assessment and evaluation is similar to the distinction between formative and summative evaluation. Another member suggested that both can be done continually using technology such as WebCT and email communication with students. Both can be done using a variety of methods, and more easily with technology than with traditional teaching tools alone.


Although there is a long list of outcomes that could be evaluated, including learning, recruitment, retention, graduation, access, convenience, connectedness, preparation for real-world work, computer tool proficiency, professional practice, socialization, and satisfaction (Billings, 2000), the Committee focused on assessment of student learning and instructional effectiveness.

Research findings

The Committee was not able to identify a large body of carefully designed and controlled experimental research on online instruction and assessment. According to Merisotis and Phipps (1999), most of the research on distance learning is anecdotal, flawed, or lacking an adequate theoretical base or conceptual framework. Research is made even more difficult by the rapid change in technologies. One outcome may be a totally new approach to research, but what that might be is unclear.


At least one Committee member stated the need to develop a workable research paradigm for online learning. Another member concurred, stating the need both for a research design that could assess how well students learn through various online methods and for one that measures the validity of online student evaluation methods. Another conclusion was that the dynamics of online teaching may be even more complex than those of traditional classroom approaches, making comparison difficult.


We concur with Ehrmann's (2001) opinion that "seeking answers to universal questions about the comparative teaching effectiveness and costs of technology" may be asking the wrong question. Instead, he encourages the use of "worldware," such as email, discussion groups, internet browsers, and the like, i.e., software that enables a better process of education. If these tools are used consistently across the curriculum by both faculty and students to achieve educational objectives, improvement of education is likely. This includes their use in assessment.


The Committee was in accord with the Seven Principles for Good Practice in Undergraduate Education as presented by Dr. Eder. These were:

  1. Encourages contact between students and faculty.
  2. Develops reciprocity and cooperation among students.
  3. Encourages active learning.
  4. Gives prompt feedback on performance.
  5. Emphasizes time on task.
  6. Communicates high expectations.
  7. Respects diverse talents and ways of learning.

The Committee also found useful Chickering and Ehrmann's (2001) article "Implementing the Seven Principles: Technology as Lever," which discusses how technology facilitates the implementation of these principles.


Finally, despite the paucity of specific research studies on online assessment, members of the Committee felt confident that a large body of existing knowledge on effective assessment is relevant to online teaching, and that instructors who have experimented with online instruction in their own classes have been able to improve their practice. In light of this, we found the results of the focus group particularly enlightening.

Focus Group on Online Assessment

Our focus group consisted of Darryl Coan, Music; Dennis Hostetler, Public Administration and Policy Analysis; and Wendy Shaw, Geography, and was attended by the Committee and Cathy Santanello, Office of Assessment. Each of these instructors has taught online classes, and each shared some of the insights gained in formatively evaluating their online instruction. One Committee member encapsulated the discussion: "It's all about reflective practice. Online learning promotes feedback, makes process visible, leads to more student-student interaction." It also seems clear that in online instruction, because communication is more frequent and the process more visible, assessment is hard to distinguish from the other components of instruction. This is especially true when assessment is viewed as containing a feedback loop.

The following are some of the principles on which there seemed to be general agreement. They are offered as wisdom gleaned from advocates of online teaching, to be sure, but also from excellent teachers who have moved their classrooms online.

Online instruction

Scholarship of Teaching and Technology

The Committee also had several discussions of the scholarship of teaching.


Lee Shulman differentiates between scholarly teaching and the scholarship of teaching. Scholarly teaching is well grounded in the field, thoughtful, and incorporates well-designed and appropriate pedagogical strategies. The scholarship of teaching is public, peer reviewed, and can be built upon.


The use of technology in teaching and learning does not in itself strengthen or weaken the groundedness of teaching or its thoughtfulness or the quality and appropriateness of its design. Technology can, on the other hand, make teaching and learning more public, can assist in peer review by documenting process, and can provide the archives and electronic documents that can be built upon.


WebCT facilitates reflection on one's strategies and their effect on student learning, partly because a whole course can be visualized. Communications from the beginning to the end of the course can be examined, and because old courses are archived, changes from one offering to the next can be reviewed. Instructors can use online surveys to elicit feedback from students, and individual quizzes, student participation, and responses can be examined.

The Committee also discussed what might be sacrificed by using technology in the service of the scholarship of teaching. It was suggested that misuse of technology can do the same damage that misuse of any tool can do, and that our inexperience and lack of sophistication with technology may make misuse more of a danger than with more familiar tools. Note was also taken of the seductive nature of technology. However, it was suggested that the public nature of the scholarship of teaching, and the extent to which we can engage in effective peer review of it, will offer a corrective to misuse in the same way peer review does for other forms of scholarship.

Other Instructional Technologies

While the Committee tended to focus on the transition from traditional to online teaching, there was at least one general discussion of other forms of technology that enhance instruction. PowerPoint, smart classrooms, voice mail, and word processing were mentioned in this connection. Clearly some of this technology is related to assessment, a prominent example being the use of spreadsheets for recording grades and manipulating the data of instruction. Many disciplines also have specific software and hardware, as well as non-computing technology, that enable the practice of the discipline.

Results of Online Survey of Teaching Practices

The Committee conducted a campus-wide survey of instructors' use of traditional and online assessment practices. The survey was posted at http://angel.ah.siue.edu/tltr/survey.html, and voluntary participation was invited through an email from the Provost. After one week of online data collection, 120 people had responded. Figure 1 shows the distribution of respondents across schools; the designation "Unclassified" indicates that the respondent did not identify a school or departmental affiliation. Figure 2 shows the distribution across departments in the School of Arts and Sciences. Departments were included only if they generated more than one response.

Figure 1: Percent of Respondents by School Designation


Figure 2: Percent of Department Response within Arts and Sciences

The three graphs below illustrate the major results of the survey. Figure 3 shows a cluster of traditional assessment practices used by two thirds to three fourths of the respondents: papers, multiple choice/objective tests, oral presentations, class discussion, essay tests, and group projects. Peer review is used by about a third of the respondents. A quarter or fewer use portfolios, student journals, and simulations, and only a small number use rubrics, models, or contracts.


Figure 3: Traditional Assessment Practices

The profile of use for online assessment practices is given in Figure 4. One generalization is that no specified online assessment technique is used by more than three fifths of the respondents. Over half have used email or a syllabus with some online materials as part of the assessment process. A little over a third also indicate using web links, while a quarter have used electronic bulletin boards. Over a sixth use online group projects, and 10-15% have used multiple online tests, chat, or student web pages. Fewer than a tenth have used simulations, peer review, electronic portfolios, contracts, or models as part of the online assessment process.


Figure 4: Online Assessment Practices

The survey also showed that, of the three campus online tools listed, about a third of the respondents have used WebCT, 14% have used Cougar web publishing, and fewer than 5% have used WebBoard.

Figure 5: Online Assessment Practices
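
For readers who want to see how such percentages are tabulated, the sketch below computes per-school response rates and per-practice usage rates from raw survey records. It is a minimal illustration only, not the Committee's actual analysis; the field names and sample records are hypothetical stand-ins for the real data.

    from collections import Counter

    # Hypothetical survey records: each respondent reports a school and the
    # assessment practices and tools used. These records are illustrative
    # stand-ins, not the Committee's actual data.
    responses = [
        {"school": "Arts & Sciences", "practices": ["papers", "essay tests", "email"]},
        {"school": "Nursing", "practices": ["multiple choice", "email", "WebCT"]},
        {"school": "Unclassified", "practices": ["papers", "class discussion"]},
    ]

    def percent_using(practice):
        """Percent of all respondents who report using the given practice."""
        users = sum(1 for r in responses if practice in r["practices"])
        return 100.0 * users / len(responses)

    # Distribution of respondents by school (cf. Figure 1).
    by_school = Counter(r["school"] for r in responses)
    for school, n in by_school.items():
        print(f"{school}: {100.0 * n / len(responses):.0f}%")

    # Usage rates for individual practices and tools (cf. Figures 3-5).
    for practice in ["papers", "email", "WebCT"]:
        print(f"{practice}: {percent_using(practice):.0f}%")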

Summary and Conclusion

This Committee considered, but did not directly answer, four questions. Instead, we suggest replacing the assessment question with the question: How can student learning and instructional effectiveness be both assessed and evaluated? We found results that relate to all four questions, most importantly that online teaching tools enable direct, timely discussion and feedback and enhance teacher-student and student-student interaction and dialog.

Our survey gave some answers to the question "How do we measure outcomes?" We found that instructors at SIUE use a variety of assessment methods in both traditional and online classrooms. It is encouraging to find that over half of the sample has used email and online syllabi as part of a total course presentation. On the other hand, only a small fraction has used most of the available online assessment modalities.

We did not suggest an answer to the question "What are the desired outcomes?", unless a hearty endorsement of the use of web course tools is itself considered a desired outcome. Assuming all courses have goals and objectives, these can be measured. We are also intrigued by the unanticipated outcomes that come from innovation.

Finally, to the question "How do we build quality improvement cycles?" we suggest these answers. By the use of formative evaluation, both student learning outcomes and teaching effectiveness can be assessed. Online tools assist the scholarship of teaching, making the data of teaching public and available for peer review. Clearly, they can be an important component of quality improvement cycles.

References

Billings, D. (2000). A framework for assessing outcomes and practices in web-based courses in nursing. Journal of Nursing Education, 39, 61-67.

Chickering, A., & Ehrmann, S. C. (2001). Implementing the Seven Principles: Technology as Lever. American Association for Higher Education. <http://www.tltgroup.org/programs/seven.html>

Cradler, J., & Bridgforth, E. (2001). Recent Research on the Effects of Technology on Teaching and Learning. WestEd. <http://www.wested.org/techpolicy/research.html>

Ehrmann, S. C. (2001). Asking the Right Question: What Does Research Tell Us About Technology and Higher Learning? Annenberg/CPB, Learner.org. <http://www.learner.org/edtech/rscheval/rightquestion.html>

Merisotis, J., & Phipps, R. (1999). What's the Difference? Outcomes of Distance vs. Traditional Classroom-Based Learning. Washington, DC: The Institute for Higher Education Policy.

Shulman, L. (2000). From Minsk to Pinsk: Why a Scholarship of Teaching and Learning? The Journal of Scholarship of Teaching and Learning (JoSoTL), 1(1). <http://www.iusb.edu/%7Ejosotl/Vol1No1/shulman.pdf>

Walvoord, B. E., & Anderson, V. J. (1998). Effective Grading. San Francisco: Jossey-Bass.