Academic Innovation & Effectiveness

Frequently Asked Questions

1. The committee should address why this conversation about developing a university-wide SET instrument and process has arisen again.

Because SIUE has no requirement on SET form or content, there is wide variation among Schools/Colleges, units, and even departments. Faculty members across the various schools and colleges have expressed concerns about the way SETs are carried out within their departments. Some faculty members are genuinely concerned about the disparate standards for evaluating effective teaching across individuals and departments. Likewise, faculty members have expressed concerns about using instruments that have not been validated. Additionally, as an AQIP institution we must identify opportunities for improvement and act upon them. The Higher Learning Commission has asked SIUE to take a more "scholarly approach" to assessment measures while demonstrating our commitment to "valuing people" by addressing concerns expressed by faculty. This initiative is therefore driven by the desire to create a SET instrument that is designed in thoughtful ways, tested for reliability and validity, and treated as only one source of evidence regarding teaching effectiveness, one that faculty can use to improve the quality of their teaching and to document teaching effectiveness.

2. Is this a top-down initiative?

This is a joint initiative. Problems with student evaluation of teaching first emerged as a university concern in 2004 with the formation of a Faculty Senate committee. The current committee was formed in part by the Faculty Senate Curriculum Council in 2009-2010. That AQIP Committee was jointly charged by the Faculty Senate and the Provost's Office to address faculty concerns regarding SET practices across campus. If the initiative were top-down (as is the case in several states with legislative mandates for university SETs), we would simply comply. SIUE is committed to continuous improvement, and providing faculty with a validated SET instrument supports those efforts.

3. Is this a centralized system?

No. The Provost's Office is not interested in administering or centralizing this process. Units and programs will still be responsible for their own SET implementation processes. The Committee is seeking information from departments so that we can better understand what programs' current practices are and how changes might affect them. The Committee will work with the Provost's Office to identify ways to help departments with the transition to a new core instrument and corresponding policies. To be clear, there will be no centralization of administration or analysis. Data will not be collected and stored centrally. The only effort to aggregate data will be related to the revalidation process every three years.

4. What do faculty have to gain from a standard SET instrument?

The central concerns with current student evaluation of teaching instruments at SIUE are the potential lack of reliability and validity. Most forms, often developed at the department level, used to evaluate "teaching" have not been validated; that is, there have not been systematic attempts to determine whether they measure the intended concepts. Creating an instrument that has been tested and checked for systematic biases will enhance fairness. "Student rating forms that have not been constructed according to professional psychometric standards may be unreliable and thus able to be influenced by factors such as popularity, temperature in the classroom, instructor gender or anything else. Unfortunately, many institutions use student rating forms that have not been constructed and validated using professional psychometric standards. Without rigorous reliability and validity data on such forms, it is impossible to tell for certain what influences the final student rating" (Arreola, 2007). Faculty will gain confidence that they are administering a validated SET form and can reasonably assume that the items are measuring constructs related to the ethics of instruction. Thus, as they respond to the results of their SETs, they can do so with greater confidence.

5. If we aren't developing a SET instrument to compare faculty, why do we need a standard SET form?

When SET instruments are designed in thoughtful ways, tested for reliability and validity, and considered as one source of evidence regarding teaching effectiveness, they can be used to facilitate instructors' efforts to improve the quality of their teaching and to promote student learning. The University can support faculty efforts in this regard. While some faculty members and departments may have strong instruments and practices in place, there are ample opportunities to improve SET practices on campus. Moreover, we can support faculty by offering them SET items that have been tested in rigorous ways.

6. The committee needs to address the multiple ways to evaluate teaching effectiveness. SETs are only one way to evaluate teaching and are indirect measures. Peer review processes and other requirements for demonstrating teaching effectiveness vary throughout the university.

The Committee clearly recognizes that SETs are only one source of data related to teaching effectiveness. The committee has written a policy statement that calls for multiple measures in high-stakes decisions: "Faculty members should employ multiple methods to evaluate their teaching effectiveness in addition to SETs. Specifically, faculty members, reviewers, and administrators should not use any single indicator as the sole source of teaching effectiveness." The SET website also includes materials describing how faculty can demonstrate teaching effectiveness. The Office of Academic Innovation and Effectiveness will also create other resources that describe best practices for peer review and outline how peer review can be a supportive and instructive process to promote teaching effectiveness.

7. The committee needs to address response rates if on-line administration is required.

The SET Committee is not requiring on-line administration. We recognize that some programs are concerned about response rates with on-line administration. As such, our proposal allows programs to choose on-line or paper administration. For programs that wish to use on-line administration, the committee would like to encourage student response rates by sending students e-mail reminders, posting information around campus, and addressing the importance of a culture of assessment with students in Springboard. The hope is to educate students about the role SETs play in continuous improvement and to embed these values into the student culture. Lessons from other campuses suggest that consistent timing of evaluations and campus-wide strategic initiatives, such as those named above, can improve response rates. While there may be benefits to university-wide on-line administration, the Committee wants to allow for flexibility in practices. We are happy to continue these conversations and share what we have learned about on-line SET administration.

8. SET practices need to be flexible enough to allow for variation among programs. If we are to adopt a standard form, units need to have the ability to add unit- or discipline-specific questions. Likewise, SET procedures should not be too rigid (e.g., administration must occur in the 13th week of classes). The Committee should find ways to accommodate the diversity of class formats and durations. For example, it will be difficult for programs with 5-week classes to determine when evaluations should be administered. Flexibility is important to the programs.

Given the wide variety of SET practices currently in use at SIUE, the SET committee recommends that the proposed standardized SET instrument be based primarily upon the SIUE ethics of instruction, with the addition of a small number of validated questions to assess the core aspects of instruction not covered by the ethics of instruction. The instrument would be limited to 12-15 standard quantitative questions that would be administered to all programs. Allowing for flexibility, individual schools, departments, and instructors would be permitted to add quantitative and qualitative questions to the instrument as they see fit. The committee is seeking information from departments regarding their current SET practices to determine how such a strategy might affect departments' current efforts.

Additionally, the SET committee has discussed the need to establish administration procedures that do not create barriers for individual programs. It will be necessary for department liaisons to help the SET committee identify potential problems so that the committee can recommend appropriate and flexible procedures and policies. The committee is dedicated to sharing the information it has learned about best practices in administration. At the same time, the committee recognizes the need to attend to differences in class formats, lengths, etc., and will offer recommendations regarding administration that reflect best practices while allowing for the flexibility that individual instructors and departments need.

9. Some schools (School of Business) have concerns about the change in infrastructure if we move to a standard form. How will that be addressed?

The SET Committee will meet with the School of Business and representatives from other schools to determine how a standard SET could be administered using the current infrastructure.

10. Will the standard SET form be driven by student input? Some programs have developed forms specifically to address questions that students perceive as important. Will this form do the same?

Because the questions are directly tied to the ethics of instruction and to constructs identified in the literature as consistently strong indicators of teaching effectiveness, student input will be used only to evaluate the clarity of the items. While student input is critical to this effort, students will not determine the concepts of interest. To be confident about the validity of the proposed measures, and to better understand how students interpret the questions and their clarity and purpose, we must seek student feedback.

11. Many faculty liaisons expressed concern about the departmental survey. They felt uncomfortable "speaking" for their entire department. The one-week turnaround time seemed too short to collect input from all faculty. If the committee is doing an individual survey, why is this necessary?

The Committee revised the questions on the liaison survey to rely primarily on factual information rather than collective sentiment. Specifically, the questionnaire was revised to focus on departments' current practices and the corresponding strengths and opportunities for improvement. The questionnaire also includes one open-ended question requesting general information regarding concerns the faculty would like the committee to address regarding administration, use, and content. The Committee believes that learning from the work of departments throughout campus is important to this process. Moreover, gaining this information will allow the committee to consider how the eventual recommendations will affect departments.
