- What is Assessment?
- Why should we assess?
- How can faculty add assessment to their already heavy workloads?
- What is done with collected assessment data?
- What are goals and objectives of student learning?
- What are some examples of assessment methods?
- Why can’t we use grades as assessment of our students’ learning?
- Who is responsible for assessing the Core?
- How can we assess the higher order learning goals?
- What will happen to programs (or parts of programs) that find, as a result of their assessment activities, that they are meeting their goals?
- What will happen to programs (or parts of programs) that find, as a result of their assessment activities, that they are deficient in meeting their goals?
Assessment and Middle States
- What role does assessment play in our MSCHE accreditation?
- What does Middle States want to see when the visiting team returns for the next review?
- What must individual programs do for assessment?
- What is required of each faculty member? Department chair? Dean?
Campus Assessment Structures
- What is the “Committee for the Assessment of Student Learning” (CASL), who serves on it, and what is their charge?
- What are the school/college committees, what is their makeup, and what is their authority?
- How many faculty members are part of the assessment committees?
- From whom can we get help with our program as we develop an assessment plan that will be approved?
- From whom can we get assistance with developing methods and measures of assessment?
- From whom can we get assistance with “closing the loop”?
The primary purpose of assessing student learning is to assist faculty members in their task of helping students learn more and better. Assessment of Student Learning refers to “the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning” (Huba and Freed, 2000).
Assessment is an activity that begins in the classroom with individual faculty members. The process of gathering evidence of student learning helps instructors understand how well their students are learning, and it gives them information that will help them help their students learn better. Classroom-level assessments (Angelo and Cross, 1993) help the instructor make choices about improving student learning.
At the program (or major) level, assessment is a collective activity where faculty members, in consultation with one another:
- establish clear, measurable expected goals and objectives of student learning in the program;
- ensure that students have sufficient opportunities to achieve those learning goals and objectives;
- systematically gather, analyze, and interpret evidence to determine how well student learning matches the faculty members’ expectations;
- make decisions about curriculum, instructional pedagogy, or the expected goals and objectives based on the analyzed information.
Canisius College uses assessment as the assurance that our students are developing knowledge, skills, and values in six areas:
- Academic Excellence
- Communication Skills
- Integrity and Civility
- Critical Thinking and Problem Solving
- Community Involvement and Leadership
- Catholic Jesuit Intellectual Tradition
We assess to help students learn more and better. We assess to make decisions about student learning and the programs we provide, in order to enhance those programs and our students’ educational experiences.
Through ongoing and authentic assessment, we discover and make visible to ourselves and our partners our strengths, our weaknesses, and opportunities for growth and change.
Faculty members are already engaged in informal assessment of student learning. Through the creation of learning goals that are aligned with program goals, and classroom activities that develop and assess student performance of those learning goals (tests, projects, papers, etc.), instructors are actively assessing their students’ learning. These are the traditional parts of any course structure, and they are very important.
In a given program, faculty will function collectively to set program goals and objectives, to create methods and measures for collecting evidence of student attainment of those goals, and to analyze and interpret selected samples of student work. In addition, the faculty in the program will make programmatic decisions based on the interpretation of the evidence collected.
The initial work of creating an assessment system that minimizes faculty workload is the most time-consuming part; once a system is in place and running, the time and effort required of individual faculty members become minimal.
At the center of assessment is the improvement of student learning. We achieve this improvement by sharing results with the campus community and by using results to make curricular and resource changes designed to create those improvements. At various stages of the process, goals and objectives, methods and measures, results, and feedback are shared with different constituencies as appropriate.
Assessment data are used to improve student learning or to garner resources to improve student learning.
Goals and objectives for each program are shared publicly on the Assessment web page and on individual programs’ web pages.
Departments and programs will describe methods and measures for each learning goal on a password-protected (internal) web page, so that departments and programs can learn from one another.
Results and outcomes will remain in the department. The report to the responsible Assessment Committee and Dean will describe the goal or goals assessed, the interpretation of the assessment results (findings), and the steps the program has taken to address the findings.
In the feedback portion of assessment, programs can use the data to make changes and requests for resources to improve student learning. As faculty in programs learn more about the strengths and weaknesses in their students’ learning, they will be able to allocate resources in more intentional ways.
“Goals and objectives of student learning” are the knowledge, skills, and values that students are able to exhibit at the conclusion of their studies. From the point of view of assessment, the institution has created broad goals and objectives for student learning. Academic programs use the institutional goals to help shape their program goals. Within programs, course goals are then aligned with the program goals. With this alignment, programs are able to demonstrate that their courses of study are in keeping with the mission of the institution and with the standards of the academic discipline they represent.
Good assessment practices use multiple measures, and typically, there are two types of evidence we can use in assessment of student learning: direct and indirect.
Direct evidence documents students’ performance of a learning goal or goals. When students have produced a “product” (written, spoken, performed, or created) that is a direct result of their learning experience, we can call it “direct” evidence.
Examples of direct evidence of student learning (in no particular order):
- Exit exams
- Professional certification tests or standardized field tests
- Standardized examinations
- Locally developed common exams
- Oral exams scored with a common rubric
- Reviews or evaluations by an external examiner
- Portfolios analyzed or scored with a common rubric
- Final papers analyzed or scored with a common rubric
- Employer ratings of skills of recent graduates
- Student reflections analyzed with a common rubric
When evidence of student learning is not the direct result of students’ learning experiences, we call it indirect evidence. Indirect evidence is useful and even powerful when it is combined with direct evidence; each measure supports and confirms the conclusions of the other. However, on their own, evidence gathered from indirect measures may not accurately represent what students have learned.
Examples of indirect evidence of student learning (in no particular order):
- Periodic surveys of student satisfaction or attitude
- Course grades alone
- Grades on individual assignments
- Focus groups, questionnaires, or interviews
- Admission rates into graduate programs
- Placement rates into employment
- Student ratings of their knowledge and skills
- Performance appraisals
- Capstone Experiences that utilize many of the above tools
- Exit surveys
Sometimes you can use grades—when they are direct evidence of student learning of a particular goal. However, course grades are usually based on a number of behavioral objectives (attendance, participation, etc.) as well as on how well the students have learned the content of the course, so those grades may not fully reflect students’ learning of a particular course objective. Grades on individual assignments are often based on how well the student has achieved specific aspects of the course objectives, but they do not necessarily reflect the students’ full achievement of a program goal.
Another problem with using grades in program assessment is that grades are normally awarded by a single rater. It is a methodological problem when the rater is the teacher of the students. For greater validity and reliability, assessment generally requires a more collective and collaborative structure for establishing program goals and assessing the students’ achievement of those goals at the conclusion of their study in the program.
On the other hand, the grade may reflect too much information. For example, a grade on a written assignment rates the quality of the students’ knowledge combined with their writing ability. If, however, the learning goal being assessed at the program level is writing, the grade will not specifically reflect the writing ability of the student.
The Faculty Senate is ultimately responsible for ensuring that the Core Curriculum is assessed. The Committee for the Core Curriculum (CCC) has been assigned the task of coordinating the assessment process with the faculty and departments teaching toward Core Learning Goals. Since many Core Learning Goals are inherently part of the work conducted through programs, the CCC will work with programs to coordinate assessment of common goals. This allows for meaningful and economical assessment because the CCC can draw information about Core goals from program assessment without imposing additional methods.
Assessing higher order learning goals such as critical thinking, life-long learning, and leadership is a complex task, but it is possible: by using multiple measures and triangulating them, we can reliably assess these outcomes. Once we have established a credible body of evidence using several different tools, we have a pattern of proof from which we can reasonably infer an outcome. For more resources and support, contact the Center for Teaching Excellence.
10. What will happen to programs (or parts of programs) that find, as a result of their assessment activities, that they are meeting their goals?
We will celebrate! Our successes can be used to promote the program and recruit majors and minors. They can be used in campus recruitment for new students, and they can be held up as exemplars of academic excellence.
11. What will happen to programs (or parts of programs) that find, as a result of their assessment activities, that they are deficient in meeting their goals?
Faculty who teach in these programs will ask themselves and each other where they can make improvements, and they can use the data to request resources to strengthen their programs. The faculty can work with administration and assessment committees to seek out ways to strengthen and improve.
Assessment and Middle States
The Middle States Commission on Higher Education (MSCHE) is responsible for making sure that the institutions in its jurisdiction are meeting the missions they describe. Middle States assures other interested parties that its institutions are engaged in on-going, authentic assessment activities, and that institutions are using the results of their assessment activities to improve and strengthen student learning and the programs of the institution.
Before the next team visit, we will submit the “Periodic Review Report” (PRR) in 2010, in which we detail our progress on all 14 Standards, including the ones related to assessment. We will then prepare for the next team visit in 2015.
MSCHE will want to see in the PRR discussion of our progress in assessment. When the team arrives in 2015, they will want to see the evidence that assessment is on-going, authentic, and closes the feedback loop. That means that faculty in programs use the results of the assessment process to effect positive change and growth. The team will ask for evidence that the institution has been strengthened as a direct result of assessment activities in programs of study, departments, schools, and administrative offices.
Faculty in programs will collectively create and implement their assessment plans. Those plans should include: program learning goals and objectives, a timeline for assessing each of the goals over a four-year period, a description of the methods and measures to be used, and the results of the annual assessment. Programs should collect samples of student achievement for possible review.
The most important role faculty play in assessment is using their expertise and talent to establish goals and objectives for student learning and to align their courses with program learning goals, providing the program’s assessment committee with data to analyze. Individual faculty members have multiple roles in the assessment process. They participate in committees involved in assessment at the department, school, and/or college level, working collaboratively in those committees to assure that student learning is being assessed and that the results are being used to inform decisions about programs.
Department chairs are responsible for involving the members of the department in assessment planning and implementation and for completing the necessary plans and reports annually.
Deans are responsible for ensuring that the process of assessment is in place and that each department is fully engaged in assessment activities.
Campus Assessment Structures
1. What is the “Committee for the Assessment of Student Learning” (CASL), who serves on it, and what is their charge?
The CASL will:
- Regularly brief the Academic Vice-President regarding:
  - The further implementation of student learning assessment in all areas of student learning in the university, including (but not limited to) degree programs, major and minor programs, certificate programs, specialized learning experiences such as All-College Honors and the Urban Leadership Learning Community, and non-classroom activities such as community service experiences, study abroad, and student trips.
  - The status of assessment at the university. The CASL will comment on the quality of assessment efforts and how assessment is incorporated into the improvement of the teaching-learning enterprise.
  - External developments regarding assessment from sources such as Middle States and the US Department of Education.
- Provide a forum for questions and concerns regarding the assessment process, techniques of assessment, and the use of assessment data in a sound and responsible manner, including the use of assessment data in budgeting decisions.
- Address macro-level concerns presented by the CLAC or other applicable groups.
Deans: Arts and Sciences, Wehle School of Business, School of Education and Human Services, Student Affairs.
Directors: Institutional Research (Chair), Core Curriculum, Campus Ministry, Admissions, Center for Teaching Excellence.
Faculty representatives: one from each of the three Schools/College, appointed by the Dean.
Students: 1 undergraduate; 1 graduate
As a result of their respective accreditation activities, the Schools of Business and Education have existing assessment plans and activities, including a committee structure.
The School of Business has an assessment coordinator and standing Assurance of Learning Committee that oversees the assessment processes within each major program.
The School of Education and Human Services uses NCATE-related assessment activities to assess its students' learning.
The College of Arts and Sciences has responded to MSCHE recommendations by creating its own committee and by assigning to an Associate Dean the responsibility for coordinating the college’s assessment plans and activities. The Outcomes Assessment Advisory Committee in Arts and Sciences (OAAC) is a faculty committee, chaired by the Associate Dean. Membership is made up of representatives from Humanities, Social Sciences, and Sciences, and the Core Curriculum. The Director of the CTE sits as an ex officio member to provide support and information.
The committee is responsible for designing the templates for program assessment plans and reports, and for counseling programs on assessment activities. When the plans are approved and in place, the committee will review the annual reports to assure that assessment is on-going and being used for continual improvement.
Faculty are represented on assessment committees and groups at all levels. There are faculty members on the CASL and on each of the school/college assessment committees, and faculty within departments and programs are active participants in setting learning goals, determining assessment methods and measures, and making adjustments to programs or courses as a result of these activities. In general, assessment is a faculty activity, supported by various administrative offices across campus.
1. From whom can we get help with our program as we develop an assessment plan that will be approved?
The school/college assessment committees are the first places to look for advice and assistance. The Associate Deans and the assessment liaisons are helpful resources. The Center for Teaching Excellence has a variety of resources: a library, workshop offerings that can be tailored to an individual department’s needs, and programming; a cadre of faculty fellows who can work with departments is also being developed.
2. From whom can we get assistance with developing methods and measures of assessment?
See above—these same resources can assist with implementation of the assessment plan.
3. From whom can we get assistance with “closing the loop”?
See above. In addition, your own colleagues, who know and understand your discipline, your students, and Canisius, can—when given some facts and reliable information—help with curricular and administrative changes that can give our students the best educational experience possible.
Allen, M.J. (2006). Assessing General Education Programs. Bolton, MA: Anker. LC985.A55 2006 and in CTE Library
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers, 2nd ed. San Francisco, CA: Jossey-Bass. CTE Library
Anderson, R. S., & Speck, B. W. (Eds.). (1998). Changing the way we grade student performance: Classroom assessment and the new learning paradigm. New Directions in Teaching and Learning, 74, Summer. San Francisco, CA: Jossey-Bass. LB3051.C43 1998
Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses. Boston, MA: Allyn and Bacon.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass. LB2366.2 P35 1999 and in CTE Library
Ratcliff, J. L. (Ed.). (1992). Assessment and curriculum reform. New Directions for Higher Education No. 80. San Francisco, CA: Jossey-Bass. LB2331.72 .N48 and in CTE Library
Stiggins, R. J. (1997). Student-centered classroom assessment, 2nd ed. Upper Saddle River, NJ: Merrill.
Suskie, L. (2009). Assessing student learning: A common-sense guide, 2nd ed. Bolton, MA: Anker. LB 2366.S97 and in CTE Library
Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco, CA: Jossey-Bass. LB 2822.75 W35 2004 and in CTE Library
Walvoord, B. E., & Anderson, V. J. (1998). Effective grading: A tool for learning and assessment. San Francisco, CA: Jossey-Bass. LB 2368. W35 1998 and in CTE Library