Promising Practices in Assessing Learning in Student Activities

Kim Yousey-Elsener
Campus Labs
Stella Mulberry Antic
University of North Texas

While publications such as Learning Reconsidered 2 (Keeling, 2006) and the CAS Learning Domains (Council for the Advancement of Standards, 2011) have set the standards for assessing student learning in campus activities, actually assessing student learning in co-curricular experiences remains a time-consuming and elusive effort for many campus activities professionals. Bresciani (2011) and Schuh and Upcraft (2001) assert that purposeful, ongoing assessment of learning and administrative outcomes is critical to program improvement and success. However, assessing student learning is difficult for many reasons: learning is “messy” and not linear for our students, so finding an appropriate way to capture that learning for assessment purposes can be a challenge. In addition, student activities programs and services are often created for purposes beyond student learning, such as community building, connecting students to the institution, and educating students about campus traditions and ethos. While these purposes are all important, they are not solely about student learning, which leaves professionals asking: how and why do I need to assess learning?

While new technologies, such as ID card swiping, help us track exposure to events and programs, it is important to go beyond these perfunctory ways of understanding our students outside the classroom. Assessing student learning in campus activities tells us what students are “taking away” from interactions with specific programs and services. Perhaps the most important reason for assessing learning is our own learning as practitioners. Assessing learning ensures that all of our planning, facilitating, and advising contribute to students taking away tangible skills, knowledge, and behaviors. It also provides the opportunity to improve our programs to better meet our intended goals. In addition, assessing student learning helps us engage in a larger conversation on campus: well-articulated student learning outcomes and assessment results help bridge the conversation of student learning with faculty at our institutions. These conversations also serve the institution in articulating and demonstrating what students are learning overall, an effort encouraged across the nation by an increased focus on student learning in the accreditation process.

For all of these reasons, it is important for professionals in campus activities to gather information about every aspect of their programs, including budget and operations, student needs, student learning outcomes, and overall program effectiveness. While this can be a challenge, one suggestion is not to try to do everything at once; rather, pick a few programs, goals, or assessment needs each year. Rotating through programs and services each year gives professionals the time and resources to gather useful information, and utilizing a pre-established framework can help as well. Below are examples of two institutions implementing promising practices using the CAS Standards and the NACA Student Learning Frameworks.

The University of North Texas and CAS Standards

In 1979, the Council for the Advancement of Standards in Higher Education (CAS) was founded as an organization dedicated to advocating for the implementation of standards and guidelines in student affairs practice (CAS, 2011; Nuss, 2000). CAS has 41 member organizations representing many of the major professional associations in the field of student affairs, including those related to general practice, graduate preparation, counseling, health, housing and residence life, judicial affairs, facilities, Greek life, and campus activities, among others (CAS, 2011). CAS Standards and Guidelines are used frequently in student affairs to help departments assess their programs and services (McNeil, 2009).

At the University of North Texas (UNT), the Division of Student Affairs recommends departments undergo a full CAS review every five years. This process is facilitated by staff in the office of Research, Assessment, and Planning (RAP), which assists with at least two departmental CAS reviews each year. UNT’s Student Activities Center is currently engaged in this review using the CAS Standards and Guidelines for Campus Activities Programs within the following framework.

Norming. The first component of UNT’s CAS review process is the self-study, in which departmental staff work independently to score themselves across the various dimensions of the CAS Self-Assessment Guide (for Campus Activities Programs, there are 14 separate sections). Before the self-study, the staff participates in an initial norming session, facilitated by RAP staff, to learn about the rating system and to discuss the criteria needed to evaluate the standards and guidelines. Staff members then receive copies of the instrument and rating sheets, and are given ample time to complete the self-study and submit their scores to both the RAP office and a staff member in the department tasked with monitoring progress. Staff members are also asked to provide narratives for open-ended questions. This process takes three to four weeks to complete.

Consensus Meeting. Once all self-studies are complete, the staff reconvenes for a consensus-building meeting, facilitated by RAP staff. The norming information is reiterated, and each element of the CAS Self-Assessment Guide is reviewed. Staff members are asked to reveal their ratings by holding up scorecards for all to see and to share their rationale for each rating. If there is unanimous agreement, that score is assigned; if not, the range of scores is recorded and the discrepancies are discussed until the group reaches consensus, at which point the consensus score is recorded. This process can take up to one full day, but can occur over the course of several days for ease of scheduling if needed.

Dashboard. The RAP office developed a spreadsheet-based template that, when completed, automatically graphs a department’s scores across the various components of the CAS instrument. UNT developed this template in collaboration with Eastern Washington University to ensure congruence with the most up-to-date CAS Standards and Guidelines. It was designed to give senior leadership a quick snapshot of the entire self-study in one spreadsheet, saving time and making the process more meaningful from an administrative perspective. The dashboard templates developed by RAP are available for a limited number of CAS areas, corresponding to the functional areas present on campus. After the consensus meeting, RAP staff input the final scores and narratives into the template. A report is then generated and sent to the department’s director, who determines the next steps for gathering evidence.
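While UNT’s template is spreadsheet-based, readers who want to prototype something similar could start from a sketch like the one below, which charts consensus scores by section. This is a minimal illustration under stated assumptions: the section names, scores, and 1–5 rating scale are hypothetical, not taken from UNT’s actual template.

```python
"""Minimal sketch of a CAS self-study dashboard (hypothetical).

Section names and scores below are illustrative only; a real
template would cover all 14 sections of the Campus Activities
Programs Self-Assessment Guide.
"""
import matplotlib.pyplot as plt

# Consensus scores by section (illustrative values on an assumed 1-5 scale).
consensus_scores = {
    "Mission": 4.0,
    "Program": 3.5,
    "Leadership": 4.5,
    "Assessment and Evaluation": 3.0,
    # ...remaining sections would follow the same pattern
}

sections = list(consensus_scores)
values = [consensus_scores[s] for s in sections]

fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(sections, values)                     # one bar per section
ax.set_xlim(0, 5)                             # assumed 1-5 rating scale
ax.set_xlabel("Consensus rating (illustrative 1-5 scale)")
ax.set_title("CAS self-study dashboard (hypothetical)")
fig.tight_layout()
plt.show()
```

A working dashboard would also carry the narrative responses alongside the scores, as described above, so that senior leadership sees context as well as numbers.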

Evidence Gathering. The next step in the process is to gather evidence to support the scores determined in the consensus meeting. Each rating should be accompanied by appropriate evidence, such as policy documents, organizational charts, departmental meeting minutes, and assessment results. Department staff work together to gather copies of the needed evidence, either electronically or in hard copy. The evidence is then put into a compendium along with the dashboard report. This process takes four to six weeks to complete.

Internal-External Review. After the evidence has been gathered in a central location, RAP staff review the compendium through the lens of an external reviewer from another institution. In this “devil’s advocate” role, RAP ensures that enough evidence has been compiled to support the ratings and asks staff to gather more if needed. This prepares the department for the official external review and ensures it goes as smoothly as possible. This review takes roughly two to three weeks to complete.

External Review. Department directors are asked to identify colleagues outside the institution who are authorities in the specific functional area to come to campus as external reviewers. Once the compendium of evidence is complete, these external reviewers are invited to review the documentation and to hold interviews and focus groups with key stakeholders, such as department staff, senior leadership, and students. The feedback gathered from these meetings, combined with the reviewers’ expertise, informs the reviewers’ interpretation of the ratings and their final recommendations. While the actual review takes one to three days on campus, invitations to reviewers should be extended at least eight to 10 weeks in advance to allow for scheduling considerations.

Closing the Loop. Once the external review is complete, it is up to department leadership to develop a summative evaluation of the entire process. This evaluation includes an executive summary to be shared with senior division leadership, as well as action steps developed from the external reviewers’ recommendations. If the action steps have any bearing on the department’s strategic plan or assessment plan, those documents should be updated during the annual review period and tracked accordingly.

The Student Activities Center has completed the above process through the internal-external review. The department is working through the external review phase as of Fall 2011, and a final copy of the summative evaluation will be posted on the UNT Division of Student Affairs Web site once completed.

Linfield College and NACA Framework

Created in 2007, the NACA Competency Guide for Student Leaders is available in several publications, including guides for students and for campus administrators. The Facilitator’s Guide provides practitioners with a description, learning outcomes, suggested initiatives, key questions, additional resources, and assessment questions related to 10 core competencies (leadership development, assessment and evaluation, event management, meaningful interpersonal relationships, collaboration, social responsibility, effective communication, multicultural competency, intellectual growth, and clarified values) and seven additional competencies (enhanced self-esteem, realistic self-appraisal, healthy behavior and satisfying lifestyles, interdependence, spiritual awareness, personal and educational goals, and career choices). More information is available at www.naca.org.

In an effort to enhance student learning outside the classroom, the Student Affairs Division of Linfield College examined how it defines and assesses learning. To start this process, the entire staff within the division read Learning Reconsidered 2 and participated in a day-long training on student learning outcomes and assessment. Following the training, the staff grappled with how to put the theory of learning and assessment into practice. Staff members were introduced to the new NACA Competency Guide for Student Leaders at the NACA National Conference and a NASAP regional conference. The guide provided student learning outcomes and included an assessment tool and a facilitator’s guide, making it a good starting point for developing learning outcomes and assessment.

Residence Life and Student Activities staff met to narrow the 17 NACA competencies down to five core competencies that fit the student culture and desired outcomes: leadership development, meaningful interpersonal relationships, collaboration, social responsibility, and effective communication. The staff used the assessment tool as a pre- and post-self-evaluation to determine the impact programs were having on the five competencies. The first year of data revealed that students scored themselves lower on the post-evaluation, even after extensive leadership training and a yearlong leadership experience. This drop was credited to a greater self-awareness and understanding of their own leadership competencies. After much consideration, the pre-evaluation was dropped from the assessment process. In the second year, at a full division retreat, the student affairs staff developed four core competencies for all student leaders: leadership development, social responsibility, effective communication, and multicultural competency. A partnership was also formed with Campus Labs to develop a post-evaluation for student leaders with an expanded scale. Finally, multiple assessment methods were developed and incorporated, utilizing reflection through journaling and one-on-one meetings.
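To make the pre-/post-comparison concrete, the sketch below shows the kind of analysis such a design implies: a mean change per competency. Only the competency names come from Linfield’s list; the ratings, the 1–5 scale, and the student pairing are hypothetical, invented purely for illustration.

```python
"""Hypothetical pre-/post-self-evaluation comparison.

Only the competency names come from Linfield's five core
competencies; the ratings below are invented and assume a 1-5
self-rating scale.
"""
from statistics import mean

# Each list holds one rating per student, in the same student order
# on the pre- and post-evaluations.
pre_scores = {
    "Leadership development": [4, 5, 4, 5],
    "Effective communication": [4, 4, 5, 4],
}
post_scores = {
    "Leadership development": [3, 4, 4, 4],
    "Effective communication": [4, 5, 4, 4],
}

for competency, pre in pre_scores.items():
    post = post_scores[competency]
    change = mean(post) - mean(pre)
    # A negative change mirrors Linfield's first-year finding:
    # students rated themselves lower after the experience.
    print(f"{competency}: {mean(pre):.2f} -> {mean(post):.2f} ({change:+.2f})")
```

As the Linfield staff concluded, a negative mean change in a self-evaluation does not necessarily mean students learned less; it may reflect more accurate self-appraisal after the experience, which is one reason the pre-evaluation was eventually dropped.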

For Linfield, this was an intensive and rewarding project in which valuable lessons were learned. First, a pre- and post-evaluation is not always the ideal methodology. Second, for small divisions with limited budgets, it is important to seek out existing resources; these may include guiding documents from other institutions, templates and tools through NACA, and consultations with Campus Labs. Finally, it is important to be upfront and direct with students about learning outcomes. Showing students that their leadership positions are learning laboratories was an important part of the assessment process. To that end, learning outcomes were incorporated into each part of a student’s leadership experience, from the marketing of the position to the hiring process, training, program goals, and one-on-one meetings. Linfield found that once the language of learning and assessment was used, students followed suit and incorporated it into their own experience.

Conclusion

Whether a staff is searching for a way to complete a full program review or a small campus is looking for a place to start, assessing student learning in campus activities begins with determining what framework or process works best. There are myriad ways to assess student learning in the co-curricular realm; regardless of which method one chooses, a focus on intentionally gathering relevant data to improve the student experience is paramount. Assessing student learning is a challenging and rewarding experience, one that can benefit students and staff alike.

Discussion Questions

  • What are the benefits of assessing student life programs? How can assessment help maximize opportunities or mitigate challenges in a student life context?
  • What do students learn through participation in student life programs? Are there differences in learning depending on breadth of experience vs. depth of experience?
  • How can student learning outcomes truly be measured in the context of student life?
  • What steps can one take today to plan an assessment of a student life program? What steps can be planned for this month? This semester?

References

Bresciani, M. J. (2011, August). Making assessment meaningful: What new student affairs professionals and those new to assessment need to know. (NILOA Assessment Brief: Student Affairs). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Council for the Advancement of Standards in Higher Education. (2011). CAS handout. Retrieved from the CAS Web site http://www.cas.edu/wp-content/uploads/2011/04/CAS_Handout04-11.pdf

Keeling, R. P., American College Personnel Association, & National Association of Student Personnel Administrators (U.S.). (2006). Learning reconsidered 2: Implementing a campus-wide focus on the student experience. Washington, D.C.: ACPA.

McNeil, M. (2009). Using standards to support peer education. Retrieved from the Alice! Health Promotion Program, Columbia University Web site: http://health.columbia.edu/files/healthservices/alice_downloads_using_st…

Nuss, E. M. (2000). The role of professional associations. In M. J. Barr, M. K. Desler, & Associates (Eds.), The handbook of student affairs administration (2nd ed.). San Francisco, CA: Jossey-Bass.

Schuh, J. H., & Upcraft, M. L. (2001). Assessment practice in student affairs: An applications manual. San Francisco, CA: Jossey-Bass.

White, E. R. (2006). Using CAS standards for self-assessment and improvement. Retrieved from the NACADA Clearinghouse of Academic Advising Resources Web site: http://www.nacada.ksu.edu/Clearinghouse/AdvisingIssues/CAS.htm

About the Authors

Kim Yousey-Elsener, Ph.D., is an Associate Director of Assessment Programs at Campus Labs and serves as Chair of ACPA’s Commission for Assessment and Evaluation. In addition to her assessment work with over 100 campuses nationwide, she serves as adjunct faculty at West Virginia University.

Please e-mail inquiries to Kim Yousey-Elsener.

Stella Mulberry Antic, Ph.D., is the Assistant Director of Research, Assessment, and Planning for Student Affairs at the University of North Texas, and serves on the directorate for the ACPA Commission for Assessment and Evaluation. At UNT, Dr. Antic conducts research related to student populations and works on developing a statistical model of student retention using direct evidence of program and service usage patterns.

Please e-mail inquiries to Stella Mulberry Antic.

Disclaimer

The ideas expressed in this article are not necessarily those of the Developments editorial board or those of ACPA members or the ACPA Governing Board, Leadership, or International Office Staff.
