Student Affairs Staff Support, Resistance, or Indifference to Assessment
Matthew B. Fuller
Sam Houston State University
Much has been said about the importance of a culture of assessment in institutional communities. Although less has been formally researched and written, an emerging scholarship makes it possible for student affairs practitioners to discuss their role in developing, maintaining, or augmenting a culture of assessment on their campuses. Assessment culture is often assumed to be a positive force because its purported benefits to student learning are highly desirable. A strong culture of assessment, however, can also yield benefits for accreditation, the financing of institutional efforts, and the overall effectiveness of programs and the institution (Maki, 2010). Assessment that serves only the aim of improving student learning is often not tapped for its importance to institutional processes such as program review, accreditation, or planning. Conversely, assessment crafted only to respond to accreditation, accountability, or financial concerns often neglects, or is completely disconnected from, student learning. What is needed instead is a healthy balance of assessment cultures, a tool capable of exploring and measuring this balance, and opportunities for cross-institutional dialogue about perceptions of assessment.
Often, higher education professionals do not recognize or seek out the expertise of other professionals on their campus, preferring instead to adhere to tradition or other forms of collective, professional wisdom (Kezar, 2005; Ward, 2000). In the case of assessment, professional or disciplinary boundaries may prevent collaborations that would otherwise prove beneficial for autonomous units, the institution, and students. Traditional narratives falsely maintain a place for student affairs practitioners as merely the coordinators of non-classroom activities and purveyors of occasional, co-curricular learning (Kezar & Elrod, 2012). Recent research led by Kezar (2005), Kuh, Kinzie, Schuh, and Whitt (2005), Schuh and Gansemer-Topf (2010), and many others has firmly established the role of student affairs professionals in contributing to deep, long-lasting, meaningful learning spanning disciplines and social boundaries.
The time has come to also recognize and solidify the vital role student affairs practitioners play in developing, maintaining, or changing an institution’s culture of assessment. In the fall 2011 semester, the new Survey of Assessment Culture© was administered to a nationwide, stratified, random sample of America’s directors of institutional research and assessment. This research effort explored factors influencing the manner in which higher education institutions develop, maintain, or augment their institutional cultures of assessment. Developed around Maki’s (2010) Principles of an Inclusive Commitment to Assessment, the Survey of Assessment Culture contributes to initial empirical explorations of assessment cultures. This article introduces brief methodological aspects of the Survey of Assessment Culture before offering findings from one section of the Survey: assessment directors’ rankings of campus leaders’ support, resistance, or indifference to assessment, in particular the support or resistance of student affairs staff. This article addresses the research question: “What are institutional research and assessment directors’ perceptions of student affairs practitioners’ support, resistance, or indifference to assessment?” After providing a brief methodological overview and articulating findings, this article outlines new potentials for cross-institutional practice and directions for future research.
Brief Methodological Overview
Prior research exploring assessment practices has either relied on samples of convenience (Ndoye & Parker, 2010) or has not studied institutional research or assessment directors (Kuh & Ikenberry, 2009). The present study used publicly available resources to construct a random, stratified sample of U.S. directors of institutional research and assessment. A listing of undergraduate, degree-granting, regionally-accredited institutions was downloaded from the Carnegie Classification of Institutions of Higher Education website and was stratified according to institutional full-time enrollment size, accreditation region, and Carnegie Basic Classification. This stratified listing of institutions was placed in a sampling matrix according to the type of degrees awarded (primarily associate’s vs. primarily bachelor’s), accreditation region, and size of full-time enrollment. This resulted in a listing of 2,617 institutions, a population similar to that surveyed by Kuh and Ikenberry (2009). Institutions were sampled at the most refined level of stratification and were oversampled by a factor of three to ensure the best possible dispersion of a representative number of respondents at and across each level of stratification. In total, 1,026 institutions were randomly sampled for invitation to participate in the survey.
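For readers interested in replicating this design, the proportional stratified sampling with oversampling described above can be sketched in code. The sketch below is illustrative only: the field names (`degree_type`, `accreditation_region`, `enrollment_size`) and the per-stratum allocation rule are hypothetical stand-ins for the Carnegie-derived sampling matrix described in the text, not the study’s actual procedure.

```python
import random
from collections import defaultdict

def stratified_oversample(institutions, strata_keys, base_n=342, factor=3, seed=0):
    """Draw a proportional stratified sample, oversampled by `factor`.

    With base_n=342 and factor=3, roughly 1,026 institutions are drawn,
    echoing the sample size reported in the text.
    """
    rng = random.Random(seed)

    # Group institutions into strata (cells of the sampling matrix).
    strata = defaultdict(list)
    for inst in institutions:
        strata[tuple(inst[k] for k in strata_keys)].append(inst)

    # Allocate the oversampled total proportionally across strata,
    # never drawing more institutions than a cell actually contains.
    total = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        n = min(len(members), round(base_n * factor * len(members) / total))
        sample.extend(rng.sample(members, n))
    return sample
```

Oversampling each cell, rather than the pool as a whole, is what preserves representation at every level of stratification even after non-response.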
After institutions were randomly selected, the Higher Education Directory® was used to identify contact information for directors of institutional research and assessment at sampled institutions. Although the Higher Education Directory is a voluntary listing of contact information, email addresses for 792 contacts (77.2%) were obtained using this resource. The remaining institutional contacts underwent status checks using institutional websites and public search engines1 to identify Chief Assessment Officers, defined as the individuals for whom assessment is their primary responsibility. A total of 170 Chief Assessment Officers were identified using this method. The remaining 64 institutions did not have an entry in the Higher Education Directory, and web searches did not yield contact information. In these cases, the Provost of the institution was invited to participate in the survey, and his or her contact information was gathered using the Higher Education Directory.2
Of the 917 invited participants, 316 responded to the electronic survey and completed at least three-quarters of it, yielding a 34.5% response rate. The least-answered question still obtained 224 responses, with an average of 302 responses per question. This response rate suggests the potential for cautious generalization to the national level and could be strengthened by greater response in future administrations.
Limitations
The current study is limited in that little is known about respondents’ mental schemata when ranking various colleagues as supportive, resistant, or indifferent to assessment. Stated differently, additional analyses are needed to ascertain why an institutional research or assessment director indicated that a particular group of colleagues was supportive of or resistant to assessment. The constructs under examination in this study (support, resistance, and indifference) may differ considerably from participant to participant. For this reason, the current study presents its findings as a depiction of institutional research and assessment directors’ perceptions rather than a “hard and fast” ranking of colleagues’ support of assessment. A more comprehensive discussion of the sampling method, conceptual frameworks, and limitations can be found on the Sam Houston State University website. Findings from one section of the survey, focusing on institutional research and assessment directors’ rankings of campus leaders’ support, resistance, or indifference to assessment, are offered in the following section.
Findings
Participating institutional research and assessment directors were asked to rank ten different campus leaders or leadership groups regarding their supportiveness, resistance, or indifference to (or unawareness of) assessment. Campus leaders or leadership groups included (a) Board of Trustee members; (b) President; (c) Provost; (d) Faculty; (e) Student Affairs staff; (f) Faculty Senate members; (g) Development/Fundraising officers; (h) Alumni services; (i) Academic Advisors; and (j) Student Government leaders. A seven-point Likert-type scale was developed ranging from “Indifferent/Unaware of assessment” (1); “Highly Resistant” (2); “Moderately Resistant” (3); “Only Slightly Resistant” (4); “Only Slightly Supportive” (5); “Moderately Supportive” (6); to “Highly Supportive” (7).
Regarding student affairs staff, institutional research and assessment directors ranked only 5.8% of student affairs staff as “Indifferent/Unaware of assessment.” Student affairs staff ranked as “Highly Resistant” accounted for 1.3% of rankings, “Moderately Resistant” for 2.7%, and “Only Slightly Resistant” for 1.8%. Participants ranked student affairs staff as “Only Slightly Supportive” in 12.4% of their rankings, while the “Moderately Supportive” and “Highly Supportive” categories each accounted for 38.1%. In aggregate, institutional research and assessment directors ranked 88.5% of student affairs staff as supportive to some extent, 5.8% as resistant to some extent, and 5.8% as “Indifferent/Unaware.”
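The collapse of the seven-point scale into the three summary categories used above is simple but worth making explicit. The sketch below is illustrative: the hypothetical counts in the usage note are reconstructed to echo the reported percentages, not the study’s raw data; only the aggregation logic is the point.

```python
from collections import Counter

# The seven-point scale described in the text
# (1 = Indifferent/Unaware of assessment ... 7 = Highly Supportive).
SCALE = {
    1: "Indifferent/Unaware of assessment",
    2: "Highly Resistant",
    3: "Moderately Resistant",
    4: "Only Slightly Resistant",
    5: "Only Slightly Supportive",
    6: "Moderately Supportive",
    7: "Highly Supportive",
}

def aggregate(ratings):
    """Collapse 1-7 ratings into proportions supportive, resistant,
    or indifferent, as in the article's aggregate figures."""
    n = len(ratings)
    counts = Counter(ratings)
    return {
        "supportive": sum(counts[v] for v in (5, 6, 7)) / n,  # supportive to any extent
        "resistant": sum(counts[v] for v in (2, 3, 4)) / n,   # resistant to any extent
        "indifferent": counts[1] / n,
    }
```

For example, a hypothetical set of 100 ratings with 12 fives, 38 sixes, and 38 sevens would aggregate to 88% supportive, mirroring the roughly 88.5% figure reported for student affairs staff.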
In comparison, student affairs practitioners were ranked as only slightly less supportive of assessment than Presidents (91.6%) and Provosts (90.6%), making them the third most supportive group. Faculty Senate members (78.9%) and Faculty (75.8%) were viewed as the next most supportive leaders, followed by Academic Advisors (73.2%), Board of Trustee members (69.5%), Development/Fundraising Officers (53.4%), Student Government leaders (49.3%), and Alumni groups (29.1%). An equal portion of Student Government leaders (49.3%) were ranked as indifferent to assessment, and notable percentages of indifference were seen among Alumni groups (69.5% indifferent), Development/Fundraising Officers (42.9% indifferent), and Board of Trustee members (30.5% indifferent). For a more detailed comparison and a treatment of scale items as interval/ratio data, see Fuller (2011).
Discussion and Call for Future Research
The large percentage of supportive responses indicates that U.S. student affairs practitioners are perceived as supportive of assessment and that their colleagues in institutional research and assessment have taken note of this support. Along with presidents and provosts, student affairs practitioners are perceived as among the most supportive members of an institutional community. Empowering student affairs staff to demonstrate their support for assessment may prove beneficial to advancing assessment practices across campus. Student affairs staff may be supportive allies in advancing the benefits of assessment to faculty, administrators, or students. In particular, student affairs practitioners can be critical in reaching out to students or student organizations and instilling the fundamental commitments to self-exploration and inquiry inherent in assessment.
Findings from the current study reveal a positive depiction of assessment administrators’ belief in student affairs practitioners. However, the fact that 5.8% of student affairs staff were ranked as Indifferent/Unaware and 5.8% were perceived as resistant to some extent offers an opportunity for student affairs practitioners, and all groups in the present study, to consider how they can be perceived as more supportive of assessment. Campus leaders in these groups may be astonished to learn that they are perceived as more or less supportive of assessment than they view themselves. Student affairs staff have daily contact with students and are vital collaborators in an effective culture of assessment focused on improving student learning (Maki, 2010; Upcraft & Schuh, 1996). Further debunking myths that student affairs staff are resistant to assessment could precipitate advantageous conditions for the improvement of student learning as an institutional way of life.
Similarly, student affairs practitioners may see unique opportunities to translate their support for assessment into generative, meaningful action. Student affairs staff may see avenues for collaboration with institutional research staff or other colleagues perceived as more resistant to assessment. Student affairs staff must connect with colleagues inside and outside of student affairs and the institution. They are masters of seeking innovative partnerships, respectfully spanning boundaries, and building synergy with colleagues (Kezar, 2005). Determining avenues for mutual benefit between student affairs staff and other campus colleagues concerned with advancing the benefits of assessment may initiate and sustain long-term cultural change in institutions. These findings may have been drastically different in years or decades prior (Astin & Antonio, 2012; Ewell, 2002; Upcraft & Schuh, 1996), and they are points worth celebrating on individual campuses and in assessment scholarship. Data on support, resistance, or indifference to assessment may be most meaningful as a model for additional, institution-level dialogue. Assessment practitioners may not have considered which campus constituents are most supportive, resistant, or indifferent to assessment on their campus. This finding may offer individual practitioners an avenue to initiate conversations about assessment within their units and on their campus. Moreover, student affairs or assessment practitioners may not fully recognize their role and the power they possess in formulating a campus culture supportive of assessment for the purpose of learning. Individual reflections on assessment cultures and supportive partnerships for assessment across campus communities may be the most useful and meaningful reflections in which educators can engage. This finding suggests student affairs colleagues support assessment to an exemplary degree and should be recognized for their commitment to this reform agenda.
Further research and advocacy are necessary to organize student affairs practitioners’ highly supportive approach to assessment as a force for change in higher education. Doing so will allow institutional research and assessment directors to connect supportive practitioners with more resistant or indifferent colleagues. Through these connections, a scholarship of collegiality (i.e., the study of how colleagues come together and are changed despite differing perspectives on a subject) can emerge and be leveraged to support change in cultures of assessment. This will be an evolving process in which scholars and practitioners must engage to fully understand how traditionally disparate units learn from each other. Nonetheless, these initial descriptive results offer promising perspectives on the role student affairs practitioners can play as role models to transform or maintain an institution’s culture of assessment. For now, the overwhelming support student affairs practitioners have shown toward assessment is a point worth celebrating. These data support the notion that the majority of student affairs practitioners on the campuses studied are supportive of assessment. Positioning student affairs practitioners as exemplars of support for assessment may advance institutional improvements in cultures of assessment.
Discussion Questions
- What forms of power and authority do student affairs staff members and leaders possess to change their institution’s culture of assessment?
- How do different campus leaders and leadership groups define a culture of assessment or assessment in general?
- What partnerships between student affairs staff and other campus leaders or leadership groups may be beneficial in advancing a culture of assessment?
- What are the necessary components of an effective relationship between student affairs staff and campus leaders and leadership groups?
Notes
1. Search terms: assessment, institutional research, evaluation, institutional effectiveness.
2. Once the 1,026 sampled institutions were invited to participate in the Survey, a total of 109 emails were returned as either inaccurate or no longer active. Thus, a total of 917 participants can be assumed to have been adequately invited to participate in the survey.
References
Astin, A. W., & Antonio, A. L. (2012). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education (2nd ed.). Lanham, MD: Rowman & Littlefield Publishing Group, Inc.
Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. Banta & Associates (Eds.). Building a scholarship of assessment (pp. 3–25). San Francisco, CA: Jossey-Bass.
Fuller, M. B. (2011). Preliminary results of the Survey of Assessment Culture. Retrieved from http://www.shsu.edu/research/survey-of-assessment-culture/documents/2011… AssessmentCultureResults.pdf
Kezar, A. (2005). Moving from I to we: Reorganizing for collaboration in higher education. Change: The Magazine of Higher Learning, 37(6), 50-56.
Kezar, A., & Elrod, S. (2012). Facilitating interdisciplinary learning: Lessons from Project Kaleidoscope. Change: The Magazine of Higher Learning, 44(1), 16-25.
Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Kuh, G. D., Kinzie, J., Schuh, J., & Whitt, E. (2005). Student success in college: Creating conditions that matter. San Francisco: Jossey-Bass.
Maki, P. (2010). Assessing for learning: Building a sustainable commitment across the institution (2nd ed.). Sterling, VA: Stylus.
Ndoye, A., & Parker, M. A. (2010). Creating and sustaining a culture of assessment. Planning for Higher Education, 38(2), 28-39.
Schuh, J.H., & Gansemer-Topf, A.M. (2010, December). The role of student affairs in student learning assessment. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco: Jossey-Bass.
Ward, D. (2000, January/February). Catching the wave of change in American higher education. Educause Review, 35(1), 22-30.
About the Author
Matthew Fuller, Ph.D., is Assistant Professor and Coordinator of Higher Education Administration at Sam Houston State University. Dr. Fuller serves as the Principal Investigator for the Survey of Assessment Culture, a nationwide annual survey of factors influencing institutional assessment approaches. He has held administrative positions in assessment at Texas A&M University and Illinois State University as well as positions in Residence Life at Texas A&M University and the University of Alaska Southeast. Dr. Fuller earned a Bachelor of Arts in Biology, a Master of Science in Educational Administration and Human Resource Development (Emphasis in Student Affairs Administration in Higher Education), and a certificate in College Teaching from Texas A&M University, and a Ph.D. in Educational Administration and Foundations from Illinois State University. He is a 2008 recipient of the Association for Institutional Research’s Julia Duckwall Fellowship and a 2012 fellow of the National Center for Education Statistics’ National Data Institute. Dr. Fuller’s research agenda focuses on the foundations of assessment, assessment cultures in higher education, and the history of higher education.
Please e-mail inquiries to Matthew B. Fuller.
Disclaimer
The ideas expressed in this article are not necessarily those of the Developments editorial board or those of ACPA members or the ACPA Governing Board, Leadership, or International Staff Office.