Employee Evaluation Using Professional Competencies

Vicki L. Wise
Portland State University
Lisa J. Hatfield
Portland State University

At many universities, the office of human resources typically offers generic employee evaluation forms for various classifications of non-academic employees that broadly measure their performance. As a result, Student Affairs employees are often evaluated against a set of standards that do not directly relate to their work. Our institution’s Student Affairs professionals have shared their difficulty in using our university’s generic evaluation tool in a meaningful way: the areas measured do not align with professional standards in Student Affairs, and the rating scale is poorly defined and difficult to understand. Finally, many employees do not find the scale easily applicable to goal setting and professional development.

To remedy these difficulties, the Director of Student Affairs Assessment and the Director of the Learning Center developed a supplemental employee self-evaluation tool aligned with the ACPA – College Student Educators International and NASPA – Student Affairs Administrators in Higher Education Professional Competency Areas for Student Affairs Practitioners (2010). In addition, our division of Student Affairs has used the Council for the Advancement of Standards (CAS) in Higher Education (2012) to inform strategic planning, program development, and assessment, so the new evaluation tool also aligns with the CAS standards. Our goal was to create an evaluation tool that would inform staff about areas of strength and areas where they might need further training and professional development.

We developed this scale following DeVellis (1991, 2012), who recommends an eight-step process for producing scales that accurately and reliably measure the constructs of interest:

  • Defining the construct(s) of interest to measure.
  • Creating a set of draft questions that will become the item pool.
  • Determining the format for both the items and the response scale.
  • Seeking expert opinion for item and response scale review.
  • Adding items to reduce socially desirable responding.
  • Pilot-testing items with a sample of the target population.
  • Analyzing the results of the pilot test to determine item and scale quality.
  • Determining which items to keep for the final scale.

Process

Step 1: Decide What to Measure

To begin developing our instrument, we first needed to identify the construct(s) and aspects of employee performance to measure. We did this by reviewing the Professional Competency Areas for Student Affairs Practitioners (ACPA & NASPA, 2010) together with the Human Resources guidelines specified on our university’s employee evaluation tool, and we used these collectively to develop our scale. ACPA and NASPA address 10 competency areas, each with outcomes at the basic, intermediate, and advanced skill levels: (1) advising and helping; (2) assessment, evaluation, and research; (3) equity, diversity, and inclusion; (4) ethical professional practice; (5) history, philosophy, and values; (6) human and organizational resources; (7) law, policy, and governance; (8) leadership; (9) personal foundations; and (10) student learning and development. A total of 335 outcomes are addressed across these competency areas.

To begin distilling the 335 outcomes into a set of items that would represent the work of Student Affairs staff, we created an Excel workbook with 10 sheets, one for each competency area and its corresponding outcomes. Three rating columns allowed the two researchers and a graduate student to independently code each of the 335 outcomes with a keyword that reflected the essence of that skill. The two researchers then met and reviewed all three sets of ratings, asking for each item whether the keyword(s) reflected what we thought the item meant. Where we disagreed in our interpretation of an item, we discussed it until we reached agreement on its meaning. In this process, we created a fourth column for our agreed-upon theme.

We then combined the 335 outcomes and their agreed-upon themes into one spreadsheet, sorted the themes alphabetically, and combined themes that were alike. The seven themes generated from this review were communication, cultural competence, core foundations, leadership, law and ethics, management, and professionalism.
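The coding and reconciliation described above were done by hand in Excel. For readers who would rather script this bookkeeping, a minimal sketch in Python/pandas follows; the file and column names (e.g., rater1_keyword, agreed_theme) are hypothetical, not the workbook we actually used.

```python
# Minimal sketch (hypothetical file and column names): consolidating the
# independent keyword codes for the 335 outcomes and flagging disagreements.
import pandas as pd

# Read all 10 competency-area sheets and stack them into one table,
# one row per outcome.
sheets = pd.read_excel("competency_outcomes.xlsx", sheet_name=None)
outcomes = pd.concat(sheets.values(), ignore_index=True)

# Flag outcomes where the three coders did not use the same keyword,
# so those items can be discussed until agreement is reached.
rater_cols = ["rater1_keyword", "rater2_keyword", "rater3_keyword"]
outcomes["needs_discussion"] = outcomes[rater_cols].nunique(axis=1) > 1

# After discussion, the agreed-upon theme is recorded in a fourth column;
# sorting by theme makes it easy to spot and merge similar themes.
outcomes = outcomes.sort_values("agreed_theme")
print(outcomes.groupby("agreed_theme").size())
```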

Step 2: Generate Item Pool

We still had 335 outcomes related to our seven themes and knew we had to reduce them to a manageable list. We reviewed all 335 items to determine which best reflected each of the seven themes, and for the first draft of the scale we extracted 34 outcomes (items) from the competency areas aligned with those themes. We then revisited the seven themes and their corresponding items and determined that the law and ethics theme should be removed, with one of its items moved to the cultural competence theme: “Act in accordance with federal and state/province laws and institutional policies regarding nondiscrimination.” Finally, we decided on a measurement scale that reflected levels of performance and would be informative for employees’ self-evaluation.
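For concreteness, the resulting item pool can be thought of as a mapping from theme to items. The sketch below is illustrative only, with placeholder entries everywhere except the outcome quoted above; it simply shows the revision made at this stage.

```python
# Illustrative only: the draft item pool as a theme-to-items mapping, followed by
# the revision described above (retiring law and ethics and moving its
# nondiscrimination item to cultural competence). "..." entries are placeholders.
item_pool = {
    "communication": ["..."],
    "cultural competence": ["..."],
    "core foundations": ["..."],
    "leadership": ["..."],
    "law and ethics": [
        "Act in accordance with federal and state/province laws and "
        "institutional policies regarding nondiscrimination.",
    ],
    "management": ["..."],
    "professionalism": ["..."],
}

# Move the nondiscrimination item, then retire the law and ethics theme.
item_pool["cultural competence"].extend(item_pool.pop("law and ethics"))
```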

Step 3: Determine Format for Measurement

Our initial scale was Proficient and Not Proficient, as we reasoned that an employee either met a standard or did not. However, based on feedback from department leaders in Student Affairs, we concluded that Not Proficient did not support a developmental model. We therefore used the following response scale: Proficient: exemplifies practices most or all of the time; Developing: exemplifies practices on occasion and has room for growth; and Not Applicable: does not apply to job description and expectations. We then created a draft rubric and prepared for reviews.
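As a purely illustrative encoding (not part of the tool itself), the three-point response format can be represented as a small enumeration:

```python
# Illustrative encoding of the three-point response scale described above.
from enum import Enum

class Rating(Enum):
    PROFICIENT = "Exemplifies practices most or all of the time"
    DEVELOPING = "Exemplifies practices on occasion and has room for growth"
    NOT_APPLICABLE = "Does not apply to job description and expectations"
```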

Step 4: Have Item Pool Reviewed by Experts

We solicited feedback from our university’s Student Affairs Assessment Council (SAAC). The SAAC comprises representatives from departments across Student Affairs who are responsible for conducting assessment in their areas; the Council’s goal is to create a systemic and systematic culture of assessment in which we use data, in all its forms, to inform our educational practices and to ensure student success. Based on the Council’s feedback, several items were rewritten because they were double-barreled, confusing, or contained errors, and three items were eliminated. For example, under the Management theme, we reworded the item “Model the principles of the profession and communicate the expectation of the same from colleagues and supervisees” to “Model the standards of your professional organization (e.g., NASPA, NACADA, etc.).” Our plan was then to submit the rubric to all Student Affairs staff for more feedback. The Council recommended removing the proficiency scale for this first review and instead having staff rate each item as “applies to my work” or “does not apply to my work”; they reasoned that the first priority was to determine whether the items were relevant to the work staff do, and that including the proficiency scale would make judging item relevance confusing.

Step 5: Consider Validation Items

DeVellis (1991, 2012) recommends including validation items to detect response bias, which occurs when individuals are motivated to present themselves in the most positive light (social desirability). The higher the consequences to the employee, the more likely bias is to occur. Because this self-evaluation tool is not linked to promotion or pay, it is unlikely that staff would be motivated to respond in a biased way.

Step 6: Administer Items to a Developmental Sample

We pilot-tested the items with our target population of Student Affairs personnel, including their supervisors. Pilot-testing allowed us to determine whether the items were applicable to the work in which unclassified staff engage. We administered the items online using the survey tool Qualtrics. Respondents were asked to review each item in light of their current job position and to note whether the item applied to their work and whether it was understandable. If they reported that an item was not understandable, a follow-up question asked: “Please tell us why the items are not understandable.” All 153 staff employed in Student Affairs were given the opportunity to evaluate the items, and 53 staff members (35%) responded. The feedback was overwhelmingly positive, with 98% of respondents reporting that the items were understandable.

We expected that, regardless of their area within Student Affairs, almost all staff would report that the competencies were applicable to their positions. That was generally the case, although there were a few exceptions (results shown in Table 1). In Communication, all but one item applied across the division: “Assist students in ethical decision-making, and may include making referrals to more experienced professionals when appropriate” was applicable less often, which makes sense given that not all positions engage students on a regular basis. In Cultural Competence, we expected that 100% of positions would apply these practices; while the ratings were quite high, we realized that the item “Ensure assessment practices are culturally inclusive” was not well understood. The Director for Assessment and the Director for Diversity and Multicultural Services will address cultural inclusivity in future employee trainings. In Core Foundations, ratings were quite high. In Leadership, ratings were also high, although they were somewhat lower for two items: “Give appropriate feedback to individuals pertaining to professional growth and development” and “Create or participate in meaningful mentoring.” Many Student Affairs staff at our institution are not in roles that require giving feedback to other staff, so the first item has limited applicability, although it remains important for those who do have that responsibility. Because fewer staff view themselves as responsible for mentoring, this result points to an opportunity for professional development. Ratings were lowest in Management, which is unsurprising given the range of unclassified position responsibilities; again, these skills remain important for those who do have them in their portfolios. Finally, ratings were high in Professionalism.
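The percentages summarized in Table 1 come from the dichotomous applies/does-not-apply responses. As a minimal sketch of how such a summary can be produced (with a hypothetical response file and column names, not our actual Qualtrics export):

```python
# Minimal sketch (hypothetical file and column names): percentage of pilot
# respondents who marked each item as applying to their work, grouped by theme.
import pandas as pd

# Long format: one row per respondent x item, with a 1/0 "applies" flag.
responses = pd.read_csv("pilot_responses.csv")  # columns: respondent, theme, item, applies

applicability = (
    responses.groupby(["theme", "item"])["applies"]
    .mean()                     # proportion marking the item applicable
    .mul(100)
    .round(1)
    .rename("pct_applicable")
    .reset_index()
)
print(applicability.sort_values(["theme", "pct_applicable"]))
```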

The second pilot test included our proficiency scale (Proficient: exemplifies practices most or all of the time; Developing: exemplifies practices on occasion and has room for growth; Not Applicable: does not apply to job description and expectations). The items and scale were administered online; twenty-five staff members reviewed the scale and determined that it was understandable.

Step 7: Evaluation of Items

In this step, DeVellis (1991, 2012) recommends that once pilot data have been collected, items be evaluated to determine how well they function, including examination of item-score correlations, item variances, and item means. DeVellis also recommends examining the internal reliability (consistency) of a scale. We veered from the protocol here: because items were assessed on a dichotomous scale, there was little variability in responses. We measured six themes with 31 items, so we knew we had a multidimensional scale. We examined internal consistency using Kuder-Richardson 20 (KR-20), which is recommended for scales with dichotomous measures and is comparable to Cronbach’s α for non-dichotomous measures (DeVellis, 1991, 2012). A general rule of thumb is that internal consistency should exceed .70 (Nunnally & Bernstein, 1994); our internal consistency reliability coefficient was .85.
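For readers who want to reproduce this kind of check, KR-20 for k dichotomously scored items is (k / (k - 1)) * (1 - sum of p_i * q_i / variance of total scores), where p_i is the proportion of respondents endorsing item i and q_i = 1 - p_i. A minimal Python sketch with simulated data (not our pilot responses) follows.

```python
# Minimal sketch of Kuder-Richardson 20 (KR-20) for dichotomously scored items.
# KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / var(total)), where p_i is the
# proportion endorsing item i and q_i = 1 - p_i.
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of 0/1 item scores."""
    k = scores.shape[1]
    p = scores.mean(axis=0)                      # proportion endorsing each item
    item_var = (p * (1 - p)).sum()               # sum of p_i * q_i
    total_var = scores.sum(axis=1).var(ddof=0)   # population variance of totals,
                                                 # consistent with the p*q item variances
    return (k / (k - 1)) * (1 - item_var / total_var)

# Simulated data (illustrative only): 53 respondents, 31 dichotomous items,
# with a common respondent factor so the items are positively correlated.
rng = np.random.default_rng(0)
ability = rng.normal(size=(53, 1))
demo = (rng.normal(size=(53, 31)) < ability).astype(int)
print(round(kr20(demo), 2))
```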

Step 8: Optimize Scale Length

Typically, the final step in scale development is a factor analysis to determine the number of factors and whether the scale is unidimensional or multidimensional. Factor analysis was unwarranted here because the data were dichotomous and the sample was small. Moreover, the maximum number of staff who will use this form is 153, and because the form is a self-evaluation tool we will not have access to the results.

Conclusion

This paper presents a scale development process used in Student Affairs. The scale developed is a Student Affairs staff self-evaluation tool that can be used for personal performance review and professional goal setting; it can also be used by supervisors to help set professional development agendas. As previously mentioned, this instrument will inform our professional development efforts in Student Affairs. Our next annual employee self-evaluation period is spring 2014, when all Student Affairs staff will complete this self-assessment, and the Director of Assessment will offer a series of trainings for supervisors on how to use the tool effectively to support staff in their professional development. Our university has a very active employee-development agenda: the university training schedule offers workshops to support staff skill development in management/supervision, technology, and communication, among other areas. In addition, Student Affairs has an Employee Learning Group responsible for monthly learn-at-lunch sessions related to areas represented in the CAS standards and directly related to this evaluation tool.

While our original intention was to develop a tool for Student Affairs staff at our institution, we recognize that this scale could be used across Student Affairs in all job positions, as the competencies it addresses should apply to all. Moreover, it can be used to expand and improve job descriptions to include these competencies.

Discussion Questions

  1. As this scale is used for employee self-reflection, growth, and development, what types of professional development might you offer that are aligned with the competencies measured?
  2. Being able to measure employee growth over time is essential. Who at your institution might help develop an instrument like this for the Student Affairs staff?
  3. How might a supervisor use this scale to develop job descriptions for new employees?

References

ACPA & NASPA. (2010). Professional competency areas for student affairs practitioners. Retrieved from http://www.myacpa.org/professional-competency-areas-student-affairs-practitioners

Council for the Advancement of Standards in Higher Education. (2012). CAS professional standards for higher education (8th ed.). Washington, DC: Council for the Advancement of Standards.

DeVellis, R. F. (1991). Scale development: Theory and applications. Applied Social Research Methods Series. Newbury Park, CA: Sage.

DeVellis, R. F. (2012). Scale development: Theory and applications (3rd ed.). Applied Social Research Methods Series. Thousand Oaks, CA: Sage.

Nunnally, J. C., & Bernstein, I. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.

About the Authors

Vicki L. Wise, PhD, serves as Director of Assessment & Research at Portland State University (PSU), where she oversees assessment, planning, and reporting for the Division of Enrollment Management & Student Affairs. Prior to PSU, she was at James Madison University for 10 years and held the positions of Director of Assessment and Evaluation for the College of Education, Assistant Director for Institutional Research, and Assistant Professor/Research Administrator in the Center for Assessment and Research Studies. Vicki earned her PhD and MA degrees at the University of Nebraska in Psychological and Cultural Studies and Educational Psychology, respectively.

Please e-mail inquiries to Vicki L. Wise.

Lisa Hatfield is the Director of Portland State University’s Learning Center. Lisa is a member of our institution’s Student Affairs Assessment Council and has had a great deal of experience in classroom assessment (both student and instructor). Having taught in the K-12 system for several years, Lisa also has statewide experience with assessment, especially with evaluating students’ writing. She holds an MA and an MAT, and is a doctoral student in Curriculum and Instruction.

Please e-mail inquiries to Lisa Hatfield.

Disclaimer

The ideas expressed in this article are not necessarily those of the Developments editorial board or those of ACPA members or the ACPA Governing Board, Leadership, or International Office Staff.
