About the PICCS Evaluation Frameworks
At the heart of the HCMS approach in PICCS is a rigorous evaluation system. The participating schools have spent significant time refining a shared evaluation program so that it meets local and state regulations (New York and New Jersey) and federal guidelines for the Teacher Incentive Fund (TIF, the program through which the PICCS project is funded). Schools have also worked diligently to ensure that stakeholders within their school communities consider the evaluation system equitable and rigorous.
The result of these intensive planning, development, and engagement processes is a pair of shared PICCS Evaluation Frameworks (one for teachers and another for school leaders), which each PICCS school then customizes to meet its specific needs and mission. The evaluation processes for participating schools are based on the following shared tenets:
1. Evaluation should be supportive of professionals. The PICCS Evaluation Framework includes both formative and summative assessments that are used to inform professional development, allocation of resources, and technical assistance/training for the staff member being evaluated. The goal is to move all staff members to their highest potential performance.
2. Evaluation should be transparent, clear, and useful to professionals. The PICCS Evaluation Framework spells out what data will be used to evaluate a staff member, how that data will be collected, how and where the data will be recorded, and how the data will be converted to a score and rating on a HEDI chart (HEDI stands for “Highly Effective,” “Effective,” “Developing,” and “Ineffective”). To help teachers and principals implement the process, PICCS has established a set of web-based resources for the teacher evaluation and is currently developing a similar set of resources to support the principal evaluation.
3. Evaluation must be conducted by trained professionals. Two types of data are used in the PICCS Evaluation Framework: 1) Student Growth Data and 2) Professional Practice Data. All student outcome data is collected, managed, and calculated by trained Data Coaches at the schools and reviewed by Data Engineers at the PICCS central office to ensure accuracy. All professional practice data (such as data collected during classroom observations or walkthroughs) is collected by individuals who have been trained and certified to properly implement the observation/assessment tool used at their school (either the Marzano or Danielson Framework). Periodically, a nationally certified expert in the school’s observation/assessment tool visits to review the data collected and the processes being implemented, helping to ensure reliable and consistent practices across all PICCS schools.
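To illustrate the kind of score-to-rating conversion described in tenet 2, the sketch below maps a composite evaluation score onto the four HEDI categories. The cut scores shown are illustrative placeholders only; the actual PICCS scoring bands are not specified in this document and would be defined by each school's customized framework.

```python
def hedi_rating(score, cutoffs=None):
    """Map a composite evaluation score (0-100) to a HEDI rating.

    The thresholds are hypothetical placeholders, NOT the real
    PICCS cut scores, which vary by school and framework.
    """
    if cutoffs is None:
        cutoffs = [
            (91, "Highly Effective"),  # placeholder band
            (75, "Effective"),         # placeholder band
            (65, "Developing"),        # placeholder band
        ]
    for threshold, label in cutoffs:
        if score >= threshold:
            return label
    return "Ineffective"
```

Keeping the cut scores as a parameter reflects the document's point that each school customizes the shared framework: a school could supply its own bands without changing the conversion logic.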