
Standard 2 Report

2.1 How does the unit use its assessment system to improve candidate performance, program quality, and unit operations? 

The Edinboro University (EU) School of Education (SOE) Unit-Wide Assessment System has been operating effectively since 2003. The assessment system was designed to reflect the Effective Facilitators of Learning conceptual framework resulting in a comprehensive system for assessing candidates, programs, and the Unit as a whole.  The assessments within this system are aligned with national and state standards. The Unit has engaged in continuous aggregation and dissemination of data including candidate performance; academic statistics; budget; alumni, employer, and candidate satisfaction surveys; and program/unit assessment data. Programs have used the data to guide program development in the areas of content, field experiences, and pedagogy. A description of the unit’s assessment system including the requirements and key assessments used in transition points can be found in Exhibit 2.3.a.

For initial certification programs, admission to Teacher Education goes beyond the admission requirements of the University. Candidates must earn candidacy in order to be officially admitted to a program. These additional requirements are delineated in the course catalog (UG catalog – admissions requirements), are a part of advising (Advising checklist), and require an approved candidacy form (Candidacy form). Further criteria for entering student teaching, such as passing the appropriate standardized exam, earning a "C" or better in all specialized course work, and successful completion of field experiences, are also delineated in the course catalog as well as on the student teaching prerequisite form.

Admission to advanced programs is managed by the School of Graduate Studies and Research. Admission requirements are delineated in the Graduate Catalog (Graduate Catalog) and include the specific requirements for each program.  All advanced programs in the School of Education require a valid degree and/or certification in an education field.

Candidates receive feedback regularly and systematically through the use of classroom, SPA, and Unit assessments. Using the LiveText platform for submission and evaluation of key assessments allows candidates to see quickly and efficiently whether they are meeting the standards of the field. Rubric scores, comments, and overall grades are visible to candidates as soon as evaluation is completed, so that they can adjust their knowledge and practice immediately. This feedback is in addition to classroom evaluations such as assignments, tests, quizzes, and projects. All candidates receive a mid-term and final grade that can be accessed on the university platform, SCOTS. All EU teacher education candidates enrolled in initial programs are able to self-assess through the Diversity and Disposition Surveys given before candidacy in SEDU 271 and during clinical experiences, as well as through the Technology Survey in SEDU 183 or HPE 384 and the TCPP in the field experience. In addition, these candidates are assessed and receive feedback through Unit assessments such as the Instructional Assessment Plan and the PDE-430 to improve their planning and performance. All candidates in both initial and advanced programs receive feedback on the Conceptual Framework reflection to aid in guiding them toward becoming Effective Facilitators of Learning.

To ensure candidate success in program completion, the Unit has developed a comprehensive assessment system based on integrated assessment and evaluation measures, focused in particular on five transition points. The critical points for monitoring candidate progress through both initial and advanced programs are admission, candidacy, clinical experience, graduation, and the first professional year. Though some supports are in place for first-year professionals, these are in their initial stages of development. Though transition points may be defined differently between initial and advanced programs, they remain critical and effective for monitoring the success of candidates. A complete description of the criteria and key assessments at each transition point for the various programs is provided in Exhibit 2.3.a. The data from these assessments, together with other evaluation measures, are used by committees and administrators within the Unit's structure to manage and improve the unit's operations and programs (CIC Minutes).
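
For illustration only, the sketch below (in Python) models the five transition points as an ordered sequence that a monitoring tool could step a candidate through. The names used (TransitionPoint, CandidateRecord) are hypothetical and do not describe any actual Unit software.

```python
# Illustrative only: the five transition points as an ordered sequence.
# TransitionPoint and CandidateRecord are hypothetical names, not Unit software.
from dataclasses import dataclass, field
from enum import IntEnum


class TransitionPoint(IntEnum):
    """The five transition points, in the order candidates reach them."""
    ADMISSION = 1
    CANDIDACY = 2
    CLINICAL_EXPERIENCE = 3
    GRADUATION = 4
    FIRST_YEAR_PROFESSIONAL = 5


@dataclass
class CandidateRecord:
    name: str
    completed: set = field(default_factory=set)  # transition points passed

    def next_point(self):
        """Return the earliest transition point not yet completed, else None."""
        for point in TransitionPoint:
            if point not in self.completed:
                return point
        return None


candidate = CandidateRecord("J. Doe", completed={TransitionPoint.ADMISSION})
print(candidate.next_point())  # TransitionPoint.CANDIDACY
```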

Many steps have been taken to ensure that Unit and program assessments are providing unbiased, reliable, and valid results. These steps include but are not limited to the following:

  • Diligent programmatic discussion during the development of each assessment and rubric;
  • Assessments and rubrics posted on the candidate's dashboard in LiveText at the beginning of each semester, so that they can be seen and studied prior to assignment and submission;
  • Unit meeting discussion of data from Unit assessments each semester;
  • Program review of program data from all assessments using the Program Analysis Report each spring;
  • On-going posting of all data from all assessments on the D2L platform for review by all faculty;
  • All assessments based on the appropriate program/SPA/INTASC/NCATE standards, with rubrics aligned to the relevant standards;
  • All surveys continuing to be developed with input from various stakeholders, including programs, administration, and P-12 partners.

Our current assessment system promotes data collection that is used to drive meaningful program change by providing regular and comprehensive information on candidate qualifications, proficiencies, and graduate competence. These data are collected and analyzed through the use of information technologies, primarily our accreditation support platform, LiveText. The program-related data are housed within our SPA reports, all of which are posted on the EU website NCATE page (SPA reports). These data are regularly and systematically collected, aggregated by the Unit Accreditation Coordinator (UAC), and disseminated as quickly as possible using the online platform D2L. All members of the SOE have full access to the data, both program and Unit, at all times through this platform. Data are also discussed at Unit meetings in both the Fall and Spring semesters. Additionally, data-driven program changes can clearly be seen in SPA Reports and, beginning Spring 2012, in the annual Program Analysis Reports.

In order for data to be reviewed and used effectively, a structure was put in place (CI governance structure). The Accreditation Coordination Council (ACC) is composed of the Dean, Associate Dean, current UAC, past UAC, and the Management Technician responsible for data collection support. This council meets regularly to oversee the program and Unit-wide assessments. Representatives from each of the NCATE Standards Committees, now referred to as Continuous Improvement Committees (CIC), are responsible for reviewing both the program and Unit-wide assessments as led by the ACC. These representatives, comprising the Continuous Improvement Team (CIT), are faculty from each of the SOE's four departments, as well as the ACC members, and meet regularly to discuss Unit assessment issues. Each member of the CIT serves as a co-chair for a particular committee and is responsible for reviewing key assessments pertaining to one of the NCATE standards and related Unit issues (e.g., Clinical Experiences: Early Field Placements or Clearance Issues). Each chair reports findings and recommendations to the Team and the Unit. The review of Unit operations began in 2006 and continues in order to consider revisions to the assessment system, the governance structure, and Unit-wide data such as exit surveys, diversity data, and the Teacher Candidate Performance Profile. These deliberations have already resulted in employer, alumni, and faculty surveys; changes to the diversity survey; a new system for compiling student complaints; and a SOE disposition policy (Unit meetings & CIC minutes).

Data collection and analysis are facilitated through the use of the College of Education Information System (CEIS). CEIS is an EU – SOE data warehouse comprising elements extracted from the university's student information system, Banner, and other sources. The underlying strategy was to simplify, unify, and verify data from multiple sources to enhance query and analysis capabilities. Integral to the functioning of the system is the use of a professional report-writing application (Crystal Reports). CEIS allows the SOE to use a flexible range of criteria to track the progress of students within the SOE-defined population. Originally designed and implemented to comply with NCATE requirements, it has been used as a reporting and analysis platform by the SOE since its creation in 2005 (CEIS Overview and Explanation).
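
As a concrete illustration of the "simplify, unify, verify" strategy, the sketch below (Python with SQLite) joins rows from two hypothetical source extracts into a single warehouse table that tracking queries can run against. The table and column names are invented for the example; they are not the actual Banner or CEIS schema.

```python
# Illustrative only: unify two hypothetical source extracts into one
# warehouse table, then query it. Not the actual Banner/CEIS schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE banner_extract (student_id TEXT, gpa REAL);
    CREATE TABLE soe_records    (student_id TEXT, candidacy_date TEXT);
    CREATE TABLE warehouse      (student_id TEXT, gpa REAL, candidacy_date TEXT);

    INSERT INTO banner_extract VALUES ('E001', 3.2), ('E002', 2.9);
    INSERT INTO soe_records    VALUES ('E001', '2012-09-15');
""")

# Unify: one warehouse row per student, joined on the shared student_id key.
conn.execute("""
    INSERT INTO warehouse
    SELECT b.student_id, b.gpa, s.candidacy_date
    FROM banner_extract AS b
    LEFT JOIN soe_records AS s USING (student_id)
""")

# Query: for example, students who reached candidacy with a GPA of 3.0 or above.
for row in conn.execute(
    "SELECT * FROM warehouse WHERE candidacy_date IS NOT NULL AND gpa >= 3.0"
):
    print(row)  # ('E001', 3.2, '2012-09-15')
```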

In response to a university need to track the progress of the larger population of all EU students, a second database called RBASE was created, based on the CEIS model. The SOE subsequently recognized a need to use information from beyond the SOE, and so began to incorporate elements of RBASE into CEIS. Further improvement will occur as the university implements a comprehensive data warehouse in 2013-2014 that will allow for replacement of the RBASE/CEIS system.

In addition to reporting capabilities, the systems discussed offer value-added functionality. An example is an application currently under development called the Undergraduate Profile. This application summarizes the key transitions for each student and presents them in a unified visual interface, allowing an advisor or researcher to see at a glance the important measurements for a specific student. These measurements can originate from multiple sources, even different database environments, and may help pinpoint where academic or other issues originate.
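
The sketch below illustrates the general idea of such a profile: measurements for one student, drawn from several independent sources, merged into a single at-a-glance record. The source names (banner, livetext) and fields shown are assumptions made for the example, not the actual Undergraduate Profile design.

```python
# Illustrative only: merge the measurements recorded for one student across
# several independent sources into a single profile. Source names are invented.
from typing import Any


def build_profile(student_id: str, *sources: dict) -> dict:
    """Collect every measurement recorded for student_id across the sources."""
    profile: dict[str, Any] = {"student_id": student_id}
    for source in sources:
        profile.update(source.get(student_id, {}))
    return profile


banner = {"E001": {"gpa": 3.2, "credits_earned": 78}}
livetext = {"E001": {"tcpp_rating": "satisfactory"}}

print(build_profile("E001", banner, livetext))
# {'student_id': 'E001', 'gpa': 3.2, 'credits_earned': 78,
#  'tcpp_rating': 'satisfactory'}
```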

Though resolution of student complaints has always been a high priority for the SOE, this process has recently changed. During the 2011-2012 academic year, it became clear that students did not understand the proper protocol for registering complaints, prompting the SOE Dean's office to develop a system to address this issue. Beginning in Fall 2012, a system was developed to document the candidate name, date, type of complaint, and resolution of that complaint. This database is kept on a secure server and is used to inform the SOE about areas for improvement to better serve candidates. To give candidates a clearly articulated process for registering complaints, a dedicated e-mail address (soehelp@edinboro.edu) was created for this purpose. The SOE website now provides directions for communication flow to ensure that students follow the protocol for expressing concerns. If a candidate has contacted the instructor, advisor, and/or department chair and the issue remains unresolved, s/he may forward the complaint to this address. The account is checked daily by a representative of the Dean's office, who records the complaint in the database and creates a plan for resolution. These initiatives were formally approved by the Assessment CI team in November 2012 and implemented at the end of the Fall 2012 semester (Student concerns process and Student concerns log).
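
A minimal sketch of such a log, using only the fields named above (candidate name, date, type of complaint, and resolution), might look like the following; the class and field names are illustrative assumptions, not the SOE's actual database design.

```python
# Illustrative only: one record per concern, using the fields named in the
# text. Class and field names are assumptions, not the actual database design.
from dataclasses import dataclass
from datetime import date


@dataclass
class ComplaintRecord:
    candidate_name: str
    received: date
    complaint_type: str          # e.g., "advising", "placement"
    resolution: str = "pending"  # updated once the Dean's office responds


log: list[ComplaintRecord] = []
log.append(ComplaintRecord("J. Doe", date(2012, 11, 5), "placement"))
log[0].resolution = "Placement reassigned after meeting with coordinator."
print(log[0])
```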

The strong growth of SOE faculty expertise in candidate and program assessment has made the SOE Unit faculty a valuable resource and leader for the university as it pursues Middle States accreditation. As the university focuses more on data-driven decision-making, particularly in relation to learning outcomes, the SOE is regarded as the model for other schools to replicate. In support of a university-wide commitment to continuous improvement, the Provost appointed an Assessment Coordinator in 2010 to work with each department/program independently to develop assessments based on student learning objectives. The Assessment Coordinator has taken an active role in establishing and monitoring a sustainable and cyclical process for program review focusing on the implementation of assessments, the analysis of assessments, and a plan for improvement based on the results (Middle States webpage). The template designed to support this process includes objectives, an assessment plan, assessment results, and a plan for responding to the analysis of results. A review of the completed 2011 Student Learning Responses indicates that many of the departments have made detailed suggestions for programmatic changes based on assessment of the data. The SOE is notably one of the strongest in this area, with specific recommendations based upon data analysis at unit meetings, program meetings, and department retreats. The SOE Unit Accreditation Coordinator and the University Assessment Coordinator have worked closely to move the university toward its goal of authentic and meaningful assessment, analysis, and continuous program improvement.

2.2.b Continuous Improvement: Summarize activities and changes based on data that have led to continuous improvement of candidate performance and program quality.

There have been several agents of change for the Unit since the last NCATE visit.  New legislation by the state, as well as a change in leadership in the SOE, led to significant changes to the assessment system and to the Unit.  Many of these changes have improved the effectiveness of the assessment system, and thus, have led to candidate and program improvement.

EU teacher certification programs were changed to reflect a focus on Early Childhood, Middle and Secondary education, as well as an integrated approach to Special Education and ELL.  As new state program requirements were legislated in Pennsylvania (Ch. 49-2), all new initial programs were submitted to PDE. Following approval, the new programs were implemented effective Fall 2009.  (Initial and Advanced Programs)  This development process provided an opportunity for faculty to revisit the current assessment system.  As a result, today’s programs have in-depth assessments with detailed rubrics addressing candidate competencies related to specific professional standards.

As a consequence of the changes in the certification requirements, the admission criteria for initial programs were revised. The admission criteria described in 2.3.b reflect these changes. The Praxis I tests, developed by ETS and used for many years, have been replaced by the PAPA test, developed by Pearson and used beginning Spring 2012 with newly established cut-off scores (Praxis I and PAPA sites). New PECT tests have been developed as exit exams, in place of the Praxis II, for Early Childhood, Special Education, and Middle Level candidates. Secondary 7-12 candidates still take the Praxis II to ensure appropriate content knowledge. All licensure exams for advanced programs have remained the same; however, an on-line format is under development (PDE Testing Requirements).

Other changes to program admission requirements include the following:

(a)  Candidates entering initial programs as Graduate students no longer have a specific Math or English requirement. A bachelor's degree is assumed to indicate basic mathematics and English knowledge appropriate for an initial certification program. (PDE weekly email & clarification)

(b)  Though there are general admissions requirements for all Graduate programs, minimum GPA requirements vary by program, ranging from 2.8 to 3.0 for teacher certification. Some programs require additional evidence of readiness (Praxis or GRE) for applicants with lower GPAs. (Grad Admissions)

(c)  Appropriate clearances must be obtained prior to obtaining candidacy. (Clearances)

(d)  SPED 210 is required for obtaining candidacy for most initial programs. (Candidacy form)

(e)  Candidates wishing to enter the Educational Leadership and Educational Psychology programs must provide a writing sample in response to a question related to the field. (Grad Admissions – Ed programs)

Using the Desire2Learn (D2L) platform, Unit and program data are now regularly distributed to all Unit faculty. Logistical support for this dissemination is provided through the SOE Office of Accreditation. Any member of the Unit can now access D2L and see the latest data available for all programs and for the Unit. Members can then use this information to make programmatic decisions, review SPAs, facilitate CIT discussions, and support Unit retreat discussions. To ensure that available data are viewed and discussed regularly, the Program Analysis Report (PAR) template was developed and a schedule for submission implemented (PAR template). Using this template, all programs review the most current data and make recommendations for change with specific implementation dates. The template was first developed and used in Spring 2012, and has since been revised to include Goals and Standards in order to ensure alignment. Because program data are reviewed every spring, it was determined that Unit data should be reviewed during the Fall semester. This was first implemented in Fall 2011 and continued in Fall 2012 at the Unit meetings.

As discussed in 2.1, changes in leadership within the SOE led to changes to the assessment system review structure and ultimately improved the assessment culture of the SOE. As previously described, the ACC was convened to oversee and direct the SOE's accreditation efforts. This council provides directives for the Continuous Improvement Committees (formerly known as Standards Committees). The shift in name to CIC reflects the shift in culture away from strictly meeting each Standard and toward addressing all issues associated with the SOE on a continuous basis. The CICs include the following: Disposition, Assessment, Clinical Experiences, Diversity, Faculty Vitality, and Governance. The co-chairs of each CIC also meet regularly as the CIT and present progress at each Unit meeting. The CIT, along with the ACC, coordinates Unit efforts to maintain accreditation. This structure has been made possible by the commitment of the SOE and university leadership to dedicate resources for its success. The main resource made available was the increase in release time for the UAC from a 6-credit release (0.5 load) to a 12-credit release (1.0 load).

As an additional part of the structure, the Educational Partners Advisory Council (EPAC) was established in Fall 2011, which has led to the use of additional data to guide program improvement. This vital group, consisting of superintendents and administrators from area school districts, meets once a semester at the University to discuss important topics affecting partnerships, clinical experiences, and the impact of programs on P-12 student learning. Feedback from the EPAC group was used in the development and dissemination of the Employer Satisfaction Survey. Additionally, the P-12 partners requested a 7-12 Special Education program to meet their future needs. In response to this request, the SOE applied for and received a state grant during summer 2012 to design a SPED 7-12 master's program. A program proposal was developed during the Fall 2012 semester and has been submitted to the University Wide Curriculum Committee for consideration early in Spring 2013. The program will be submitted to PDE in February 2013. The development of this program included a conversation between university field and student teaching coordinators from each department and EPAC members to discuss concerns related to implementation of state guidelines for the four stages of field experiences. The SOE developed a mutually beneficial plan in response to these concerns, which was implemented immediately. Further, feedback from EPAC indicated a need for the appointment of a full-time director to oversee all field and student teaching placements. Beginning in Spring 2013, a new Director of Field Experiences and Student Teaching was appointed. (EPAC minutes & SPED 7-12 minutes)

Additionally, there have been many efforts since 2007 to obtain feedback from stakeholder groups, including candidates and P-12 administrators. On-line surveys have now been developed to obtain feedback from many different stakeholder groups. Alumni, Employer, Clinical Faculty, Faculty, and Candidate surveys all provide important information for program improvement. All surveys were based on the Conceptual Framework as well as the appropriate standards for the program and stakeholder. Reaching each of these stakeholders has required cooperation with the Alumni Office, the local Intermediate Unit, and area school districts.

Under the direction of the Dispositions CIC and the Associate Dean, a SOE Disposition policy was developed in Fall 2012. Though all programs were governed by program-specific disposition policies, it was determined that an overall SOE policy was necessary to encompass, support, and sustain each of those individual policies. The Disposition CIC will continue to monitor its use and revise it as necessary.

Under the direction of the Assessment CIC and the Associate Dean, a SOE Student Concern database was developed. In this way, the Unit maintains records of formal candidate complaints and documents their resolution. This system was developed in Fall 2012 for use beginning in Spring 2013 (Student Concern Process). Possibly the most vital part of this process is the intentional attempt to teach candidates the appropriate process for lodging a complaint. Teaching candidates the established steps in the process and the need for appropriate use of language will aid in swift resolution and in the collection of data relevant to program improvement, while protecting the rights of all persons involved.

In an attempt to connect the NCATE and Middle States accreditation processes, the Program Analysis Report form was developed by the Assessment CIT. These report forms facilitate the analysis of data regarding specific program improvement each spring. Though SPA review provides granular data about program completers, the data were often used only to revise assessments and were viewed solely from the perspective of SPA standards, as opposed to a more global view of Student Learning Outcomes within the University. These reports require program faculty to review data with the intent of program change. This process began in Spring 2012.

There have been many continuous improvement efforts within programs. The following are just a few examples: (please click to see description)

(a) Early Childhood Education major program revision

(b) Advanced (non-cert) Programs adopt standards and assessments

(c) New Science Report of Supervision Form

(d) HPE Technology objectives

(e) Re-examination of Reading Program

(f) Educational Leadership Comprehensive Exam

Since 2008, one of the major Unit assessments – the Teacher Candidate Performance Profile (TCPP) – has been refined to better reflect state and national standards.  This assessment is completed by University Supervisors, Cooperating Teachers, and Candidates in both Field and Student Teaching/Intern experiences (TCPP data).

Under the direction of the Diversity CIC, an investigation into the use of the Multicultural Awareness Knowledge Skills Survey (MAKSS) and the MAKSS-T (for Teachers) diversity surveys was undertaken. The MAKSS was piloted in 2011 as an option for advanced programs. The Cultural Diversity Awareness Inventory (CDAI), currently used for diversity information, was developed primarily for initial certification candidates; the MAKSS uses language more appropriate for advanced candidates. As a result of that pilot and dissatisfaction with the CDAI, the MAKSS-T was approved at the Fall 2012 Unit meeting, with implementation in Spring 2013. The committee will assess its effectiveness as data become available.

One particular area of interest for the Assessment CIC is the "elimination of bias" in the assessment system. The CIC has discussed and approved an effort to encourage systematic discussion at the program level with regard to rubric development and use. All faculty within a program must have a consistent definition of rubric criteria and must apply those definitions consistently throughout the use of the assessment. These conversations must also happen at the Unit level for assessments such as the Instructional Assessment Plan, the TCPP, and, especially, the Conceptual Framework Reflection. Beyond this, the CIC has discussed the possibility of reliability and validity investigations of assessments. Though alignment with standards aids in this effort, systematic investigations involving the triangulation of data are suggested to ensure the fairness and accuracy of the assessments and evaluations. The Assessment CIC will initiate discussions with department chairs and program heads to determine a course of action and timeline for implementing such investigations.
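
As one example of where such a reliability investigation could begin, the sketch below computes exact agreement and Cohen's kappa, a standard chance-corrected measure of inter-rater agreement, for two raters scoring the same candidates on a shared rubric. The rubric levels and scores shown are invented for illustration; this is a sketch of the general technique, not a description of any adopted SOE procedure.

```python
# Illustrative only: exact agreement and Cohen's kappa for two raters scoring
# the same candidates on a shared rubric. Scores are invented for the example.
from collections import Counter


def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if each rater assigned levels at their observed rates.
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n) for c in counts_a | counts_b
    )
    return (observed - expected) / (1 - expected)


a = ["target", "acceptable", "acceptable", "target", "unacceptable"]
b = ["target", "acceptable", "target", "target", "unacceptable"]
print(f"exact agreement = {sum(x == y for x, y in zip(a, b)) / len(a):.2f}")
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.69 for these invented scores
```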

Assessment has been the largest area of growth for the SOE. All of the activities listed above demonstrate a commitment to continuous improvement in the area of assessment. The new governance structure now in place is capable of sustaining the continuous improvement process, taking into consideration changes to national and state standards, changes in state legislation, new research findings, and feedback from P-12 partners. Through the use of systematic assessment and evaluation, data-driven decisions can be made programmatically to enhance Unit, program, faculty, and candidate performance. Unit and program data analysis has now become internalized within the School of Education. Evaluations from the unit retreats indicate that faculty members appreciate the availability of and access to program and Unit data, and the time to analyze them together. This process has taken place over the past two years and has led to a noticeable shift toward a culture of assessment (Retreat evaluations).