Designing an Evaluation Instrument for Learning Institutions

Purpose of the Evaluation Instrument

1. The evaluation instrument is designed to assess the institution's activities and to adapt them periodically so that they remain effective. Evaluation also identifies areas for improvement and therefore supports the realization of goals. In addition, it enables those running a program to demonstrate its progress or success (Romiszowski 2016).

2. The people conducting the evaluation of learning institutions need to be experienced in the field and to know how guidance and motivation should be provided to students; they should have prior experience in schools and universities. Of the five facilitators, the three who act as assessors and moderators are responsible for evaluating the overall process as well as the quality of teaching and learning. The evaluators should hold a postgraduate degree in education and have sufficient experience in teaching and assessment (Posavac 2015). They should also be able to use modern technology in the evaluation process. "Evaluation quality is a product of the capacities of the evaluation commissioner and evaluation team, the relationship between them, and the wider institutional environment in which the evaluation is being conducted" (Lloyd and Schatz 2015).

3. Program evaluation is an important organizational practice in community development work. It evaluates specific activities and projects rather than an entire enterprise or a comprehensive community initiative. There are several evaluation approaches, each offering a distinct way of thinking about, designing and conducting the evaluation effort, and each helps in solving problems and refining existing practices. Common approaches include outcome-based evaluation, participatory evaluation, and impact or process evaluation. Outcome-based evaluation has been chosen for this study because it provides an overview of the evaluated programs and practices (Creswell and Clark 2017).

The outcome-based approach, also known as outcomes measurement, is used systematically to determine whether a program has achieved its goals or objectives. It requires the program to articulate the intended benefits or outcomes, identify ways of measuring them and clarify the target audiences for whom those benefits are designed. An outcome is a benefit that accrues to the program's participants, and the approach aims to benefit as many individuals as possible. Typically, outcomes represent an achievement or a change in the participants' skills, attitudes or knowledge (Soy 2015).

4. The data-gathering methods used for the evaluation are observation and survey. Observations are made of various teaching and learning activities, such as lectures, laboratory classes and seminars. Teaching materials and documents presented as learning resources, together with interactions between the participants, also provide useful insights. Highly structured observations use a checklist recording, for example, the presence, incidence and frequency of predetermined evidence, whereas unstructured observations allow issues to emerge from what is observed. Semi-structured observations organized around a set of agreed issues are considered the most relevant for this evaluation (Creswell and Creswell 2017).
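To illustrate how a highly structured observation might be recorded and tallied, the minimal Python sketch below assumes a hypothetical list of evidence items and session names; it is only an illustration of the presence/frequency idea described above, not a prescribed instrument.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical evidence items an observer might tick during a lecture, lab class or seminar.
EVIDENCE_ITEMS = [
    "learning objectives stated",
    "student questions invited",
    "visual aids used",
    "group discussion held",
]

@dataclass
class ObservationRecord:
    """One structured observation of a single teaching session."""
    session: str
    observed: list = field(default_factory=list)  # evidence items ticked in this session

def summarize(records):
    """Return presence (seen at least once) and frequency (sessions in which each item was seen)."""
    frequency = Counter()
    for record in records:
        frequency.update(set(record.observed))  # count each item at most once per session
    presence = {item: frequency[item] > 0 for item in EVIDENCE_ITEMS}
    return presence, frequency

# Illustrative use with two observed sessions.
records = [
    ObservationRecord("Week 3 lecture", ["learning objectives stated", "visual aids used"]),
    ObservationRecord("Week 3 lab", ["student questions invited", "visual aids used"]),
]
presence, frequency = summarize(records)
print(presence)
print(dict(frequency))
```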

Who Will Conduct the Evaluation

Observation can provide rich insights into the behaviors and interactions of participants and allows the evaluator to notice things that participants take for granted. It also has limitations. It is time-consuming, and forming a full picture of implementation may require observing more than one teaching activity or learning event. Being observed can also influence the behavior of those involved: participants may become preoccupied with the fact that they are being evaluated, and academic staff may worry about how the quality of their teaching is being judged (Buckley and Voorhees 2017).

The other method is the survey, which is used in this study to gain further insights. Surveys can be electronic or paper-based, and virtual learning environments often include their own survey or evaluation tools. Structured questionnaires rely predominantly on closed questions that produce data which can be analyzed quantitatively for trends and patterns; the overall agenda is predetermined by the evaluators, although respondents may be given some flexibility to qualify their answers. Unstructured questionnaires, by contrast, are based on open questions that give respondents the freedom to answer in their own words (Creswell and Clark 2017).
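As a concrete illustration of how closed-question responses can be analyzed quantitatively for trends and patterns, the short Python sketch below tabulates hypothetical 1-5 ratings; the question labels and data are invented for the example.

```python
import statistics
from collections import Counter

# Hypothetical closed-question responses on a 1-5 agreement scale, keyed by question.
responses = {
    "Q1_clear_objectives": [5, 4, 4, 3, 5, 4],
    "Q2_materials_helpful": [3, 3, 4, 2, 3, 4],
}

def tabulate(ratings):
    """Summarize one closed question: rating distribution, mean and median."""
    return {
        "distribution": dict(sorted(Counter(ratings).items())),
        "mean": round(statistics.mean(ratings), 2),
        "median": statistics.median(ratings),
    }

for question, ratings in responses.items():
    print(question, tabulate(ratings))
```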

The strength of this method is that data can be collected quickly and a large number of participants can provide feedback. Responses are generally anonymous, which encourages honesty and openness. Its limitations are that processing and analyzing data from large samples can be time-consuming, questions may be interpreted differently by different respondents, and motivating potential respondents to complete the questionnaires can be challenging (Buckley and Voorhees 2017).

5. The first step is to identify the issues or opportunities for gathering data and to decide which survey method will be used. An opinion survey gathers data from the teachers who work with teacher leaders and can provide useful feedback about their senior teacher leaders, helping to detect unmet needs (Romiszowski 2016). Classroom observation is another data collection tool, used to assess teaching directly.

6.

Efficiency

Surveys and observations both play important roles here. Surveys gather information from various levels of the institution, while observation evaluates the consequences of each educational training program, so both instruments can be used efficiently.

Effectiveness

The survey supports the design of questionnaires for educators, and suitable tools can be used to make the resulting data usable. Through this process, feedback and opinions can be gathered, which has a direct effect on both learners and educators.

Sufficiency

Surveys and observations are sufficient if they are conducted properly. The verbal and non-verbal behavior of teachers can be understood through these instruments, and observation in particular helps to evaluate the training programs and identify their pros and cons.

Manageability

Both instruments can be managed with the appropriate skills, and teachers and learners can improve teaching methods and course processes once they receive the resulting information.

Avoidance of bias

Bias arises in the absence of accurate information, and surveys and observations help to mitigate this problem by providing a clear view of the educational training program being evaluated.

Accuracy of results

If appropriate information and data are collected, ambiguity is reduced and the outcomes of the evaluation will be accurate.

7. Several types of educational evaluation can deliver useful results: formative, summative, process, outcome and impact evaluation. In the education sector, formative and summative evaluation are the main ways of evaluating learning institutions. Formative evaluation works at a fine-grained level to filter out weaknesses through the constant use of evaluative tests, experiments and their reports, and the improvements made upon them, thereby supporting improvement at every stage of the learning process. Summative evaluation gathers information on the effectiveness of the overall training program. Process evaluation, especially when supported by technology, helps to show how specific strategies are implemented in practice, while impact evaluation captures sustainable change and draws on the formative and summative processes running alongside it. The instruments used for the evaluation might include the STEM learning and research centre instruments, ATIS, reports and articles, and the consideration of certain out-of-classroom features.

Choice of Evaluation Approach and Data Gathering Methods

8. Evaluation establishes the approaches and techniques used to develop the quality of a program, and several data collection resources can support the evaluation of an educational training program. These include Assessment Tools in Informal Science (ATIS), the STEM Learning and Research Center instruments (STELAR) and DEVISE (Developing, Validating and Implementing Situated Evaluation Instruments). ATIS collects evaluation reports and articles; STELAR gathers instruments built around innovative technological experiences; and DEVISE provides instruments designed to measure individual learning outcomes such as interest, self-efficacy and skills.

9. Given the importance of educational training and development programs for adults, it is imperative to devise adequate evaluation instruments that accurately assess the quality and progress of the education provided. The importance is heightened by the fact that such programs are aimed at the intellectual growth of adults who, unlike children, already take part in economic activity and therefore in the development of society. Hence, the evaluation instrument should assess not only the performance of the students but the total learning outcomes, taking the performance of the instructors into account as well (Shahiri and Husain 2015). The instrument therefore aims to ensure the effectiveness of the whole learning program.

Several instruments can be used to measure the quality of a teacher's work. Common examples include classroom observation; objective setting and individual interviews; teacher self-evaluation; the teacher portfolio; standardized forms for recording a teacher's performance across a range of dimensions; teacher testing; and the use of learner evaluation sheets, facilitator reports and moderation reports. The instrument chosen for this discussion is the inventory list. An inventory is generally used for characterizing teaching practices in science and mathematics and helps educators reflect on their own teaching.

The inventory itself consists of checkboxes that indicate whether a listed practice has been used in the course or provided to the students.

The items on the inventory are classified into eight categories:

  • Course information (learning goals and outcomes)
  • Supporting materials
  • Assignments
  • In-class activities and features
  • Feedback and testing methods
  • Diagnostic tests, pre-post testing and related measures
  • Training of TAs
  • Collaboration in the teaching process

In the inventory the teacher can list the forms of cognitive knowledge addressed and give examples. Teachers respond to each item on a six-point Likert scale ranging from 0 to 5. The information provided is only as reliable as the respondent's truthfulness. A scoring rubric accompanies the inventory and awards points for practices that research has documented as improving student learning, and the scores obtained by teachers can be tallied against benchmark data. The inventory therefore gives a detailed picture of which practices are used in the course and of their quality.
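A minimal sketch of how the 0-5 ratings might be combined with a scoring rubric and compared against benchmark data is given below; the item names, weights and benchmark figures are hypothetical and are not taken from any published inventory.

```python
# Hypothetical inventory items with rubric weights; ratings use the 0-5 Likert scale
# described above, and the benchmark figures are invented for illustration.
rubric_weights = {
    "pre_post_testing": 2.0,
    "in_class_activities": 1.5,
    "ta_training": 1.0,
}

teacher_ratings = {       # self-reported 0-5 ratings from the inventory
    "pre_post_testing": 4,
    "in_class_activities": 5,
    "ta_training": 2,
}

benchmark_scores = {      # benchmark data the teacher's scores are tallied against
    "pre_post_testing": 6.0,
    "in_class_activities": 5.0,
    "ta_training": 3.0,
}

def score_inventory(ratings, weights):
    """Weight each 0-5 rating by the rubric and return per-item scores and the total."""
    per_item = {item: ratings[item] * weight for item, weight in weights.items()}
    return per_item, sum(per_item.values())

per_item, total = score_inventory(teacher_ratings, rubric_weights)
for item, value in per_item.items():
    print(f"{item}: score {value:.1f}, benchmark {benchmark_scores[item]:.1f}, "
          f"gap {value - benchmark_scores[item]:+.1f}")
print(f"total score: {total:.1f}")
```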

The inventory should contain checkboxes for the list of topics to be covered, the topic-specific competencies, competencies not tied to a particular topic such as critical thinking and problem-solving skills, and affective goals such as enhancing motivation, interest, beliefs and relevance. It should also contain checkboxes recording whether supporting materials have been provided, such as student discussion or wiki boards, solutions to assignments, worked examples, previous years' question papers, simulations, video clips or animations related to the course that might raise learners' motivation, lecture notes, extracts from scientific articles, and reference materials. Checking these parameters shows whether each of these elements has been addressed in the course.
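The checkbox portion of the inventory can be represented very simply; the sketch below uses an invented set of supporting-material items to show how coverage might be summarized.

```python
# Hypothetical checkboxes for supporting materials; True means the material was provided.
supporting_materials = {
    "student discussion or wiki board": True,
    "solutions to assignments": True,
    "worked examples": False,
    "previous years' question papers": True,
    "simulations, video clips or animations": False,
    "lecture notes": True,
    "extracts from scientific articles": False,
}

provided = [item for item, ticked in supporting_materials.items() if ticked]
missing = [item for item, ticked in supporting_materials.items() if not ticked]
coverage = len(provided) / len(supporting_materials)

print(f"{len(provided)} of {len(supporting_materials)} materials provided ({coverage:.0%})")
print("not yet provided:", ", ".join(missing))
```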

A further section should allow the teacher to report on in-class activities and workshops: for example, the average number of activities per class, the amount of time given for students to ask questions, the average number of discussions per term and the average number of times visual instructional aids have been used in class. Another unit of the inventory contains a checklist recording whether students were asked to read material before an upcoming class, whether they are given time to reflect on the activities and whether they take part in poster presentations. A student response system can also feed into the evaluation. Finally, feedback and testing are covered, with feedback collected from students, and there are parameters to check how far a teacher has collaborated with other teachers to enhance the teaching and learning process.

In summary, applying evaluation instruments such as the inventory correctly, following the necessary steps and integrating conventional evaluation tools at the relevant touch points, can support the assessment and improvement of all stakeholders involved in the educational process. It creates a framework of performance- and achievement-based learning that opens new and innovative paths for learners as well as for the educational institutions.

10.

Criteria and remarks

Criterion: Does the instrument lead to quality improvement, including the enhancement of the learner experience?
Remarks: The instruments help to gather all the relevant information about whether the instructor has covered every topic and whether the learner experience has been enhanced. A weak point is that the process can be biased and time-consuming.

Criterion: Are the evaluation procedures continuously reviewed and improved?
Remarks: New parameters are added to the checklist to make the evaluation more accurate. However, certain loopholes remain, so the process is not yet accurate enough to drive all the necessary changes.

Criterion: Are all stakeholders in evaluation responsible for the establishment of evaluation procedures?
Remarks: The evaluator, the principal and the educator are jointly responsible for establishing the evaluation procedure. A weak point is that different teachers use different teaching methods depending on the topic, so some attributes cannot be measured by inventory lists.

Criterion: Are the evaluation procedures clear and transparent to all stakeholders?
Remarks: Yes, the evaluation procedures are clear and transparent to the stakeholders. However, because some of the inventories rely on self-reported data, there is a risk of bias in the reports.

Criterion: Do the evaluation procedures conform to international best practice?
Remarks: Yes, the evaluation procedure follows international standards and measures. However, international best-practice parameters may not suit every subject and can depend on teaching styles.

Criterion: Do the evaluation procedures include self-evaluation, followed by a review by persons who are competent to make national and international comparisons?
Remarks: The main aim of the self-evaluation process is to collect authentic data, and reviewers who can compare national and international perspectives are particularly valuable. To some extent, however, poorly chosen review factors create problems and can undermine the validation of the data.

Criterion: Are learners, staff and other stakeholders involved in the evaluation process?
Remarks: Educational achievement depends on the cooperation of all parties, so learners, staff and other stakeholders are all entitled to take part in the evaluation process.

Criterion: Do the evaluation procedures include appropriate measures to protect the integrity of the overall process?
Remarks: Protecting the integrity of the process is a central aim. However, once teachers have submitted their feedback it may be shared with other teachers before their assessment, which breaches confidentiality and allows them to prepare in advance.

Criterion: Do the evaluation procedures ensure public accountability and transparency through the publication of the outcomes of the evaluations?
Remarks: The main intention is to clarify whether all elements of the curriculum have been completed. However, there is no certainty that the process is made transparent through publication.

References

Ballou, D. and Springer, M.G., 2015. Using student test scores to measure teacher performance: Some problems in the design and implementation of evaluation systems. Educational Researcher, 44(2), pp.77-86.

Buckley, C. and Voorhees, E.M., 2017, August. Evaluating evaluation measure stability. In ACM SIGIR Forum (Vol. 51, No. 2, pp. 235-242). ACM.

Creswell, J.W. and Clark, V.L.P., 2017. Designing and conducting mixed methods research. Sage Publications.

Creswell, J.W. and Creswell, J.D., 2017. Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.

Fetterman, D.M., 1984. Ethnography in educational evaluation (Vol. 68). Corwin Press.

Huh, S., 2016. Promotion to MEDLINE, indexing with Medical Subject Headings, and open data policy for the Journal of Educational Evaluation for Health Professions. Journal of Educational Evaluation for Health Professions, 13.

Lloyd, R. and Schatz, F., 2015. Improving Quality: Current Evidence on What Affects the Quality of Commissioned Evaluations.

McLaughlin, M.W., 1987. Learning from experience: Lessons from policy implementation. Educational Evaluation and Policy Analysis, 9(2), pp.171-178.

Posavac, E.J., 2015. Program evaluation: Methods and case studies. Routledge.

Romiszowski, A.J., 2016. Designing instructional systems: Decision making in course planning and curriculum design. Routledge.

Shahiri, A.M. and Husain, W., 2015. A review on predicting student’s performance using data mining techniques. Procedia Computer Science, 72, pp.414-422.

Smirnova, E.O., 2018. Psychological and educational evaluation of toys in Moscow Center of Play and Toys. Psychological Science, 7(1).

Soy, S., 2015. The case study as a research method.

Weiner, G., 2017. Ethical practice in an unjust world: educational evaluation and social justice. In Gender matters in educational administration and policy (pp. 116-124). Routledge.