Monday, 28 January 2019


UNIT II: Tools and Techniques to assess Learner’s Performance (20 hrs)
*      General Techniques of Assessment- Observation, projects, assignments, worksheets, practical work, seminars and reports, Interview, Self reporting.
*      Tools of Assessment- tests, checklist, rating scale, cumulative record, questionnaire, inventory, schedule, anecdotal record: concept, merits, demerits and relevance in the field of research
*      Characteristics of a good evaluation tool: validity, reliability, objectivity and practicability
*      Norm-referenced tests and Criterion referenced tests.
*      Diagnostic Test and Achievement Test-Concept, Purpose and Distinction between the two tests,
*      Steps involved in the construction of an Achievement test and Diagnostic test, Types of items-Objective type, Short answer type and Essay type,
*      Item analysis-concept, Teacher made and Standardized Achievement tests.
*      Online examination/Computer based Examination, Portfolio assessment and Evaluation based on Rubrics

GENERAL TECHNIQUES OF ASSESSMENT
OBSERVATION

Observation is defined as the method of viewing and recording the actions and behaviors of participants. There are three main types of observational methods:
Naturalistic Observation or non-participant observation - This method takes place in the natural, everyday setting of the participants. In naturalistic observation, the researcher does not intervene and carries out the observations without the knowledge of the participants. In this way, the researcher is able to observe the spontaneous, natural behavior of the participants in their natural surroundings.
Disadvantages
It can be carried out only with a small sample, and the participants may not be representative of the population.
Naturalistic observations may also be more difficult to replicate.
Example:  A researcher may use naturalistic observation to study the behaviors and interactions of pre-school aged children on a playground at recess.
Participant Observation - In participant observation, the researcher is able to observe behaviors that may otherwise not be accessible. The observations can either be covert or overt.  If they are covert, the researcher is under cover and his or her real identity and purpose are concealed.  If the observations are overt, the researcher will reveal his or her real identity and intent and will ask permission to make the observations.
Advantage
  • It provides a deeper insight into the participants.
Disadvantages
  • Difficult to get the time and privacy to record observations.
  • The researcher may become “too close” and lose objectivity, resulting in bias.
Example:   A researcher may want to study the behaviors and habits of a particular religious group and joins the group in order to gain access.
Controlled Observation – This is carried out under controlled, arranged conditions, often in a laboratory setting.  Controlled observations are overt as the researcher will explain the purpose of the research and the participants know they are being observed.  Each test subject is exposed to the same situation in order to examine differences between individual reactions. 
Advantage
  • The study is reproducible and therefore, can be tested for reliability. 
  • These studies are often fairly quick and can accommodate a larger sample size as well. 
  • The data is often coded to be numerical in nature which allows for less time consuming data analysis. 
Disadvantage
  • Participants may behave differently when they know that they are being watched.  
Advantages of Observation:
(1) Simplest Method:
 (2) Useful for Framing Hypothesis:
 (3) Greater Accuracy:
 (4) A Universal Method:
        Observation is a common method used in all sciences, whether physical or social.
(5) Observation is the Only Appropriate Tool for Certain Cases:
Observation can be used to assess the behaviour, feelings and activities of those who cannot speak, e.g. infants or animals.
(6) Indispensable for Certain Studies:
Observation is indispensable for studies on infants; in the case of animals it is the only way out. For deaf and dumb persons, for serious cases of abnormality, for non-cooperative persons, for very shy persons and for persons who do not understand the language of the researcher, observation is the only appropriate tool.
(7) Independent of People’s Willingness to Report:
(8) Very Direct Method:
It is a very direct method for collecting data or information and is best for the study of human behavior.
Limitations of Observation
(1) Some of the Occurrences may not be Open to Observation:
There are many personal behaviours or secret activities which are not open for observation.
(2) Some social events are very much uncertain in nature.
      It is a difficult task on the part of the researcher to determine their time and place. The event may take place in the absence of the observer. On the other hand, it may not occur in the constant presence of the observer. For example, the quarrel and fight between two individuals or groups is never certain. Nobody knows when such an event will take place.
(3) Not all Occurrences can be Observed:
Some social phenomena are abstract in nature. For example, the love, affection, feelings and emotions of parents towards their children are not open to our senses and cannot be quantified by observational techniques. The researcher may employ other methods like case study, interview etc. to study such phenomena.
(4) Lack of Reliability:
 (5) Faulty Perception:
Two persons may judge the same phenomenon differently. One person may find something meaningful and useful in a situation while the other may find nothing in it. Only observers who have technical knowledge of observation can make scientific observations.
(6) Personal Bias of the Observer
 (7) Slow Investigation:
 (8) Expensive:
It requires high cost, plenty of time and hard effort. Observation involves travelling, staying at the place of the phenomena and purchasing sophisticated equipment.
(9) Inadequate Method:
Observation by itself cannot capture every aspect of a phenomenon; therefore many suggest that observation must be supplemented by other methods.
(10) Difficulty in Checking Validity:
Checking the validity of observation is always difficult.
PROJECT METHOD
The project method is one of the modern methods of teaching in which the student's point of view is given importance in designing the curricula and content of studies. This method is based on the philosophy of Pragmatism and the principle of ‘Learning by doing’. In this strategy pupils perform constructive activities in natural conditions. The teacher is a facilitator rather than a deliverer of knowledge and information. Students are allowed to explore and experience their environment through their senses and to direct their own learning by their individual interests. The emphasis is on experiential learning rather than rote memorization. A project method classroom focuses on democracy and collaboration to solve "purposeful" problems.
DEFINITION
 “A project is a wholehearted purposeful activity proceeding in a social environment”.
W.H. Kilpatrick
 “A project is a bit of real life that has been imported into the school.”
Ballard
 “It is a voluntary undertaking which involves constructive effort or thought and eventuates into objective results”.
Thomas and Long
Characteristics of project method
  1. It takes the student beyond the walls of the class room.
  2. It is carried out in a natural setting, thus making learning realistic and experiential.
  3. It is focused on the student as it enlists his/her active involvement in the task set.
  4. It encourages investigative learning and solution of practical problems.
  5. It encourages the spirit of scientific enquiry as it involves validation of hypotheses based on evidence gathered from the field through investigation.
  6. It promotes a better knowledge of the practical aspects of knowledge gained from books.
  7. It enhances the student’s social skills, as it requires interaction with the social environment.
  8. Teacher plays a facilitative role rather than the role of an expert.
  9. It allows the students a great degree of freedom to choose from among the options given to them; hence it provides a psychological boost. It encourages the spirit of research in the student.
TYPES OF PROJECT
  1. Individual and Social projects
In an individual project every student selects and solves a problem on his or her own, according to interest, capacity, attitude and needs.
In group projects the problem is solved by a group of pupils in the class. Here social and citizenship qualities and synergism are developed.
  2. Simple and Complex projects
In a simple project the students complete only one piece of work at a time. The students get deeper and broader knowledge about the problem from one angle.
In a complex project the students carry out more than one piece of work at a time. They focus on the work from various subjects and angles. Here the students get knowledge about the work in its various activities and dimensions.
According to Kilpatrick there are four types of projects:
  1. Constructive project: Practical or physical tasks such as construction of an article, making a model, digging a well and staging a drama are done in this type of project.
  2. Aesthetic project: The appreciation powers of the students are developed in this type of project through musical programmes, beautification of something, appreciation of poems and so on.
  3. Problematic project: This type of project develops the problem-solving capacity of the students through their experiences. It is based on the cognitive domain.
  4. Drill project: It is for the mastery of the skills and knowledge of the students. It increases the work efficiency and capacity of the students.


STEPS OF A PROJECT METHOD
  1. Creating Situation: In the first step the teacher creates the proper situation and explains the project method's procedure, steps and uses to the students. A project should arise out of a need felt by students and it should never be forced on them. It should be purposeful and significant.
  2. Selection of the problem: The teacher helps the students to select the problem and guides them. Students have the freedom to choose the topic or problem based on their interest and ability. Before choosing the topic, the guiding principles should be taken into account. School tasks are to be as real and as purposeful as possible, and of such a nature that the student is genuinely eager to carry them out in order to achieve a desirable and clearly realized aim. The teacher should only tempt the students towards a particular project by providing a situation, but the proposal for the project should finally come from the students.
  3. Planning: The teacher discusses the problem with the students from various angles and points of view. After the free expression of the students' opinions about the problem, the teacher writes down the whole programme of action stepwise on the blackboard. In the process of planning the teacher has to act only as a guide and should give suggestions at times, but the actual planning should be left to the students.
  4. Execution: The students start their work in this step. They first collect the relevant information and materials. The teacher should give the students time according to their own speed, interest and ability. During this step the teacher should carefully supervise the pupils in manipulative skills to prevent waste of materials and to guard against accidents. The teacher should constantly check the relation between the chalked-out plans and the developing project.
  5. Evaluation: Evaluation of the project should be done both by the pupils and the teacher. They determine whether the objectives have been achieved or not. After that they criticize and express their feelings about the task freely. The evaluation of the project has to be done in the light of the plans, the difficulties in execution and the achieved results.
  6. Reporting and Recording: This is the last step of the project method, in which each and every step of the work is reported. The reported items are recorded in a certain order in book form. The report should include the proposal, the plan and its discussion, the duties allotted to different students and how far they were carried out. It should also include details of places visited and surveyed, guidance for the future and all other possible details. The book-formatted report is submitted to the teacher at the end.
ROLE OF THE TEACHER:
  • The teacher's role is that of a guide, friend and philosopher.
  • He is a working partner.
  • He encourages his students to work collectively, and co-operatively.
  • He also helps his students to avoid mistakes.
  • He makes it a point that each member of the group contributes something to the completion of the project.
  • If the students face failure, the teacher suggests better methods or techniques that they may use next time for the success of the project.
  • He should help the students in developing character and personality by allowing them to accept responsibilities and discharge them efficiently.
  • He should provide a democratic atmosphere in the class.
  • He should be alert and active all the time
  • He should have a thorough knowledge of individual children so as to allot them work accordingly.
  • He should have initiative, tact and zest for learning.
  • During execution of the project teacher should maintain a democratic atmosphere.
  • Teacher must be well – read and well-informed so that he can help the students to the successful completion of the project.
MERITS OF PROJECT METHOD
  1. Students get proper freedom to execute the project in accordance with their interests and abilities, because of which their psychological needs are satisfied to a considerable extent.
  2. This method is not only subject-centred; due importance is also given to the students.
  3. The habit of critical thinking gets developed among the students through this method.
  4. With this method, students get ample chances to develop coordination between body and mind.
  5. The teacher can lead a well-balanced development of the students.
  6. Science teaching can be done with considerable success, as science is a practical subject and this method is also scientific and practical in nature.
  7. It helps in promoting social interaction and co-operation among the students, as they have to work in a group and interact with various persons to gather information.
  8. As students gain knowledge directly through their own efforts, they acquire permanent information, which is retained by them for a long period of time.
  9. Mostly the projects are undertaken in the classroom as classroom assignments, because of which the load of homework on the students is reduced to a considerable extent.
  10. It helps to widen the mental horizon of pupils.
  11. It sets up a challenge to solve a problem and this stimulates constructive and creative thinking.
  12. It helps in developing social norms and social values among the learners.
  13. It provides opportunities for correlation of various elements of the subject matter and for transfer of training or learning

DEMERITS OF PROJECT METHOD
  1. This method takes a lot of time to plan and execute a single project.
  2. It is not possible to design different projects for different topics and it is also not possible to cover all the topics or content in a single project.
  3. For proper execution of a project, considerable financial resources are required.
  4. This method can prove successful only if the teacher is highly knowledgeable, alert and exceptionally gifted.
  5. Systematic and adequate learning is not provided by this method, as it is a method of incidental learning. Through this method, students learn only what is required by them in relation to the completion of their projects.
  6. Its utility therefore remains more or less limited.
  7. Sometimes the projects may be too ambitious and beyond student’s capacity to accomplish.
  8. The project cannot be planned for all subjects and whole subject matter cannot be taught by this strategy.
  9. It is not economical from the point of view of time and cost.
  10. It is very difficult for a teacher to plan and execute projects with the learners and to supervise them.
Principles in project method
  1. Principle of Utility. Choose those projects which are close to social life.
  2. Principle of Readiness. Involve the learners in finding the solution of the problem with their active participation.
  3. Learning by Doing. The learner performs certain tasks and experiences new things, which adds to his knowledge and results in learning.
  4. Socialization. It develops the feeling of cooperation and group work.
  5. Inter-disciplinary Approach. It involves the knowledge of different subjects in solving social problems.

ASSIGNMENT METHOD
The assignment method is an instructional technique that combines guided information, self-learning, writing skills and report preparation among the learners. The procedure to be followed by the pupils in doing the work assigned must be explained by the teacher to make the study period effective. The chief function of the assignment is the giving of specific and sufficiently detailed directions to enable the pupils to meet intelligently the problem or problems in the advance lesson or unit. The purpose of the lesson assigned must also be made known to the pupils and be recognized by them so that their interest may be stimulated. Motivation is a definite function of the assignment. To require a student to do something without regard to his interest is unsound educational practice.

OBJECTIVES
  • It provides good training for information seeking and retrieval behaviour.
  • It inculcates the self learning attitude among the students.
  • It provides information analysis and a research attitude to the learners.
  • It develops the learning experiences from various sources.
Features of good assignment
  • It must be concerned with the lesson and related to the textbooks and curriculum.
  • The topic / unit of the assignment must be explained with the availability of resources.
  • The core of the subject or unit must be clarified.
  • The hard and difficult portions of the assignment need to be explained well.
  • The topics / units irrelevant to the assignment must be clearly identified.
  • It must be simple and enable the students to complete it within the stipulated time.
  • Assignment must avoid ambiguous, complex information and instructional structure.
  • Objectives of the assignments must be clear and definite.
Functions of assignment
  • To create the proper attitude toward the performance of the work assigned.
  • The desire or willingness to do the work must be created in the pupils.
  • The pupils should understand the importance of the assignment and they should recognize the genuine merits of the advance work. This recognition is but one of the many means of providing incentive.
  • To anticipate special difficulties in the advance lesson, and to suggest ways to overcome them.
  • Every new lesson assigned assumes new elements to be mastered.
  • To make adequate provisions for individual differences.
  • All studies in mental measurements agree that among pupils there exist vast differences in intelligence, aptitudes and temperaments.
  • To consider the interests of pupils, which are found to be widely divergent. Pupils work with more vigor, ease and pleasure when the things they do are in conformity with their interests. It is, therefore, exceedingly important that the assignment provides for these varied interests, aptitudes and abilities of the pupils.
Practical work
A practical is an examination in which students perform tasks or do experiments rather than simply writing answers to questions. Students benefit from practicals because they are able to apply knowledge through interaction. Students connect with the material when they work with texts and concepts beyond a one-time exposure.
1. Provide opportunity to test in a realistic setting
2. Provide opportunity to confront the candidate with problems he has not met before in the laboratory
3. Provide opportunity to observe and test attitudes and responsiveness to a complex situation (videotape recording).
4. Provide opportunity to test the ability to communicate under pressure, to discriminate between important and trivial issues, to arrange the data in a final form.
Merits of Practical work
a. Through this method, a science teacher can provide various kinds of learning experiences to the students.
b. The information gained by the students turns out to be of a permanent kind.
c. In this method, the individual differences and interests of all the students are taken into consideration.
d. It is considered a child-centred method.
e. Full freedom is provided to the students to participate in the laboratory activities.
f. Through this method, students learn to explore various things on their own.
g. They also learn to verify various scientific facts and principles.
h. It develops a high level of self-confidence.
i. Various kinds of practical skills and proficiency get developed in the students.
j. Through this method, an intimate relationship gets developed between the students and the teacher.
k. With this method, the teacher can develop various good habits among the students, because of which a majority of experts credit it with inculcating good virtues.
l. It helps students improve their performance to a considerable extent in all spheres of life.
Demerits of Practical work
  • Limited applicability.
  • Risk of occurrence of accidents.
  • Shortage of resources.
  • Expensive.
  • Students may feel a heavy burden on themselves.
  • The teacher finds it difficult to attend to the individual needs of the students.
  • This method can only be used by an experienced and well-qualified teacher.
  • Lack of standardized conditions in laboratory experiments.
  • Lack of objectivity.
  • Limited feasibility for large groups.
  • Difficulties in arranging and demonstrating the skills to be tested.
  • It cannot be made practicable at the primary and middle school stages.
Seminar
A seminar is a form of academic instruction which brings together small groups for recurring meetings, focusing each time on some particular subject, in which everyone present is requested to participate. It is essentially a place where assigned readings are discussed, questions can be raised and debates can be conducted. Seminars normally include an introductory session, a keynote address, different sessions, panel discussions and concluding sessions. While the specialist speakers and experts make their presentations, the participants interact and benefit from the question-and-answer sessions.
Several topic reviews are scheduled each day throughout the seminar, and attendees can usually make their choice of topics from among these scheduled events.

CHARACTERISTICS OF SEMINAR
• Teacher is the leader.
• The group generally consists of 10 to 15 participants.
• An ideal seminar lasts for 1-2 hrs.
• The topic is initially presented by the presenter, followed by group discussion.
• The leader should keep the discussion within limits so that the focus of the discussion can be maintained.
• Care should be taken to avoid stereotypes.
• In student seminars, students present their data in an informal way under the leadership of the teacher, followed by a teacher-monitored discussion.
• All members take part in the discussion in an informal but orderly manner.
• The chairman should be skilled in encouraging the timid participants.
• A student secretary may record the problems that come up and the solutions given to them.
ORGANIZING A SEMINAR
• Define the purpose of the seminar.
• Relate the topic of the seminar and the discussion to the main concept or the objectives to be attained.
• Direct and focus the discussion on the topic.
• Help students to express their ideas and keep the discussion at a high level of interest so that the students listen attentively to those who contribute ideas.
• Plan comments and questions that relate to the subject and also guide and direct the discussion.
• Set time limitations for each person's contribution.
• Guard against monopoly of the discussion by any member of the seminar.
• Plan for a summary at intervals during the discussion and also at the end of the discussion, and relate the ideas expressed to the purpose of the discussion.
• Have the discussion recorded by a student acting as recording secretary or by tape recording.
• Plan for teacher and student self-evaluation of the progress made towards the immediate objectives.
Advantages of Attending Seminars
    1. A wealth of knowledge, usually presented by many speakers at one time in one place
    2. Students play an active role
    3. Individuals can meet others with the same interests/problems/concerns
    4. A sense of renewed hope and inspiration
    5. A great way for those who don't like to read, or attend classes, to improve their knowledge of a specific subject
    6. A seminar utilizes a scientific approach for the analysis of a problem chosen for discussion
    7. Students are expected to do considerable library research prior to the seminar
    8. It is concerned with academic matters rather than individual concerns
    9. The students develop vocabulary, articulation, problem solving and critical thinking skills as they participate in the seminar
    10. It helps in self-learning and promotes independent thinking
    11. A skilfully directed seminar promotes group spirit and co-operativeness
Disadvantages of Attending Seminars
1.      High cost
2.      The chance that the speakers may be sharing incorrect knowledge and blindly 'following the pack'
3.      The time spent away from your actual business, or life, to attend
4.      The chance that the topics may not actively help your concerns, and that the seminar will be a waste of time
5.      The chance that attendees will expect too much from a seminar and thus be disappointed
6.      Realism must rule here. Seminars are not 'instant answers' to anything.
Overall, seminars, if chosen carefully, can be a good experience. They are not miracle cures to business problems or other problems, however, and this must be kept in mind when deciding to attend a seminar.
Workshops
Workshops are of an experimental or creative kind. Workshops bring together a group of people working on a particular project or area of interest. Workshops are highly participative in nature.
They facilitate skill building through hands-on participation. Normally conducted in smaller groups, where attention is given to every participant, workshops help make learning easier. Examples of a workshop are a theatre workshop, carpentry workshop, horticulture workshop, etc.
Workshops normally involve demonstrations and how-to-do tips. Depending upon the mix of participants, workshops may be basic or advanced. It is expected that people who attend such workshops learn basic skills on such specific activities.
Symposia
Symposia refer to the plural of symposium. A symposium is also a conference organized to cover a particular subject in detail, especially relating to an academic subject. It is another valuable means of disseminating knowledge.
All these interactive methods of communication help add value to the participants. More often than not, fees are charged from the participants. The participants emerge wiser and better informed about the subjects discussed.

 

INTERVIEW

The word interview comes from Latin and Middle French words meaning to “see between” or “see each other”. It is a two-way communication between interviewer and interviewee, wherein the former seeks information by way of questions and the latter provides it through his/her verbal responses. The person who answers the questions in an interview is called the interviewee, and the person who asks the questions is called the interviewer. The term suggests a meeting between two persons for the purpose of getting a view of each other or for knowing each other. When we normally think of an interview, we think of a setting in which an employer tries to size up an applicant for a job.

Objectives of the interview

·   It helps to verify the information provided by the candidate.
·   The resume contains only the main points of what the candidate has written; what other additional skill sets he has is known by conducting interviews.
·   It reveals the candidate's technical knowledge.
·   It helps in establishing a mutual relation between the employee and the company.
·   It is useful for the candidate, as he comes to know about his profession, the type of work expected from him, and the company.
·   Both the interviewer and the interviewee gain experience, professionally and personally.
·   It helps the candidate assess his own skills.
·   To evaluate applicant’s suitability.
·   To gain additional information from the candidate.

Types of Interview

1.      Structured Interview: A structured interview is one in which preset, standardised questions are used by the interviewer and asked of all the candidates. It is also known as a patterned or guided interview.
2.      Unstructured Interview: The unstructured interview is one that does not follow any formal rules and procedures. The discussion is free flowing, and questions are made up during the interview.
3.      Mixed Interview: It is a combination of structured and unstructured interview, wherein a blend of predetermined and spontaneous questions are asked by the interviewer to the job seeker. It follows a realistic approach which allows the employer to make a comparison between answers and get in-depth insights too.
4.      Behavioural Interview: It is concerned with a problem or a hypothetical situation put before the candidate with the expectation that he will solve it. It aims at revealing the job seeker’s ability to solve the problem presented.
5.      Stress Interview: The employer commonly uses stress interview for those jobs which are more stress prone. A number of harsh, rapid fire questions are put to the interviewee with intent to upset him. It seeks to know, how the applicant will respond to pressure.
6.      One to one Interview: The most common interview type, in which there are only two participants – the interviewer (usually the representative of the company) and interviewee, taking part in the face to face discussion, in order to transfer information.
7.      Panel Interview: Panel interview is one, in which there is a panel of interviewers, i.e. two or more interviewers, but limited to 15. All the members of the panel are different representatives of the company.
8.      Telephonic Interview: A telephonic interview is one that is conducted over the telephone. It is the most economical and least time-consuming type, focusing on asking and answering questions.
9.      Video Interview: An interview in which video conferencing is employed to judge or evaluate the candidate. Due to its flexibility, rapidity and inexpensiveness, it is used increasingly.
Self-report study

A self-report study is one that uses a self-report: any test, measure, or survey that relies on the individual's own report of his or her symptoms, behaviors, beliefs, or attitudes. Respondents read the questions and select responses by themselves without researcher interference. It involves asking a participant about his or her feelings, attitudes, beliefs and so on. Self-reports are often used as a way of gaining participants' responses in observational studies and experiments. Self-report studies have validity problems.
Examples of self-reports are questionnaires and interviews.
Self-reports are commonly used in psychological studies largely because much valuable and diagnostic information about a person is revealed to a researcher or a clinician based on a person’s report on himself or herself. One of the most commonly used self-report tools is the Minnesota Multiphasic Personality Inventory (MMPI) for personality testing.
Advantages of Self-Report Data
  • easy to obtain
  • main way that clinicians diagnose their patients
  • an inexpensive tool
  • performed relatively quickly so a researcher can obtain results in days or weeks
  • The self-reports can be made in private and can be anonymized to protect sensitive information and perhaps promote truthful responses.
  • self-report data can be collected in various ways to suit the researcher’s needs
  • can also be collected in an interview format, either in person or over the telephone
Disadvantages of Self-Report Data
  • Honesty: Subjects may make the more socially acceptable answer rather than being truthful.
  • Introspective Ability: The subjects may not be able to assess themselves accurately.
  • Interpretation of the questions: The wording of the questions may be confusing or have different meanings to different subjects.
  • Rating Scales: Rating something yes or no can be too restrictive, but numerical scales also can be inexact and subject to individual inclination to give an extreme or middle response to all questions.
  • Response Bias: Questions are subject to all of the biases of what the previous responses were, whether they relate to a recent or significant experience and other factors.
  • Sampling Bias: The respondents may not be representative of the population.
  • Reliability: Self-report measures are often not highly reliable.
  • Group Differences: Different groups (e.g., men and women, adolescents and adults) are likely to interpret the same words differently.
TESTS

A test is an instrument or systematic procedure for measuring a sample of behavior.
Psychological tests are defined as standardized, repeatable procedures used to elicit and measure samples of human behavior. Psychological testing deals with the development, validation, administration, scoring and interpretation of such measures. It involves procedures for coding, scoring or quantifying the elicited behavior so that it can be compared with normative data systematically collected on similar samples. Psychological tests yield objective, standardized descriptions of behaviour, quantified by numerical scores. A test may be administered verbally, on paper or on a computer; tests vary in style, rigor and requirements, and can be administered formally or informally. A test score may be interpreted with regard to a norm or a criterion, or occasionally both.
Psychological tests are classified as:
·         performance tests versus paper-pencil tests
·         power tests versus speed tests
·         standardized versus non-standardized tests
·         individual versus group tests
Uses of Testing
Tests are used to evaluate:
·         (1) human abilities, including intelligence, aptitudes, skills, and achievement in various areas;
·         (2) personality characteristics, which include traits, attitudes, interests, and values; and
·         (3) adjustment and mental health.
The chief types are achievement tests, intelligence tests, aptitude tests, personality tests, etc.

CHECKLIST
Checklists, Control Lists or Verification Lists are formats designed to perform repetitive activities, to verify a list of requirements or to collect data in an orderly and systematic manner. They are used to make systematic checks of activities or products ensuring that the worker or inspector does not forget anything important.
It is used to record the presence or absence of an item by checking yes or no, or the type or number of items may be indicated by inserting the appropriate word or number. Responses to checklist items are matters of fact, not judgments or opinions.
The main uses of checklists are
§  Verifying the steps of an activity in which it is important not to forget anything, or where the tasks must be done in an established order.
§  Carrying out inspections and recording the points that have been inspected.
§  Checking the correct implementation of standards or procedures.
§  Obtaining information to analyze where incidences and non-conformities happen.
§  Helping to trace the causes of defects.
§  Verifying product specifications.
§  Collecting data for further analysis.
In short, these lists are usually used to perform routine checks and inspections, and to ensure that the worker doesn’t forget anything during his daily tasks.
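As a sketch, such a routine check can be expressed in a few lines of code; the inspection items below are illustrative, not from the source.

```python
# Hypothetical checklist for a routine equipment inspection.
inspection_items = [
    "safety guard fitted",
    "power cable intact",
    "emergency stop tested",
]

def run_checklist(items, observed):
    """Record yes/no for each item and report anything that was missed."""
    results = {item: (item in observed) for item in items}
    missed = [item for item, done in results.items() if not done]
    return results, missed

results, missed = run_checklist(
    inspection_items,
    observed={"safety guard fitted", "emergency stop tested"},
)
# `missed` now lists the forgotten step, so the inspector cannot overlook it.
```

The design point is simply that the list itself, not the worker's memory, carries the obligation to check every item.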
Advantages
1.      It enables you to systematize repetitive activities.
2.      It lists everything that needs to be done.
3.      It helps a person feel more organized.
Disadvantages
1.      A checklist is a non-graphical representation.
2.      Developing a comprehensive checklist may be difficult.
3.      Checklists cannot provide a quantitative value associated with each risk.
4.      Checklists do not take historical information into account.




RATING SCALE
A rating scale is defined as a closed-ended survey question used to rate an attribute or feature. It is a variant of the popular multiple-choice question, widely used to gather relative information about a specific topic.

Types of Rating Scale: Ordinal and Interval Scales.

An ordinal scale is a scale that depicts the answer options in an ordered manner.
An interval scale is a scale where not only is the order of the answer options established, but the magnitude of the difference between each option is also calculable; an absolute or true zero is not present in an interval scale. Temperature in Celsius or Fahrenheit is the most popular example of an interval scale. The Net Promoter Score, the Likert scale and the bipolar matrix table are some of the most effective types of interval scale.
There are four primary types of rating scales which can be suitably used in an online survey:
·         Graphic Rating Scale
·         Numerical Rating Scale
·         Descriptive Rating Scale
·         Comparative Rating Scale
1.      Graphic Rating Scale: Graphic rating scale indicates the answer options on a scale of 1-3, 1-5, etc.  Respondents can select a particular option on a line or scale to depict rating. Likert Scale is a popular graphic rating scale example.
2.      Numerical Rating Scale: Numerical rating scale has numbers as answer options
3.      Descriptive Rating Scale:  In a descriptive rating scale, each answer option is elaborately explained for the respondents; for example, a customer satisfaction survey may need to describe every answer option in detail.
4.       Comparative Rating Scale: Comparative rating scale expects respondents to answer a particular question in terms of comparison, i.e. on the basis of relative measurement or keeping other organizations/products/features as a reference.

Uses of Rating Scale

  1. Gain relative information about a particular subject.
  2. Compare and analyze data.
  3. Measure an important product/service element.
 Advantages of rating scale
  1. Rating scale questions are easy to understand and implement.
  2. Offers a comparative analysis of quantitative data 
  3. Using graphic rating scales, it is easy for researchers to create surveys
  4. Abundant information can be collected and analyzed using a rating scale.
  5. The analysis of answers is quick and less time-consuming.
  6. A rating scale is a standard for collecting both qualitative and quantitative information.
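The quantitative analysis a rating scale affords can be sketched in a few lines. The responses below are hypothetical 5-point Likert answers (1 = strongly disagree ... 5 = strongly agree), invented for illustration.

```python
from collections import Counter

# Hypothetical responses to one 5-point Likert item.
responses = [4, 5, 3, 4, 2, 5, 4, 3]

# Mean rating: a quick comparative summary of the item.
mean_rating = sum(responses) / len(responses)

# Frequency of each scale point, useful for spotting extreme/middle response bias.
frequencies = Counter(responses)
```

The mean supports the comparative analysis the section mentions, while the frequency table exposes the scale-point clustering that numerical summaries alone can hide.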
CUMULATIVE RECORD CARD
The cumulative record card is a valuable technique prepared by teachers in the school for the purpose of collecting data about students. It includes the entire history or record of the student: data related to identification, home and community, scholarship, achievement, test scores and ratings, health, anecdotal records, vocational data, remarks of teachers and headmasters, follow-up records, etc.
Generally it covers three consecutive years. It contains information regarding all aspects of the child's life - physical, mental, social, moral and psychological - and seeks to give as comprehensive a picture as possible of the child's personality. It holds "the significant information gathered periodically on the student through the use of various techniques - tests, inventories, questionnaires, observation, interviews, case studies, etc."
If the Cumulative Record is kept together in a folder it is called Cumulative Record Folder (CRF). If the Cumulative Record is kept in an envelop it is called a Cumulative Record Envelop (CRE). If the cumulative Record is kept in a card it is called a Cumulative Record Card (CRC).
Definitions
Bonney and Hampleman: “Cumulative records consist of all data about an individual pupil which a school considers important enough to collect and record, usually in some organised way, for safe keeping from year to year.”

Arthur Jones: Cumulative record is defined as “a permanent record of a student which is kept up-to-date by the school; it is his educational history with information about his school achievement, attendance, health, test scores and similar pertinent data.”

Characteristics of Cumulative Record Card:
 (i) The cumulative record is a useful and permanent record which includes various pieces of information about the student.
(ii) It is an up-to-date record, maintained by teachers, including the latest information about the student.
(iii) It is a complete record which covers the student's educational progress, both past achievement and present educational standard.
(iv) It is a comprehensive record, as it embodies all the information about the student, such as attendance, health, tests, co-curricular activities, psychological data and educational data.
(v) It can be called a continuous record, as it includes data about the student from the kindergarten level to the end of school education.
 (vi) It contains data about the student which should be valid, authentic, reliable, objective, usable and pertinent in nature.
(vii) A separate file is maintained in case of confidential information collected about the students.
(viii) If information is required by guidance personnel or well-wishers of the student for the purpose of his development, the needed information may be given, but not the card itself.
(ix) The cumulative record can be recognized as Cumulative Record Folder (CRF) if the cumulative record is kept together in a folder.
(x) If the cumulative record is kept properly in an envelope, it can be called and recognized as Cumulative Record Envelope (CRE).
(xi) The cumulative record can be called as Cumulative Record Card (CRC) if the cumulative record is kept in a card.
(xii) The cumulative record is confidential and not open to all; in certain specific cases, however, it may be disclosed.
(xiii) It is transferable from one school to another with students.

Items Included in a Cumulative Record:
1. Identification Data:
Name of the pupil, sex, father’s name, admission No., date of birth, class, section, any other information that helps in easy location of the card.
2. Environmental and Background Data:
Home-neighbourhood influences, socio-economic status of the family, cultural status of the family, number of brothers and sisters, their educational background, occupations of the members of the family.
3. Physical Data:
Weight, height, illness, physical disabilities, etc.
4. Psychological Data:
Intelligence, aptitudes, interests, personality qualities, emotional and social adjustment and attitudes.
5. Educational Data:
Previous school record, educational attainments, school marks, school attendance.
6. Co-curricular Data:
Notable experiences and accomplishment in various fields-intellectual, artistic, social, recreational, etc.
7. Vocational Information:
Vocational ambitions of the student.
8. Supplementary Information:
It is obtained by the use of standardized tests.
9. Principal’s overall remarks.
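The items above amount to one structured record per pupil. A minimal sketch of such a structure, with illustrative field names following the categories listed (not a prescribed format), might be:

```python
from dataclasses import dataclass, field

@dataclass
class CumulativeRecord:
    # 1. Identification data
    name: str
    admission_no: str
    date_of_birth: str
    # 3-6. Other data categories, kept as open-ended mappings/lists
    physical_data: dict = field(default_factory=dict)     # weight, height, illness ...
    educational_data: dict = field(default_factory=dict)  # marks, attendance ...
    cocurricular: list = field(default_factory=list)      # notable accomplishments
    # 9. Principal's overall remarks
    remarks: str = ""

record = CumulativeRecord(name="A. Student", admission_no="1024",
                          date_of_birth="2006-04-12")
record.educational_data["attendance"] = "94%"  # updated year by year
```

Because the record is cumulative, each field is designed to be appended to over the years rather than overwritten.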
Sources of Collection of Information:
Information about every pupil or child for the maintenance in the CRC should be collected from the following sources:
1. Parents or guardian’s data form:
Family background and the personal history of the child may be gathered from the parents who are asked to fill in the form.
2. Personal data form:
In order to obtain information regarding the pupils interest and participation in extra-curricular activities and his vocational preferences the personal data is of great use. The pupil may be asked to give details of himself. This will supplement the information obtained from the parents data form.
3. School records:
These include:
(i) Records of achievement tests.
(ii) Records of other tests.
(iii) Admission and withdrawal record.
4. Other sources:
These include:
(i) Personal visits by the teachers
(ii) Observations made by the teachers.

Uses of Cumulative Record Card:
 (i) The CRC is useful for the guidance worker and counsellor, as it provides a comprehensive, objective picture of the student, including his strengths and weaknesses.
(ii) The CRC is useful for guidance counsellor to help pupil in educational achievement, vocational choice and personal progress so far adjustment is concerned.
(iii) The CRC is useful for headmaster/principal to ascertain the pupil’s performances in different subjects and his limitations.
(iv) The CRC is useful for parents, who can provide special privileges to make up the deficiencies of their child.
(v) The CRC is useful for teachers to know the students and his progress and weaknesses at a glance.
(vi) The CRC avoids overlapping of data collected by different teachers about the students.
(vii) The CRC is useful in making case study about the students.
(viii) The CRC is useful for the students for the vocational purposes.

 Limitations of Cumulative Record Card:
 (i) The entire data is of little use if it is not collected properly, objectively and accurately.
(ii) The purpose of the CRC is not served if it is not maintained secretly and confidentially.
(iii) Sometimes the information in the CRC and its interpretation become confusing, as the information is collected by different teachers.
(iv) The CRC needs considerable money to maintain, which the school may not be in a position to spend on this head.
(v) The maintenance of the CRC is a burdensome, clerical job on the part of teachers.
(vi) It is a lengthy process which needs much time to be worked out.

Basic Principles that Should Govern the Maintenance of the CRC:
1. Accurate
2. Complete
3. Comprehensive
4. Objective
5. Usable
6. Valid
QUESTIONNAIRE
A questionnaire is a research instrument consisting of a series of questions for the purpose of gathering information from respondents. The questionnaire was invented by the Statistical Society of London in 1838.
It is a relatively cheap, quick and efficient way of obtaining large amounts of information from a large sample of people. It is an effective means of measuring the behavior, attitudes, preferences, opinions and intentions of relatively large numbers of subjects more cheaply and quickly than other methods. Data can be collected relatively quickly because the researcher does not need to be present when the questionnaires are completed. This is useful for large populations when interviews would be impractical. Questionnaires can be carried out face to face, by telephone, by computer or by post. Often a questionnaire uses both open and closed questions to collect data, which is beneficial because both quantitative and qualitative data can be obtained.
Characteristics of A Good Questionnaire :
·         Questionnaire should deal with important or significant topic to create interest among respondents.
·         It should seek only data that cannot be obtained from other sources.
·         It should be as short as possible, yet comprehensive.
·         It should be attractive.
·         Directions should be clear and complete.
·         It should be represented in good Psychological order proceeding from general to more specific responses.
·         Double negatives in questions should be avoided.
·         Putting two questions in one question also should be avoided.
·         It should avoid annoying or embarrassing questions.
·         It should be designed to collect information which can be used subsequently as data for analysis. It should consist of a written list of questions.
·         The questionnaire should also be used appropriately

Open Questions

Open-ended questions enable the respondent to answer in as much detail as they like in their own words. For example: “can you tell me how happy you feel right now?” Open questions are often used for complex questions that cannot be answered in a few simple categories but require more detail and discussion. If you want to gather more in-depth answers from your respondents, then open questions will work better. These give no pre-set answer options and instead allow the respondents to put down exactly what they like in their own words.

Limitations

·         Time-consuming to collect the data. It takes longer for the respondent to complete open questions. This is a problem as a smaller sample size may be obtained.
·         Time-consuming to analyze the data.
·         Not suitable for less educated respondents as open questions require superior writing skills and a better ability to express one's feelings verbally.

Important factors in questionnaire design.
§  Set aims.
§  Keep the length appropriate.
§  Conduct a pilot study.
§  Order questions from simple to complex.
§  Use simple, clear and concise terminology.
§  Keep the questionnaire free from ethical issues.

Pilot Study

A pilot study is a practice / small-scale study conducted before the main study. It allows the researcher to try out the study with a few participants so that adjustments can be made before the main study, so saving time and money.
It is important to conduct a questionnaire pilot study for the following reasons:
·         Check that respondents understand the terminology used in the questionnaire.
·         Check that emotive questions have not been used
·         Check that leading questions have not been used
·         Ensure the questionnaire can be completed in an appropriate time frame

Types of questionnaires:
·         Computer questionnaire.
·         Telephone questionnaire. 
·         In-house survey.  
·         Mail Questionnaire.
Questionnaires can include the following types of questions:
·         Open question questionnaires.
·         Multiple choice questions. 
·         Dichotomous Questions. This type of question gives respondents two options - yes or no - to choose from. It is the easiest form of questionnaire for the respondent in terms of responding to it.
·         Scaling Questions. Also referred to as ranking questions, they present an option for respondents to rank the available answers to the questions on the scale of given range of values (for example from 1 to 10).
Advantages of Questionnaires
1. Questionnaires are cost-efficient
2. They’re practical
3. Speedy results
4. Scalability
5. You don’t need to be a scientist to use them
6. higher levels of objectivity
7. Scientific analysis and predictions
8. User anonymity
9. No pressure
10. Cover all aspects of a topic
 Disadvantages of Questionnaires
1. Dishonesty
2. Lack of conscientious responses
3. Differences in understanding and interpretation
4. Hard to convey feelings and emotions
5. Some questions are difficult to analyze
6. Respondents may have a hidden agenda
7. Lack of personalization
8. Skipped questions
9. Accessibility issues
INVENTORY
An inventory is a list, record or catalog of traits, preferences, attitudes, interests or abilities used to evaluate personal characteristics or skills. The purpose of an inventory is to list items relating to a specific trait, activity or programme and to check to what extent that trait or ability is present. Common types of inventories include the personality inventory and the interest inventory.
Persons differ in their interests, likes and dislikes. Interests are a significant element in the personality pattern of individuals and play an important role in their educational and professional careers. The tools used for describing and measuring the interests of individuals are interest inventories (interest blanks). They are self-report instruments in which individuals note their own likes and dislikes; they are in the nature of standardised interviews in which the subject gives an introspective report of his feelings about certain situations and phenomena, which is then interpreted in terms of interests. Interest inventories are used most frequently in educational and vocational guidance and in case studies.
As a part of educational surveys, children’s interest in reading, in games, in dramatics, in other extracurricular activities and in curricular work etc. are studied.
E.g. Strong’s Vocational Interest Inventory.
It compares the subject’s pattern of interest to the interest patterns of successful individuals in a number of vocational fields. The inventory consists of 400 different items. The subject has to tick one of the alternatives, i.e. L (like), I (indifferent) or D (dislike), provided against each item. When the inventory is standardised, the scoring keys and percentile norms are prepared on the basis of the responses of a fairly large number of successful individuals in a particular vocation; a separate scoring key is therefore prepared for each vocation or subject area. The subject’s responses are scored with the scoring key of a particular vocation in order to gauge his interest, or lack of it, in the vocation concerned.
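The keyed scoring just described can be sketched as follows. The items and weights below are invented purely for illustration; the real Strong scoring keys are derived empirically from criterion groups of successful workers in each vocation.

```python
# Hypothetical scoring key for one vocation: each item maps an
# L/I/D response to a weight (positive if successful workers tend
# to answer that way, negative if they tend not to).
scoring_key = {
    1: {"L": 1, "I": 0, "D": -1},
    2: {"L": -1, "I": 0, "D": 1},
    3: {"L": 1, "I": 0, "D": -1},
}

def score(responses, key):
    """Sum the keyed weight of each L/I/D response."""
    return sum(key[item][answer] for item, answer in responses.items())

total = score({1: "L", 2: "D", 3: "I"}, scoring_key)  # 1 + 1 + 0
```

A subject would be scored once per vocation, each time with that vocation's own key, and the totals compared against the percentile norms.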
Advantages
·         Self-report inventories reveal covert traits that observation cannot.
·         They obtain precise answers to standardized questions.
·         Inventories are also objective.
Disadvantages
·         Self-report inventories often contain transparent questions.
·         Subjects can lie intentionally and fake personality traits they don’t really have.
·         The social desirability bias can affect responses on self-report inventories
·         People sometimes don’t understand the questions on the test.
·         People sometimes don’t remember aspects of the experience they are asked about

ANECDOTAL RECORD

 

Meaning of Anecdotal Record:

It is natural that certain significant incidents or happenings occur in the lives of students in school which ought to be noted, as they are based on meaningful experiences. Teachers should write down the facts concerning such an incident on a piece of paper, or record them, soon after asking the student about the incident, without his knowledge. Such records are required for the purpose of guidance services.
 “An anecdotal record is a report of a significant episode in the life of a student.”
R. Louis
 “An anecdotal record is a simple statement of an incident deemed by the observer to be significant with respect to a given pupil.”
 J.D. Willard
Anecdotal record may be defined as a “on the spot descriptions of some incident, episode or occurrence that is observed and recorded as being of possible significance, when these reports are gathered together, they are known as anecdotal record.”
A.J. Jones

 

Characteristics of a Good Anecdotal Record:

 (i) An anecdotal record gives the setting, including the date, place and situation in which the action occurred.
(ii) It describes the actions of the individual (pupil/child) the reactions of the other people involved and the responses of the former to these reactions.
(iii) It quotes what is said to the individual and by the individual during the action.
(iv) It notes “mood cues” - the postures, gestures, voice qualities and facial expressions of the individual. It does not interpret his feelings, but gives only the cues by which a reader may judge what they were.
(v) The action or conversation is not left incomplete and unfinished but is followed through to the point where an aspect of a behavioural moment in the life of the individual is supplied. 
Suggestions for improving anecdotes:
1. Write an anecdote soon after viewing the incident. If some time lag is necessitated by the situation, jot down a key word or two to aid your memory during the more complete writing.
2. Include the basic action or statements of the chief person in the episode, that is, what he did or said.
3. Include enough setting details to indicate where and when the behaviour occurred, under what conditions, and who was involved.
4. Responses or actions of others to the chief person’s behaviour should be included.
5. Use direct quotations wherever possible
6. Anecdotes should preserve the sequence of actions and responses of the original behaviour incident.
7. Anecdotes should be objective, accurate and complete as far as important details are concerned.
8. Good literary style, correct grammar and spelling, and even complete sentences are inconsequential.
9. Words chosen should be precise and unambiguous - primarily nouns and verbs; subjective terminology, exemplified by most adjectives and adverbs, should be used sparingly.
10. If research resources are sufficient, using a tape recorder and a typist to transcribe anecdotes into written form generally increases the amount of detail that can be included over simple stenographic or handwritten recording.

 

Types of Anecdotal Records:

The following classification of anecdotal records has been made on the basis of contents included in it:
First Type:
An objective description of a pupil’s behaviour, recorded from time to time.
Second Type:
A description of behaviour with some comment or interpretation.
Third Type:
A record of a pupil’s behaviour, comments by the observer, and the treatment offered to the pupil.
Fourth Type:
A description of a pupil’s behaviour along with comments as well as suggestions for the future treatment of the student.

 

Merits of the anecdotal record
The anecdotal record is useful:
(i) for the guidance worker;
(ii) to know and understand the pupil on the basis of descriptions of happenings in the student’s life;
(iii) to understand the personality pattern of students;
(iv) to study and understand the adjustment patterns of the students;
(v) in assisting students to solve their problems and difficulties;
(vi) to improve relationships with teachers and peers;
(vii) to get rid of mental tensions, anxieties and so on;
(viii) for parents, to know about their child clearly so that they can help him in various ways;
(ix) for teachers, to improve their teaching standard after learning the comments of students through the anecdotal technique.

 

Limitations of Anecdotal Record:

 (i) Anecdotal records are of no value if proper care is not taken by the teacher in collecting data about the student’s behaviour.
(ii) They are of little use if objectivity in data collection is not strictly maintained.
(iii) In some cases anecdotal records are confined to exceptional children, as a result of which average students are seriously neglected.
(iv) They may provide only partial information about the students.
(v) The anecdotal record is of no use if incidents and their descriptions are not properly recorded.
(vi) The anecdotal record sometimes creates disappointment and tension among students, which is not desirable on the part of the teacher.
(vii) It is not always possible for the teacher to detect an observable incident, because an incident that is important and memorable for the student may not be treated as important by the teacher.
(viii) Sometimes students, being sentimental, reactive and tense, do not respond or write correctly, as a result of which the anecdotal record carries little weight so far as its uses and importance are concerned.
(ix) Critics argue that the preparation of anecdotal records is an unnecessary wastage of time and money.


CHARACTERISTICS OF A GOOD TEST

 

1. Reliability

1. “Reliability refers to the consistency of measurement—that is, how consistent test scores or other evaluation results are from one measurement to another.”
 Gronlund and Linn (1995)
2. Reliability is the “worthiness with which a measuring device measures something; the degree to which a test or other instrument of evaluation measures consistently whatever it does in fact measure.”
 C.V. Good (1973)
The dictionary meaning of reliability is consistency, depend­ence or trust.
So in measurement reliability is the consistency with which a test yields the same result in measuring whatever it does measure. Therefore reliability can be defined as the degree of consistency between two measurements of the same thing.
E.g., we administered an achievement test to Group A and found a mean score of 55. Three days later we administered the same test to Group A and again found a mean score of 55. This indicates that the measuring instrument (the achievement test) provides a stable, dependable result. If, on the other hand, the second administration yielded a mean score around 77, we would say that the test scores are not consistent.

It is not always possible to obtain perfectly consistent results, owing to several factors - physical health, memory, guessing, fatigue, forgetting, etc. - which may affect the results from one measurement to the next. These extraneous variables may introduce some error into our test scores; this error is called measurement error. So while determining the reliability of a test we must take into consideration the amount of error present in the measurement.

Methods of Determining Reliability
Different types of consistency are determined by different methods. These are as follows:
1. Consistency over a period of time.
2. Consistency over different forms of instrument.
3. Consistency within the instrument itself
There are four methods of determining reliability coefficient, such as:
(a) Test-Retest method.
(b) Equivalent forms/Parallel forms method.
(c) Split-half method.
(d) Rational Equivalence/Kuder-Richardson method.
(a) Test-Retest Method:
This is the simplest method of determining test reliability. The test is administered to a group and, after some time, repeated on the same group; the correlation between the first set of scores and the second set is then obtained. A high coefficient of correlation indicates high stability of test scores. Measures of stability in the .80s and .90s are commonly reported for standardized tests over occasions within the same year.
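The correlation underlying the test-retest method is the ordinary Pearson coefficient between the two sets of scores. A minimal sketch, using hypothetical scores for five pupils:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two score lists (test and retest)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same five pupils on two administrations.
first  = [55, 60, 48, 70, 52]
second = [54, 62, 50, 69, 51]
r = pearson_r(first, second)  # close to 1, i.e. highly stable scores
```

Note that it is the pupils' relative positions, not their absolute scores, that drive the coefficient: a uniform shift between administrations would leave r unchanged.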
 (b) Equivalent Forms/Parallel Forms Method:
Reliability of test scores can also be estimated by the equivalent-forms method, otherwise known as the alternate-forms or parallel-forms method. When two equivalent forms of a test can be constructed, the correlation between the two may be taken as a measure of the self-correlation of the test. In this process, two parallel forms of the test are administered to the same group of pupils within a short interval of time, and the scores on the two tests are correlated. This correlation provides the index of equivalence. Equivalent forms are usually available for standardized psychological and achievement tests.
Both tests selected for administration should be parallel in terms of content, difficulty, format and length. When a time gap is provided between the administrations of the two forms, the coefficient between the test scores provides a measure of both stability and equivalence. The major drawback of this method is obtaining two truly parallel forms: when the tests are not exactly equal in content, difficulty and length, comparison between the scores obtained from them may lead to erroneous decisions.
(c) Split-Half Method:
In this method a single test is administered to a group of pupils in the usual manner. The test is then divided into two equivalent halves, and the correlation between the half-test scores is found.
The common procedure of splitting the test is to take all odd numbered items i.e. 1, 3, 5, etc. in one half and all even-numbered items i.e. 2, 4, 6, 8 etc. in the other half
Then the scores of the two halves are correlated, and the half-test correlation is stepped up to full length using the Spearman-Brown formula (5.1):
Reliability of full test = 2 × (correlation between half tests) / (1 + correlation between half tests)
For example, suppose correlating the two halves gives a coefficient of .70. Using formula (5.1), the reliability coefficient of the full test is:
reliability of full test = (2 × .70) / (1 + .70) = 1.40 / 1.70 ≈ .82
The reliability coefficient is thus .82 when the correlation between the half-tests is .70. It indicates to what extent the sample of test items is a dependable sample of the content being measured, i.e. internal consistency.
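The split-half procedure and the Spearman-Brown step-up can be sketched as follows; the item matrix of 0/1 responses is a hypothetical example.

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation between two lists of half-test scores
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(item_matrix):
    """item_matrix: one row per pupil, one 0/1 entry per item."""
    odd = [sum(row[0::2]) for row in item_matrix]    # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_matrix]   # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)                 # Spearman-Brown (5.1)

# the worked example from the text: half-test correlation of .70
r_full = 2 * 0.70 / (1 + 0.70)   # rounds to .82

# hypothetical 0/1 responses of four pupils to six items
responses = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
]
rel = split_half_reliability(responses)
```

The function reproduces the odd/even split described above before applying the correction.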
“Split-half reliabilities tend to be higher than equivalent form reliabilities because the split-half method is based on the administration of a single test form.” This method thus overcomes the problems of the equivalent forms method that arise from form-to-form differences in attention, speed of work, effort, fatigue and test content.
Factors Affecting Reliability:
The major factors which affect the reliability of test scores can be categorized under three headings:
1. Factors related to test.
2. Factors related to testee.
3. Factors related to testing procedure.
1. Factors related to test:
(a) Length of the test:
The Spearman-Brown formula indicates that the longer the test, the higher the reliability, because a longer test provides a more adequate sample of the behaviour. The guessing factor is also apt to be neutralized in a longer test.
 (b) Content of the test:
Content homogeneity also contributes to high reliability.
(c) Characteristics of items:
The difficulty level and clarity of expression of test items also affect the reliability of test scores. Items that are too easy or too difficult for the group tend to produce scores of low reliability, because in both cases the spread of scores is restricted.
(d) Spread of Scores:
According to Gronlund and Linn (1995), “other things being equal, the larger the spread of scores, the higher the estimate of reliability will be.” When the spread of scores is large, each individual is more likely to stay in the same relative position in the group from one testing to another; errors of measurement then have less effect on individuals' relative positions.
For example, suppose students in Group A have secured marks ranging from 30 to 80, while those in Group B range only from 65 to 75. If the test is administered a second time to Group A, individual scores may vary by several points, yet there will be very little shifting in the relative positions of the group members, because the spread of scores in Group A is large.
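The Group A / Group B contrast can be illustrated by a small simulation. Everything below (group sizes, error spread, the `pearson` helper) is an assumption chosen for demonstration: the same measurement error depresses the test-retest correlation far more in a narrow-spread group.

```python
# Hypothetical simulation: wide-spread vs. narrow-spread groups with
# identical measurement error produce different test-retest correlations.
import random
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)

def retest_correlation(low, high, n=500, error_sd=3.0):
    # true scores spread over [low, high]; two noisy administrations
    true = [random.uniform(low, high) for _ in range(n)]
    first = [t + random.gauss(0, error_sd) for t in true]
    second = [t + random.gauss(0, error_sd) for t in true]
    return pearson(first, second)

r_group_a = retest_correlation(30, 80)   # wide spread, as in Group A
r_group_b = retest_correlation(65, 75)   # narrow spread, as in Group B
```

With the same error of measurement, the wide-spread group yields the clearly higher reliability estimate.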
2. Factors related to testee:
 (a) Heterogeneity of the group:
When the group tested is homogeneous the spread of test scores is likely to be small, and when it is heterogeneous the spread is likely to be large. Therefore the reliability coefficient for a heterogeneous group will be higher than for a homogeneous group.
(b) Test wiseness of the students:
Experience of test taking also affects the reliability of test scores. Practice in taking sophisticated tests increases test reliability, but when the students in a group do not all have the same level of test-wiseness, the result is greater measurement error.
(c) Motivation of the students:
When students are not motivated to take the test, their scores will not represent their best achievement; this depresses the test scores.
3. Factors related to testing procedure:
 (a) Time Limit of test:
When students get more time to take the test they can do more guessing, which may inflate the test scores. Therefore speeding up a test can increase its reliability.
(b) Cheating opportunity given to the students:
Cheating by students during test administration leads to measurement error and makes the observed scores of cheaters higher than their true scores.

2. VALIDITY

Gronlund and Linn (1995) define validity thus: “Validity refers to the appropriateness of the interpretation made from test scores and other evaluation results with regard to a particular use.”
Validity means the truthfulness of a test: the extent to which the test measures what its maker intends it to measure.

Nature of Validity:

1. Validity refers to the appropriateness of the test results but not to the instrument itself.
2. Validity does not exist on an all-or-none basis but it is a matter of degree.
3. Tests are not valid for all purposes; validity is specific to a particular interpretation. For example, the results of a vocabulary test may be highly valid for testing vocabulary but much less valid for testing a student's composition ability.
4. Validity is not of different types; it is a unitary concept based on various types of evidence.

Factors Affecting Validity:

1. Factors in the test:
(i) Unclear directions to the students on how to respond to the test.
(ii) Difficulty of the reading vocabulary and sentence structure.
(iii) Too easy or too difficult test items.
(iv) Ambiguous statements in the test items.
(v) Inappropriate test items for measuring a particular outcome.
(vi) Inadequate time provided to take the test.
(vii) Length of the test is too short.
(viii) Test items not arranged in order of difficulty.
(ix) Identifiable pattern of answers.
2. Factors in Test Administration and Scoring:
(i) Unfair aid to individual students who ask for help.
(ii) Cheating by the pupils during testing.
(iii) Unreliable scoring of essay-type answers.
(iv) Insufficient time to complete the test.
(v) Adverse physical and psychological conditions at the time of testing.
3. Factors related to Testee:
(i) Test anxiety of the students.
(ii) Physical and psychological state of the pupil.
(iii) Response set: a consistent tendency to follow a certain pattern in responding to the items.

TYPES OF VALIDITY

What is Validity?

The concept of validity was formulated by Kelly (1927, p. 14), who stated that a test is valid if it measures what it claims to measure. For example, a test of intelligence should measure intelligence and not something else (such as memory).
[Table: The different types of validity]

Face Validity

This is the least sophisticated measure of validity.

Face validity occurs where a test simply appears to measure what it claims to. It is never sufficient on its own and requires more solid evidence of validity before acceptable conclusions can be drawn, but measures often start out with face validity.
A rater could use a Likert scale to assess face validity. For example:
1. the test is extremely suitable for a given purpose;
2. the test is very suitable for that purpose;
3. the test is adequate;
4. the test is inadequate;
5. the test is irrelevant and therefore unsuitable.
It is important to select suitable people to rate a test (e.g. a questionnaire, interview or IQ test). For example, individuals who actually take the test would be well placed to judge its face validity. People who work with the test could also offer their opinion (e.g. employers, university administrators). Finally, the researcher could draw on members of the general public with an interest in the test (e.g. parents of testees, politicians, teachers etc.).
The face validity of a test can be considered a robust construct only if a reasonable level of agreement exists among raters.

Construct Validity

The notion of construct validity was introduced by Cronbach and Meehl (1955).

Construct validity is an assessment of the quality of an instrument or experimental design; it asks whether the instrument measures the construct it is supposed to measure. Without construct validity you are likely to draw incorrect conclusions from the experiment. The construct validity of a test of intelligence, for example, depends on a model or theory of intelligence. Construct validity entails demonstrating the power of such a construct to explain a network of research findings and to predict further relationships. The more evidence a researcher can demonstrate for a test's construct validity the better; however, there is no single method of determining it. Instead, different methods and approaches are combined to present the overall construct validity of a test.

Concurrent validity

This is the degree to which a test corresponds to an external criterion that is known concurrently (i.e. measured at the same time). It reflects the relationship between the new measure and measures made with existing tests; the existing test is thus the criterion. If the new test is validated by comparison with a currently existing criterion, we have concurrent validity.
For example, a new measure of creativity should correlate with existing measures of creativity.

Predictive validity

This measures the extent to which a future level of a variable can be predicted from a current measurement. This includes correlation with measurements made with different instruments.
For example, a political poll intends to measure future voting intent.
College entry tests should have a high predictive validity with regard to final exam results.

Content validity

Content validity occurs when the experiment provides adequate coverage of the subject being studied. This includes measuring the right things as well as having an adequate sample. Samples should be both large enough and be taken for appropriate target groups.
The perfect question would give a complete measure of all aspects of what is being investigated, but in practice this is seldom achievable; a simple addition item, for example, does not test the whole of mathematical ability.
Content validity is related very closely to good experimental design. A question with high content validity covers more of what is sought. The trick with all questions is to ensure that all of the target content is covered, preferably uniformly.

Criterion-related validity

This examines the ability of the measure to predict a variable that is designated as a criterion. A criterion may well be an externally-defined 'gold standard'. Achieving this level of validity thus makes results more credible.
Criterion-related validity is related to external validity.

3. OBJECTIVITY

According to C.V. Good (1973), objectivity in testing is “the extent to which the instrument is free from personal error (personal bias), that is, subjectivity on the part of the scorer”.
Gronlund and Linn (1995) note that objectivity of a test refers to the degree to which equally competent scorers obtain the same results; a test is therefore considered objective when it eliminates the scorer's personal opinion and biased judgement. In this context there are two aspects of objectivity which should be kept in mind while constructing a test.
 (i) Objectivity of Scoring:
Objectivity of scoring means that the same person or different persons scoring the test at any time arrive at the same result without any chance error. The scoring procedure should leave no doubt as to whether an item is right or wrong, or partly right or partly wrong.
(ii) Objectivity of Test Items:
By item objectivity we mean that each item must admit one and only one interpretation by students; the test items should be free from ambiguity. A given test item should mean the same thing to all students, and exactly what the test maker intends to ask. Sentences with dual meanings and items having more than one correct answer should not be included, as they make the test subjective.
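Objective scoring can be sketched as follows: with a fixed scoring key, any scorer arrives at the same result. The key and the pupil's answers below are hypothetical.

```python
# A minimal sketch of objective scoring: a fixed scoring key removes
# the scorer's personal judgement from the result.
KEY = {1: "b", 2: "d", 3: "a", 4: "c", 5: "b"}

def score(answers, key=KEY):
    """Return the number of responses matching the scoring key."""
    return sum(1 for q, resp in answers.items() if key.get(q) == resp)

pupil = {1: "b", 2: "a", 3: "a", 4: "c", 5: "d"}
scorer_one = score(pupil)
scorer_two = score(pupil)   # a different scorer, same key, same result
```

Because the key leaves no room for opinion, the two scorers necessarily agree.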

4. Usability:

Usability is another important characteristic of measuring instruments, because the practical considerations of evaluation instruments cannot be neglected. The test must have practical value from the point of view of time, economy and administration; this may be termed usability.
So while constructing or selecting a test the following practical aspects must be taken into account:
(i) Ease of Administration:
It means the test should be easy to administer, with simple and clear directions, and the timing of the test should not be difficult to manage.
(ii) Time required for administration:
Appropriate time limit to take the test should be provided. Gronlund and Linn (1995) are of the opinion that “Somewhere between 20 and 60 minutes of testing time for each individual score yielded by a published test is probably a fairly good guide”.
(iii) Ease of Interpretation and Application:
Other important aspects are the interpretation of test scores and the application of test results. If the results are misinterpreted they can be harmful; if they are not applied, the test is useless.
(iv) Availability of Equivalent Forms:
Equivalent forms of a test help to verify questionable test scores and to eliminate the memory factor when retesting pupils on the same domain of learning. Therefore equivalent forms of the same test, matched in content, level of difficulty and other characteristics, should be available.
(v) Cost of Testing: It should be economical.


(i) Criterion-Referenced Evaluation:
Evaluation concerned with the performance of the individual in terms of what he or she can do, without reference to the performance of other members of the group, is termed criterion-referenced evaluation. In it an individual's performance is referred to a predetermined, well-defined criterion. The purpose of a criterion-referenced test is to assess the objectives; it is an objective-based test. The objectives are assessed in terms of behavioural changes among the students, and such a test assesses the ability of the learner in relation to the criterion behaviour.
Examples
(i) Raman got 93 marks in a test of Mathematics.
(ii) A typist types 60 words per minute.
(iii) Amit’s score in a reading test is 70.

 (ii) Norm Referenced Evaluation:
A norm-referenced test is used to ascertain an individual’s status with respect to the performance of other individuals on that test.
Norm-referenced evaluation is the traditional class-based assignment of numerals to the attribute being measured. It means that the measurement act relates to some norm, group or a typical performance. It is an attempt to interpret the test results in terms of the performance of a certain group. This group is a norm group because it serves as a referent of norm for making judgements. Test scores are neither interpreted in terms of an individual (self-referenced) nor in terms of a standard of performance or a pre-determined acceptable level of achievement called the criterion behaviour (criterion-referenced). The measurement is made in terms of a class or any other norm group.
Almost all our classroom tests, public examinations and standardised tests are norm-referenced as they are interpreted in terms of a particular class and judgements are formed with reference to the class.
Examples:
(i) Raman stood first in Mathematics test in his class.
(ii) The typist who types 60 words per minute stands above 90 percent of the typists who appeared for the interview.
(iii) Amit surpasses 65% of students of his class in reading test.
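The two interpretations of the same score can be sketched side by side; the class scores, cutoff and names below are made up for illustration.

```python
# Hypothetical sketch: criterion-referenced vs. norm-referenced reading
# of one pupil's score.
class_scores = [40, 52, 55, 58, 61, 65, 70, 70, 74, 81]
amits_score = 70
cutoff = 60   # criterion: a predetermined mastery level

# criterion-referenced: compare only against the fixed criterion
criterion_result = "pass" if amits_score >= cutoff else "fail"

# norm-referenced: percentile rank, i.e. the share of the class
# scoring below Amit
percentile = 100 * sum(s < amits_score for s in class_scores) / len(class_scores)
```

The same raw score of 70 yields two different statements: "Amit has reached the mastery level" (criterion-referenced) and "Amit surpasses a given percentage of his class" (norm-referenced).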

*      Diagnostic Test and Achievement Test-Concept, Purpose and Distinction between the two tests

Schedule in Research Methodology

A schedule is a structure of a set of questions on a given topic which are asked by the interviewer or investigator personally.
The order of questions, the language of the questions and the arrangement of parts of the schedule are not changed. However, the investigator can explain the questions if the respondent faces any difficulty. It contains direct questions as well as questions in tabular form.
A schedule may include both open-ended and close-ended questions. Open-ended questions allow the respondent considerable freedom in answering, and the answers are given in detail. Close-ended questions are answered by choosing one answer from the set given under the question and ticking it.
Eg.
·         Village or community schedule
·         Family or Household schedule
·         Opinion or attitude schedule
Questionnaire vs. Schedule:
Meaning: A questionnaire is a technique of data collection consisting of a series of written questions along with alternative answers. A schedule is a formalized set of questions, statements and spaces for answers, provided to enumerators who put the questions to the respondents and note down the answers.
Filled by: Respondents (questionnaire); enumerators (schedule).
Response rate: Low (questionnaire); high (schedule).
Coverage: Large (questionnaire); comparatively small (schedule).
Cost: Economical (questionnaire); expensive (schedule).
Respondent's identity: Not known (questionnaire); known (schedule).
Success relies on: The quality of the questionnaire; the honesty and competence of the enumerator.
Usage: A questionnaire can be used only when people are literate and cooperative; a schedule can be used with both literate and illiterate people.

Diagnostic Testing

A diagnostic test is a test used to diagnose the strengths and weaknesses of the learner in certain areas of study, whereas diagnostic evaluation is centred on the schooling process: the curriculum, programme, administration and so on.
For example, difficulties in learning occur frequently at all levels and among pupils of both high and low mental ability. To handle such cases, the teacher diagnoses the relative strengths and weaknesses of the pupil in the specific area of study, analyses the causes, and then provides remedial measures as necessary.
When learning difficulties are left unresolved by the standard corrective prescriptions of formative evaluation and a pupil continues to experience failure despite prescribed alternative methods of instruction, a more detailed diagnosis is indicated.
It searches for the underlying causes of those problems.
Thus it is much more comprehensive and detailed and the difference lies in the types of question each of them is addressing.
The following are the salient features of Diagnostic Testing:
(i) The diagnostic test takes up where the formative test leaves off.
(ii) A diagnostic test is a means by which an individual profile is examined and compared against certain norms or criteria.
(iii) A diagnostic test focuses on an individual's educational weakness or learning deficiency and identifies the gaps in the pupil's learning.
(iv) A diagnostic test is more intensive and acts as a tool for the analysis of learning difficulties.
(v) Diagnostic testing is more often limited to low-ability students.
(vi) A diagnostic test is corrective in nature.
(vii) A diagnostic test pinpoints the specific types of error each pupil is making and searches for the underlying causes of the problem.
(viii) A diagnostic test is much more comprehensive.
(ix) A diagnostic test helps us to identify trouble spots and discover those areas of student weakness that remain unresolved by formative tests.

Dimensions of Diagnostic Test:

(i) Who can conduct → Teacher/Researcher
(ii) Where → School/Home/Workplaces
(iii) On whom → Learners
(iv) Purpose → Specific strengths and weaknesses of the learner in a particular area
(v) Length of time → Flexible in nature
(vi) Techniques of assessment → Test/observation/interview etc.
(vii) Sequence → Logical and step by step
(viii) Method of remediation → Negotiable/Therapeutic
(ix) Support to → Learner/Parents/Teacher

Steps of Educational Diagnostic Test:

(i) Identification and classification of pupils having Learning Difficulties:
(a) Constant observation of the pupils.
(b) Analysis of performance: Avoiding assignments & copying from others.
(c) Informal classroom Unit/Achievement test.
(d) Tendency of withdrawal, and a gap between expected and actual achievement.
(ii) Determining the specific nature of the Learning Difficulty or errors:
(a) Observation.
(b) Analysis of oral responses.
(c) Written class work.
(d) Analysis of student’s assignments and test performance.
(e) Analysis of cumulative and anecdotal records.
(iii) Determining the Factors/Reasons or Causes Causing the learning Difficulty (Data Collection):
(a) Retardation in basic skills.
(b) Inadequate work study skills.
(c) Scholastic aptitude factors.
(d) Physical, mental and emotional (personal) factors.
(e) Indifferent attitude and environment.
(f) Improper teaching methods, unsuitable curriculum, complex course materials.
(iv) Remedial measures/treatment to rectify the difficulties:
(a) Providing face to face interaction.
(b) Providing as many simple examples as possible.
(c) Giving concrete experiences, use of teaching aids.
(d) Promoting active involvement of the students.
(e) Consultation of Doctors/Psychologists/Counselors.
(f) Developing strong motivation.
(v) Prevention of Recurrence of the Difficulties:
(a) Planning for non-recurrence of the errors in the process of learning.

Construction of Diagnostic Test:

The following are the broad steps involved in the construction of a diagnostic test. A diagnostic test may be standardized or teacher-made, and it more or less follows the usual principles of test construction: preparation, planning, writing items, assembling the test, preparing the scoring key and marking scheme, and reviewing the test.
The unit on which a diagnostic test is based should be broken into learning points without omitting any of them, and various types of test items are to be prepared in a proper sequence:
1. Analysing the content minutely, i.e. identifying major and minor concepts.
2. Forming questions on each minor concept (recall and recognition type) in order of difficulty.
3. Having the test items reviewed by experts or experienced teachers, who modify or delete items where necessary.
4. Administering the test.
5. Scoring the test and analysing the results.
6. Identifying weaknesses.
7. Identifying the causes of the weaknesses (such as defective hearing or vision, poor home conditions, unsatisfactory relations with classmates or the teacher, or lack of ability) with the help of interviews, questionnaires, peer information, the family, the class teacher, the doctor or past records.
8. Suggesting a remedial programme (no set pattern): motivation, re-teaching, token economy, reinforcement, correcting emotions, changing section, giving living examples, moral instruction.
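The step of identifying weaknesses can be sketched in code: group item results by the learning point each item tests and flag points mastered by fewer than half of the pupils. The item-to-learning-point map, pupil responses and threshold below are all hypothetical.

```python
# Sketch of weakness identification for a diagnostic test: which
# learning points fall below a mastery threshold? (All data invented.)

# item number -> learning point it tests
ITEM_POINTS = {1: "place value", 2: "place value", 3: "carrying", 4: "carrying"}

# pupil responses: 1 = correct, 0 = wrong, per item
responses = {
    "Raman": {1: 1, 2: 1, 3: 0, 4: 0},
    "Amit":  {1: 1, 2: 0, 3: 0, 4: 1},
    "Sita":  {1: 1, 2: 1, 3: 1, 4: 0},
}

def weak_points(item_points, pupil_responses, threshold=0.5):
    """Return learning points answered correctly less often than threshold."""
    totals = {}
    for answers in pupil_responses.values():
        for item, correct in answers.items():
            point = item_points[item]
            right, asked = totals.get(point, (0, 0))
            totals[point] = (right + correct, asked + 1)
    return [p for p, (right, asked) in totals.items() if right / asked < threshold]

weaknesses = weak_points(ITEM_POINTS, responses)
```

The flagged points would then feed step 7 (finding causes) and step 8 (the remedial programme).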

Materials Used in Diagnostic Test:

1. Test records (Standardized and Teacher made).
2. Pupils’ written work (themes, compositions, home assignments and test papers).
3. Pupils’ oral work (discussion, speeches and oral reading).
4. Pupils’ work habits (in class activities, participation, peer relationship, independent work, interest, effort etc.).
5. Physical and health records (school and family records about vision, hearing, dental, general).
6. Guidance and cumulative record data (family background, anecdotal references, school activities).
7. Interview with pupil (problem or trouble and elimination of misconceptions).
8. Parent conference (pupil problems at home, parent interpretation).
9. Self-guidance (completing assignments, independent work and seeking teacher help).
10. Clinic or laboratory aids (vision tester, audiometer, eye photographs, tape recorder etc.).

Barriers in Diagnostic Tests:

(i) Attitudinal change.
(ii) Will power and patience of the teacher.
(iii) Time scheduling.
(iv) Sequencing of study.
(v) Faulty methods of data collection and testing.
(vi) Maintaining records impartially.
(vii) Costs.




CONSTRUCTION OF ACHIEVEMENT TESTS
The four main steps in construction of tests are:
1. Planning the Test
2. Preparing the Test
3. Try out the Test
4. Evaluating the Test.

 

Step 1. Planning the Test:

Planning of the test is the first important step in the test construction. The main goal of evaluation process is to collect valid, reliable and useful data about the student.
It includes
1. Determining the objectives of testing.
2. Preparing test specifications.
3. Selecting appropriate item types.

1. Determining the Objectives of Testing:

A test can be used for different purposes in the teaching-learning process:
1. as an instrument to measure the entry performance of the students;
2. for formative evaluation;
3. to find out immediate learning difficulties and to suggest remedies;
4. to assign grades or to determine the mastery level of the students.
These tests should therefore cover the whole of the instructional objectives and content areas of the course.

2. Preparing Test Specifications:

The second important step in test construction is to prepare the test specifications, to ensure that the test will measure a representative sample of the instructional objectives and to provide an elaborate design for test construction. The most commonly used device for this purpose is the ‘Table of Specification’ or ‘Blue Print’.
Preparation of Table of Specification/Blue Print:
Preparation of the table of specification is the most important task in the planning stage. It acts as a guide for test construction. The table of specification or ‘blue print’ is a three-dimensional chart showing the instructional objectives, content areas and types of items along its dimensions.
It includes four major steps:
(i) Determining the weightage to different instructional objectives.
(ii) Determining the weightage to different content areas.
(iii) Determining the item types to be included.
(iv) Preparation of the table of specification.
(i) Determining the weightage to different instructional ob­jectives:
In a written test we cannot measure the psychomotor or affective domains; we can only measure the cognitive domain. Nor do all subjects contain the different learning objectives (knowledge, understanding, application and skill) in equal proportion. Therefore how much weightage to give each instructional objective must be planned, keeping in mind the importance of that objective for the particular subject or chapter.
For example, weightage may be given to the different instructional objectives in General Science for Class X as shown below.
[Table: Weightage given to different instructional objectives in a test of 100 marks]
(ii) Determining the weightage to different content areas:
The second step in preparing the table of specification is to outline the content area; this also prevents repetition or omission of any unit. The weightage to be given to each unit should be decided by the teacher concerned, keeping in mind the importance of the chapter, the area covered by the topic in the textbook and the number of items to be prepared.
[Table: Weightage given to different content areas]
(iii) Determining the item types:
The third important step in preparing the table of specification is to decide on appropriate item types. Items used in test construction can broadly be divided into two types: objective-type items and essay-type items. For some instructional purposes the objective-type items are most efficient, whereas for others essay questions prove satisfactory. Appropriate item types should be selected according to the learning outcomes to be measured.
(iv) Preparing the Three Way Chart:
Preparation of the three-way chart is the last step in preparing the table of specification. This chart relates the instructional objectives to the content areas and types of items. In a table of specification the instructional objectives are listed across the top of the table, content areas are listed down the left side, and under each objective the types of items are listed content-wise.
E.g. preparation of the three-way chart (O = objective type, SQ = short answer type, E = essay type):

Content      Knowledge            Understanding        Application          Total
             O     SQ    E        O     SQ    E        O     SQ    E
I            (1)2  (2)2  -        -     -     -        -     -     -
II           -     -     -        -     -     -        -     -     -
III          -     -     -        -     -     (1)5     -     -     -
IV           -     -     -        -     -     -        -     -     -
V            -     -     -        -     -     -        -     -     -
Total                                                                       25

Note: The number of questions is written outside the bracket and the marks inside the bracket; cells left blank in the source are shown as '-'.
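A blue print like the one above can also be held as a data structure for checking totals. The entries below mirror only the filled cells, and reading each cell as (number of questions, marks per question) per the note is an interpretive assumption.

```python
# Sketch of a blue print as nested dicts:
# content unit -> objective -> item type -> (questions, marks each).
# Only the filled cells of the chart are represented; the reading of
# "(1)5" as five one-mark questions follows the note and is an assumption.
blue_print = {
    "I":   {"Knowledge":     {"O": (2, 1), "SQ": (2, 2)}},
    "III": {"Understanding": {"E": (5, 1)}},
}

def total_marks(bp):
    """Total marks implied by the blue print's filled cells."""
    return sum(questions * marks
               for unit in bp.values()
               for objective in unit.values()
               for questions, marks in objective.values())

filled_total = total_marks(blue_print)   # the remaining cells are blank
```

A completed blue print would be checked the same way against the intended full-test total.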

 

STEP  2. PREPARING THE TEST:

After planning, test items are constructed in accordance with the table of specification. Each type of test item needs special care in construction.
The preparation stage includes the fol­lowing three functions:
(i) Preparing test items.
(ii) Preparing instruction for the test.
(iii) Preparing the scoring key.

(i) Preparing the Test Items:

Preparation of test items is the most important task in the preparation step. Therefore care must be taken in preparing a test item. The following principles help in preparing relevant test items.
1. Test items must be appropriate for the learning out­come to be measured:
The test items should be so designed that it will measure the performance described in the specific learning outcomes.
2. Test items should measure all types of instructional objectives and the whole content area:
The items in the test should be so prepared that they cover all the instructional objectives (knowledge, understanding, thinking skills) and match the specific learning outcomes and subject-matter content being measured. When the items are constructed on the basis of the table of specification, they become relevant.
3. The test items should be free from ambiguity:
The item should be clear. Inappropriate vocabulary and awkward sentence structure should be avoided. The items should be so worded that all pupils understand the task.
Example:
Poor item — Where was Gandhi born?
Better — In which city was Gandhi born?
4. The test items should be of appropriate difficulty level:
The test items should be of an appropriate difficulty level so that they discriminate properly. An item should not be so easy that everyone answers it correctly, nor so difficult that everyone fails to answer it; items of average difficulty are best.
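Item difficulty can be checked numerically with the common difficulty index p, the proportion of pupils answering the item correctly. The response list and the cutoff values below are assumptions chosen for illustration.

```python
# Sketch of an item-difficulty check using the difficulty index p
# (proportion correct); cutoffs of .9 and .2 are illustrative choices.
def difficulty_index(item_responses):
    """item_responses: list of 0/1 results for one item across pupils."""
    return sum(item_responses) / len(item_responses)

def judge(p):
    if p > 0.9:
        return "too easy"
    if p < 0.2:
        return "too difficult"
    return "acceptable"

item = [1, 0, 1, 1, 0, 1, 0, 1]   # 8 pupils, 5 answered correctly
p = difficulty_index(item)        # 0.625: average difficulty
```

Items flagged at either extreme would be revised or dropped, since both restrict the spread of scores.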
5. The test item must be free from technical errors and irrelevant clues:
Irrelevant clues include grammatical inconsistencies, verbal associations, extreme words (ever, seldom, always) and mechanical features (e.g. the correct statement being longer than the incorrect ones). While constructing a test item, care must be taken to avoid such clues.
6. Test items should be free from racial, ethnic and sexual bias:
The items should be universal in nature, and care must be taken to make each item culture-fair. When portraying roles, all sections of society should be given equal importance. The terms used in a test item should have a universal meaning to all members of the group.

(ii) Preparing Instruction for the Test:

This is the most neglected aspect of test construction: generally everybody attends to the construction of test items, and test makers often fail to attach directions to the test. Yet the validity and reliability of the test depend to a great extent upon the instructions provided.
N.E. Gronlund has suggested that the test maker should provide clear-cut direction about;
a. The purpose of testing.
b. The time allowed for answering.
c. The basis for answering.
d. The procedure for recording answers.
e. The methods to deal with guessing.
Direction about the Purpose of Testing:
A written statement about the purpose of the testing maintains the uniformity of the test. Therefore there must be a written instruction about the purpose of the test before the test items.
Instruction about the time allowed for answering:
Clear instructions must be given to the pupils about the time allowed for the whole test, especially in the case of essay-type questions. The test maker should carefully judge the amount of time needed, taking into account the types of items, the age and ability of the students, and the nature of the learning outcomes expected. Experts are of the opinion that it is better to allow extra time than to deprive a slower student of the chance to answer.
Instructions about basis for answering:
Test maker should provide specific direction on the basis of which the students will answer the item.
Instruction about recording answer
Students should be instructed where and how to record the answers. Answers may be recorded on the separate answer sheets or on the test paper itself.
Instruction about guessing:
Directions must tell students whether or not they should guess on uncertain items in recognition-type tests. If nothing is stated about guessing, bold students will guess and others will answer only the items of which they are confident, so the bold pupils will by chance answer some items correctly and secure higher scores. A direction should therefore be given to guess thoughtfully, but not wildly.
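One common way to act on such a direction at scoring time, not spelled out in the text, is the standard correction-for-guessing formula: corrected score = R - W/(n - 1), where R is the number of right answers, W the number of wrong answers and n the number of choices per item. The worked numbers below are invented.

```python
# Sketch of the standard correction-for-guessing formula (a common
# convention, not stated in the text): corrected = R - W/(n - 1).
def corrected_score(rights, wrongs, n_choices):
    """Penalize wrong answers to offset the expected gain from guessing."""
    return rights - wrongs / (n_choices - 1)

# e.g. 40 right and 12 wrong on four-choice items:
score = corrected_score(40, 12, 4)   # 40 - 12/3 = 36.0
```

Under this convention a pupil who guesses at random gains nothing on average, which removes the advantage of the bold guesser described above.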

(iii) Preparing the Scoring Key:

A scoring key increases the reliability of a test, so the test maker should provide the procedure for scoring the answer scripts. Directions must be given on whether the scoring will be made by a scoring key or by a scoring stencil, and how marks will be awarded to the test items.
Thus a scoring key helps to obtain consistent data about the pupils' performance, so the test maker should prepare a comprehensive scoring procedure along with the test items.
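As a sketch, a scoring key can be held as a simple mapping from item numbers to correct answers; the items, answers and marking scheme below are invented for illustration:

```python
# Hypothetical scoring key: item number -> correct answer.
scoring_key = {1: "b", 2: "d", 3: "a", 4: "c", 5: "b"}

def score_answer_script(responses, key, marks_per_item=1):
    """Award marks_per_item for every response that matches the key."""
    return sum(marks_per_item
               for item, answer in responses.items()
               if key.get(item) == answer)

# One pupil's recorded answers: items 1, 3 and 4 match the key.
pupil = {1: "b", 2: "a", 3: "a", 4: "c", 5: "d"}
print(score_answer_script(pupil, scoring_key))  # prints 3
```

Scoring every script against the same shared key, rather than each examiner's judgement, is what makes the marks consistent from one answer script to another.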

 

 

STEP  3. TRY OUT OF THE TEST:

Try out helps us to identify defective and ambiguous items, to determine the difficulty level of the test and to determine the discriminating power of the items.
Try out involves two important functions:
(a) Administration of the test.
(b) Scoring the test.

(a) Administration of the test:

Administration means administering the prepared test to a sample of pupils while providing a congenial physical and psychological environment during testing. Any other factor that may affect the testing procedure should be controlled.
Physical environment means a proper sitting arrangement, proper light and ventilation, and adequate space for invigilation. Psychological environment refers to those aspects which influence the mental condition of the pupils; therefore steps should be taken to reduce the anxiety of the students. The test should not be administered just before or after a great occasion like the annual sports or the annual drama.
One should follow the following principles during the test administration:
1. The teacher should talk as little as possible.
2. The teacher should not interrupt the students at the time of testing.
3. The teacher should not give any hints to any student who has asked about any item.
4. The teacher should provide proper invigilation in order to prevent the students from cheating.

(b) Scoring the test:

Once the test is administered and the answer scripts are obtained, the next step is to score them. A scoring key may be used when the answers are recorded on the test paper itself. A scoring key is a sample answer script on which the correct answers are recorded.

 

STEP  4. EVALUATING THE TEST:

Evaluating the test is the most important step in the test construction process. Evaluation is necessary to determine the quality of the test and the quality of the responses. Quality of the test implies how good and dependable the test is (its validity and reliability). Quality of the responses means finding out which items are misfits in the test. Evaluation also enables us to judge the usability of the test in the general classroom situation.
Evaluating the test involves following functions:
(a) Item analysis.
(b) Determining validity of the test.
(c) Determining reliability of the test.
(d) Determining usability of the test.

(a) Item analysis:

Item analysis is a procedure which helps us to find out the answers to the following questions:
a. Whether the items function as intended?
b. Whether the test items have an appropriate difficulty level?
c. Whether the items are free from irrelevant clues and other defects?
d. Whether the distracters in multiple-choice items are effective?
The item analysis data also helps us:
a. To provide a basis for efficient class discussion of the test results
b. To provide a basis for the remedial works
c. To increase skill in test construction
d. To improve class-room discussion.
Item Analysis Procedure:
Item analysis procedure gives special emphasis on item difficulty level and item discriminating power.
The item analysis procedure follows the following steps:
1. The test papers should be ranked from highest to lowest.
2. Select 27% test papers from highest and 27% from lowest end.
For example if the test is administered on 60 students then select 16 test papers from highest end and 16 test papers from lowest end.
3. Keep aside the other test papers as they are not required in the item analysis.
4. Tabulate the number of pupils in the upper and lower group who selected each alternative for each test item. This can be done on the back of the test paper or a separate test item card may be used
5. Calculate item difficulty for each item by using formula: 
Item difficulty (P) = (R / T) × 100
Where R = the number of students who got the item correct.
T = the number of students who tried the item.
In our example, out of the 32 students in the two groups, 20 students answered the item correctly and 30 students tried the item.
The item difficulty is:
P = (20 / 30) × 100 = 66.7%
This implies that the item has a proper difficulty level, because it is customary to follow the 25%-to-75% rule for item difficulty: an item with a difficulty above 75% is too easy, and one below 25% is too difficult.
6. Calculate item discriminating power by using the following formula:
Item discriminating power (D) = (RU - RL) / (T/2)
Where RU= Students from upper group who got the answer correct.
RL= Students from lower group who got the answer correct.
T/2 = half of the total number of pupils included in the item analysis.
In our example, 15 students from the upper group and 5 students from the lower group responded to the item correctly.
D = (15 - 5) / 16 = 0.63
A high positive ratio indicates high discriminating power; here 0.63 indicates good discriminating power. If all 16 students from the lower group and all 16 students from the upper group answer the item correctly, the discriminating power will be 0.00, indicating that the item has no discriminating power. If all 16 students from the upper group answer the item correctly and all the students from the lower group answer it incorrectly, the discriminating power will be 1.00, indicating an item with the maximum positive discriminating power.
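The procedure above (select the top and bottom 27% of papers, then apply the two formulas) can be sketched in Python; the counts are taken from the worked example in the text:

```python
def item_difficulty(correct, tried):
    """P = (R / T) * 100, where R = pupils answering correctly
    and T = pupils who tried the item."""
    return correct / tried * 100

def discriminating_power(upper_correct, lower_correct, group_size):
    """D = (RU - RL) / (T/2), where group_size is the number of
    pupils in each of the upper and lower 27% groups."""
    return (upper_correct - lower_correct) / group_size

# Worked example: 60 pupils, so each 27% group holds 16 papers.
group_size = round(0.27 * 60)               # 16
print(round(item_difficulty(20, 30), 1))    # 66.7 -> proper difficulty
print(discriminating_power(15, 5, 16))      # 0.625, reported as .63 in the text
```

An item is then filed or discarded by checking that its difficulty falls within the 25%-to-75% band and that its discriminating power is a reasonably high positive value.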
The Item Analysis Card
Preparing a test item file:
Once the item analysis process is over we can get a list of effective items. Now the task is to make a file of the effective items. It can be done with item analysis cards. The items should be arranged according to the order of difficulty. While filing the items the objectives and the content area that it measures must be kept in mind. This helps in the future use of the item.

(b) Determining Validity of the Test:

At the time of evaluation, it is estimated to what extent the test measures what the test maker intends it to measure.

(c) Determining Reliability of the Test:

The evaluation process also estimates to what extent the test is consistent from one measurement to another; without such consistency, the results of the test are not dependable.

(d) Determining the Usability of the Test:

The try-out and evaluation processes indicate to what extent a test is usable in general classroom conditions, that is, how far the test is usable from the points of view of administration, scoring, time and economy.
Difference between achievement test and diagnostic test

Achievement test | Diagnostic test
Tests general ability: how well the students are achieving the objectives of the course | Identifies the student's strengths and weaknesses; intended to ascertain what further teaching is necessary
Wide content area | Focuses on difficult areas
Total score is important | No total scoring
Follows norms | No norms
Fixed time limit | No time limit
Quantitative | Qualitative

Meaning of Teacher Made Test:

Carefully constructed teacher-made tests and standardised tests are similar in many ways. Both are constructed on the basis of a carefully planned table of specifications, both have the same types of test items, and both provide clear directions to the students.
Still, the two differ in the quality of the test items, the reliability of the measures, the procedures for administering and scoring, and the interpretation of scores. No doubt standardised tests are better in quality, more reliable and more valid.
Teacher-made tests are normally prepared and administered for testing class­room achievement of students, evaluating the method of teaching adopted by the teacher and other curricular programmes of the school.
1.      Teacher-made test is designed to solve the problem or requirements of the class for which it is prepared.
2.      It is prepared to measure the outcomes and content of local curriculum.
3.      It is very much flexible
4.      It does not require any sophisticated technique for preparation.
Teacher-made objective-type tests do not require all four steps of standardised test construction; the first two steps, planning and preparation, are sufficient.

Features of Teacher-Made Tests:

1. The items of the tests are arranged in order of difficulty.
2. These are prepared by the teachers which can be used for prognosis and diagnosis purposes.
3. The test covers the whole content area and includes a large number of items.
4. The preparation of the items conforms to the blueprint.
5. Test construction is not a single man’s business, rather it is a co-operative endeavour.
6. A teacher-made test does not cover all the steps of a standardised test.
7. Teacher-made tests may also be employed as a tool for formative evaluation.
8. Preparation and administration of these tests are economical.
9. The test is developed by the teacher to ascertain the student’s achievement and proficiency in a given subject.
10. Teacher-made tests are least used for research purposes.
11. They do not have norms whereas providing norms is quite essential for standardised tests.

Steps/Principles of Construction of Teacher-made Test:

1. Planning:
Planning of a teacher-made test includes:
a. Determining the purpose and objectives of the test, ‘as what to measure and why to measure’.
b. Deciding the length of the test and portion of the syllabus to be covered.
c. Specifying the objectives in behavioural terms.
d.      Deciding the number and forms of items (questions) according to the blueprint.
e. Having a clear knowledge and understanding of the principles of constructing essay type, short answer type and objective type questions.
f. Deciding date of testing much in advance in order to give time to teachers for test preparation and administration.
g. Seeking the co-operation and suggestion of co-teachers, experienced teachers of other schools and test experts.
2. Preparation of the Test:
Preparation requires much thinking, rethinking and reading before constructing the test items.
Different types of objective test items viz., multiple choice, short-answer type and matching type can be constructed. After construction, test items should be given to others for review and for seeking their opinions on it.
The suggestions may be sought even from others on languages, modalities of the items, statements given, correct answers supplied and on other possible errors anticipated. The suggestions and views thus sought will help a test constructor in modifying and verifying his items afresh to make it more acceptable and usable.
After construction of the test, items should be arranged in a simple to complex order. For arranging the items, a teacher can adopt so many methods viz., group-wise, unit-wise, topic wise etc. Scoring key should also be prepared forthwith to avoid further delay in scoring.
Directions are an important part of test construction. Without proper directions or instructions, the reliability of the test is likely to suffer, and the students may misunderstand what is required of them.
Thus, the direction should be simple and adequate to enable the students to know:
(i) The time for completion of test,
(ii) The marks allotted to each item,
(iii) Required number of items to be attempted,
(iv) How and where to record the answers, and
(v) The materials, like graph papers or logarithmic table to be used.

Uses of Teacher-Made Tests:

1. To help a teacher know whether the class is normal, average, above average or below average.
2. To help him in formulating new strategies for teaching and learning.
3. A teacher-made test may be used as a full-fledged achievement test which covers the entire course of a subject.
4. To measure students’ academic achievement in a given course.
5. To assess how far specified instructional objectives have been achieved.
6. To know the efficacy of learning experiences.
7. To diagnose students learning difficulties and to suggest necessary remedial measures.
8. To certify, classify or grade the students on the basis of resulting scores.
9. Skillfully prepared teacher-made tests can serve the purpose of standardised test.
10. Teacher-made tests can help a teacher to render guidance and counseling.
11. Good teacher-made tests can be exchanged among neighbouring schools.
12. These tests can be used as a tool for formative, diagnostic and summative evaluation.
13. To assess pupils’ growth in different areas.

 

Types of questions

Short answer questions

Short answer questions require a reasonably short answer – anything between a few words and a paragraph or two. The number of marks allocated often gives an indication of the length required.
When studying for short-answer questions, concentrate on:
  • Terminology
  • names
  • facts
  • concepts and theories, and examples underpinning them
  • similarities and differences.
When answering short questions:
  • Plan your answers before you start writing.
  • Keep your answers short.
  • Mark any questions you aren't sure of, and go back to them at the end of your exam if you have time.
  • Try to answer all of the questions. 

Multiple-choice questions

When studying for multiple-choice questions, concentrate on:
  • terminology
  • names
  • facts
  • concepts and theories, and examples underpinning them
  • similarities and differences.
When answering multiple-choice questions
  • Quickly read all of the questions and their answers before answering any.
  • Mark the questions you aren't sure of.
  • Answer the questions you’re sure of first.
  • Then try the others. Start by eliminating any answers that are obviously wrong.
  • Watch out for negatives. For example, ‘Which of these is not…?’
  • Stick to your time allocation. If your time's up and you still haven't decided on an answer, guess or leave it out. 
  • Don't change your first answer unless you're really sure; your first instinctive choice is usually right.

Essay-type questions (long answers)

Essay-type questions require an answer that is structured in the same way as an essay or report. The answers can be anything from a few paragraphs to a few pages long. The mark allocation will often give an indication of the length required.
When studying for essay questions:
  • Try to identify possible questions you may be asked by reading past exam papers, corrected assignments and/or revision-type questions in your course material and textbook(s). However, check that the contents/format of the exam hasn't changed first.
  • Work out model answers.
  • Practise by writing answers under exam conditions. This means planning an answer, and writing it out within a timeframe.
When answering essay questions in the exam:
  • Read the question carefully and analyse it
  • Brainstorm ideas and plan your answer.
  • Write down some key words.
  • Start your answer by briefly rephrasing the question using your own words.
  • Use a new paragraph for each main idea or topic. Back up each topic with supporting detail such as examples, reasons and results.
  • Leave a few lines between paragraphs, as you may want to add information later.
In essay-type questions it is important to stick to your time allocation. If you spend too much time on a question it may mean you run out of time for other questions. If you run out of time, jot down your main ideas and key words so that the examiner knows where you were going with the essay – you may get a few additional marks in this way.
  • Leave wide margins for the marker, and try to write neatly and proofread as you go.
  • Read the questions and instructions very carefully before you start to ensure you know exactly what's required.
  • Once you’ve decided what you have to do, write down the formulas or methods you’re going to use (if applicable).
  • Show your workings. Even if your answer is wrong or incomplete you may still get some marks for showing you understand the process.
  • Use a pencil for drawings and diagrams in case you need to change anything. If required, you can go over them with pen once you're satisfied.
  • Label drawings and diagrams and include headings.

What is online examination?

Online examination means conducting a test online to measure the participants' knowledge of a given topic. With an online examination, students can take the exam online, in their own time and on their own device, regardless of where they live. All they need is a browser and an internet connection.

How online examination system works

The teacher or course builder creates an account with an exam builder, creates questions in the exam system and adds them to the exam, choosing between multiple-choice questions and free-text questions. The students are provided with a link to the online exam; they sign up and take the exam, and they see the results immediately afterwards.
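A minimal sketch of that workflow, with hypothetical question data: multiple-choice items are auto-scored against a key, while free-text items are set aside for the teacher to grade by hand:

```python
# Hypothetical exam definition: two multiple-choice items and one
# free-text item (free text cannot be auto-graded).
questions = [
    {"id": 1, "type": "mcq", "answer": "c"},
    {"id": 2, "type": "mcq", "answer": "a"},
    {"id": 3, "type": "free_text"},
]

def grade(submission):
    """Return (auto-graded score, ids of items needing manual grading)."""
    score, manual = 0, []
    for q in questions:
        given = submission.get(q["id"])
        if q["type"] == "mcq":
            score += int(given == q["answer"])
        else:
            manual.append(q["id"])  # teacher checks these by hand
    return score, manual

print(grade({1: "c", 2: "b", 3: "Evaluation is..."}))  # prints (1, [3])
```

This split is why results for closed questions can appear immediately, while open-text answers have to wait for the teacher's checking.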

           Advantages of an online examination

An online examination system has plenty of advantages: 
  1. It saves paper. 
  2. It saves the teacher time on administration. 
  3. It saves grading time, since closed questions are scored automatically.
  4. It saves the student money. 
  5. It's more secure.
           Disadvantages of an online examination
  1. You have to keep in mind that your students will take the exam on their own device, in their own time, with nobody to check up on them, so you have to adapt your questions to this situation. Open-text questions are possible, but they don't auto-grade, so you have to check them yourself.
  2. An online exam system is a little more susceptible to fraud.
PORTFOLIO ASSESSMENTS
A portfolio is a collection of student work that can exhibit a student's efforts, progress, and achievements in various areas of the curriculum. A portfolio assessment can be an examination of student-selected samples of work experiences and documents related to outcomes being assessed, and it can address and support progress toward achieving academic goals, including student efficacy. Utilizing portfolio assessments, students will be able to show a comprehensive correlation between skills taught and learned over an entire grading segment. This is in contrast to the standard testing that is done at generally the end of a unit or mid-marking period followed by a final examination.
A portfolio assessment is typically initiated right at the beginning of the class and is introduced with the core curriculum. The idea is to compile representations of both progress that is forming for a student on a given skill as well as a cumulative assessment.
Portfolio Assessments vs. Traditional Grading
Portfolio assessments show the depth and scope of a student's understanding of what is being taught. One of the most effective elements of a portfolio is the student's ability to show his or her contributions at a comprehensive level, which gives students a visual record of their own progress that grades alone cannot provide. There are also typically grading criteria, rubrics and other methods for developing the overall grading strategy that are incorporated into the portfolio assessment plan.
Portfolio assessment is a term with many meanings, and it is a process that can serve a variety of purposes.
Portfolio assessments have been used for large-scale assessment and accountability purposes, for school-to-work transitions, and for certification. For example, portfolio assessments are used as part of the National Board for Professional Teaching Standards assessment of expert teachers.
Types of Portfolios
While portfolios have broad potential and can be useful for the assessments of students' performance for a variety of purposes in core curriculum areas, the contents and criteria used to assess portfolios must be designed to serve those purposes.
  • Showcase portfolios exhibit the best of student performance.
  • Working portfolios contain drafts that students and teachers use to reflect on process.
  • Progress portfolios contain multiple examples of the same type of work done over time and are used to assess progress.
If cognitive processes are intended for assessment, content and rubrics must be designed to capture those processes.
Uses of Portfolios
  • Portfolio assessments can provide both formative and summative opportunities for monitoring progress toward reaching identified outcomes.
  • By setting criteria for content and outcomes, portfolios can communicate concrete information about what is expected of students in terms of the content and quality of performance in specific curriculum areas, while also providing a way of assessing their progress along the way.
  • Depending on content and criteria, portfolios can provide teachers and researchers with information relevant to the cognitive processes that students use to achieve academic outcomes.
  • Portfolio assessment has been promoted as a way to integrate assessment and instruction and to foster meaningful classroom learning.
  • Portfolio design should provide students with the opportunities to become more reflective about their own work, while demonstrating their abilities to learn and achieve in academics.
  • As students develop their portfolio, they are able to receive feedback from peers and teachers about their work.
  • Because of the greater amount of time required for portfolio projects, there is a greater opportunity for introspection and collaborative reflection. This allows students to reflect and report about their own thinking processes as they monitor their own comprehension and observe their emerging understanding of subjects and skills.
  • The portfolio process is dynamic and is affected by the interaction between students and teachers.
  • Portfolio assessments can also serve summative assessment purposes in the classroom, serving as the basis for letter grades.
  • Portfolios typically require complex production and writing, tasks that can be costly to score and for which reliability problems have occurred.
  • Generalizability and comparability can also be an issue in portfolio assessment, as portfolio tasks are unique and can vary in topic and difficulty from one classroom to the next.



RUBRICS
A rubric is an assessment tool that clearly indicates achievement criteria across all the components of any kind of student work, from written to oral to visual. It can be used for marking assignments, class participation, or overall grades.
Heidi Goodrich Andrade defines a rubric as "a scoring tool that lists the criteria for a piece of work, or 'what counts.'" For example, a rubric for an essay might tell students that their work will be judged on purpose, organization, details, voice, and mechanics.
A good rubric also describes levels of quality for each of the criteria. These levels of performance may be written as different ratings (e.g., Excellent, Good, Needs Improvement) or as numerical scores (e.g., 4, 3, 2, 1). Under mechanics, for example, the rubric might define the lowest level of performance as "7-10 misspellings, grammar, and punctuation errors," and the highest level as "all words are spelled correctly; your work shows that you understand subject-verb agreement, when to make words possessive, and how to use commas, semicolons and periods."
There are two types of rubrics: holistic and analytical.
1.      Holistic rubrics group several different assessment criteria and classify them together under grade headings or achievement levels.

2.      Analytic rubrics separate the different assessment criteria and address each one individually and in detail.

How to make a rubric

To create your own rubric, follow these steps.
1. List the criteria that will be used in assessing performance in the first column. 
For example, a musical performance might be rated for intonation, rhythmic accuracy, and tone quality and an oral presentation might be rated for content, organization, delivery and language. Be sure that your criteria are explicit. "Neatness" would not be a good criterion because the term "neat" is not explicit enough.
2. Determine your performance ratings / levels in the first row. 
Examples of performance ratings may be:
·         Descriptors (In Progress, Basic, Proficient, Advanced)
·         Numbers (1,2,3,4)
3. Start with the best and worst levels of quality, and then fill in the middle levels based on your knowledge of common problems. It may be helpful to sort examples of actual student work into three piles: the very best, the poorest and those in between.  Try to articulate what makes the good assignments good and the poor assignments poor.
4. After use, evaluate and revise rubric as needed.
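As a sketch of the analytic approach, a rubric can be held as a list of criteria rated on numbered performance levels, with an overall result computed from the per-criterion ratings; the criteria and levels below are hypothetical:

```python
# Hypothetical analytic rubric for an oral presentation: each
# criterion is rated 1-4 (1 = In Progress ... 4 = Advanced).
CRITERIA = ["content", "organization", "delivery", "language"]
LEVELS = {1: "In Progress", 2: "Basic", 3: "Proficient", 4: "Advanced"}

def rubric_result(ratings):
    """Average the per-criterion ratings into an overall level."""
    assert set(ratings) == set(CRITERIA), "every criterion must be rated"
    average = sum(ratings.values()) / len(ratings)
    return average, LEVELS[round(average)]

print(rubric_result({"content": 4, "organization": 3,
                     "delivery": 3, "language": 2}))  # (3.0, 'Proficient')
```

A holistic rubric, by contrast, would map the whole piece of work directly to a single level instead of averaging separately scored criteria.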

How to use rubrics effectively

·         Develop a different rubric for each assignment 

·         Be transparent

·         Leverage rubrics to manage your time

·         Include any additional specific or overall comments that do not fit within the rubric’s criteria.

·         Be prepared to revise your rubrics

·         Decide upon a final grade for the assignment based on the rubric. 

·         Consider developing online rubrics 

Uses of Rubrics
According to Heidi Goodrich Andrade:
·         Rubrics help students and teachers define "quality."
·         When students use rubrics regularly to judge their own work, they begin to accept more responsibility for the end product.
·         Rubrics reduce the time teachers spend grading and make it easier to explain to students why they got the grade they did and what they can do to improve.
·      Parents usually like the rubrics concept once they understand it, and they find rubrics useful when helping with homework.