Measure. Monitor. Maximise.

GMI follows Item Response Theory (IRT) and develops assessment instruments that align with the requirements of IRT modelling. Test design and measurement models are the two tightly interlinked pillars of standardised assessment. Data collection is the bridge between these pillars: rigorous data collection and processing lie at the heart of quality data-driven insights.


Assessment Design

The learning construct defines the progression of learning among students in a given subject. It is a detailed blueprint of the learning continuum we expect students to demonstrate, covering progression in concepts, student abilities and question types. The learning construct can be mapped to the National Curriculum Framework (NCF 2005) and to the grade-specific curriculum of each intervention.

GMI has extensive experience in designing oral and written assessments in languages, Maths, Science and Critical Thinking. We design instruments in English, Hindi and several other vernacular languages. Our assessments balance curriculum expectations with what a student needs to know in real life, so that we can provide you with actionable, granular feedback.


Measurement Model

GMI uses modern Item Response Theory (IRT) based measurement methods, such as Rasch modelling, to analyse and report results. Such statistical models allow us to build reliable, valid and standardised scores.
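For readers unfamiliar with it, the core idea of the Rasch model can be sketched in a few lines: the probability that a student answers an item correctly depends only on the difference between the student's ability and the item's difficulty, both expressed on the same logit scale. The function and values below are a minimal illustration of the standard Rasch (one-parameter logistic) formula, not GMI's actual scoring code or parameters.

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Probability of a correct answer under the Rasch (1PL) model:
    P = exp(theta - b) / (1 + exp(theta - b)),
    where theta is student ability and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability exactly matches item difficulty, success chance is 50%.
print(rasch_probability(0.0, 0.0))  # 0.5
# A stronger student facing the same item succeeds more often.
print(rasch_probability(1.0, 0.0))
```

Because the same formula governs every student-item pairing, estimated abilities and difficulties land on one common scale, which is what makes scores comparable across different test forms.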


Sophisticated Scales

Using Rasch modelling, GMI has built the proprietary PinAcLe® Scale. PinAcLe® stands for Progression in Achievement of Learning and is a vertically integrated scale on which students across classes can be measured. While other assessments track growth over the course of a single year, GMI tracks a student's entire educational journey.

Rigorous Data Collection & Processing

GMI takes data integrity very seriously: any malpractice leads to an incorrect understanding of a child's learning levels. To deter malpractice in high-stakes assignments, GMI administers its own assessments, deploying staff who are rigorously trained to understand the assessment context and ensure its fair conduct.

Additionally, GMI follows a unique protocol: an algorithm detects answer patterns indicative of copying and flags schools where malpractice may have occurred, and such schools are dropped from our study.
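One simple way such flagging can work, sketched purely for illustration (this is not GMI's actual algorithm), is to compare every pair of students in a school and flag the school if any pair's answer strings agree on an implausibly high share of items:

```python
from itertools import combinations

def flag_school(answers: dict[str, str], threshold: float = 0.9) -> bool:
    """Illustrative copy-detection heuristic: flag a school if any pair
    of students gave identical responses on more than `threshold` of
    the items. `answers` maps student IDs to answer strings of equal
    length (one character per item)."""
    for (_, a1), (_, a2) in combinations(answers.items(), 2):
        matches = sum(x == y for x, y in zip(a1, a2))
        if matches / len(a1) > threshold:
            return True
    return False

# Two students with identical 10-item answer strings trigger a flag.
school = {"s1": "ABCDABCDAB", "s2": "ABCDABCDAB", "s3": "BADCBADCBA"}
print(flag_school(school))  # True
```

Real detection methods are more sophisticated, weighting agreement on wrong answers more heavily than agreement on right ones, since shared correct answers are expected from honest, well-taught students.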


Learning from internationally acclaimed experts

We have spent years building our expertise through partnerships with international organisations such as the Australian Council for Education Research (ACER) and perfecting our statistical analysis in partnership with academics at the UC Berkeley Evaluations and Assessments Research Center (BEAR). We constantly innovate to keep up with client needs and evolving assessment best practices and technologies.