Over the last few years, the Teaching at the Right Level (TaRL) Africa team has worked closely with governments and other partners in Côte d'Ivoire (CDI), Nigeria, and Zambia to develop and grow contextually appropriate TaRL programs, so that education stakeholders focus on improving children's foundational literacy and numeracy levels over time.
In this note, we share key principles underpinning our approach to monitoring, measurement, and review (MMR) in TaRL programs. We also share adaptations we have been making along the way to ensure we respond to systems’ needs and strengthen the capacity of relevant stakeholders to engage with and act upon learning outcomes data.
Data at the center of the TaRL approach
A key element of the TaRL approach is to have everyone focus on improving children's learning outcomes. The main data collected is children's learning levels in basic reading and maths. This data is collected by the program implementation team rather than by external enumerators.
TaRL instructors assess each child using an oral, one-on-one assessment tool, which quickly and simply indicates whether a child has mastery of reading, number recognition, and operations, and if not, from which point below this level they need to progress. This information is then used by the instructors to organize children into groups according to their current learning levels instead of their grades or age. These learning outcomes and certain other complementary data flow up the system to inform action at different levels. See our measurement and monitoring page for more details on how to develop MMR tools and processes for TaRL programs.
Key principles that are kept in mind while setting up MMR systems
1. Measurement and data collection should be an integral part of the program and of instructional activity, rather than a secondary "task". Linking data to activity reinforces its importance for the people who both collect and use it. Such data is as important for implementers at the school/local level as it is for those who design and support interventions at the state/national level. We try to collect only what can be meaningfully acted on.
As an example, children's assessment data is used as follows:
- At the school level, by assessing every child one-on-one, a teacher is able to understand the real needs of each individual learner.
- The teacher also records the level of every learner on a sheet. For each subject, the teacher marks the highest-level task that the learner is able to do. For example, Obi here can read letters but struggles to read words.
- When this data is summarized, a teacher gets a clear picture of the class. In the table above, a majority of students (7 out of 11) are at low learning levels (the Beginner and Letter groups, i.e. they cannot read words).
- All teachers in a school then use the summarized data to group learners by level.
- At the cluster/zone/regional level, comparing results across locations helps mentors and government officials identify schools with low learning outcomes or low improvements.
- Lastly, at all levels, comparing assessment results over time gives a pulse check of the program: are children making progress?
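The record → summarize → group workflow described above can be sketched in a few lines of Python. This is an illustrative sketch only: the level names and sample records are assumptions, not the actual program tools or labels.

```python
from collections import Counter

# Ordered reading levels used in this sketch (illustrative names,
# not the official program labels).
LEVELS = ["Beginner", "Letter", "Word", "Paragraph", "Story"]

# One record per learner: the highest-level task the learner could do.
records = [
    {"name": "Obi", "reading": "Letter"},
    {"name": "Ada", "reading": "Beginner"},
    {"name": "Musa", "reading": "Word"},
    {"name": "Zara", "reading": "Letter"},
]

# Class summary: how many learners sit at each level.
summary = Counter(r["reading"] for r in records)

# Group learners by their current level rather than by grade or age.
groups = {level: [] for level in LEVELS}
for r in records:
    groups[r["reading"]].append(r["name"])

print(summary["Letter"])   # number of learners in the Letter group
print(groups["Letter"])    # names of learners grouped together
```

The same summary counts, aggregated across classes, are what flow up to the school and regional levels.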
2. While more data may give more information, too many process indicators can shift focus away from the key outcome in TaRL: children's learning levels. Moreover, many individuals' capacity to work with data is limited. We therefore cut "nice-to-have" indicators until we observe that the system is able to collect quality data and use it effectively. A lean set of good-quality data is better than a large but questionable one.
In Zambia, the Catch Up Program has reduced the turnaround time for data from schools to the national level from four weeks to two, helping enable near-real-time use of data. This has been achieved through:
- Improved collection templates and built-in data checks: data validation mechanisms highlight errors so that corrections can be made at the initial point of data entry.
- Simplified tools: Indicators that were not proving useful were removed.
- Reduced data flowing up: Specifically, more detailed data that is most useful at the school level remains there to inform programming, whereas only the key measures useful to others flow up.
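The kind of built-in check described above can be sketched as a small validation function. The field names, level labels, and sample rows here are assumed for illustration; they are not the actual Catch Up templates.

```python
# Allowed level names (assumed for illustration).
VALID_LEVELS = {"Beginner", "Letter", "Word", "Paragraph", "Story"}

def check_school_summary(row):
    """Flag common entry errors in one school-level summary row."""
    errors = []
    # Reject level names that are not on the agreed list (e.g. typos).
    unknown = set(row["by_level"]) - VALID_LEVELS
    if unknown:
        errors.append(f"unknown level names: {sorted(unknown)}")
    # The per-level counts must add up to the total learners assessed.
    if row["assessed"] != sum(row["by_level"].values()):
        errors.append("level counts do not add up to learners assessed")
    return errors

# A consistent row passes; a mistyped total is flagged immediately,
# while the sheet is still in front of the person who entered it.
good = {"assessed": 11, "by_level": {"Beginner": 3, "Letter": 4, "Word": 4}}
bad = {"assessed": 10, "by_level": {"Beginner": 3, "Letter": 4, "Word": 4}}
```

Catching errors like these at entry is what allows corrections to happen before the data flows up, rather than weeks later.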
3. To ensure meaningful use of data for ongoing program improvement, data needs to be easily available at every level at the right time. Data entry tools should be easy to fill in so that more time is spent using the data than collecting it. Systems and processes are designed with ground realities in mind, such as the resources available (e.g. smartphones, internet, laptops) and the capacity of personnel. In a government program, all processes are led by government officials as far as possible.
In Kebbi, Nigeria, for example, data flows up from the teachers as follows:
In this process, teachers get a holistic view of their class, headteachers can easily monitor the whole school, and mentors (who are responsible for multiple schools) can easily track all schools under their purview. The data they need to make decisions is immediately available to them.
The data collection process is adapted to every context. For example, due to low smartphone and internet penetration in Kebbi, the data stays in paper format for teachers, headteachers, and mentors. In another context, where teachers can enter the data digitally themselves, collection can be digital from the outset, eliminating several intermediate steps.
4. It is clear that having data at one's fingertips is not enough to ensure optimal action. Many education officials juggle multiple tasks, with different datasets crossing their desks. Some struggle to make sense of the figures presented; others do not see the possibilities for meaningful action among the many challenges they grapple with daily. In addition, data has traditionally often been used as a stick: those holding it have not necessarily been empowered to use it for improvement-focused change, which should be possible at every level. TaRL Africa works with the relevant departments and people to build capacity in data collection, understanding, and use, and we are constantly iterating on ways to do this well. To promote capacity and confidence in data use, we have introduced the following:
- Adding MMR-focused sessions in all levels of training: TaRL Africa has designed training resources to guide implementers on how they can make sense of the collected data and use it for decision-making. These include exercises to spot data errors in the tools and case studies to understand how to interpret and use data using real-life scenarios.
- Encouraging action-focused conversations during mentoring visits and review meetings: TaRL Africa field staff members facilitate discussions on data in review meetings and whenever they visit schools.
Learner outcomes data is a key feature of the TaRL approach and leads to immediate action by TaRL instructors and others. At TaRL Africa, we see our careful iteration with government partners to enhance the data systems supporting TaRL as a key part of our broader education systems strengthening work. Ultimately, we hope to empower education stakeholders at different levels to focus on what really matters, and to respond accordingly. We appreciate that this is a process in and of itself and, like our other work, is most likely to have an effect if done together with our government partners, at the pace at which they are able to absorb change.
We are excited to test new platforms in the near future and will continue to iterate on and share learnings from them.