Monitoring and Evaluation Issues

A Knowledge Map on Information & Communication Technologies in Education

Guiding Questions:

  • What do we know about effective monitoring and evaluation practices and studies related to the uses of ICTs in education?
  • What large scale comparative studies of ICT uses in education exist, and what do they tell us about the monitoring and evaluation process?
  • What do we know about useful indicators related to the uses of ICTs in education?

Current knowledgebase:
 What we know, what we believe, and what we don’t know

  • Monitoring and evaluation is not receiving the attention it warrants
    A consensus holds that insufficient attention is paid to monitoring and evaluation issues and feedback loops during the program design process of most ICT in education initiatives.
  • The issues are known, the tools for tackling them aren't
    In general, many of the issues and challenges associated with ICT in education initiatives are known by policymakers, donor staff and educators. However, data on the nature and extent of these issues remain limited in most places because of the lack of monitoring and evaluation tools and methodologies dealing with the use of ICTs in schools and their impact on teaching and learning.
  • No common set of indicators
    There are no common international usage, performance and impact indicators for ICTs in education. Examples of monitoring and evaluation indicators and data collection methods exist from many countries. The process for developing ICT in education indicators is the same as that for developing indicators in other fields.
  • Few international comparative studies have been done
    There have been very few international evaluations of the impact of ICT use in education. Those that exist rely in large part on self-reported data.
  • Quantitative data related to infrastructure has been the easiest to collect
    Quantitative data, typically related to the presence and functionality of ICT-related hardware and software, are seen as the easiest to collect, and most monitoring and evaluation indicators and collection efforts have focused on such data. In general, there has been a greater emphasis on technical infrastructure issues than on program design, monitoring and evaluation, training, and on-going maintenance and upgrade issues.
  • A reliance on self-reported data
    Qualitative indicators have focused to a large extent on self-reported data.
  • Data collection methods are varied
    Data collection methods are quite varied. The use of the Internet to collect data, and for self-assessment, especially in LDCs, has not been very successful and is seen as problematic.
  • ICTs are not being well used in the M&E process 
    There is a general belief that the communication potential of ICTs to facilitate feedback from findings of monitoring and evaluation work, to create and sustain communities of interest/practice, and to provide information and communication linkages with other communities is being under-utilized.

Comments

General comments

  • Simply put: A lot of work needs to be done in this area if ICTs are to become effective and integral tools in education, and if accountability is to be demonstrated to donors and communities financing ICT-related initiatives in education!
  • Bias is a very real issue in most of the monitoring and evaluation work done on ICT in education issues across the board. Such biases are often introduced at the monitoring and evaluation design stage, and include a lack of relevant and appropriate control groups, biases on the part of ‘independent evaluators’ (who often have a stake in seeing positive outcomes), and biases on the part of those evaluated (who may understandably seek to show that they have made good use of investments in ICTs to benefit education). The opportunity for such biases (which are usually positive biases) is especially acute where there is a great reliance on self-reported data.
  • There appears to be a lack of institutional and human resource capacity among local groups to carry out such evaluations (which increases the cost of such activities and potentially decreases the likelihood that the results will be fed back into program design locally).
  • A general lack of formal monitoring and evaluation activities inhibits the collection and dissemination of lessons learned from pilot projects and the formation of the feedback loops necessary for such lessons to become an input into educational policy. Where such activities have occurred, they have focused largely on program delivery and are often specific to the project itself.
  • Dedicated ICT-related interventions in education that introduce a new tool for teaching and learning may show improvements merely because the effort surrounding such interventions leads teachers and students to do ‘more’ (potentially diverting energies and resources from other activities).

Applicability to LDC/EFA context

  • The issues highlighted above are particularly acute in most developing countries.
  • Developing in-country capacity for monitoring and evaluation work will be vital if ICT in education investments are to be monitored and evaluated at lower cost.
  • The opportunity costs of monitoring and evaluation work related to ICT in education interventions are potentially great, as there are typically few people able to do such work, and schools typically have little room in their calendars to participate in such activities. This is especially true where control groups are needed for interventions in rural and/or hard-to-reach areas – particular areas of interest for educational investments targeting education-related MDGs.
  • Attention to equity issues needs to be included in all monitoring and evaluation efforts related to the uses of ICTs in education. While the introduction of ICTs in LDCs is seen as a mechanism to reduce the so-called ‘digital divide’, in most cases such introductions serve to increase such divides, at least initially.

Some areas for further investigation and research

  • In general, there is a pressing need for work related to performance indicators to monitor the use and effects of ICT.
  • What would be a useful set of ‘core’ indicators that could be used across countries?
  • How have monitoring and evaluation studies related to the uses of ICTs in education been conducted in LDCs, and what can we learn from this?
  • How should monitoring and evaluation studies of the impact of ICTs in education in LDCs be conducted?

Some Recommended Resources
 to learn more ...

  • Assessing the Impact of Technology in Teaching and Learning: A Sourcebook for Evaluators [Johnston 2002]
  • Comparative International Research on Best Practice and Innovation in Learning [Holmes 2000]
  • Consultative Workshop for Developing Performance Indicators for ICT in Education [UNESCO- Bangkok 2002]
  • Developing and Using Indicators of ICT Use in Education [UNESCO 2003]
  • The Flickering Mind: The False Promise of Technology in the Classroom and How Learning Can Be Saved [Oppenheimer 2003]
  • Monitoring and Evaluation of Research in Learning Innovations – MERLIN [Barajas 2003]
  • The Second Information Technology in Education Study: Module 2 (SITES: M2) [ISTE 2003]
  • Technology, Innovation, and Educational Change—A Global Perspective. A Report of the Second Information Technology in Education Study, Module 2 [Kozma 2003]
  • World Links for Development: Accomplishments and Challenges, Monitoring and Evaluation Reports [Kozma 1999, 2000]


Excerpted from infoDev's Knowledge Maps: ICTs in Education -- What do we know about the effective uses of information and communication technologies in education in developing countries?

 

Suggested citation:
Trucano, Michael.  2005. Knowledge Maps: ICTs in Education.  Washington, DC: infoDev / World Bank.
