Today we will be talking about literacy in the context of the report issued last week, "Literacy Skills for the World of Tomorrow". It is a joint OECD and UNESCO publication, and the reports in the press last week were based on it.
I will talk about the context in which this and other studies were carried out, in particular the concern with quality of education. I will then give a brief description of the Programme for International Student Assessment, PISA, on which this report is based. Dr. Ó Dálaigh will then talk about the main findings of PISA.
The need to improve the quality of education is a common refrain throughout the world, in both industrial and developing countries. While a concern with quality cannot be said to be new, there have been developments in the last decade in the way it is thought about.
Traditionally, quality was thought about in terms of inputs to the education system: the resources provided, physical facilities, curricula, teacher training and books. There has been a shift, however, to a focus on the outcomes of education - what students are learning as a result of their experiences in education - a shift from inputs to outputs when considering quality.
Techniques have been developed to appraise the achievements of an educational system as a whole. We think of assessment in terms of student achievements in the leaving certificate, but there are now techniques to aggregate student responses to a level where inferences can be made about how the system is performing. It is not feasible to make statements about the whole system, from beginning infant class to leaving certificate. Instead, certain ages or grade levels are picked and a representative sample of students at that age or grade level is chosen. Standard assessment instruments are then administered and the results aggregated to give a picture of the achievements of the system as a whole. Although I say that samples are used in such studies, a number of countries test all students. That happens in Britain, France and, increasingly, in the United States at state level. We have had these national assessments at primary school level for 20 years in basic literacy, English reading and, less frequently, in mathematics and Irish.
People were not satisfied with information about only their own systems and began to look for comparative data to measure how students in their systems are performing relative to students in other systems. Such comparative studies have existed since the 1960s.
However, they tended to be sporadic, underfunded and slow to report, and it was really only in the 1990s that Governments became interested in these comparative data. That interest has been expressed through the OECD, and during the late 1990s the OECD began to develop a programme for international assessments - that is, comparisons of students in member states.
This is regarded as very important as an index of human capital because it is believed that the prosperity of countries depends on the development of human capital. If students are not doing well in schools vis-à-vis competitors, that is likely to create not just educational but economic problems.
The last thing to be said about this in terms of context and quality is that the information obtained from these studies is not considered useful merely to describe the achievements of students in the system; it is supposed to act as a basis or a lever for reform. That is, when the information comes to policymakers and reveals strengths or weaknesses in student achievements, or indicates differences between students in different countries, gender differences or whatever, it is then up to the policymakers to make decisions about the allocation of resources to address those kinds of issues.
The purpose of the Programme for International Student Assessment is, as I have indicated, to obtain comparative data on student achievements, and it has been working in three literacy domains: reading, mathematics and science. We would normally associate the term "literacy" with reading, but PISA talks about mathematical and scientific literacy. The reason that term is used is that PISA does not want the assessments to be just a reflection of what is going on in school curricula. It wants them, as was indeed the case in previous international studies, to focus on the usefulness of the skills that students are acquiring for everyday life and for the future. Thus, this generic term "literacy" is used in those three areas.
Page six of my submission includes definitions of the three. Reading literacy is defined as the ability to understand, use and reflect on written texts in order to achieve one's goals, to develop one's knowledge and potential and to participate effectively in society. Mathematical literacy is defined as the capacity to identify, understand and engage in mathematics and to make well-founded judgments about the role that mathematics plays in an individual's current and future private life, occupational life, social life and life as a constructive, concerned and reflective citizen. Scientific literacy relates to the ability to think scientifically in a world in which science and technology shape lives. It is something that all citizens require and is not just for those who are going to be scientists.
PISA is a cyclical operation. The first assessment was in 2000, a second one has just been completed in 2003 and there will be a third in 2006. In 2000, reading literacy was the major domain, with what they call minor domains in mathematics and science; those areas were not fully sampled and shorter tests were used. In the 2003 assessment, just completed, the major domain is mathematics, with minor domains in science and reading. In 2006 science will be the major domain, with the other two as minor domains.
The reason they test in each area every three years is to provide data for monitoring trends, so that we will be able to compare literacy figures from 2000, 2003 and 2006. This gets a bit complicated because a new report has just come out. A report on the 2000 study was issued by the OECD in 2001. Some 28 of the 30 OECD countries participated, with only Slovakia and Turkey not taking part. Data from the Netherlands were rejected because the samples were not regarded as satisfactory, so we ended up with 27 OECD countries in 2000, along with four non-OECD countries.
In 2001 the study was repeated in ten further non-OECD countries, so the assessment is moving out to include less developed countries, though not, for example, in Africa. These countries are mostly in eastern Europe, with some in Asia and Latin America. The new report incorporates the 2000 data for OECD and non-OECD countries and the 2001 data for the further non-OECD countries.
The results do not have much implication for Ireland because all the non-OECD countries, with one exception, did very poorly. They all scored below the OECD countries, and the only one that came out of the woodwork and knocked Ireland out of its position was Hong Kong-China in 2001. I ask Dr. Ó Dálaigh now to give the main findings.