Evaluating sustained STEM engagements: what we wish we had known
This is a story about evaluation: why good evaluation can be hard to achieve, and what we wish we had known when we started out.
NUSTEM was originally Think Physics, and we were set up in 2014 to develop and deliver a three-year project with 15 primary and 15 secondary schools.
The vision of that project was:
- To create a holistic widening participation and gender equality scheme based on partnership working that will lead to greater uptake of physics and related disciplines by children, and particularly girls, in the North East region.
- To build science capital in the North East region
- To provide a blueprint for a regional scheme that can be shared with others and a sustainable scheme for the North East.
To help us achieve this vision, we made our 15 partner secondary schools a sustained but flexible three-year offer: for students this included assemblies, class workshops, careers events, STEM clubs, informal activities, and visits to STEM activities run by other organisations; for teachers, we offered CPD.
To evaluate the project, we set up what (at the time) seemed like a fairly sensible evaluation plan. We selected a small subsample of partner schools to act as evaluation schools, and collected data from young people in Year 7, Year 9 and Year 11 in those schools at three timepoints: at the start in 2015 to establish a baseline, again in 2017, and finally in 2019 at the end of the project.
We also used comparator groups, to separate the effects of the programme from the natural development of young people as they get older. The evaluation tools we used were specifically designed to measure young people's science capital.
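To make the design a little more concrete, here is a minimal sketch of how that comparator-group comparison might be expressed, assuming the survey responses are exported as a simple table of composite science capital scores. The column names, the scores, and the difference-in-differences framing are all illustrative assumptions, not our actual analysis pipeline.

```python
import pandas as pd

# Hypothetical export of composite science capital scores from the surveys;
# column names and values are illustrative only.
responses = pd.DataFrame({
    "group":     ["partner"] * 4 + ["comparator"] * 4,
    "timepoint": [2015, 2015, 2019, 2019] * 2,
    "score":     [54.2, 51.8, 57.1, 55.4, 55.0, 53.9, 59.3, 60.1],
})

# Mean score for each group at baseline (2015) and at the end of the project (2019).
means = responses.groupby(["group", "timepoint"])["score"].mean().unstack("timepoint")

# Change within each group over the project, and the partner schools' change
# relative to the comparator group (a simple difference-in-differences view).
change = means[2019] - means[2015]
print(means)
print("Partner change relative to comparator:", change["partner"] - change["comparator"])
```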
However, when we analysed the evaluation data in 2019–2020, we found that our work in the partner secondary schools had not produced the intended impacts, at least not in the evaluation schools. In fact, the young people in our comparator group appeared to have higher science capital than those who had taken part in the programme.
Faced with this finding, we struggled to know how best to report it. We have spent quite a bit of time reflecting on the design of our schools offer and of our evaluation plan, and conducting new analysis of how the programme was implemented. In doing this we made some really useful discoveries, which we have written up as “Evaluating a complex and sustained STEM engagement programme through the lens of science capital: insights from Northeast England” in the International Journal of STEM Education.
What did we find?
- some of our evaluation schools took up few of the activities offered to them, or did not include the year groups we were measuring in the activities they did run;
- there were a lot of changes in the school curriculum starting in 2015 with new GCSEs being introduced, which limited the capacity of science teachers to support extra-curricular activities;
- it was really helpful to use a ‘science capital’ framework to design activities and to target them at potential predictors of young people’s future participation in science;
- it was challenging to use ‘building science capital’ as an outcome measure, and to try to quantify a change in the level of science capital between the start and end of the project.
We hope our findings are useful to others evaluating STEM outreach or education programmes, particularly those which are sustained or involve multiple schools, partners and organisations. By publishing our findings in a paper, we also hope to address the critique that outreach programmes focus too often on short-term and positive findings, and to help develop a constructive evaluation environment in STEM education, where educators can learn from one another and develop skills and knowledge about what works, based on robust evidence.
The paper is free to read and is available here.