Applying Rosenshine to the science classroom


In 2012, Barak Rosenshine published the Principles of Instruction: a set of 10 research-based principles of instruction, along with suggestions for classroom practice. The principles come from three sources: (a) research in cognitive science, (b) research on master teachers, and (c) research on cognitive supports.

Principle 1: Begin a lesson with a short review of previous learning: Daily review can strengthen previous learning and can lead to fluent recall.

Principle 2. Present new material in small steps with student practice after each step. Only present small amounts of new material at any time, and then assist students as they practice this material.

Principle 3. Ask a large number of questions and check the responses of all students: Questions help students practice new information and connect new material to their prior learning.

Principle 4. Provide models: Providing students with models and worked examples can help them learn to solve problems faster.

Principle 5. Guide student practice: Successful teachers spend more time guiding students’ practice of new material.

Principle 6. Check for student understanding: Checking for student understanding at each point can help students learn the material with fewer errors.

Principle 7. Obtain a high success rate: It is important for students to achieve a high success rate during classroom instruction.

Principle 8. Provide scaffolds for difficult tasks: The teacher provides students with
temporary supports and scaffolds to assist them when they learn difficult tasks.

Principle 9. Require and monitor independent practice: Students need extensive, successful, independent practice in order for skills and knowledge to become automatic.

Principle 10. Engage students in weekly and monthly review: Students need to be involved in extensive practice in order to develop well-connected and automatic knowledge.

On this page, we have gathered a collection of guides for how the principles might be applied to the science classroom. The guides have been written by Adam Boxer, Head of Science, Totteridge Academy, North London.

This content was originally produced as part of the Accelerate programme, a Department for Education-funded early career teacher programme designed and delivered by Education Development Trust with the Chartered College of Teaching. It is used here with kind permission of Education Development Trust.

Principle 1: Begin a lesson with a short review of previous learning: Daily review can strengthen previous learning and can lead to fluent recall.

It is important that students become fluent in a vast number of core concepts and procedures in science education. Science is a ‘hierarchical’ subject, meaning that ideas build one on top of the other (Reif, 2010). In such a ‘spiral curriculum’, fluency in the major concepts and procedures is vital to support higher cognition.

For example, if students do not have a solid foundational understanding of cells, human anatomy and bodily processes they will not be able to comprehend advanced curricular material on pathogens and illnesses. What’s more, without foundational knowledge, the ‘higher order’ activities that we think typical of the professional scientist will be inaccessible to our students. Scientific creativity, evaluative thought and critical analysis will all be beyond their grasp.

Unfortunately, the nature of human cognition is against us in the sense that learning is hard, but forgetting is easy. What’s more, many students use ineffective study techniques like re-reading, highlighting or summarising (Bjork, 2013). 

Retrieval practice (i.e. the process of actively retrieving a memory from long-term storage), is a far superior aid, and Rosenshine notes that the ‘most effective’ teachers spend the first five to eight minutes of every lesson recalling prior learning. 

A vast amount of material is covered in secondary science and regular retrieval practice ensures that it is not taught once in Year 7 and then forgotten about until GCSE.

Into the classroom

Several factors must be considered when implementing retrieval practice in the science classroom. What seems like a fairly simple technique in reality requires careful thought and planning.

  • Routine

Retrieval practice is best implemented through a routine. A specific technique should be used and routinised so that students understand the expectations and benefits. Furthermore, a strong routine will ensure that retrieval practice actually takes place, rather than becoming something that the teacher wishes to implement, but never quite gets round to. 

In terms of specific techniques, asking short answer questions for students to complete in their books or on mini-whiteboards are common ways to introduce retrieval practice in the classroom.

Colleagues across the country have helped in the development of the “Retrieval Roulette”: a simple Excel program that can be pre-populated with questions and answers and used at the beginning of a lesson to generate a random assortment. A number of roulettes are now available online, covering the majority of secondary science (Boxer, 2018). Alternatives include software like Quizlet, Tassomai, Socrative or Anki. A simple program like this allows teachers to develop a quick and easy routine, and it is best used as a starter activity for all students to complete individually, followed by whole-class answers, feedback and correction.
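The core of a roulette of this kind is just a question bank and a random draw. As a minimal sketch (the question bank, topic names and function are illustrative, not from the actual Retrieval Roulette spreadsheet):

```python
import random

# Hypothetical question bank mapping topics to (question, answer) pairs.
QUESTION_BANK = {
    "cells": [
        ("What is the function of a chloroplast?",
         "It absorbs light energy for photosynthesis."),
    ],
    "bonding": [
        ("Why do giant ionic lattices have high melting points?",
         "Strong electrostatic forces between oppositely charged ions "
         "require a lot of energy to overcome."),
    ],
    "energy": [
        ("State the equation linking power, energy and time.",
         "P = E / t"),
    ],
}

def spin_roulette(bank, n=3, seed=None):
    """Draw a random assortment of question-answer pairs from the bank."""
    rng = random.Random(seed)
    pool = [qa for topic in bank.values() for qa in topic]
    return rng.sample(pool, min(n, len(pool)))

for question, _answer in spin_roulette(QUESTION_BANK, n=3, seed=1):
    print(question)
```

Pre-populating a structure like this once means the daily routine costs the teacher almost nothing at the start of each lesson.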

  • Types of knowledge

When thinking about implementing retrieval practice and designing your routine, there are a number of things worth considering:

The first is the distinction between two types of knowledge in science education: declarative and procedural, where the former comprises single concepts and the latter procedures (Reif, 2010).

Declarative knowledge:

A declarative example might be, ‘Explain why giant ionic lattices have high melting points’ or, ‘What is the function of a chloroplast?’ The vast number of such items and their relevance to future learning makes it imperative for teachers to regularly revisit them. 

Giving feedback on incorrect answers is vital (Roediger, 2011), but it is by necessity easier for some types of question than others, for two reasons:

Compound answers: Some answers contain multiple clauses, and if students are peer- or self-assessing it is difficult for them to ensure that they have included them all. In the ionic lattice example above, for example, a student may write: ‘It has strong electrostatic forces between ions.’ In some mark schemes, students must write ‘between oppositely charged ions’ or ‘that require a lot of energy to overcome’ otherwise a mark will not be awarded. Often, students might write a long paragraph with the correct answer buried in-between sentences of non-awardable content. It will ordinarily be beyond the ability of peers to correct this, and it can be time-consuming even for an expert teacher to identify and correct the specific learning deficit.

It is worth training students to give ‘lean’ answers. You could try putting the major components of an answer on the board in bullet points and explicitly show students how to scan either their own or model answers for the correct phrase. It is also beneficial to provide students with correct, lean, model answers for them to replicate so they develop their ability to give complex answers concisely. 

Almost right: Added to this is the potential for answers which are ‘almost right’. In response to ‘What is a compound?’, for example, student A might answer: ‘A substance made of two or more atoms chemically bonded together.’ This gives the teacher the opportunity to correct a small point and highlight the absence of the word ‘different’.

If the teacher did not ask student A for their response, they may have asked student B who might answer: ‘A substance made of two or more different atoms chemically bonded together.’ In this case, the teacher would affirm this as correct and move on, but would student A realise that they were missing a vital word from their own answer? Probably not.

The teacher’s Pedagogical Content Knowledge (PCK) must therefore be highly attuned to common errors. If a teacher does not have this PCK, then they must sample answers from a number of students to try and establish, for themselves and for the class, the accuracy of student responses.

Procedural knowledge:

Procedural knowledge relates to our ability to give answers which require set steps, for example formulae calculations in physics, balancing an equation, or calculating the magnification of a microscope. Embedding these in retrieval practice can be tricky and it may be worth drafting a set of ‘framework’ questions that you can adapt to different topics. For example:

  • A device transfers xJ of energy in y seconds. Calculate its power.
  • Give a half equation for the reaction between x and y.

Whenever these are used, the teacher can substitute relevant quantities or substances for x and y. This of course relies on teacher subject knowledge, but a bank of such questions can be extremely useful.
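A framework bank of this kind is easy to mechanise. A minimal sketch, using templates that mirror the two examples above (the template keys, helper names and value choices are illustrative assumptions):

```python
import random

# 'Framework' question templates; x and y are the placeholders the
# teacher substitutes with topic-relevant quantities or substances.
TEMPLATES = {
    "power": "A device transfers {x}J of energy in {y} seconds. Calculate its power.",
    "half_equation": "Give a half equation for the reaction between {x} and {y}.",
}

def make_question(key, x, y):
    """Substitute concrete values or substances into a framework template."""
    return TEMPLATES[key].format(x=x, y=y)

def random_power_question(rng):
    """Pick values that divide cleanly, so the arithmetic stays accessible."""
    power = rng.choice([2, 5, 10, 20])
    t = rng.choice([10, 20, 30, 60])
    return make_question("power", x=power * t, y=t)

print(make_question("half_equation", x="chlorine", y="bromide ions"))
print(random_power_question(random.Random(7)))
```

Generating the values so that the answer comes out whole keeps the retrieval focus on the procedure rather than on awkward arithmetic.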


It is likely that student misconceptions never disappear, but are suppressed (Shtulman, 2012), so it is worth embedding questions aimed at misconceptions into retrieval practice. For example, you may regularly ask students:

  1. Which is bigger, an atom or a cell?
  2. Why does the mass of magnesium increase when it is heated?
  3. On a winter’s day, which is colder: a lamppost or a wooden fence?
  4. What is between the nucleus of an atom and the electrons?

Such an approach allows the teacher to constantly revisit common misconceptions and help their students to suppress them continually. Without such activity, they are likely to resurface frequently. 

  • Application

Science students must be able to ‘apply’ or ‘transfer’ their understanding to new contexts. Often mistaken for a generic skill, this ability is highly context dependent and notoriously difficult to achieve, but retrieval practice has been shown to aid in this process (Eva, 2018).

  • To be learned that day

If students are due to be learning about fractional distillation, it is a good idea to do retrieval practice on compounds, molecules, boiling points and mixtures. This “activates” the learning and allows students to construct knowledge in that lesson and secure it to prior learning (Furst, 2019).

  • Optimal spacing

Optimal spacing refers to the best amount of time to leave between material being taught and it being reviewed through retrieval practice. Damian Benney (2016) has conducted an incredibly thorough investigation of this in science teaching, looking at how the optimal gap changes for different types of task; his full conclusions resist generalisation and are beyond the scope of this article.

A rule of thumb might be to allow a few days before revisiting and then start increasing the size of the gap steadily. 
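That rule of thumb amounts to an expanding schedule. A minimal sketch, assuming a short first gap that then grows by a constant factor (the default numbers are illustrative, not Benney's findings):

```python
def review_schedule(first_gap_days=3, growth=2.0, n_reviews=5):
    """Return the days (counted from initial teaching) on which to revisit material.

    Starts with a short gap, then steadily widens each subsequent gap.
    """
    days, day, gap = [], 0, first_gap_days
    for _ in range(n_reviews):
        day += gap
        days.append(round(day))
        gap *= growth
    return days

print(review_schedule())  # [3, 9, 21, 45, 93]
```

Tuning `first_gap_days` and `growth` per class is exactly the kind of judgement call the research leaves to the teacher.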



Benney D (2016) Optimal Time For Spacing Gaps (?). mrbenney. Available at: (accessed 3 January 2019).

Bjork R, Dunlosky J and Kornell N (2013) Self-Regulated Learning: Beliefs, Techniques, and Illusions. Annual Review of Psychology 64(1): 417–444.

Boxer A (2018) Retrieval Roulettes! A Chemical Orthodoxy. Available at: (accessed 3 January 2019).

Eva K, Brady C, Pearson M et al. (2018) The pedagogical value of testing: how far does it extend? Advances in Health Sciences Education 23(4): 803–816.

Furst E (2019) EfratFurst – Reconsolidation. Available at: (accessed 3 January 2019).

Reif F (2010) Applying cognitive science to education. Cambridge, MA: MIT Press.

Roediger H and Butler A (2011) The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1): 20–27.

Shtulman A and Valcarcel J (2012) Scientific knowledge suppresses but does not supplant earlier intuitions. Cognition 124(2): 209–215.

Principle 2. Present new material in small steps with student practice after each step. Only present small amounts of new material at any time, and then assist students as they practice this material.

Rosenshine’s observational studies confirm that where teachers present too much information at once, students cannot take it all in and learning is impeded. Added to this, without time to practise new material, it cannot be embedded. As such, expert teachers restrict the flow of information and allow time for practice in between introducing new material.

The theoretical basis for this is Cognitive Load Theory (CLT), which Dylan Wiliam (2017) has described as the ‘single most important thing for teachers to know.’ In short, CLT argues that for information to be retained in a learner’s long-term memory – i.e. for it to have been learnt – it must pass through a narrow bottleneck called the Working Memory (WM). The WM is severely limited and can hold approximately five items at any given point.

If a teacher introduces more than five ideas, WM will be overloaded, and nothing will be learnt (Deans for Impact, 2016). 

A vast number of instructional techniques have now been researched to act against the constraints of the WM. Here is an overview of them for science education:

  • Breaking it down

As discussed previously, secondary science knowledge is a hierarchical construction in the sense that new knowledge builds upon old knowledge. This means that within every new concept or proposition is buried an architectural foundation of prior knowledge.

Take fractional distillation, for example. You might want students to know that ‘alkane molecules of different lengths are separated in the fractionating column based on their chain lengths.’

This sentence may appear simple to the expert science teacher, but buried within it are many different concepts including:

  1. Definition of a molecule
  2. Definition of a mixture
  3. That substances in a mixture can be separated based on their physical properties
  4. That boiling point is a physical property
  5. Definition of alkane
  6. That alkanes have different chain lengths
  7. Different chain lengths result in different properties
  8. Longer chain length = higher boiling point.

All the above is required knowledge before the process of fractional distillation can be introduced. Without first solidifying student understanding of these propositions, the teacher will inevitably overload students’ WM (Reif, 2010). As such, teachers should think carefully about which specific propositions are needed in order to understand the material to be introduced that lesson. They should break it down into small chunks and introduce them slowly, with time for practice in-between. 

  • Minimally guided instruction (MGI)

A mainstay of many science classrooms, MGI refers to any time students are expected to learn something without a full and direct explanation from the teacher. Techniques here may include having students:

  • Learn information from a textbook/given web page
  • Research information freely on the internet
  • Construct their own definitions of key words from a text
  • Conduct peer-to-peer instruction
  • Use information stuck on the walls of a classroom to construct a text

Often described under the broader term “Discovery Learning,” evidence from lab-based studies in the cognitive sciences indicates that such techniques are likely to be ineffective (Kirschner, 2006). This is because the flow of information is unregulated by an expert teacher. 

In “Fully Guided Instruction,” the expert teacher is in complete control of the information flow to the student and can, as above, break it down to ensure cognitive overload does not take place (Kirschner et al., 2006). For example, in the teaching sequence above regarding fractional distillation, a teacher might:

  • Run a mini-quiz to recap core information
  • Explain how boiling points work in both directions
  • Give students drill questions on boiling points
  • Define alkanes and introduce chain lengths
  • Give students drill questions on all of the above

And so on, slowly building up to the process of fractional distillation. 

  • Practical science and inquiry

The majority of teachers in the UK believe that the purpose of practical science is “inquiry;” i.e. using observed phenomena to derive a scientific principle (Gatsby, 2017). There is now a growing body of research to indicate that inquiry-led learning is in fact detrimental to student understanding (see for example Ashman, 2018). A plausible scientific model to explain these findings would be the CLT framework. 

Whilst students are undergoing science practicals, their minds are incredibly busy with the complex physical manipulation of the experiment and following instructions correctly. Unfortunately, common exhortations to encourage students to be more “minds on” and to be actively thinking about the science (SCORE, 2009) are unlikely to be effective: students have no space in their minds left.

In light of this, practical science should rarely be about student inquiry into, or discovery of, a given concept. Students should be taught the concept explicitly beforehand, practise it to solidify understanding, and then conduct a practical having been shown explicitly every step. “Integrated instructions” or “visual practicals” can also be used to simplify the process while students are conducting the experiment (Boxer, 2017).

  • Dual coding

Your WM has two channels – visual and auditory – and you can alleviate some burden on the WM by activating them both at the same time. This works well for science teachers who can make extensive use of diagrams and visual supports in their explanations. The EEF’s recently published science guidance (EEF, 2018) notes that not all diagrams are effective.

I would argue that the best diagram would be one without any labels. This ensures that students do not start looking at a section of the diagram and trying to understand it by themselves, without fully guided instruction. 

The teacher can then use techniques, such as pointing at the diagram or cold calling students, to direct student attention and be in complete control of the flow of information to the students (Allison, 2017). 

  • Narrative and transcending CLT

There are times when it may not be advisable to break learning down into small chunks, however. We can take a teacher explaining the route that oxygenated blood takes from the lungs to the body cells as an example. The knowledge that the teacher wishes to convey is:

Pulmonary vein → left atrium → left ventricle → aorta → bodily arteries → capillaries → cells

As with fractional distillation, there are a number of propositions here, each with their own deep knowledge structure. A CLT-wary teacher might take the proposition that ‘oxygen passes into the blood in the pulmonary vein’ and have students practise it before moving on. The problem with this, however, is that the chunk is so small and so disembodied from the larger narrative that students may become bored and then fail to see a bigger picture.

There is a ‘sweet spot’ for optimal cognitive load: a point at which the amount of information in WM neither overburdens students nor bores them (Willingham, 2010). This will vary from student to student and classroom to classroom, and it is for the expert practitioner to determine its location.


Allison S (2017) Making every science lesson count. Carmarthen: Crown House.

Ashman G (2018) Inquiry learning. Filling the pail. Available at: (accessed 3 January 2019).

Boxer A (2017) Cognitive Load in Chemistry Practicals. A Chemical Orthodoxy. Available at: (accessed 3 January 2019).

Deans for Impact (2016) The Science of Learning. Available at: (accessed 3 January 2019).

EEF (2018) Improving Secondary Science. Available at: (accessed 3 January 2019).

Gatsby (2017) Good Practical Science Report. Available at: (accessed 3 January 2019).

Kirschner P, Sweller J and Clark R (2006) Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist 41(2): 75–86.

Reif F (2010) Applying cognitive science to education. Cambridge, MA: MIT Press.

SCORE (2009) Getting Practical. Available at: (accessed 3 January 2019).

Wiliam D (2017) Twitter. Available at: (accessed 3 January 2019).

Willingham D (2010) Why don’t students like school? San Francisco, CA: Jossey-Bass.

Principle 3. Ask a large number of questions and check the responses of all students: Questions help students practice new information and connect new material to their prior learning.

If students are not given the opportunity to practise new information, it will not be transferred to their long-term memory. Cognitive scientists refer to this process as ‘encoding’ and, despite fashions to move away from extensive student practice, they argue that it is vital to ensure learning occurs (Reif, 2010). 

It is also the case that teachers generally want students to connect their current and prior learning; this not only makes it easier to learn new information, but also improves their ability to apply knowledge in an unfamiliar domain (also known as transfer) (Didau and Rose, 2016). It is therefore vital that science teachers develop activities that foster effective student practice.

We will address procedural and declarative knowledge separately first, then finish by connecting new material to prior learning.

  • Procedural 

Knowledge of procedures is probably the easiest to design questions for; automaticity and variation are key when building in practice. When first approaching a problem, students will identify the commonalities between the problem and the teacher’s prior explanation or worked example (WE). If the problems are too far removed from the WE, the student will not be able to identify common features and thereby solve the problem. If the problems are too similar, the student will not learn to handle varying examples.

We therefore need to plan problems so they have step changes. The initial problem needs to be similar to a WE but gradually varying, one element at a time, to a more complex problem. 

Taking a simple example of the formula P=E/t:

An initial WE may say: ‘A device transfers 100J of energy in 20s. What is its power?’ After instruction, students would do a number of questions with exactly the same phrasing, but different values in the place of 100 and 20. Then, the questions would take their first variation in introducing minutes for students to convert. Schematically this appears as below, with practice for each point:

  a. WE
  b. Variation of values
  c. Variation of units:
    1. Time in minutes
    2. Time in hours
    3. Energy in kJ
  d. Change wording in question, e.g. ‘In 130 seconds, a toaster transfers 205J of energy. What is its power?’
  e. WE with a rearranged formula
  f. Repeat b→d for rearranged formula
  g. Repeat e→f for final rearrangement
  h. Mixed practice with all variations as above possible.
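The arithmetic behind that sequence can be sanity-checked directly. A minimal sketch of the formula, its unit variations and its rearrangements (function names and values are illustrative):

```python
def power(energy_j, time_s):
    """P = E / t, with energy in joules and time in seconds."""
    return energy_j / time_s

# The worked example, then vary only the values.
assert power(100, 20) == 5.0   # 100 J in 20 s -> 5 W
assert power(240, 30) == 8.0   # same phrasing, new values

# Vary the units, converting to J and s before applying the formula.
assert power(600, 5 * 60) == 2.0        # time given as 5 minutes
assert power(3.6 * 1000, 60) == 60.0    # energy given as 3.6 kJ

# Rearranged forms, practised the same way.
def energy(power_w, time_s):
    """E = P * t."""
    return power_w * time_s

def duration(energy_j, power_w):
    """t = E / P."""
    return energy_j / power_w

assert energy(5, 20) == 100
assert duration(100, 5) == 20.0
```

Each assertion corresponds to one step change in the schematic: same structure, one element varied at a time.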

  • Declarative knowledge

Declarative knowledge can be a lot harder to practise. For example, the proposition ‘Red blood cells have no nucleus to provide more space for haemoglobin’ does not lend itself to extensive student drill. You need other methods, and a number of recent blogs show examples of science teachers trying to implement practice on such concepts (Rogers, 2018). One straightforward example would be to provide model answers for students to dissect. For example:

Q: ‘Describe the structural adaptations of the red blood cell and explain how those adaptations relate to its function.’ 

You could give your students a good answer, such as: ‘The red blood cell has a biconcave structure to maximise surface area and increase diffusion in and out of the cell. It has no nucleus to allow for more space for haemoglobin and greater oxygen transport.’

And a poor answer might be provided as: ‘The red blood cell is shaped like a donut to make it bigger so gases can go into it. It doesn’t have a nucleus which means it has lots of space inside it for gases which can be moved around the body.’

You can use these two answers to ask short/closed questions, such as ‘Which gas diffuses in and out of the cell?’, ‘Where does this process take place?’ or ‘List and define from memory all keywords used in the first answer.’ 

A recent online symposium has explored how to use “writing” and literacy-based approaches to further practise new knowledge, expose misunderstanding and push for deeper thought (Raichura, 2018).

You can also use extended and elaborative questions, such as ‘Why is describing the shape as bigger insufficient?’ or ‘How can the good answer be improved by using the words ‘function’ and ‘adaptation’?’ Such approaches allow you to rigorously check responses of a number of students regarding the core material. 

By using questions of varied format and type, the teacher will be able to assess student understanding beyond the specific language and sequence used in the initial explanation.

  • Making links, encoding and retrieval

Encoding is the process of moving information from the working memory into the long-term memory. Practice doesn’t just encode current information; it also links it to prior learning. However, this must be done with care, because if students have to switch between current information and prior learning too much, their encoding can be hampered. It is important to ensure there has been plenty of practice on current information before adding links to prior learning.

An example might be in the study of alkanes as fuels in combustion. After students have practised balancing symbol equations for complete/incomplete combustion, as well as the various declarative concepts involved, give students the bond enthalpies for a given combustion reaction and have them calculate the overall energy change for complete and incomplete combustion and compare the difference. This enables students to first solidify the new information, then start linking it to prior learning in a way that is not distracting or confusing. 

  • Shed loads of practice (SLOP)

In response to the above, science teachers have been designing and implementing SLOP. This instructional technique looks to break material down into small segments with plenty of practice on each segment, slowly building up to a coherent and deeply interconnected whole (Boxer, 2017). SLOP will be discussed in further detail in principle nine. 



Boxer A (2017) Chemistry SLOP work. A Chemical Orthodoxy. Available at: (accessed 3 January 2019).

Didau D and Rose N (2016) What every teacher needs to know about…psychology. Woodbridge: John Catt.

Raichura P (2018) Writing in Science: A Symposium. Bunsen Blue. Available at: (accessed 3 January 2019).

Reif F (2010) Applying cognitive science to education. Cambridge, MA: MIT Press.

Rogers B (2018) Characteristics of Science Vocabulary and Some Classroom Tools. Reading for Learning. Available at: (accessed 3 January 2019).

Principle 4. Provide models: Providing students with models and worked examples can help them learn to solve problems faster.

When first learning a new concept, students’ knowledge of it is fragmented, transient and disconnected. It is fragmented in the sense that they may not have taken in all that you have said, and different students will have taken in different segments (Taber, 2002). It is transient in the sense that without immediate practice it will not be encoded and moved into the long-term memory so it could rapidly fade. It is disconnected in the sense that it will not yet be connected to prior learning and lacks the flexibility which comes from having one area of knowledge deeply connected to many others (Willingham, 2002). 

As such, it is vital for teachers to support students through this cognitively turbulent time as they start to grasp new material. Expert teachers will employ extensive models and worked examples to support students in this as they approach independent practice.

  • Thinking aloud/modelling

As teachers, our knowledge of science far outstrips that of our students. There are many things which are obvious to us, and research shows that experts struggle to understand what it is like to learn something from the perspective of a novice (Heath and Heath, 2006). Often termed “expert blindness”, this is a cognitive bias which prevents us from accurately anticipating student knowledge and understanding.

Thinking aloud is an excellent strategy to avoid this cognitive bias. For example, if a teacher is constructing a symbol equation of neutralisation on the board, they may say, ‘Hydrochloric acid, which is HCl’ and then write ‘HCl(aq)’ on the board. For the teacher and some of their students it may be obvious that acids are aqueous, but there is a chance that this will confuse some students. This cognitive distraction will inevitably reduce the impact of instruction.

Science teachers should take care to make every step explicit so that students do not need to desperately search through their long-term memory to gain a cognitive foothold in your explanation. In the example above, the teacher should have written ‘HCl(aq)’ on the board then either explicitly explained what the (aq) stands for or asked a student to do the same. 

Thinking aloud is also a way to encourage metacognition in your students (EEF, 2018). Science education blogger Pritesh Raichura describes his approach to teaching interpretation and rearrangement of physics formulae. By following one method and making each step completely explicit, he fosters a counter-intuitive independence of thought in his students. The strategy that he uses can be used by them without hesitation or error because every step of it has been explicitly accounted for (Raichura, 2017).

  • Worked examples

A number of experiments have shown that worked examples are highly effective strategies for students to use when conquering new material (Kalyuga, 2003). It is likely that they are of limited use when the problem is extremely straightforward to the learner, however, so think carefully about which students you use them with. Whilst it might be a good idea to give worked examples to Year 7 students showing how to locate and identify the name of an element from its symbol, it would not be for GCSE chemists.

As discussed in principle three, it might appear at first that worked examples are inappropriate for declarative knowledge. However, it may be worth thinking about it as a ‘problem’ that can also be subject to worked examples. For instance, if students need to be able to identify the adaptations on a polar bear (which is declarative), it is worth doing worked examples showing arctic wolves and brown bears first.  

  • Problem pairs

Some research has shown that worked examples are most effective when followed immediately by students independently doing a parallel example with minimal variation from the worked example (Ashman, 2016). 

This has led to some advocating for example-problem pairs, where the example and the problem are laid next to each other. One way to do this would be by splitting your board in two and having a worked example down one side, and a problem on the other for students to solve. If you were teaching how to derive an ionic equation for NaOH + HCl for example, you would verbally describe your steps and make explicit every stage. Following this, you could write the equation for KOH + HCl and have students derive the equation for themselves. The next worked example might be to use H2SO4 to introduce stoichiometry of ions in solution.
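The NaOH + HCl worked example lends itself to being written out step by step on the worked-example side of the board, for instance:

```latex
% Full equation
\mathrm{NaOH(aq)} + \mathrm{HCl(aq)} \rightarrow \mathrm{NaCl(aq)} + \mathrm{H_2O(l)}
% Rewrite the fully dissociated aqueous species as ions
\mathrm{Na^+ + OH^- + H^+ + Cl^-} \rightarrow \mathrm{Na^+ + Cl^-} + \mathrm{H_2O}
% Cancel the spectator ions (Na+ and Cl-) to leave the ionic equation
\mathrm{OH^-(aq)} + \mathrm{H^+(aq)} \rightarrow \mathrm{H_2O(l)}
```

The parallel KOH + HCl problem then follows the same three lines with K⁺ as the spectator cation, which is exactly the minimal variation the research recommends.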

The research here is not completely clear: in one experiment using troubleshooting of a faulty parallel circuit, researchers found little difference in outcomes between students who had only studied worked examples and those who had done problem pairs (van Gog et al., 2011). To pay heed to all the research, the expert teacher should start with a full, explicitly described worked example, then move on to problem pairs, then more independent practice.



Ashman G (2016) Example-problem pairs. Filling the pail. Available at: (accessed 3 January 2019).

Heath C and Heath D (2006) The Curse of Knowledge. Harvard Business Review. Available at: (accessed 3 January 2019).

EEF (2018) Improving Secondary Science. Available at: (accessed 3 January 2019).

Kalyuga S, Ayres P, Chandler P et al. (2003) The Expertise Reversal Effect. Educational Psychologist 38(1): 23–31.

Raichura P (2017) Equations in Science. Bunsen Blue. Available at: (accessed 3 January 2019).

Taber K (2002) Chemical Misconceptions: Theoretical background. Cambridge: Royal Society of Chemistry.

van Gog T, Kester L and Paas F (2011) Effects of worked examples, example-problem, and problem-example pairs on novices’ learning. Contemporary Educational Psychology 36(3): 212-218.

Willingham D (2002) Ask the Cognitive Scientist. American Federation of Teachers. Available at: (accessed 3 January 2019).

Principle 5. Guide student practice: Successful teachers spend more time guiding students’ practice of new material.

We have already seen the importance of worked examples and student independent practice in principles three and four; principle five is the stage of instruction that sits between them. In worked examples and modelling, students are entirely dependent on what the teacher is saying in that moment. In independent practice, students are only dependent on the teacher in the sense that they were taught by them a matter of minutes ago.

Guided practice (GP) is an intermediate stage in which students think for themselves rather than receiving information, but still under the active control of the teacher. This is in keeping with what cognitive scientists refer to as the guidance fading effect: teachers should aim to fade the amount of guidance students receive as they develop expertise.

Rosenshine gives three examples of simple guided practice techniques: rephrasing, elaborating and summarising. What they have in common is that even though students are working by themselves, they remain closely tied to the flow of information that has been presented and regulated by the teacher. In the sections below we will examine Rosenshine’s suggestions and add a number of others.

  • Rephrasing

Rephrasing is of questionable value in science teaching. One approach would be to emphasise dialogic teaching (Mortimer and Scott, 2010), which holds that students should be allowed to develop their own answers and express scientific concepts in their own way. Others would argue that a hallmark of scientific study is the use of precise definitions, and that it would be foolhardy not to present students with such definitions, or to expect them to phrase them differently from how they were presented (Reif, 2010).

  • Elaborating

Elaborative interrogation pushes the learner’s thinking beyond the information presented and asks them to link it to prior knowledge in the same domain, which aids encoding (see principle three). GP is an excellent time for elaborative interrogation because novice students are not well placed to ask themselves ‘why’ and ‘how does this relate to’ questions, or to monitor the quality of their responses, unaided.

After a teacher has explained a given concept, process or procedure and spent time with the class pushing the thinking further, all the while guided by the teacher’s subject knowledge, pupils will be well primed for independent practice. An example might run as follows:

Teacher: ‘Who can summarise for me the structure of graphite…[wait time]… Daniel?’

Daniel: ‘It has lots of carbon atoms arranged in flat layers.’

Teacher: ‘There is something missing from Daniel’s answer…[repeats answer verbatim and gives wait time]…Chloe?’

Chloe: ‘Daniel didn’t say the delocalised electrons.’

Teacher: ‘Good, and why is that important…[wait time]…Jess?’

Jess: ‘Because they are what let it conduct electricity.’

Teacher: ‘And what is that similar to that we have seen already…[wait time]…Sam?’

Sam: ‘Metals can conduct electricity because they have delocalised electrons.’

Teacher: ‘Good. And how is that different to how giant ionic lattices conduct electricity…?’

And so on. This dialogue is important because, during instruction on the structure and bonding of graphite, the teacher may deliberately have avoided incorporating elements from other topics so as not to crowd students’ working memories. Once that stage is complete, however, students can start to appreciate links to other domains, and are now sufficiently primed for extensive independent practice to tease these out further.

  • Summarising

As with rephrasing, summarising can be of questionable value for science teachers. On the one hand, the cognitive effort associated with summarising information has been shown to produce learning gains, most strikingly when compared with taking notes on a laptop, which tend to be verbatim rather than a summary (Mueller and Oppenheimer, 2014). However, if you are following the advice from previous principles, you will already have given students the leanest possible explanation, leaving little for a summary to compress. In a more open setting, after explaining, say, the bioaccumulation of DDT, a teacher might ask students to write their own summary of the process. Such an open activity leaves a large amount of room for error, though, and students are not always ready for such an open task before they have practised the smaller segments of the whole.

With each of the above, it is important to note that we are not dealing with memory strategies, for which the evidence is mixed (Bjork et al., 2013). We are using them as tools to help the teacher bridge the gap between their explanation and student practice.

  • Mini-whiteboards

During guided practice it is important that the teacher receives rapid feedback from students as to whether they have grasped the key content; otherwise, valuable time can be lost or, worse, students can embed mistakes. Mini-whiteboards are an excellent route to quick feedback on the understanding of the whole class. Lemov’s ‘show call’ (Lemov and Atkins, 2015) involves taking a piece of student work and projecting it onto the board for discussion and interrogation, and combines well with mini-whiteboards to drive whole-class discussion.

  • Blanked worked examples

After worked examples, teachers can use ‘blanked’ worked examples. These are problems with exactly the same layout as worked examples, but have some information left blank. This causes a student to fully engage with the worked example they have already covered and use it as a support for further practice. 

  • Providing answers

A good way to build quality assurance into the GP process is to give students the answers to the problems you have set. If you are working on formulae, equations or some other quantitative process, giving students the answers in advance and expecting them to show how to get from the problem to the solution ensures that they become alert to any mistakes they make.

It is worth noting Rosenshine’s insistence that, if the information received during GP demands it, the teacher should re-teach the material. We are probably all familiar with going round a room explaining the same thing to multiple students: an undoubtedly inefficient use of time. If a few students have got it wrong, it is likely that many others have as well, at which point the best course of action is to pause the class and go back to the beginning.



Bjork R, Dunlosky J and Kornell N (2013) Self-Regulated Learning: Beliefs, Techniques, and Illusions. Annual Review of Psychology 64(1): 417–444.

Lemov D and Atkins N (2015) Teach like a champion 2.0. New York, NY: John Wiley & Sons.

Mortimer E and Scott P (2010) Meaning making in secondary science classrooms. Maidenhead: Open University Press.

Mueller P and Oppenheimer D (2014) The Pen Is Mightier Than the Keyboard. Psychological Science 25(6): 1159–1168.

Reif F (2010) Applying cognitive science to education. Cambridge MA: MIT Press.

Principle 6. Check for student understanding: Checking for student understanding at each point can help students learn the material with fewer errors.

Encoding is a tricky business. No teacher should assume that flawless student learning will occur just because they have followed Rosenshine’s principles of explanation and practice. Learning is not entirely dependent on the teacher: it relies on student attention and prior knowledge, both of which contain significant elements that are outside of the teacher’s control. Mistakes are almost guaranteed.

Data and inferences

Checking for understanding is about the teacher gathering data to make an inference about student understanding (Ghataura, 2018). Rosenshine points out that the least effective teachers ask students if they have any questions before handing out practice work. In this case, however, it is hard to make a valid conclusion. Students may nod their heads, for example, but it might be that they did not want to admit they had a question or, more likely, they thought they understood when they really didn’t. 

More effective teachers ask specific questions, such as, ‘Where do the reactants for photosynthesis come from?’ Even in this case, however, the teacher must appreciate the complexity of making a valid inference.

We will take a number of scenarios based on this question. A teacher has just taught the word equation for photosynthesis and explained where the reactants come from, where the products go to, and how light is involved.

They might follow with either of the scenarios below:

Scenario 1: The question above is asked verbally. Five hands shoot up and the teacher chooses one. The student answers, ‘from the air and the soil.’ All that the teacher can validly conclude is that this particularly eager student knows the answer; they can’t infer anything about anyone else.

Scenario 2: The teacher improves on scenario 1 by asking the question and allowing wait time. They choose a student to answer and ask the rest of the class to put their hands up if they agree. Three-quarters of the class do so. The teacher might infer that one quarter do not fully understand and the rest do, but realistically students may simply be agreeing because the first student is normally right, or the teacher may have given away that the answer was correct via a ‘tell’ (Lemov and Atkins, 2015).

Scenario 3: The teacher hands out mini-whiteboards (MWBs) and students answer the question on them. Using an appropriate routine to prevent students from looking at one another’s answers, the teacher ascertains that most of the class got the answer right. A valid conclusion would be that the majority of students understand where the reactants for photosynthesis come from. However, the teacher cannot validly infer that the students understand where the products go to or how light is involved. At this point the teacher should use a variety of techniques from principles one, three and five, as well as the excellent practical discussion in Perks (2018), to widen their evidence base and make more valid inferences.

Drilling down

Let’s say in scenario 3 some students answer, ‘oxygen leaves the plant through the leaves and glucose is used by the plant for building and repair’. You could assume that such a student does not understand at all but, actually, only one small point is wrong: they have confused reactants and products. This is referred to as a ‘substantive learning impediment’ (Taber, 2002), in the sense that they have failed to connect new learning to old learning.

Let’s say a different student writes ‘from the air, soil and sun’. The inferences to be made from this student’s response are more subtle; whilst it is most likely that they have a faulty understanding of energy, mass and what is appropriate to call a ‘reactant’, it also could be that they think some of the reactants come from the sun. 

To address this, the teacher must understand how knowledge is constructed in science. In a very thorough discussion, Ruth Walker (2018) argues that science knowledge can be thought of as a series of nodes of information connected by conceptual explanatory links. It is vital that the science teacher understands how knowledge is constructed in order to identify plausible explanations for where students’ learning is deficient. In this particular example, the teacher must have a sound knowledge of chemical reaction terminology as well as the different aspects of photosynthesis and how the reactant origins, product destinations and energy transfers interplay with the overall equation.

In the long term

None of this, of course, tells us anything about the long term. The human mind forgets information rapidly, and just because students understand material today does not mean they will do so tomorrow, next week or next month. Cognitive scientists refer to this as ‘learning versus performance’: performance is the visible responses and behaviours that a student exhibits today; learning is an invisible process that takes place across many days and weeks and can only be inferred in the long term (Bjork, 1994).

As such, science teachers should use a range of strategies (like the ones discussed here or in principle five) over time in order to gain accurate data about student learning in the long term. There is an extensive discussion of checking for understanding, with practical techniques, in Lemov’s Teach Like a Champion (Lemov and Atkins, 2015). For a more science-specific approach, see a recent online symposium featuring a number of the science teachers already mentioned along with Dylan Wiliam (Boxer, 2018).



Bjork RA (1994) Memory and metamemory considerations in the training of human beings. In: Metcalfe J and Shimamura A (ed.) Metacognition: Knowing about knowing (pp. 185-205). Cambridge, MA: MIT Press.

Boxer A (2018) AfL in Science: A Symposium. A Chemical Orthodoxy. Available at: (accessed 3 January 2019).

Ghataura D (2018) Validity of Formative Assessment. Deep Ghataura. Available at: (accessed 3 January 2019).

Lemov D and Atkins N (2015) Teach like a champion 2.0. New York: John Wiley & Sons.

Perks M (2018) Planning for Effective Assessment in Science. docendo discimus. Available at: (accessed 3 January 2019).

Taber K (2002) Chemical Misconceptions: Theoretical background. Cambridge: Royal Society of Chemistry.

Walker R (2018) The nature of school science knowledge and why Adam’s SLT was wrong. Available at: (accessed 3 January 2019).

Principle 7. Obtain a high success rate: It is important for students to achieve a high success rate during classroom instruction.

Success rate (SR) roughly refers to the proportion of work that students are getting right. If students answer everything correctly, the work may not be challenging enough to make them think hard, a prerequisite for learning (Coe, 2013). If they answer too much incorrectly, the work can be frustrating and demotivating at best, and damaging at worst, because students will be practising mistakes. Empirical research cited by Rosenshine suggests aiming for a success rate of around 80 per cent when students are engaged in independent practice.

There is some evidence that, to build long-term motivation, students need to experience success: we enjoy the things that we become good at (Garon-Carrier et al., 2015). Of course, human motivation is messy and there are likely many other effects at play but, in terms of things that teachers can control, helping students to enjoy their work through achieving success is a worthy direction. This is, of course, best achieved by implementing practices found to be effective at promoting learning, like Rosenshine’s principles.

It is very difficult to predict whether work will be too easy or too difficult; I have been surprised when ‘weaker’ sets have grasped something very quickly and ‘stronger’ sets have taken a little longer. When work is too hard, I usually find there are three scenarios: students won’t start it, they make lots of mistakes, or they give an answer that is correct but not sufficient. I’ve mapped out all three, with practical suggestions, below:

Won’t start it

If students are reluctant to begin a task it is likely that they are completely overwhelmed by the material and have not followed the explanation or guided practice. This is normally true of procedural knowledge and complex declarative knowledge:

  • Procedural knowledge

If a student is not starting a problem, such as rearranging a formula or balancing an equation, it could be that the quality of explanation was not sufficient. If you realise this is a widespread problem (which you will probably pick up when checking for understanding in principle 6), you should reteach.

It’s also possible that the student is struggling to find the commonalities between the ‘problem state’ and the worked examples they have already seen. They see different numbers and letters and cannot connect what is on the page in front of them with what was delivered from the whiteboard. If this is the case, it is important during guided practice to make those connections explicit so that the student can begin the work. Be wary of prompting students too much, though (for example, ‘Where in the equation is the quantity for E?’), and remember that as time goes on you should fade your intervention and feedback and remove cues (Sweller et al., 2011).

  • Complex declarative knowledge

As per principle 2, you should ensure that material is broken into small segments, explaining each one and allowing for practice in between. If you don’t, cognitive overload can occur, making learning fragmented and incomplete. As discussed, though, it is not always desirable to use the smallest chunks possible. For example, when explaining the greenhouse effect, breaking the process into very small pieces can ruin the narrative and cause students to lose sight of the ‘big picture’. Your end goal will be for students to explain the whole process, but they will not be ready to do that until they have practised the smaller steps.

For example, if you ask students to ‘describe and explain what causes the greenhouse effect’, they might not know where to start. A better approach might be a series of short questions like:

  • Where does light initially come from?
  • What is the name of the energy transfer by which it reaches us?
  • Which wavelengths of light are reflected by the atmosphere?
  • Which wavelengths of light pass through the atmosphere to the Earth’s surface?
  • At the Earth’s surface what occurs to the light?
  • Describe and explain the greenhouse effect. Use information from previous answers to structure your response.

Making mistakes

If students make too many mistakes in independent work, there is a good chance that they will embed those mistakes in their long-term memory, hampering future knowledge and understanding. For example, when teaching the balancing of equations, a key mistake students make is to alter the ‘small’ numbers in the formulae, resulting in balancing an equation like H2 + O2 → H2O as H2 + O2 → H2O2. The student has correctly identified that there are fewer oxygen atoms in the products than the reactants, but has not appreciated that the small numbers cannot be changed, as this would produce fundamentally different substances. It is vital that the teacher rapidly intercepts such errors to prevent students from embedding them.
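To make the contrast explicit on the board, the two versions of the equation can be set side by side (notation only; this is the same water example as above):

```latex
% Incorrect: changing the small (subscript) number creates a different
% substance, hydrogen peroxide, rather than balancing the equation
\mathrm{H_2 + O_2 \longrightarrow H_2O_2}
% Correct: only the large (coefficient) numbers may change
\mathrm{2H_2 + O_2 \longrightarrow 2H_2O}
```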

Correct insufficient answers

A quirk of science teaching is that often students will write things which are correct, but do not fully address the problem. When describing how useful adaptations are passed to the next generation, for example, a student might write: ‘The useful adaptation allows the animal to live a longer life, increasing the chances of it having offspring and passing on its genes.’

This answer is correct, but misses the important corollary which is that animals with less useful adaptations are less likely to survive and pass on their genetic material. Depending on the nature of the class, students may have been expected to discuss competition as well. 

This complicates the matter of monitoring the 80 per cent success rate, as students will think their answer is correct during a whole-class feedback session unless you are explicit that a sufficient response must have certain characteristics. As an expert teacher, you have the deep subject knowledge needed to help all students know what success looks like. Indicating the number of marks available, and which points are credited, can also help. For example, in response to the question ‘What occurs when sodium bonds with chlorine?’, the teacher might write the following bullet points on the board:

  • Sodium transfers
  • Chlorine receives
  • One electron

and prompt students to look for these points in their answers. The potential downside is that students then focus on the marks rather than the actual feedback (Kluger and DeNisi, 1996), so the teacher must cultivate a careful environment to mitigate this effect.

Break and mark

A useful technique is, after the explanation, to give students 20 questions to start and then stop the class when most students are around halfway through. The teacher can then go through the answers verbally, asking lots of questions of a wide range of students, and assess how many students got how many questions right through a quick hands-up. This allows the teacher to maintain a high success rate and build in a check for understanding, preventing possible mistakes from being encoded.

Standardise the format

This technique from Doug Lemov (Lemov and Atkins, 2015) involves having students lay out their work in a way that enables the teacher to see easily whether their answers are correct, so that the teacher does not have to hunt through one student’s unclear and messy work, wasting precious moments that could be spent checking the work of five students. For example, when students are performing calculations, you can ask them to do all their working in their book as normal, starting from the left margin, but to put their final answer at the right edge of the page inside a green square. The teacher can then instantly scan down an entire page of working and see whether the answers are correct. In whole-class feedback, students can then mark their work in red pen, adding a tick or cross to their final answer.

The teacher can ask students to hold up their work and quickly and easily get a feel for the distribution of success across an entire class. The efficiency of this approach means that the teacher can keep a careful eye on the success rate of the class and increase or decrease their support as necessary.  


Coe R (2013) A triumph of hope over experience. Available at: (accessed 3 January 2019).

Garon-Carrier G, Boivin M, Guay F et al. (2015) Intrinsic Motivation and Achievement in Mathematics in Elementary School: A Longitudinal Investigation of Their Association. Child Development 87(1): 165–175.

Kluger A and DeNisi A (1996) The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin 119(2): 254–284.

Lemov D and Atkins N (2015) Teach like a champion 2.0. New York: John Wiley & Sons.

Sweller J, Ayres P and Kalyuga S (2011) Cognitive load theory. New York: Springer.


Principle 8. Provide scaffolds for difficult tasks: The teacher provides students with temporary supports and scaffolds to assist them when they learn difficult tasks.

A scaffold is an instructional intervention that helps a student move from initial, shaky understanding to full understanding through practice. In that sense, it is a form of guided practice (principle five) that helps to alleviate the burden on working memory.

Traditional cognitive load theory distinguishes intrinsic load, extraneous load and germane load. Reif’s (2010) formulation sets these variables aside and is far simpler:

  • Cognitive load = task demands/available resources (Reif, 2010, p. 362)

As the demands of the task increase, so does load; as the available resources increase, load decreases. A major contributor to available resources is the student’s internal resources: their prior knowledge and the instruction they have just received. Another contributor is their external resources. The simplest example is the use of a calculator when solving a difficult problem. If, each time a student had to solve a titration calculation, they had to perform long division and multiplication by hand, not only would the task take far longer and introduce more opportunities for error, but the ‘task switching’ between different parts of the problem would greatly increase its difficulty (Sweller et al., 2011).
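As an illustration of the arithmetic a calculator offloads, a titration calculation follows a fixed pattern: n = c × V, then the stoichiometric ratio, then c = n ÷ V. The sketch below uses hypothetical illustrative figures (a 1:1 NaOH/HCl titration), not values from the article:

```python
# A sketch of the arithmetic a calculator offloads in a titration
# calculation (hypothetical illustrative figures, not from the article).

def titration_concentration(c_known, v_known, v_unknown, ratio=1.0):
    """Concentration of the unknown solution via n = c * V.

    ratio = moles of unknown reacting per mole of known, read off
    the balanced equation.
    """
    n_known = c_known * v_known      # moles of the known reactant
    n_unknown = n_known * ratio      # apply the stoichiometric ratio
    return n_unknown / v_unknown     # c = n / V

# e.g. 25.0 cm3 of 0.100 mol/dm3 NaOH neutralised by 20.0 cm3 of HCl (1:1)
c_acid = titration_concentration(0.100, 25.0 / 1000, 20.0 / 1000)
print(round(c_acid, 3))  # 0.125 (mol/dm3)
```

The point is not that students should program this, but that each line is a separate mental step; a calculator removes the sub-steps inside each line so working memory can stay on the overall method.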

As with guided practice, the aim is to fade the support provided. The scaffold only extends so far; at some point it has to be taken away.

Below are a number of scaffolds which can be used for different tasks in the science classroom.

  • Complex tasks, multiple supports

A complex task that I want my students to be able to complete is calculating the energy change involved in a given reaction. There are a number of ‘moving parts’ to this, so I will normally give supports so that students can focus on one thing at a time. With this type of problem I have a number of supports available:

  • The displayed structure of the molecule e.g. ethanol
  • A written hint as to whether or not double bonds are present
  • Whether the symbol equation is balanced
  • Whether the symbol equation is even provided or students have to derive for themselves (e.g. simple combustion of methane)
  • The energy change when the bonds in an entire molecule are broken (if I just want students to practise summing reactants and products and then subtracting one from the other)

I can vary which scaffold is provided at which point to support students in progressing through the problems with the eventual aim of them being able to calculate without the scaffold.
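The unscaffolded endpoint of this calculation is ‘sum of bonds broken minus sum of bonds formed’. As a sketch, using the simple combustion of methane mentioned above (the bond energies are typical data-book values in kJ/mol and vary slightly between sources):

```python
# Energy change from bond energies: sum(bonds broken) - sum(bonds formed).
# Values are typical data-book bond energies in kJ/mol; sources vary slightly.
BOND_ENERGY = {"C-H": 413, "O=O": 498, "C=O": 805, "O-H": 464}

def energy_change(bonds_broken, bonds_formed):
    """Return the energy change in kJ/mol; negative = exothermic."""
    broken = sum(n * BOND_ENERGY[bond] for bond, n in bonds_broken.items())
    formed = sum(n * BOND_ENERGY[bond] for bond, n in bonds_formed.items())
    return broken - formed

# CH4 + 2O2 -> CO2 + 2H2O
# Broken: 4 C-H and 2 O=O; formed: 2 C=O (in CO2) and 4 O-H (in the water)
print(energy_change({"C-H": 4, "O=O": 2}, {"C=O": 2, "O-H": 4}))  # -818
```

Each scaffold in the list above removes one of these steps for the student: providing the balanced equation removes the first comment line; providing the bond counts removes the tallying; leaving only the subtraction exercises the final step.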

  • Word cues

A fascinating recent example comes from teacher-blogger Ruth Walker (2018). Ruth uses a bank of higher-level keyword and phrase cues to elevate the quality of writing produced by the student. For example, when addressing open-ended questions about how food is digested, providing students with a curated list of key words and phrases (temporal process words such as ‘prior to’) actively helps students to structure their thinking in a logical and coherent order. Walker has similar banks for comparisons, categorisations, making links and cause and effect, as well as the temporal process words mentioned.

  • Bar model

Popularised by teacher-blogger Ben Rogers (2018), the bar model is a Singaporean method which represents quantities as bars so that students can visually appreciate the size of the quantities, their proportions relative to each other and how they should be manipulated in terms of addition and subtraction. Rogers has provided a number of resources on his website, and his approach has been emulated by Gethyn Jones for kinetic energy calculations (Jones, 2018) and Pritesh Raichura for deriving the formula of ionic compounds (Michaela Science, 2018).

  • Diagrams e.g. balancing equations

A similar approach is the use of diagrams in problem solving. This makes use of ‘dual coding’ (supplementing verbal explanation with visual images) and of the fact that the logical coherence of a method can be followed with greater ease when drawn out rather than just written (Clark and Paivio, 1991). When teaching students to balance equations, for example, you might draw the atoms out and present them as molecules to show students how the atoms rearrange and how the number of each atom must remain constant.

The students can then use this method themselves to balance equations and will eventually be able to balance the equation without use of such a diagram. Titration calculations are another example, where it can be advantageous for students to draw sketches of the glassware, name the substances that are in them and gradually annotate them with quantities like volume, number of moles and concentration. This gives students a concrete narrative and prevents them from forgetting where they are up to in the method and what needs to come next. 

  • Step-by-step guides

Step-by-step guides or rule lists are mainstays of the science classroom, with applications in a wide range of procedural areas: rearranging formulae, constructing food webs, statistical analysis, completing complex calculations and deriving ionic equations. Their use is well supported by the evidence base (Rosenshine and Meister, 1992). As ever, they should only be a temporary support, and students should eventually aim to complete such tasks without them.

  • Checklists

Checklists are commonly found in the science classroom, often for processes like drawing graphs or checking safety measures before a practical can begin. They have also found use in open ended tasks in the guise of ‘level ladders’ tied to KS3 or GCSE ‘success criteria’. Such approaches should be used with caution as they can either be too vague to be of use or so specific as to give away the answer. 

For example, if a student is completing research on a given atom, a descriptor of ‘contains electrons in the correct location’ is too vague to help a student who is not clear on where the correct location is. Conversely, a descriptor of ‘contains six electrons in the second shell’ is so precise as to give the game away without the student having to think at all. If checklists like these are to be used, they should be used with great care and in such a way that the teacher can guarantee that students will be undertaking mental effort.



Clark J and Paivio A (1991) Dual coding theory and education. Educational Psychology Review 3(3): 149–210.

Jones G (2018) Kinetic Energy Using The Singapore Bar Model. e=mc2andallthat. Available at: (accessed 3 January 2019).

Michaela Science (2018) Writing Ionic Compound Formulae. Available at: (accessed 3 January 2019).

Reif F (2010) Applying cognitive science to education. Cambridge, MA: MIT Press.

Rogers B (2018) Can the Singapore Bar-Model Reduce Cognitive Load in Physics? Reading for Learning. Available at: (accessed 3 January 2019).

Rosenshine B and Meister C (1992) The use of scaffolds for teaching higher-level cognitive strategies. Educational Leadership 49: 26–33. 

Sweller J, Ayres P and Kalyuga S (2011) Cognitive load theory. New York: Springer.

Walker R (2018) Sentences and the web of knowledge. Available at: (accessed 3 January 2019).

Principle 9. Require and monitor independent practice: Students need extensive, successful, independent practice in order for skills and knowledge to become automatic.

In a way, all the principles until now have been building towards this. More than anything else, students must be able to practise their new knowledge for it to be thoroughly embedded. Without independent practice, no matter how clear the explanation, worked example or guided practice, little learning will occur.

A conceptual framework for providing extensive practice can be found in principle three. Here, we will focus on the use of new ‘shed loads of practice’ (or SLOP) resources. SLOP grew out of frustration with commonly used textbooks and their lack of extensive practice. One KS3 textbook, for example, has three questions per double-page spread: one is a gap-fill with the sentence lifted directly from the page, one is a ‘read and regurgitate’ question based on the text on the page, and the final one is a piece of extended writing worth six marks that cannot be adequately answered using only the information on the page. This leaves us with a number of problems:

  1. There is not enough practice
  2. Practice relates to material that is not on the page
  3. Practice comes only at the end of a long, information-dense text (see Principle 2)
  4. Practice is not varied
  5. Links to prior learning are not made.

SLOP resources look to remedy these problems by:

  1. Providing plentiful practice
  2. Ensuring no practice involves knowledge students have not yet been exposed to
  3. Breaking material into small chunks, with practice in between each chunk
  4. Varying the practice
  5. Making links to prior learning at an appropriate point. 

Ruth Walker’s (2017) physics booklet on particle theory, for example, has a section for each concept. In the specific latent heat section, she introduces the topic using lean, focused sentences. This is followed by 10 practice questions, varied to include sentence recall, graph analysis and calculations. These are followed by two extension questions and 10 ‘intervention’ questions which mirror and simplify the 10 practice questions.
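To give a flavour of the kind of calculation such a section practises (this worked example is my own illustration, not taken from Walker’s booklet), a specific latent heat question might run as follows:

```latex
% Energy needed to melt a sample of ice, using Q = mL
% (illustrative values; L for melting ice is roughly 334,000 J/kg)
Q = mL = 0.5\,\text{kg} \times 334\,000\,\text{J/kg} = 167\,000\,\text{J} = 167\,\text{kJ}
```

Varied practice might then ask students to rearrange for m or L, or to read the values from a graph rather than a sentence.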

SLOP resources now exist for physics, as mentioned, and for chemistry (including the required practicals) (Boxer, 2017). Biology resources are sparser, but a number of units are now available to download for free. 

SLOP resources do not have to be an entire unit or course. An individual worksheet can also be considered SLOP if it carries most of the features listed above. Below is a checklist for preparing your own SLOP work that should be used in conjunction with the strategies presented in previous principles:

  1. Establish exactly what you want students to know
  2. Decide if it is best broken down into smaller segments
  3. Decide, as per Principle 5, how you will bridge the gap between your explanation and students’ independent work
  4. Start with extremely simple questions, based entirely on the topic to be learned
  5. Include scaffolds to the early questions if needed
  6. Start making the questions more complex
  7. Vary the type of question being asked if you can
  8. Remove any kind of guidance
  9. Start making links between current learning and previous learning

It is highly recommended that you download a SLOP booklet and try to identify how these steps have come together to provide a cohesive and comprehensive whole that allows students to thoroughly practise and embed their learning. 



Boxer A (2017) Chemistry SLOP work. A Chemical Orthodoxy. Available at: (accessed 3 January 2019).

Walker R (2017) Why we should write our own science textbooks – Part 1 of my researchED Rugby talk. Available at: (accessed 3 January 2019).

Principle 10. Engage students in weekly and monthly review: Students need to be involved in extensive practice in order to develop well-connected and automatic knowledge.

After I have taught bonding at the start of Year 10, I give students two tests. The first is the ionic bonding test, and the second is the covalent bonding test. Without fail, students stumble and fall on this question from the second test: ‘This question is about sodium chloride and iodine. Describe the structure and bonding in sodium chloride.’ Perhaps unsurprisingly, students usually reference covalent bonding, despite it being a question about ionic bonding. This includes students who had no difficulty in the first test describing the process of ionic bonding. So why do they confuse the two?

The issue is context. We tend not to think about the importance of context for thinking, learning and remembering, but research indicates that our entire cognitive apparatus is highly dependent on it. In the ionic bonding question above, the surrounding context of the test is covalent bonding, so students think in those terms and answer accordingly.

The same is true of memory. When we first learn something, our memory of it is highly cued to the environment: to the way the teacher talked, the examples they used and even the room that we were in (Smith, 1979).

As we have seen in principle one, retrieval practice is vital in strengthening our memories, but it can also help to reduce this reliance on context. By bringing that knowledge to the fore once again and varying the cues associated with it, we can help that knowledge become less cue-dependent. Furthermore, by tying it together with other knowledge, we strengthen the routes available to it, allowing for more automatic recall. Performing this on a weekly or monthly basis ensures that the memories are strengthened and more easily recalled. 

Cue-dependence is part of the problem of “transfer”: the ability to perceive the deep structure of a problem and apply knowledge from a different context to it. In a seminal study, Chi et al. (1981) presented physics undergraduates and physics professors with a range of physics problems. Each group was asked to categorise the problems, putting similar ones together. The undergraduates, for example, grouped problems involving inclined planes together, whereas the professors separated them according to whether they were solved using conservation of energy or Newton’s second law. The fact that two problems were both, on the surface, about inclined planes meant little next to the fact that they required different concepts to solve. The professors moved from the surface structure to the deep structure.

Obviously this finding is incredibly important in science education. If our students struggle with a question about carrots and osmosis it is because they have been cued to think about osmosis in potatoes only; they think that osmosis is something that happens in potatoes. 

The context and cues must be varied regularly in order to embed knowledge that is not tied to one particular cue but can be accessed by many different routes. If the first time students learnt osmosis the context was potatoes and the second time it was carrots, they are more likely to perceive the deeper structure of osmosis. A third episode of retrieval, this time in the context of gummy bears, for example, further decontextualises the knowledge, allowing it to be applied in a wider range of future contexts.

Weekly and monthly review allows students to practise their knowledge in different contexts and with different cues. Answering a question about ionic bonding within a bigger question about covalent bonding not only strengthens memories of ionic bonding but also forms stronger links within students’ knowledge, allowing for easy and rapid retrieval, whatever the context. When faced with a question about ionic bonding buried within a question about organic chemistry, conservation of mass or any other topic, students are more likely to be able to perceive the deep structure and transfer their knowledge.

Another key example is the use of physics equations. All too often the problem facing the student is not that they do not know the equation, but that they cannot work out which equation a particular question requires. Their knowledge of the equation is too tied to particular question themes and formats to transfer to a different one. The only way to rectify this is through repeated exposure to multiple questions in a range of different contexts. The format must be varied (for example, presenting values in a table, in sentences or as a list), and so must the context: both the specific events in the question (e.g. doing momentum calculations for a range of objects in a range of environments) and the sub-topic involved (e.g. asking an electricity question when studying forces).
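As an illustration of varying the surface while keeping the equation fixed (the numbers here are invented for the example), the same momentum relationship can appear in quite different guises:

```latex
% The same relationship, p = mv, in different surface forms.
% 1. A trolley in a sentence-style question: m = 2 kg, v = 3 m/s
p = mv = 2\,\text{kg} \times 3\,\text{m/s} = 6\,\text{kg m/s}
% 2. Rearranged, for a lorry: p = 30,000 kg m/s, m = 12,000 kg
v = \frac{p}{m} = \frac{30\,000\,\text{kg m/s}}{12\,000\,\text{kg}} = 2.5\,\text{m/s}
% 3. The same values could instead be supplied in a table or a list,
%    so that students must select and rearrange the equation themselves.
```

The calculation is identical each time; only the object, the rearrangement required and the presentation of the data change.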

Instilling “scientific thinking” in our students is incredibly important. But such thinking relies not on some generalised ability or set of instructions, but on specific knowledge that is flexible enough to be applied in a range of contexts. Cognitive scientist Daniel Willingham (2008) outlines an experiment in which students were asked to hypothesise why different cars had different mileages. The students’ ability to identify relevant variables was tied not to a general skill of identifying variables, but to their knowledge of the variables in this particular example. So whilst most students would not have considered paint colour a relevant variable, if they had known, for example, that the driver’s habits changed after the car was painted, they might have identified it as one. 

In another experiment (Friedler et al., 1990), students were asked to identify relevant confounding variables in two investigations: the first related to keeping an animal alive, the second to heat loss from a swimming pool. Students were much more adept at identifying variables in the former than in the latter, as they had more knowledge and experience of matters like diet and health than of energy transfer as a function of mass and surface area. Their ability to “think scientifically” was not generalised; it was tied to their knowledge of a particular context.

Transferring knowledge to new contexts is incredibly hard. It is only by building our students’ knowledge bases and by helping that knowledge to become cue-independent that we can hope to enable our students to perform this operation.



Chi M, Feltovich P and Glaser R (1981) Categorization and representation of physics problems by experts and novices. Cognitive Science 5(2): 121–152.

Friedler Y, Nachmias R and Linn M (1990) Learning scientific reasoning skills in microcomputer-based laboratories. Journal of Research in Science Teaching 27(2): 173–192.

Smith S (1979) Remembering in and out of context. Journal of Experimental Psychology: Human Learning & Memory 5(5): 460–471.


Willingham D (2008) Critical Thinking: Why Is It So Hard to Teach? Arts Education Policy Review 109(4): 21–32.
