VIEWS

BBC BITESIZE: Revision success

It’s not what you do, but the way that you do it, says Head of BBC Bitesize CERYS GRIFFITHS.


Where did I put the car keys? What’s that person’s name? What did I need to buy for tonight’s tea? Day to day, we all forget lots of things. Humans have amazing brains, but our working memory – where information is processed in the short term – can easily become overloaded. Research suggests working memory can hold only four or five things at any given moment. At BBC Bitesize, this is something we try not to forget, which is why we provide lots of study skills resources to help young people understand more about their brains and the business of learning.


There are lots of myths about revision that young people may believe. Does highlighting text really help retain information? Why is cramming the night before an exam such a bad idea? Teachers are experts in helping young people understand how to revise. Our aim at BBC Bitesize is to provide resources that give you additional tools to support your students. We’ve worked with experts in metacognition and memory to create tools that can help young people to revise effectively.


Being smart about revision is key. We have videos and guides providing a wealth of practical advice, such as how to use flashcards, the value of micro-breaks, the best use of quizzes, creating a study plan and breaking down revision into manageable chunks. Bitesize also has videos, written guides, quizzes and podcasts aligned to GCSE exam specifications.


Another theme we have focused on is giving students lots of different revision approaches. Revision can happen in many ways, whether that’s sticky notes on the fridge door, getting together with friends to test each other, or getting outside to revise on a sunny day. Podcasts have become increasingly popular with young people, and Bitesize has produced dozens of podcasts with BBC Sounds. These include Bitesize Study Smart, hosted by education influencer Ibz Mo. The series covers a wide range of topics, including how to boost resilience when studying, how sleep can help you study and why it’s best not to cram. The podcasts also give students a different way to revise for science, English literature, and history.


Finally, a theme running through our study support resources is the need to get organised. With mobile phones and social media competing for young people’s attention, helping them to focus is essential. Our website is packed with tips such as how to create a revision timetable, setting achievable goals and creating an environment for studying. Where possible, we capture the voices and ideas of young people to make it more engaging. Our Mindset resources combine advice from young people with expert coaches such as Anxiety Josh and Dr Radha.


As exam season gets closer, we aim to provide students with the support to fulfil their potential. With expert help and new ideas, they can equip themselves with tools that last a lifetime.


To access our study support resources, visit: https://www.bbc.co.uk/bitesize/study-support


GenAI in the classroom – considering the risks

ALEX DAVE, Safeguarding Lead at LGfL-The National Grid for Learning, shares her insights.


With the Department for Education (DfE) giving the green light to schools and encouraging them to harness GenAI’s potential, it’s important they do so with a responsible approach – and it’s crucial that all staff understand the risks.


GenAI is already out there, embedded in a multitude of common apps, social media platforms, and even the educational tools our children interact with. While the initial conversation surrounding GenAI focused on academic dishonesty, there are far greater risks, and we can’t be naïve in assuming safeguards exist within the apps or platforms.

Fake CSAM. We know from the Internet Watch Foundation (IWF) that there has been a sharp rise in fake child sexual abuse material (CSAM) online. These images, whether hyper-realistic photos, cartoons, anime, or manga, depict sexual activity involving under-18s. Even when fabricated, the emotional harm to victims is comparable to direct abuse. Some offenders create or trade this material on the dark web, while others use AI to produce fake sexualised images of children taken from social media, often for extortion. Tragically, some victims have been driven to suicide under such pressure. Easily downloadable ‘nudifying’ apps and AI tools also tempt young people to experiment, manipulating peers’ photos for ‘fun’, which can quickly turn into bullying. All fake CSAM is illegal under new offences in the Online Safety Act, so it’s essential students understand both the risks and the legal consequences.

Adult content via chatbots. Chatbots, common on platforms such as Snapchat and Instagram, are now capable of engaging children in explicit conversations. Combined with AI-generated sexual imagery, these risk distorting young people’s understanding of consent, gender, and healthy relationships, normalising harmful behaviours and reinforcing damaging stereotypes, including violence against women and girls.

Inappropriate advice. Popular AI tools such as ChatGPT and Gemini have been shown to provide harmful guidance on issues including self-harm, suicide, extreme dieting and sexual activity. The death of 16-year-old Adam Raine in the US illustrates the risk. Augusta Free Press reported that after turning to ChatGPT for support, Adam began to regard it as his ‘closest confidant’ and engaged extensively with the bot about his declining mental health. The resulting lawsuit alleges the bot deepened his isolation, encouraged self-destructive thoughts, and even drafted a suicide note.

Exposure to harmful or fake content. Hyper-realistic images and videos, often created with political or malicious intent, are now widespread online. These deepfakes are increasingly difficult to distinguish from reality, which highlights why teaching critical thinking and media literacy is crucial to safeguarding.

December 2025


Emotional impact of AI relationships. Another growing concern is AI companion apps that let users ‘create’ friends and partners, blurring the boundaries between real and virtual relationships. This can displace genuine connections and distort views of intimacy.

Bias and reinforcing harmful stereotypes. Generative AI tools are built on large language models trained on vast datasets scraped from the internet, and bias in the source material is readily evident in the output. When prompted to generate an image of a ‘child involved in county lines’, one platform produced only images of Black teenage boys; prompts for a ‘doctor’ returned male images, while ‘nurse’ defaulted to female. These patterns highlight the deep-rooted ethnic and gender biases built into these models.

Sharing personal data. Children can unwittingly share personal information with AI chatbots they trust, often without realising the risks. Names, addresses and other sensitive details can be used for targeted advertising – or for fraud, fake profiles and grooming. As young people increasingly use generative AI, strong data-privacy safeguards are essential; without them, tools marketed as ‘engaging’ can become gateways to exploitation.


For free information and support, visit genai.lgfl.net

www.education-today.co.uk

