AI at Clonard: Supporting Students to Think Ethically, Learn Effectively and Act Responsibly
As part of our One Pace Beyond pillar to develop Thriving Change Agents, we continue to work towards our 2025–2028 Strategic Commitment to optimise the creative, productive and ethical use of digital technologies through the implementation of the Clonard College AI Roadmap.
Following the success of our AI policy and protocol development, along with staff professional learning with Matt Esterman in 2025, we are now taking the next step in our AI journey with students, staff and families.
This work is guided by our 2026 Annual Action Plan goal of enabling students to be ethical, effective and responsible users of AI. On Monday 16 March, Matt Esterman and Dr Tim Kitchen returned to Clonard to work alongside our community as we shaped what this looks like in practice across learning, teaching and assessment.
Listening to Student Voice
A key part of this work has been listening carefully to students. This year, 492 students completed an AI student perception survey, and student representatives from every year level participated in five focus groups across the day to look at these results and propose some initiatives moving forward.
What the Survey Told Us
The student survey reinforced the urgency of this work:
- 88% of students already use AI tools for school, many on a weekly or daily basis. This is not an emerging trend. It is current practice.
- Digital safety literacy is critically low. Only 15% of students report using privacy or security settings on AI platforms.
- Students overwhelmingly see AI as a productivity tool, but hold more mixed views about its impact on learning, highlighting the need for explicit guidance.
Student Focus Groups
Students engaged in open feedback forums to share their perceptions, concerns and ideas about AI in their learning. They explored survey findings, reviewed current policy, and took part in design sprints imagining resources the school could create, including workshops, posters and learning experiences.
Across all groups, several consistent themes emerged:
- AI as a tool, not a replacement. Students were clear that AI should support thinking, learning and teaching, not replace them. Writing was frequently described as a form of thinking, and students spoke openly about the risk of losing learning when work is outsourced.
- Concerns about over‑reliance. Shortcutting and dependency were seen as real risks. One message appeared repeatedly: too much AI means less learning.
- The need for supervision and explicit teaching. Students agreed AI use should be age appropriate, supervised and taught intentionally.
- Legitimate and valued uses. Students identified appropriate uses such as summarising complex content, refining drafts, setting goals, supporting neurodivergent learners and helping when they don’t yet know how to ask for help.
Students consistently framed AI as an amplifier, not an escape: a way to do better work, not avoid doing the work.
Risks Students Are Naming
Students were also clear‑eyed about the risks.
The dominant concern was learning loss: that relying on AI can mean not actually developing skills or understanding. Plagiarism and the fear of “getting caught” generated high anxiety, alongside concerns about incorrect or biased information. Privacy and data leakage were raised explicitly, as were environmental costs and copyright issues.
Notably, one group framed the risk in long-term terms, “losing chances at jobs”, connecting AI misuse at school to consequences well beyond the classroom.
The key message here is important: students are not naïve about AI. They can articulate both its value and its risks.
What Students Are Asking For
Perhaps the strongest signal from the focus groups was that students want to be partners, not passive recipients.
Students proposed:
- Student led communication, including workshops, assemblies, videos and posters created by students for students.
- Clear, direct and visible policy, consistently applied across classrooms. Students want everyone (teachers, students, families and leaders) on the same page.
- Ongoing education, not one‑off assemblies. One group suggested a regular AI learning structure so expectations and skills are revisited over time.
- Engagement over enforcement, linking AI learning to wellbeing, PB4L and motivation rather than punishment.
Learning Together as a Community
Alongside student work, staff participated in beginner and advanced AI learning sessions, learning leaders explored implications for assessment, and families were invited into the conversation through a parent session designed to cut through the hype.
Families explored practical ways to navigate AI at home, using a simple framework to guide conversations around access, privacy, critical thinking, learning, wellbeing and ethics. A consistent message emerged: conversations matter more than controls, and values matter more than technical expertise.
Where to Next?
This work reflects our commitment to learning with students, not just for them. By listening carefully, setting clear expectations and building shared understanding, we are supporting young people to navigate AI with confidence, integrity and care.
AI is already part of students’ lives. Our joint responsibility is to ensure it strengthens learning, protects thinking and supports the formation of thoughtful, ethical and capable learners.