Upholding learning integrity in the age of AI

Amal Dimashki, Regional Director, MENAT at Turnitin

From redesigning assessments and embedding integrity by design, to fostering ethical AI usage aligned with regional values and global standards, Amal Dimashki, Regional Director, MENAT at Turnitin, shares insights on how universities can adapt to the evolving assessment landscape

Artificial intelligence (AI) is rapidly reshaping how learning is delivered, assessed and evaluated across higher education institutions worldwide. The rapid integration of AI into education in the GCC presents both opportunities and challenges for academic leaders.  

For universities, the challenge is no longer whether AI will be used, but how its use can be guided responsibly without undermining the core purpose of learning.  

In an interview with Education Middle East, Amal Dimashki, Regional Director, MENAT at Turnitin, outlines practical tips on how universities can adapt to this evolving environment, from redesigning assessments to fostering a culture of learning integrity that aligns with regional values and global standards.

What is the impact of unrestricted use of AI in academic institutions? 

The unrestricted use of AI in academic institutions risks turning technology into a substitute for critical thinking rather than a tool for learning. When students rely heavily on AI-generated answers, they miss the chance to develop their own problem-solving skills and truly understand the content. 

It’s essential to approach AI adoption with clear boundaries. By ensuring students engage authentically with the material and use AI as a supportive tool, we can avoid shortcuts that undermine education. For the UAE, where education is foundational to Vision 2030 and Vision 2071, prioritising meaningful learning is critical for preparing a tech-savvy workforce that upholds academic and professional integrity. 

Do the positives outweigh the negatives of AI adoption and use in education? 

When guided by ethical frameworks, the benefits of AI in education can far outweigh the risks. AI offers tailored learning experiences, provides real-time feedback and gives students modern workforce skills. These advancements align with the UAE’s goal of enhancing future readiness across all industries. 

Misuse is a concern, as students might use AI to avoid learning. This can be addressed with thoughtful assessments and policies. By embedding academic integrity into AI usage, we can ensure innovation and ethics coexist. AI should enhance human potential, not replace the effort and creativity that make education transformative. 

How prepared are universities and higher education institutions in the GCC to identify the (mis)use of AI in student assessments? 

Institutions in the region have been remarkably proactive. Governments and school systems across the region are also taking early, structured steps. For example, the UAE Ministry of Education's introduction of an AI curriculum for 2025–2026 and Saudi Arabia's push to embed AI education within school curricula are examples of proactive planning. At the higher education level, universities are going a step further. Besides deploying advanced tools for similarity checking, many institutions are also updating their curricula to emphasise responsible use of AI.

Are universities in the region open to the idea of redesigning assessments? 

Regional universities are increasingly willing to innovate their assessment methods, shifting from standard formats to approaches that track the learning journey, not just the final output. These could include oral evaluations, in-course writing and practical projects that measure the skills AI cannot replicate. This willingness to experiment signals an understanding that education must evolve alongside technology to maintain its relevance and uphold integrity.

How effective are current strategies that institutions in the region are adopting to regulate the use of AI-assisted misconduct, and what improvements are needed? 

The approach to academic integrity is evolving rapidly. Many institutions are embedding 'integrity by design' principles into their curricula, ensuring that AI literacy and ethics are taught alongside technical skills.

However, one area for growth is faculty training. Educators need robust support and training on how to guide students towards using AI ethically. Clear institutional policies, informed by regional data protection laws and best practices, can provide the framework for this guidance. Shifting from enforcement to education creates a culture of integrity where ethical choices are second nature.

What role should educators play in teaching students the ethical use of AI tools? 

Educators are central to this mission. By acting as role models and integrating AI literacy into their lessons, they can teach students how to use these tools responsibly. This means providing clear boundaries, showing the value of ethical practices in professional settings and emphasising that learning integrity is critical in a globalised, tech-driven workforce. By using AI as an aid rather than a crutch, educators can empower students with both the technical competencies and critical thinking skills needed for long-term success.  

How can technology providers and academic institutions collaborate? 

Collaboration is key to success. Technology providers must deliver more than software; they need to offer data-driven insights and help institutions customise their approach to ethical AI. At Turnitin, we partner with academic institutions to provide actionable insights through analytics and open conversations. By working hand in hand, universities and technology providers can ensure that education’s core purpose, to transform lives through learning, is preserved, even as the tools for delivering this transformation evolve. 

If you'd like to find out more about how Turnitin can support secure assessment in the age of AI within your institution, visit the website.