<p dir="ltr">This study explores the use of a role-play video assessment to evaluate competency-based learning outcomes in a third-year undergraduate fluoroscopy module within the Diagnostic Radiography programme. Specifically, we utilised a Mini-Clinical Evaluation Exercise to assess students' knowledge and skills in managing radiation-related enquiries, a key entrustable professional activity (EPA) for diagnostic radiographers. The primary advantage of AI marking is its potential to save time, offering an efficient alternative to traditional manual marking. However, to maintain educational standards and ensure alignment with professional judgement, thorough validation of AI as a co-marker is essential. Our research objectives were to: (1) evaluate the performance of a large language model (LLM) video-analysis AI agent as an independent marker, (2) compare its performance with manual marking, and (3) recommend strategies to optimise its use alongside human evaluators. The overall goal is to identify sustainable assessment practices that effectively drive student learning. Initial findings indicate that AI can provide constructive feedback, enhancing the personalised assessment of our third-year radiography students' competency in managing CT fluoroscopy patients. Experienced radiography professionals can identify and correct issues that the AI might overlook, highlighting the importance of incorporating contextual knowledge into the AI's marking prompts to improve reliability and accuracy. Our study therefore underscores the value of a hybrid approach in which AI serves as a co-marker alongside human evaluators, leveraging the strengths of both AI and human expertise to ensure a balanced and thorough assessment process. To optimise AI as a co-marker, we recommend integrating contextual knowledge into AI prompts and actively validating the AI's performance against human judgement. 
In conclusion, while AI has the potential to serve as a co-marker in competency-based role-play video assessments, its effectiveness depends on the integration of contextual background, active collaboration with human evaluators, and ongoing validation.</p>