The European Board of Ophthalmology Diploma (EBOD) exam has been run by the EBO for over 20 years. It is intended to further the EBO's mission to harmonise the knowledge and training of ophthalmologists across Europe.
This communication outlines an update to our scoring and standard setting, which will adjust the way candidate results are deliberated. These changes standardise the approach taken in the EBO Onsite and EBO Online exams, ensuring that all cohorts of candidates (whether taking the exam online or onsite) are treated in the same way. The Education Committee will watch closely as this updated approach is delivered, to determine whether any further adjustments are needed based on the outcomes seen in both onsite and online exams.
What has changed?
Specifically, the manner in which candidates will be scored and the passing requirement have been updated. The EBO Education Committee has developed these changes to ensure both exams require the same standard to pass.
What has not changed?
Please read the detail regarding these changes below.
Historically, the European Board of Ophthalmology comprehensive exam was organised into two sections:
MCQ section, composed of questions structured as a stem (a context-setting phrase or term) followed by five statements. Candidates would mark each statement as “True” or “False”. In 2016, negative marking was introduced to dissuade candidates from wild guesses. This was offset by a “Don't know” option, which attracted a score of 0 – so a candidate who did not know an answer could mark “Don't know” and not be penalised for an incorrect answer (a sketch of this marking scheme follows after this list).
The second section was a “viva voce” – a series of four 15-minute interviews with experts. Up until 2019, examiners would bring cases on PowerPoint slides to discuss with candidates. The slides would display medical history and diagnostic data, based on which candidates were expected to outline how they would handle the case. Candidates would receive a score on a scale from 4 – 10, reflecting the examiners' estimation of how well a candidate would manage the case. A 6 was considered a pass mark. If an examiner awarded a score below 6, they were asked to provide comments as to why the candidate failed. These comments could be used in the deliberation of borderline candidates, and as feedback.
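As an illustration, the marking of a single MCQ stem under this scheme might be sketched as below. The exact penalty for an incorrect answer is not stated in this communication, so the +1 / −1 weights here are assumptions for illustration only, not the EBO's actual values.

```python
# Hypothetical sketch of True/False/"Don't know" marking with negative
# marking. The +1 and -1 weights are assumptions, not the EBO's values.
def mark_statement(answer: str, key: bool) -> float:
    if answer == "Don't know":
        return 0.0  # no penalty for admitting uncertainty
    return 1.0 if (answer == "True") == key else -1.0  # assumed weights

def mark_stem(answers: list[str], keys: list[bool]) -> float:
    # One stem is followed by five statements, each marked independently.
    return sum(mark_statement(a, k) for a, k in zip(answers, keys))
```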
In practice, candidates would answer MCQ questions in pencil on OMR (Optical Mark Recognition) score sheets. Examiners would likewise record scores, and comments for failing candidates, on OMR score sheets. These sheets were designed and provided by Speedwell Software (UK).
The candidates’ outcome was determined in four steps:
In 2018, the EBO reviewed the exam content and decided to develop a more standardised approach for the viva voce section. This entailed preparing and using standardised cases for all candidates in a specific round. In 2019, a “pilot” exam was run with a small subgroup of candidates, who were given standardised cases – i.e. all candidates were given the same case to discuss in any given hour of examination. Different cases were used in different hours to prevent candidates from informing peers of the subject matter / cases covered. However, the nature of each case (its level of difficulty, approach, subject, etc.) was kept similar across cases.
In 2018, there was also agreement to run a second, autumn exam at the DOG meeting in October 2020. Discussion at that point focused on whether to hold two exams per year (one in May at the French Society of Ophthalmology (SFO) conference in Paris, and one in October at the German Ophthalmological Society (DOG) conference in Berlin), or to alternate hosting of the exam between the two venues over the years.
In 2020, COVID pandemic restrictions prevented a large onsite exam from taking place. However, the Swiss Ophthalmology Society received special permission to host an exam in Interlaken. This exam assessed only Swiss candidates, who sat the usual MCQ and viva voce. The Speedwell eSystem, already used by the EBO as a question bank, offered an online exam platform to deliver the MCQs, which candidates answered on laptops / iPads. However, a novel approach was used for the viva voce, where one Swiss examiner was joined on Zoom by an international examiner to interview and mark candidates together. Again, examiners could input scores for the candidates they interviewed using the Speedwell eSystem. The review of this exam noted difficulties in connecting, and maintaining connections, for the “remote” examiners.
In 2021, COVID pandemic restrictions were in place across Europe, and there was little confidence about when they would be lifted. Several countries saw multiple lockdowns, and travel was severely disrupted. It was therefore decided to run the EBOD examination fully online, so that candidates could sit the exam from home. By this time, several “online proctoring” services were available that could monitor candidates taking exams online. The MCQ section was updated with the introduction of Single Best Answer (SBA) questions, a more complex question type. This section could otherwise remain the same: whether online or onsite, candidates would identify a correct answer to each question item presented. For the viva voce, however, a key issue from the previous year remained: if videoconferencing was used to connect candidates to examiners, connections could drop and candidates could miss the exam. Therefore, EBO consulted with Speedwell about options for running an online version of the viva voce.
What was designed to replicate the classical viva voce was the “Clinical Case”. It has the same approach and purpose as a viva voce: to present candidates with a case for which they must identify key diagnostic information, determine a diagnosis and decide what should be done next. Candidates type very short (up to 100 characters) free-text responses to each question. Model answers were provided, against which each candidate's answers could be compared; if they matched, a pre-defined score was automatically awarded. Scoring was aligned with the standardised viva voce – the candidate progressed through a case over around five questions, earning 1 point for each correct answer, with no negative marking. In some questions, a candidate might be asked for multiple suggestions, in which case the marks awarded were a fraction based on the number of items sought (e.g. each of “two diagnoses” might be worth 0.5 points; each of “three key features” might be worth 0.33 points). In practice, the system required human intervention to review and update the automatically awarded scores: often a difference in spelling or description meant the system did not recognise the match between a candidate's answer and the model answer. Therefore, examiners manually reviewed and updated the scores awarded by the system.
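A minimal sketch of how such automatic matching might work is below. Exact matching after simple normalisation is an assumption (the EBO's system is not described in that detail), and the example also shows why spelling or phrasing differences forced manual review.

```python
# Hypothetical sketch of auto-marking short free-text answers against
# model answers. Exact match after normalisation is an assumed rule.
def normalise(text: str) -> str:
    return " ".join(text.lower().split())

def auto_mark(responses: list[str], model_answers: list[str],
              marks_per_item: float) -> float:
    given = {normalise(r) for r in responses}
    hits = sum(1 for m in model_answers if normalise(m) in given)
    return hits * marks_per_item

# A question asking for "two diagnoses", each worth 0.5 points:
score = auto_mark(["Retinal detachment", "PVR"],
                  ["retinal detachment", "proliferative vitreoretinopathy"],
                  marks_per_item=0.5)
# score == 0.5: "PVR" does not literally match the model answer, which is
# exactly the kind of case an examiner would review and correct manually.
```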
On review of the results of the Clinical Cases, EBO discovered a difference between the overall scores achieved by candidates in Viva Voce cases and in Clinical Cases: scores in the Clinical Cases section were significantly lower than those in the classical Viva Voce. Several reasons for this disparity were proposed:
In light of these differences, and after review of the exam outcomes, the EBO decided to make some adjustments – both to allow for these differences and to account for the fact that, in the first online exam, there were various stressors that could affect candidate performance. It was therefore agreed to reduce the passing score to 5 (which yielded a similar pass rate to previous years). It was also agreed that candidates could score less than 6 in more than one section and still pass. In subsequent exams, EBO added further nuance, requiring that candidates score at least 5 in the MCQ section (to confirm a good theoretical knowledge base). In 2023, scaled scoring was introduced for the Clinical Cases, whereby Clinical Case scores were rescaled to 4 – 10 to more closely match the classical Viva Voce scores.
For 2024 and beyond, EBO are considering holding one onsite exam and one online exam per year, alternating the onsite exam between Paris in May (in association with the SFO) one year and Berlin in October (in association with the DOG) the next. The online exam will take place in October when the onsite exam is in May, and in May when the onsite exam is in October. Surveys have found that roughly one third of candidates prefer an onsite exam, while two thirds prefer online exams. This may be due to the convenience of online exams, the expense of travelling to attend an onsite exam, and the busy work schedules of younger colleagues, who may also be juggling young families.
This raises a fundamental question about harmonising the exam scoring systems. One step has already been taken, in that the onsite Viva Voce will use standardised cases (much like the “Clinical Cases”). EBO consulted with CESMA (Danny Mathysen, University of Antwerp, Belgium, who worked on the original scoring system for the EBO exam) and with the statistician who has been assisting with scoring since 2018 (David Young, University of Strathclyde, Glasgow, Scotland).
Their advice was considered, and EBO will now apply the following system to both onsite and online exams.
Exam responses (for the MCQ and Viva Voce) will be input directly into the online Speedwell eSystem. For the onsite EBO Comprehensive exam, EBO will provide iPads / tablets for candidates taking the exams. For the online exam, candidates will need to ensure that their equipment at home is compatible with the examination systems used (a test for this purpose will be provided a few weeks in advance of the exam).
With regard to each section:
Written / MCQ & SBA (Part I)
Viva Voce and Clinical Cases (Part II)
Final outcome
The “raw scores” of the MCQs and Viva Voce / Clinical Cases will be rescaled to a scale from 1 – 10.
For each part (the MCQ, and each of the four Viva Voce / Clinical Case stations), the rescaled score of 6 is the passing mark.
The MCQ and Viva Voce/ Clinical Case rescaled scores will be put through the original algorithm, which gives 40% weight to the MCQ score and 60% weight to the Viva Voce/ Clinical Cases: 0.4*MCQ + (0.15*Station A Score) + (0.15*Station B Score) + (0.15*Station C Score) + (0.15*Station D Score)
To pass, the result of this algorithm must be 6 or above.
Candidates should score at least 6 in all sections, but may score lower than 6 in one Viva Voce / Clinical Case station provided their MCQ score is at least 6 and their overall score is at least 6 (a sketch of this logic follows below).
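Taken together, the rescaling and pass rules above can be sketched as follows. This is a minimal illustration: the linear rescaling formula and all names are assumptions, not the EBO's implementation, though the 40% / 4 × 15% weighting and the pass thresholds follow the rules stated above.

```python
def rescale(raw: float, raw_min: float, raw_max: float) -> float:
    # Assumed linear min-max mapping of a raw mark onto the 1-10 scale;
    # the actual rescaling method is not specified in this communication.
    return 1.0 + 9.0 * (raw - raw_min) / (raw_max - raw_min)

def overall_score(mcq: float, stations: list[float]) -> float:
    # Original algorithm: 40% MCQ, 60% split equally over four stations
    # (all scores already rescaled to 1-10).
    assert len(stations) == 4
    return 0.4 * mcq + sum(0.15 * s for s in stations)

def passes(mcq: float, stations: list[float]) -> bool:
    # Pass requires an overall score of at least 6, an MCQ score of at
    # least 6, and at most one station scoring below 6.
    below_six = sum(1 for s in stations if s < 6)
    return overall_score(mcq, stations) >= 6 and mcq >= 6 and below_six <= 1
```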
For onsite exams, final outcomes will be released the next day. Unfortunately, some candidates may fail as a result of scoring lower than 6 in Part I (the MCQ section). However, these candidates will not be identified until Saturday, as there is no way to process the results before Part II (Viva Voce) commences on Friday.
For online exams, final outcomes will be released after 2 weeks. This will allow time for review of all responses in the Clinical Cases section (free-text input, for which experts will review and update the machine-assigned scores for each response).
Summary
In this way, EBO hope to maintain a high-quality assessment of theoretical knowledge and clinical acumen for comprehensive ophthalmology. Furthermore, the harmonisation of scoring methodology and standard setting will ensure both onsite and online exams are recognised as being of equal quality, despite their differing delivery methods.