The number of unmatched medical graduates has been increasing steadily and has become the focus of attention in recent months, including a CMAJ article entitled “What to do about the Canadian Resident Matching Service.”1 The discussion has focused primarily on the number and distribution of training positions, with calls to increase the number of residency positions to 1.2 per medical graduate. Indeed, the Government of Ontario has recently funded a number of new positions to address this issue.
What has received less attention, however, is the set of problems with the existing process of resident selection. Although there are few studies of the match, there is evidence of bias within the system. A study performed at the University of Calgary found significant in-group bias in resident selection, with preference given to applicants from within the institution.2 In the 2017 match, although 80% of male applicants and 83% of female applicants were matched to their first-choice discipline, only 62% of female applicants matched to their first choice if it was a surgical discipline, compared with 72% of male applicants.3 In fact, as of the 2016 cohort, one surgical residency program in Ontario had not taken a female trainee in five years, according to its website (https://fhs.mcmaster.ca/macortho/our_residents.html).
Behind closed doors, many students express frustration at a process they feel is too subjective. This is often attributed to the concept of “fit,” an abstract assessment of an applicant’s ability to integrate into the existing team. Of course, fit is important, but, according to the Canadian Resident Matching Service (CaRMS), it “should not be used to indulge personal biases or to discriminate against applicants.” I have heard my colleagues describe the match as a “black box,” with applicants unable to identify the qualities associated with success. Their frustrations are borne out by the existing data, which show a substantial element of subjectivity in applicant assessment.
An analysis of the plastic surgery match in the United States showed that American program directors placed importance on membership in the Alpha Omega Alpha Honor Medical Society, publications, United States Medical Licensing Examination Step 1 scores, medical class rank and letters of reference when assessing a candidate, whereas Canadian program directors relied heavily on reference letters and clinical interactions.4 More objective measures, such as academic honours, awards and publications, ranked lower in importance in Canada. The identity of a reference letter's writer ranked higher in importance than its content. This discrepancy is somewhat to be expected, as most Canadian medical schools use pass/fail grading systems, which preclude class rankings; however, the lack of objectivity seems to stem not just from a paucity of data but, to some degree, from a failure to acknowledge value in the few objective measures available, including awards and publication record.
There is also a complete lack of transparency in this process. Students who have sought feedback have been met with resistance, with many programs refusing to disclose any information regarding the review of an applicant’s file. Given that students pay money to apply to these programs and that the stakes of the match are so high, this is simply not acceptable. It also leads one to wonder whether programs fear the scrutiny.
The office of Admissions and Evaluation for Postgraduate Medical Education at the University of Toronto has put a number of measures in place to try to ensure fairness in the residency selection process. The University of Toronto uses the Best Practices in Application and Selection, a set of standards developed for this purpose.5 This office runs many workshops to train program directors and members of the selection committee about file review and interviewing. However, training is not mandatory, and there is no auditing system in place to ensure that selection committees are following the best practices or adhering to hiring practices outlined by the Ontario Human Rights Commission. “We do the up-front training but we don’t go back and make sure that was followed necessarily. We have so many programs that it’s not really feasible to go back and check” (Dr. Linda Probyn, director, Admissions and Evaluation, Postgraduate Medical Education, Faculty of Medicine, University of Toronto, Toronto, Ont.: personal communication, 2018).
A study assessing adherence to Best Practices in Application and Selection was recently completed at Dalhousie University.6 The authors found substantial weaknesses in the domains of applicant ranking, transparency and knowledge translation, with committee members failing to adhere to standards in all measures of applicant ranking and in five of seven measures of transparency.
There are several steps that could be taken to improve the process. First, all medical schools should adopt Best Practices in Application and Selection, with mandatory training for all members of selection committees. Second, auditing of program selection practices, run by postgraduate medical education offices or by CaRMS, should be instituted. It is not feasible to audit every program every year, but random samples of applicants' files pre- and postinterview would likely provide enough information to assess objectivity and fairness on a recurring basis. Third, selection committees should have access to additional objective information: options include introducing an examination in the style of the US Medical Licensing Examination Step 1, or moving the Medical Council of Canada Qualifying Examination Part 1 to the third year of medical school.
Advocates want to fund more residency positions, but to support the current system is to support a process that is fraught with bias and crippled by its subjectivity. At the end of the day, these are government-funded training positions and the medical profession has an obligation to ensure that they are being allocated fairly, using objective, merit-based criteria for ranking applicants. Fit should only be part of a larger, more impartial assessment. This is not a problem that will be solved with money, and failure to acknowledge all of the issues with the existing match process will only result in a failure to capitalize on any additional funding.
Footnotes
Competing interests: None declared.