Analysis

Deconstructing the diagnostic reasoning of human versus artificial intelligence

Thierry Pelaccia, Germain Forestier and Cédric Wemmert
CMAJ December 02, 2019 191 (48) E1332-E1335; DOI: https://doi.org/10.1503/cmaj.190506
Author affiliations: Centre for Training and Research in Health Sciences Education (Pelaccia), Faculty of Medicine, University of Strasbourg; Hôpitaux universitaires de Strasbourg (Pelaccia), Strasbourg, France; Institute of Research in Computer Science, Mathematics, Automation and Signal (Forestier), Université de Haute-Alsace, Mulhouse, France; Computer Science and Imaging Research Institute (Wemmert), The Engineering Science, Computer Science and Imaging Laboratory, University of Strasbourg, Illkirch, France

Responses

  • Posted on: (21 April 2020)
    Intelligence in artificial intelligence (and its use)
    • Thierry Pelaccia [MD, PhD], Professor of Emergency Medicine, Centre for Training and Research in Health Sciences Education, Faculty of Medicine, University of Strasbourg, France
    • Other Contributors:
      • Germain Forestier, Professor of Mathematics
      • Cédric Wemmert, Professor of Mathematics


    We thank Dr. Burns for his comments on our article.[1] Many definitions of AI have been proposed. According to LeCun, AI "allows machines to perform tasks and solve problems normally reserved for humans."[2] Since some machines aim to reproduce human tasks, particularly those related to diagnosis, it makes sense to compare how physicians and AI work, and to draw out the implications for clinical practice. However, we need to agree on what should be called "intelligence." Strictly speaking, a machine is not intelligent when it performs a task: it does not understand the task, the process, or the result it produces. The intelligence of the machine lies in its ability to learn.[2] AI is therefore "efficient" in its ability to solve clinical tasks, and "intelligent" in its ability to learn and to improve its performance. In the case of deep learning, this "intelligence to learn" is tied to a particular architecture: all layers of the neural network are trainable, and the learning performed by one layer is used by the following layers to form increasingly complex and abstract concepts.[3]
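
    A minimal sketch of this layered learning (a hypothetical illustration, not from the letter: the PyTorch framework, layer sizes, and synthetic batch are all assumptions). Each trainable layer re-represents the output of the layer before it, and one training step updates the parameters of every layer at once:

```python
# Illustrative sketch: a deep, fully trainable stack in which each layer
# transforms the previous layer's representation into a more abstract one.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),    # layer 1: low-level features of the input
    nn.Linear(32, 16), nn.ReLU(),    # layer 2: combinations of layer-1 features
    nn.Linear(16, 1), nn.Sigmoid(),  # output: probability of the diagnosis
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

x = torch.randn(8, 64)                   # hypothetical batch of 8 encoded cases
y = torch.randint(0, 2, (8, 1)).float()  # hypothetical labels (disease: yes/no)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()    # gradients flow back through every layer
optimizer.step()   # all layers' parameters are updated together
```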

    Dr. Burns is right to point out the risks of using machines blindly, that is, of ignoring how they function and what their limitations are. However, we must acknowledge that there are still many gray areas in human cognition, and that most physicians are unable to explain how they make decisions.[4] Yet human cognition has been at the heart of medical decision-making for centuries.

    We believe that the combination of human and artificial intelligence will make it possible to overcome the limitations (including biases) associated with both forms of intelligence. For the physician, the question will not be one of "using" intelligent tools, but of "knowing how to use" them. The difference lies in being aware of the tool's added value (e.g., saving time when confronted with large amounts of data, directing attention when dealing with atypical data, gaining precision when interpreting complex data), and using the tool for this purpose alone, without substituting it for the medical decision (and the physician's intelligence).

    Competing Interests: None declared.

    References

    • 1. Pelaccia T, Forestier G, Wemmert C. Deconstructing the diagnostic reasoning of human versus artificial intelligence. CMAJ 2019;191:E1332-5.
    • 2. LeCun Y. L'apprentissage profond, une révolution en intelligence artificielle [Deep learning, a revolution in artificial intelligence]. La lettre du Collège de France 2016;41:13.
    • 3. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521:436-44.
    • 4. Pelaccia T, Tardif J, Triby E, Charlin B. A novel approach to study medical decision making in the clinical setting: the "own-point-of-view" perspective. Acad Emerg Med 2017;24:785-95.
  • Posted on: (1 February 2020)
    RE: human vs. artificial intelligence
    • Noel Corser, Family Physician, Hinton, Alberta


    The place of AI in medicine has a certain fascination, mostly of the sci-fi variety, but I thoroughly appreciated this article's description of the mechanics of how AI "learns" as well as "reaches a diagnosis". I would go a bit further than the authors though.

    The quality of an AI diagnosis is strictly limited by the quality of its model and the cleanness of the dataset it learns from, and then induces from. Our human understanding of disease causality is neither perfect nor complete, which makes the creation of good models challenging. Even more challenging is "clean data", meaning that symptom X, or physical finding Y, or test result Z is a true description of the patient's condition. Data can fall short of this for a multitude of reasons. Some (e.g., test results) might in future be made more accurate, while some (e.g., "I'm feeling dizzy") will very likely never be accurate in the sense required by AI. This is because a human patient (their own history, and the multitude of external factors affecting them) is infinitely complex, figuratively and perhaps literally. Reducing all of these potential causative factors to computable "data points" requires both simplification and evaluation, and both tend to change the data itself. That is why AI has so far proven effective only at diagnostic tasks involving relatively clean "data": diagnostic imaging, photographs of skin lesions and retinas, and certain conditions with relatively well-understood and objective markers.
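
    As an illustrative aside (a sketch under stated assumptions, not from the letter: the synthetic "findings", noise rates, and scikit-learn model are all hypothetical), the same model on the same task degrades as the training labels get dirtier, which is the sense in which model quality is capped by data quality:

```python
# Illustrative sketch: corrupt an increasing fraction of training labels and
# watch test accuracy fall, even though the model and the task are unchanged.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))         # 5 hypothetical "findings" per patient
y = (X.sum(axis=1) > 0).astype(int)    # true underlying condition

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.1, 0.3):          # fraction of training labels flipped
    flip = rng.random(len(y_tr)) < noise
    y_dirty = np.where(flip, 1 - y_tr, y_tr)   # the "unclean" dataset
    acc = LogisticRegression().fit(X_tr, y_dirty).score(X_te, y_te)
    print(f"label noise {noise:.0%}: test accuracy {acc:.2f}")
```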

    On the other hand, humans have had hundreds of thousands of years of practice at picking up on subtle cues emanating from other humans, and when this is combined with quality medical training and "practiced experience", a physician's diagnostic ability is generally very good (and certainly very efficient, despite its potential flaws). However, we share something with AI: the requirement for quality feedback on whether we are right or not. This is particularly challenging for generalists like family physicians, who typically deal with undifferentiated problems carrying a wide differential, 80% of which resolve on their own. Anesthetists, on the other hand, probably have a more dependable intuition, because they tend to get immediate feedback on their decisions. The point is that AI is likely to be more accurate than humans when the diagnostic model is relatively simple, the data involved are high-quality and "clean", and especially where errors related to human reasoning are common.

    The other point to make is that the practice of medicine is not simply about diagnosis, but about helping the patient attain, or regain, wellness. And this is why AI will never replace physicians. The ability to imagine novel solutions to complex problems, to feel empathy, and to make decisions when it is entirely unclear whether there is a "right answer" are all uniquely human abilities of which AI, by its nature, is incapable. Physicians are not competing with AI; we have different, and sometimes complementary, strengths and weaknesses. We should carefully assess which types of questions and answers AI can help us with, and just as carefully consider how our own powers can best be used, to the benefit of the patient.

    Competing Interests: None declared.
  • Posted on: (21 January 2020)
    RE: Artificial intelligence isn’t
    • David M. Burns, Orthopaedic Surgery Resident, University of Toronto


    Seeking to compare the reasoning of human and artificial intelligence in the context of medical diagnosis is an overly optimistic anthropomorphism. The term artificial intelligence (AI), as used to describe the machine learning algorithms employed in this domain, is itself a misnomer. This becomes apparent when comparing modern machine learning algorithms based on artificial neural networks with non-neural algorithms (e.g., logistic regression). Unfortunately, this comparison was not made by the authors.

    Logistic regression, established in the 1800s, is the machine learning algorithm most commonly applied to structured medical data for diagnostic and prognostic purposes (e.g., the Framingham Risk Score or the Kocher criteria). The same nomenclature of "learning" or "training" applies equally well to this algorithm, which we have been using for centuries. Simply put, machine learning algorithms are mathematical formulae with free parameters derived retrospectively from clinical data. These formulae are not intelligent according to even the most generous of definitions, and they have no capacity for reasoning.
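
    A minimal sketch of this point (a hypothetical illustration, not Dr. Burns's: the feature names and data are invented): "training" a logistic regression means nothing more than estimating the free parameters b0..b3 of the fixed formula p = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2 + b3*x3))) from retrospective data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# hypothetical structured clinical data: age, systolic BP, cholesterol (scaled)
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.8, 0.5, 1.2]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)  # "learning" = estimating b0..b3
print(model.intercept_, model.coef_)    # the fitted free parameters
print(model.predict_proba(X[:1]))       # predicted risk for one patient
```

    Note that the fitted coefficients remain directly inspectable, one per clinical feature; this is exactly the interpretability that, as argued below, neural network models give up.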

    It is notable that non-neural machine learning algorithms are still the most accurate for structured clinical data, and continue to dominate the field. Neural network algorithms bring not intelligence, but rather the capacity to model more complex unstructured data (specifically natural language, images, and time series) and incorporate this information into our predictive tools. This capability for modelling complex inputs is the most compelling advantage of neural network algorithms.

    However, the downside of this complexity is that neural network models are typically uninterpretable, meaning humans cannot understand or explain how the prediction is derived from the clinical data. The greatest risk in deploying neural network algorithms, or any machine learning algorithm with limited interpretability, is to place too much trust in them and ignore the potential for unknown confounders or biases. Such confounders and biases could harm patients and are notoriously hard to identify and correct for. By conflating human and machine intelligence, we compound this risk.

    Fortunately or unfortunately, true artificial intelligence, with the capacity for reasoning, remains for now in the realm of science fiction. We should not pretend otherwise: while the technology currently available to us offers many benefits, it also has real capacity for harm when used inappropriately.

    Competing Interests: None declared.
  • Posted on: (9 January 2020)
    RE: Ethical AI
    • Aitzaz Bin sultan Rai, Researcher, University of Oxford

    AI is an emerging technology that is generating more fear and enthusiasm than is warranted. Like earlier revolutionary technologies (the wheel, the steam engine, the printing press, computers, the internet), it is just another tool in the great human journey. We cannot run faster than a Tesla car, cannot write quicker than a printer, and so on. Does that put us at a disadvantage? Absolutely not. AI is a tool that will do what we tell it to do. Teaching ethics to AI will be our biggest challenge.

    Competing Interests: None declared.
  • Posted on: (4 December 2019)
    Beware AI vs Human false dichotomy
    • Robert OConnor, Family Medicine, Me'Chosen Medical Family Practice, Victoria, BC

    I am a family physician with a side background in software and web development.

    Over a 10-year horizon, the near-term state of what is popularly called 'artificial intelligence' is more properly described as an 'expert system', since it is not intelligent.

    The question is not whether a human family physician or an expert system is the victor in a contest of diagnosis and treatment; it is that a human family physician working with an expert system beats both.

    Competing Interests: None declared.
  • Posted on: (2 December 2019)
    RE: Machines over Human
    • Dhastagir Sheriff, Professor and independent Research worker, Reprolabs, Chennai, India

    Humans make machines to perform delicate tasks in support of their endeavours. Artificial intelligence (AI), a creation of the human mind, is another such venture, taken up as a tool to assist diagnosis and deliver results in time. It is well developed for interpreting the images fed to the machine and returning a result quickly. AI may help to reduce the strain on resources and give time for more patient-physician interaction. Machine intelligence is a support for, not a replacement of, the human touch and treatment of a practising physician: it cannot replace the humaneness of a doctor and the comfort a patient gets from interaction and reassurance. The article focuses on the diagnostic reasoning of AI versus humans, yet many a time it is the clinical acumen of the physician that decides how to treat the patient, not just the condition. Diagnostic accuracy and timely results may save a critically ill patient, but AI will not replace the master who feeds in the data and decides what action to take based on the data made available. The human in the physician is what a patient looks for in a mechanized world. With all the gadgets at hand, diagnosis and treatment may improve, but an accurate diagnosis does not by itself guarantee a better outcome. The physician's human touch and spiritual interaction make medicine holistic.

    Competing Interests: None declared.
  • Posted on: (2 December 2019)
    The Future is Friendly
    • Mike Figurski, GP Rural, Whitefoot Clinic

    I thank Drs. Thierry Pelaccia, Germain Forestier and Cédric Wemmert for their reasoned and optimistic predictions for the directed, beneficial application of technology to supplement human intelligence. They echoed "Lady Ada's Objection" (1): "... that such computing methods could not originate or create, but could only do things the programmer knew how to make them do". She developed the first machine algorithm and predicted the scope of the digital revolution almost 200 years ago. There have been waves of disturbing public predictions of dire consequences from smart machines since then. These have proven mostly wrong, except those regarding the major economic disadvantage to late adopters.

    1. Isaacson W. The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution. Simon & Schuster; 2015. ISBN 1471138798. https://en.wikipedia.org/wiki/The_Innovators_(book)

    Competing Interests: NSERC Mitacs cluster with UBCO researchers on machine learning from unstructured clinical text, on an all open-source academic stack, as CEO of Vistacan. As such, I have a strong positive bias towards adopting AI for clinical research and education.