• Thorry@feddit.org
    21 hours ago

    Agreed, this is exactly what reinforcement learning and neural networks are good at. Calling them AI is beyond dumb, but hey, marketing will be marketing. It’s pattern recognition, which is cool, but nobody would call that intelligent otherwise. Another big issue with the marketing is that they only report the success rate, not the failure rate. Doctors praise the cases being caught, but dislike the models flagging things that are clearly not tumors; it wastes time for people already short on time. These models also risk doctors becoming over-reliant on them, even though they can have serious blind spots and thus miss things a doctor would have caught. Or the other way around: people receive treatment (often not without risk, discomfort and cost to the patient) where none was needed. The thing that bothers me the most is how it’s always framed as a win for AI. Like, see, AI is good at diagnosing cancer (which then gets extrapolated to curing cancer for some bizarre reason), so that useless chat bot must also be good somehow. Because AI.

    • SaveTheTuaHawk@lemmy.ca
      18 hours ago

      Robert Murphy’s lab at Carnegie Mellon has been developing learning sets like this for 20 years.

      This is not designed to replace medical opinion; it’s designed to cross-check, since pathologists and radiologists have a miss rate of about 1%, which is not acceptable.