Chronic Wound Classifier

Developed, trained, and deployed by John Boby Mesadieu.

A model I trained to look at a wound photo and guess which of four types it is. It's right roughly 8 times out of 10; this page also tells you when not to trust it.

Share this demo: https://huggingface.co/spaces/jbobym/wound-classifier

Upload a photo of a wound and the model picks one of four types (diabetic, pressure, surgical, or venous), with a confidence percentage for each.

A few things to know before you try it:

  • Centre the wound in the photo. The model only looks at a square in the middle of the image; anything outside that square gets cropped out.
  • JPEG or PNG. That's it.
  • Only upload wound photos. The model has to pick one of the four types, so if you give it something else, it will still answer with a wound type. Watch the confidence percentage: if it comes back under 50%, the model is probably guessing.
  • Pressure ulcers are the model's weak spot. It gets them right roughly 4 times out of 10. When it says Pressure, take the answer with a grain of salt.
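The centre-crop and low-confidence behaviour described above can be sketched in a few lines. This is an illustrative sketch, not the demo's actual code: the class names, the 50% threshold logic, and the crop geometry are assumptions based on the description.

```python
import math

# Hypothetical label order -- the deployed model's actual ordering is not published.
CLASSES = ["diabetic", "pressure", "surgical", "venous"]

def center_crop_box(width, height):
    """Largest centred square as a (left, upper, right, lower) box.
    Anything outside this square never reaches the model, which is
    why the wound should sit in the middle of the photo."""
    side = min(width, height)
    left = (width - side) // 2
    upper = (height - side) // 2
    return (left, upper, left + side, upper + side)

def softmax(logits):
    """Turn raw model scores into the confidence percentages shown in the demo."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def interpret(logits, threshold=0.5):
    """Pick the top class and flag answers under the threshold as unreliable."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    note = "probably guessing" if probs[best] < threshold else "ok"
    return CLASSES[best], probs[best], note
```

For example, a flat set of scores gives each class 25% confidence, which falls under the threshold and gets flagged.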

This is a research demo, not a medical device. It doesn't diagnose, triage, or replace a clinician's judgement. The Approach section below has the methodology and the headline accuracy.

Approach

I trained an image classifier (EfficientNet-B0) on the AZH Chronic Wound Database, a public research dataset of clinical wound photos. The training was set up so that the same patient's photos never appeared in both the training and test sets. That detail matters more than it sounds: models on this dataset can otherwise inflate their accuracy by quietly memorising patients instead of learning what wounds actually look like.
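A patient-level split like the one described can be sketched as follows. The record fields, test fraction, and seed here are illustrative; the actual pipeline behind this demo may group and split the data differently.

```python
import random

def split_by_patient(records, test_fraction=0.2, seed=0):
    """Split photo records so that every photo from a given patient lands
    on exactly one side of the split. Splitting photos individually would
    let the model score well by recognising a patient it saw in training."""
    patients = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, round(len(patients) * test_fraction))
    test_ids = set(patients[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test
```

The key property is that the sets of patient IDs on the two sides never overlap, even when one patient contributed many photos.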

On the held-out test set of 184 photos, the version of the model running here gets the wound type right 81 times out of 100. As a sanity check, I trained nine other versions of the same model on slightly different slices of the data and averaged their predictions; that combined version scored 80 out of 100 on the same test, which suggests the headline number is not a fluke.
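Averaging predictions across the extra models, as in the sanity check above, can be sketched like this. The per-model probability lists below are made up for illustration; only the averaging idea comes from the text.

```python
def ensemble_average(per_model_probs):
    """Average per-class probabilities across models trained on different
    slices of the data; the class with the highest average wins. A single
    over-confident model gets diluted by the others."""
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    avg = [sum(p[c] for p in per_model_probs) / n_models for c in range(n_classes)]
    best = max(range(n_classes), key=avg.__getitem__)
    return best, avg
```

If the averaged ensemble scores close to the single model (80 vs. 81 out of 100 here), the headline accuracy is less likely to be an artefact of one lucky training run.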

Out of scope

Not for clinical decision-making. No claim of diagnostic accuracy on real patient cohorts. No fairness audit across skin tones, which is a known gap.

Author

John Boby Mesadieu.

Dataset citation

Anisuzzaman, D. M., et al. (2022). Multi-modal wound classification using wound image and location by deep neural network. Scientific Reports, 12, 20057.