New Imaging Tech Eases Bone Removal in Cochlear Surgery

SPIE, the International Society for Optics and Photonics

Cochlear implant surgery helps people with severe hearing loss by placing an electronic device inside the inner ear. To reach the inner ear, surgeons must first remove part of a bone behind the ear in a procedure called mastoidectomy. The shape of this surgically created cavity varies from patient to patient and has no clear outer boundary, making it difficult to anticipate with traditional image-analysis tools. Better prediction of this shape before surgery could support navigation systems, robotic tools, and improved visualization for surgeons, along with better outcomes for patients.

Scientists have struggled for years to build computer tools that can reliably predict the mastoidectomy shape. Now, as reported in the Journal of Medical Imaging (JMI), a team of researchers from St. Mary's University, Trinity University, Vanderbilt University, and the Center for Advanced AI has developed an AI method that predicts how much bone will be removed during a key step of cochlear implant surgery. Their approach may make surgical planning safer and more efficient, especially in settings where experts cannot manually label large sets of medical images.

The research team created a two‑part AI method that learns from medical images even when clean, hand‑labeled data is not available:

  1. The system compares pre‑surgery CT scans to post‑surgery CT scans and teaches itself what bone was removed. Even though the post‑surgery images are noisy, the AI uses a type of mathematical comparison that focuses on overall structure rather than fine details. This helps it learn the bone‑removal pattern without any expert‑drawn labels.
  2. The predictions from the first model are used as "weak labels" for a second model. This second model uses a special 3D loss function based on the Student‑t distribution, which helps it handle messy or imperfect data. This step improves accuracy and makes the final prediction more reliable.
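The article does not give the exact form of the loss, but the robustness idea behind the second step can be illustrated: compared with a squared-error (Gaussian) penalty, a Student‑t negative log-likelihood grows only logarithmically for large residuals, so grossly wrong "weak labels" pull the model far less during training. A minimal sketch, with the degrees-of-freedom parameter `nu` chosen arbitrarily for illustration:

```python
import math

def squared_error(residual):
    # Gaussian-style penalty: grows quadratically with the residual,
    # so a single bad label can dominate the training signal
    return residual ** 2

def student_t_loss(residual, nu=2.0):
    # Student-t negative log-likelihood (up to constants):
    # grows only logarithmically for large residuals,
    # which down-weights outliers in messy weak labels
    return (nu + 1) / 2 * math.log(1 + residual ** 2 / nu)

# Small residuals are penalized comparably; large (outlier)
# residuals are penalized far less under the Student-t loss
for r in (0.1, 1.0, 10.0):
    print(r, squared_error(r), round(student_t_loss(r), 3))
```

For a residual of 10, the squared error is 100, while the Student‑t penalty stays below 6: this bounded growth is what makes such losses forgiving of imperfect labels.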

Together, these two steps form a new way of training medical imaging systems that works even when perfect training data is impossible to get.

The researchers tested their method using 751 pairs of pre‑ and post-surgery CT scans. On 32 examples manually labeled by surgeons, the AI system achieved a mean Dice score of 0.72, outperforming several popular medical imaging models. A higher Dice score means the predicted shape more closely matches the real shape seen after surgery.
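The Dice score itself is a standard overlap measure: twice the shared volume of two binary masks divided by their combined volume, ranging from 0 (no overlap) to 1 (identical shapes). A minimal sketch, with the 3D voxel masks flattened to 0/1 lists for illustration:

```python
def dice_score(pred, truth):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|) for binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2 * intersection / total

pred  = [1, 1, 1, 0, 0, 1]  # predicted bone-removal voxels
truth = [1, 1, 0, 0, 1, 1]  # surgeon-labeled voxels
print(dice_score(pred, truth))  # 2*3 / (4 + 4) = 0.75
```

A mean score of 0.72 therefore indicates substantial, though not perfect, agreement between predicted and surgeon-labeled cavities.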

The team also showed that they could create a 3D model of the predicted post‑surgery bone surface. This could one day help guide surgeons during the operation or help train medical students.
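The article does not detail the mesh-construction step, but a common first move is to identify the surface voxels of a predicted binary cavity (foreground voxels with at least one background 6-neighbor), which a meshing algorithm such as marching cubes would then triangulate. A toy sketch on a small 3D grid, independent of the paper's actual pipeline:

```python
def surface_voxels(volume):
    """Return coordinates of foreground voxels that touch background
    under 6-connectivity; voxels outside the grid count as background."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])

    def voxel(z, y, x):
        if 0 <= z < nz and 0 <= y < ny and 0 <= x < nx:
            return volume[z][y][x]
        return 0  # outside the grid is background

    neighbors = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1))
    surface = []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                # a foreground voxel is on the surface if any of its
                # six face-neighbors is background
                if voxel(z, y, x) and not all(
                    voxel(z + dz, y + dy, x + dx) for dz, dy, dx in neighbors
                ):
                    surface.append((z, y, x))
    return surface

# A solid 3x3x3 cube: only the centre voxel (1, 1, 1) is interior
cube = [[[1] * 3 for _ in range(3)] for _ in range(3)]
print(len(surface_voxels(cube)))  # 26 of the 27 voxels lie on the surface
```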

This research is important because it demonstrates a new way to build AI systems for medical imaging when detailed labels are scarce or too difficult to produce. Many parts of the human body have complex shapes that are hard to outline by hand, and this method could help doctors analyze them more easily.

For patients, the technology could eventually make cochlear implant surgery safer by giving surgeons a clearer picture of what to expect. It could also support robotic tools or advanced navigation systems in the operating room.

Although the results are promising, the researchers note that more tests in different hospitals are needed before the tool can be used in everyday clinical care. They also hope to add more realistic texture to the 3D models to make them easier for surgeons to use during real procedures.

For details, see the original Gold Open Access article by Y. Zhang et al., "From preoperative computed tomography to postmastoidectomy mesh construction: mastoidectomy shape prediction for cochlear implant surgery," J. Med. Imaging 13(1), 014004 (2026), doi: 10.1117/1.JMI.13.1.014004.
