RSNA 2024 - DeID using ChatGPT and Multimodal LLMs (MLLM)

Deep Learning Lab Session - DLL08

Tuesday, Dec 3

11:00 AM - 12:00 PM CST

DEEP LEARNING LAB

Speakers

  • George Shih
  • Adam Flanders
  • Errol Colak
  • Hui-Ming Lin
  • Chinmay Singhal

Outline

1. Session Intro - George

2. DICOM intro (DICOM tag deid issues) - Errol

3. NEW DeID tools from RSNA - Adam

4a. [Hands-On] DICOM Tags Exploration with LLMs - Hui-Ming

DICOM Tags Exploration with ChatGPT

💡 Example prompts:

DICOM Metadata

Tell me a bit about the patient and the exam performed.
Analyze the DICOM metadata and give me all the values that contain personal health information.

Show this in a table format.
Identify all the DICOM metadata containing potential personal health information (PHI).
These can be directly identifying information (such as name, unique ID, etc.) or indirectly
identifying information (such as demographics, other IDs, etc.).

Do not include fields that do not have a PHI risk, such as technical details.

Show this in table format with the field name and value.
Deidentify the DICOM metadata containing personal health information using fake information.

Show the values before and after in table format.
Anonymize all the potential personal health information in the DICOM metadata.

Show the values before and after in table format.
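To try these prompts on your own data, you first need the DICOM header as plain text. A minimal sketch along the lines below (assuming the pydicom package and a local file named example.dcm, neither of which is provided by this lab) dumps the metadata, minus pixel data, so it can be pasted into ChatGPT:

```python
# Minimal sketch: dump DICOM metadata (no pixel data) as text for ChatGPT.
# Assumes pydicom is installed and example.dcm is a local DICOM file.
import pydicom

ds = pydicom.dcmread("example.dcm", stop_before_pixels=True)

for elem in ds:
    if elem.VR == "SQ":  # skip nested sequences to keep the dump compact
        continue
    print(f"{elem.tag} {elem.name}: {elem.value}")
```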

Radiology Report

Analyze the radiology report and give me a list of all the personal health information.
Anonymize all the potential personal health information on the radiology report.

4b. [Hands-On] DICOM Image with (Fake) Burned-in PHI Exploration with Multimodal LLMs - Chinmay

Using the ChatGPT vision model (GPT-4o) to examine radiology images with burned-in PHI

💡 Example images with fake burned-in PHI:

Chest X-ray, ultrasound, and CT abdomen (see the chest-xray, ultrasound, and ct-abdomen images in this repository)
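
The hands-on portion uses the ChatGPT web interface, but the same check can be scripted. A rough sketch (assuming the openai Python package, an OPENAI_API_KEY in the environment, and a local copy of the chest x-ray example image under a hypothetical file name) sends one image to GPT-4o and asks it to list burned-in PHI:

```python
# Rough sketch: ask GPT-4o to list burned-in PHI in a local image.
# Assumes the openai package, OPENAI_API_KEY in the environment,
# and a local file chest-xray.png (hypothetical name).
import base64
from openai import OpenAI

client = OpenAI()

with open("chest-xray.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "List any burned-in personal health information visible in this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```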

5. [OPEN SOURCE LLMs] Using local Multimodal LLMs for PHI detection on images - George

💡 Llama3.2-Vision-11b


Using Open Source Local Multimodal LLMs (SLIDES)
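
For a scripted version of the same idea against a local model, a sketch like the one below (assuming the ollama Python package, a running Ollama server, and that the llama3.2-vision model has already been pulled with `ollama pull llama3.2-vision`; the image file name is hypothetical) asks Llama3.2-Vision-11b about burned-in PHI:

```python
# Sketch: ask a locally served Llama3.2-Vision model about burned-in PHI.
# Assumes the ollama package, a running Ollama server, and that the
# llama3.2-vision model has already been pulled.
import ollama

response = ollama.chat(
    model="llama3.2-vision",
    messages=[{
        "role": "user",
        "content": "Does this radiology image contain any burned-in "
                   "personal health information? List everything you see.",
        "images": ["chest-xray.png"],  # hypothetical local example image
    }],
)
print(response["message"]["content"])
```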

REFERENCES:

❤️ Ollama (LLM server) ❤️

https://ollama.com/

🌍 Open WebUI (Web app used with Ollama) 🌍

https://openwebui.com/

Anything LLM (Desktop app with Ollama)

https://anythingllm.com/

🔥 LM Studio (Desktop App with LLaMA.cpp as LLM server) 🔥

https://lmstudio.ai/

Collama (Ollama + Google Colab -- free GPU)

https://github.com/5aharsh/collama/

Tutorial: Install Ollama + Open WebUI on Mac / Linux / Windows (WSL)

https://www.saltyoldgeek.com/posts/ollama-llama3-openwebui/

6. Wrap-Up - George / Adam


Appendix

Use ChatGPT to generate a Python script to de-identify DICOM tags

This URL forwards to the latest Google Colab notebook for DICOM DeID coded by ChatGPT: Colab notebook

💡 Example of python notebook output:

(screenshot: example-chatgpt-output)
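
As a rough illustration of what such a generated script tends to look like (this is not the notebook's actual code), the sketch below blanks a small, assumed subset of PHI tags with pydicom; real de-identification should follow a full profile such as DICOM PS3.15:

```python
# Simplified sketch of a ChatGPT-style DICOM tag de-identification script.
# Assumes pydicom and local files input.dcm / deid.dcm (hypothetical names).
import pydicom

# Illustrative subset of commonly de-identified tags (not a complete profile).
PHI_KEYWORDS = [
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "OtherPatientIDs", "ReferringPhysicianName", "InstitutionName",
    "AccessionNumber",
]

ds = pydicom.dcmread("input.dcm")

for keyword in PHI_KEYWORDS:
    if keyword in ds:
        ds.data_element(keyword).value = ""  # blank the value, keep the tag

ds.remove_private_tags()  # drop vendor private tags
ds.save_as("deid.dcm")
```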
