We're thrilled to welcome Bayer as the latest member of the Stanford AIMI community! Our Industry Affiliate Program provides a dynamic platform for a two-way exchange of expertise, with tailored engagement and educational opportunities. Our members contribute insights on real-world use cases and provide essential support for our research, while the AIMI community enriches the experience for our industry affiliates through an open dialogue that connects academic work to real-world applications. This collaboration advances health and medicine through shared AI initiatives and like-minded innovation. We look forward to great things ahead with Bayer! Discover more about the AIMI Industry Affiliate program and how you can get involved: https://lnkd.in/eagfttri #AIMIIndustryAffiliate #AcademicIndustryPartnerships
Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI)
Higher Education
Palo Alto, California 82,457 followers
On a mission to develop and support transformative medical AI applications
About us
Stanford has established the AIMI Center as a center of excellence to develop, evaluate, and disseminate artificial intelligence systems to benefit all patients. Our center conducts research that solves clinically important medical problems using machine learning and other artificial intelligence techniques.
- Website: http://aimi.stanford.edu
- Industry: Higher Education
- Company size: 51-200 employees
- Headquarters: Palo Alto, California
- Type: Educational
- Founded: 2018
Locations
Primary
1701 Page Mill Rd
Palo Alto, California 94304, US
Employees at Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI)
- Ty Vachon, M.D.
  Radiologist | Entrepreneur | Navy Veteran
- Junaid Bajwa
  Chief Medical Scientist at Microsoft
- Avishkar (Avi) Sharma, MD, CIIP
  Director of AI | Body Radiologist | HealthTech Advisor
- Zach Harned
  Transactional Counsel for Tech, AI/ML and Digital Health Innovators | Trusted Privacy & IP Advisor
Updates
-
Reflecting on an inspiring trip to Toronto in June, where we connected with many AI in medicine colleagues! The first stop was the University of Toronto for T-CAIREM's inaugural symposium, "Multi-Modal Data and the Future of Health AI." This was a fantastic opportunity to explore how multi-modal AI can enhance clinical practice and contribute to a broader understanding of health. Next, we attended the Alliance for Centers of AI in Medicine (ACAIM)'s annual in-person meeting. ACAIM is a consortium of academic centers from around the world with a focus on AI in all dimensions of medicine and healthcare. Founded by Anthony Chang, MD, MBA, MPH, MS, the group meets regularly to facilitate communication and collaboration among members. As a founding member, we're gratified to witness ACAIM's growth to more than 100 centers globally, including 40 AI initiatives in pediatric health institutions. We were proud to have Alaa Youssef from the AIMI Center represent us on the panels. A big thank you to our hosts in Canada for a memorable experience! #AIInMedicine #FutureOfHealth #ArtificialIntelligence
-
Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) reposted this
🎉 Excited to introduce Merlin, a vision language foundation model for 3D computed tomography! Merlin is trained to understand 3D abdominal CT scans using supervision from:
- Structured electronic health records (1.8 million codes)
- Natural language radiology reports (6 million tokens)
📰 Paper: https://lnkd.in/geCBMue5
Fantastic work led by Louis Blankemeier with a large team of clinical experts across a multitude of domains!
👉 Over 85M CT scans are performed annually in the US, a number increasing at 6% per year and contributing to a shortage of radiologists. AI, particularly vision language models (VLMs), could help offset this. However, current medical VLMs are generally limited to 2D images and short reports, and do not leverage electronic health record (EHR) data for supervision. Merlin addresses these limitations by incorporating all of these modalities, including EHR data and long radiology reports, during pretraining.
👉 We evaluate Merlin on 6 task types and 752 individual tasks:
- Zero-shot findings classification (31 findings)
- Phenotype classification (692 phenotypes)
- Zero-shot cross-modal retrieval (image to findings and image to impressions)
- 5-year disease prediction (6 diseases)
- Radiology report generation
- 3D semantic segmentation (20 organs)
👉 Our evaluation dataset includes 5k CTs from Stanford and 7k CTs from an external site (in addition to the TCIA and TotalSegmentator datasets).
❗️We outperform many task-specific baselines using Merlin as the underlying model.
❗️For tasks such as zero-shot classification, we see F1 scores of 0.7-0.8 across many common findings!
❗️For future disease prediction, we can accurately predict future CVD/MSK diseases using only tens of positive labels!
❗️We can attach LLMs to Merlin using adapters to directly generate the Findings sections of radiology reports!
❗️We validate, using counterfactual reasoning, that the features learned during model training are likely not spurious.
💰 We embrace being "GPU poor" and show how to train such a model using ONLY 1 A6000 GPU in <2 days!!
👐 We plan on sharing code, models, and most importantly, a new dataset of 25k CT images and radiology reports soon! Work done at Stanford Radiology and the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI)
-
The future of AI in healthcare and medicine looks bright! ⭐ Just closed the book on two terrific weeks of growth and exploration at the AIMI Summer Research Internship and Bootcamp! 📘🚀
The AIMI Summer Research Internship offers high school students the opportunity to explore technical and clinical aspects of AI in healthcare. Out of 1,500 incredibly talented applicants this year, 26 interns and 5 student leads were selected to join us for hands-on research, career development, and lectures spanning a variety of subjects in healthcare AI. Topics included introductions to U.S. healthcare, AI and machine learning, basics of computer vision, language models, detecting and mitigating bias in machine learning, and more!
Concurrently, we ran the inaugural AIMI Summer Health AI Bootcamp, an enriching and engaging educational experience for high school learners of all technical levels. This year's program received 1,700 stellar applications, and 36 students were chosen to participate. We explored foundational concepts and principles of machine learning, deep learning and foundation models for healthcare, the current state of AI applications in health and medicine, ethics and responsible health AI, policy and legal aspects of medical AI technologies, evaluation and metrics, and so much more! It was such a joy to virtually bring together students from all corners of the U.S.
These learning opportunities wouldn't be possible without the help and leadership of our incredible community!
High school programs co-directors: Johanna Kim & Alaa Youssef
Guest faculty: Matthew Lungren MD MPH, Serena Yeung, Kevin Schulman, Jason Fries, Zach Harned
Mentors: Alaa Youssef, Jean-Benoit Delbrouck, Maya Varma, Sophie Ostmeier, Zhihong Chen and Magda Paschali
Lunch & Learn speakers: Curtis Langlotz, Dr Geraldine Dean, Jonathan H. Chen, Enhao GONG, Angela Aristidou, Parminder Bhatia, Jessica Mega, and Thomas Wang
Staff: Michelle Phung, MS, PMP, Jacqueline Thomas, Gabriel Yip
⭐ Our next high school event will be a NextGen Tech Talk on August 26, 2024. RSVP here: https://lnkd.in/eT-WCdfG #HealthAI #AIInMedicine #ResponsibleAI #MachineLearning #LanguageModels
-
CheXpert Plus, the latest and most advanced version of the CheXpert dataset, was recently released and is already being used by many researchers around the world! We are so excited by the depth and breadth of its offerings and what they mean for human health!
The history: The original CheXpert dataset, released by the AIMI Center five years ago under Matthew Lungren MD MPH's leadership, is a comprehensive collection of medical text and images. CheXpert has been used by researchers around the world and cited more than 6,000 times!
Latest developments: This effort has grown from its original form and continues to evolve! The newest iteration, CheXpert Plus, aims to support research on contrastive learning and algorithmic fairness. It includes radiology reports, demographic data, DICOM images, pathology labels and RadGraph extractions.
Here's a look at this incredible dataset and its related new reports by the numbers:
- 223,462 unique pairs of radiology reports and chest X-rays
- Across 187,711 studies
- From 64,725 patients
- 187,711 radiology reports accompany the images
- Each report is divided into 11 subsections
- Annotations for 14 different chest pathologies across the studies, alongside eight metadata elements concerning patient information
The data also underwent an extensive de-identification process, with help from the privacy offices at Stanford and our friends at VinBrain. Special thanks to Andrew Ng, Pranav Rajpurkar and Matthew Lungren for their original CheXpert vision!
CheXpert resources:
- Dataset: https://lnkd.in/ea75XZAJ
- Arxiv paper: https://bit.ly/3xDIwmX
- De-ID algorithm paper: https://bit.ly/3RJD88P
- Hugging Face De-ID algorithm: https://bit.ly/3zmljq0
See more in this X post from the AIMI Center's Curtis Langlotz: https://bit.ly/3RJwr6A #CheXpert #Datasets #BigData #AIInMedicine #Radiology #MedicalImaging
-
Episode 4 of the AIMI Center's NextGen Tech Talk with Jessica Mega begins *today* at 12 p.m. PDT! Don't miss this free webinar, open to all ages, moderated by Lynbrook High School's Nidhi Parthasarathy and incoming Stanford University freshman Taylor Tam. Event details: https://lnkd.in/gzsYffmH Registration: https://bit.ly/3RH9Asc About NextGen Tech Talks: This engaging live webinar series is tailored for high school students who are interested in AI in medicine and health. These online events provide an opportunity to gain insights from distinguished experts who are shaping healthcare through technology. Participants will also have the chance to interact directly with speakers through a live Q&A session! #NextGenTechTalk #MachineLearning #MedicalImaging #AIInMedicine
-
Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) reposted this
Thank you Prof Curtis Langlotz for inviting me to speak about my AI journey in the UK and Europe at the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) during its annual AIMI Summer Research Internship and Summer AI Bootcamp program. The audience's enthusiasm was remarkable as we discussed starting an AI journey, including tips, learnings and how to initiate AI projects. Key topics included practical AI use cases, the importance of monitoring tools, addressing risks and ethical implications, breaking geographical barriers for global collaboration, and the need for careful oversight to ensure ethical and equitable implementation. Well done to the 70 selected participants and good luck with the rest of your program! Thank you Michelle Phung, MS, PMP for organising such a great event and Johanna Kim for chairing.
-
Congratulations to Stanford Institute for Human-Centered Artificial Intelligence (HAI) on their fifth anniversary! HAI has become a leading force in driving advancements for responsible and #HumanCenteredAI to truly benefit society. Today's commemorative #HAIatFIVE event brought together experts to address some of the most pressing issues for AI, exploring how to ensure this technology is developed with humans at the center. AIMI Center's director Curtis Langlotz provided crucial insights on advancements in AI for health and medicine, highlighting the importance of continued innovation and partnerships in this field. We look forward to the continued impact we will make together in AI for health and medicine. Congratulations, again! Learn more: https://lnkd.in/eRvaXiqM
-
We are thrilled to share that EchoNet AI, developed through research supported in part by AIMI, has received FDA clearance! This major milestone highlights the power of seed funding in driving innovation and improving healthcare. We are proud and honored to contribute to this impactful journey. Congratulations to James Zou, Euan Ashley, and David Ouyang, MD! 🎉 The EchoNet-Dynamic, EchoNet-LVH, and EchoNet-Pediatric datasets are available through the AIMI Shared Data program: https://lnkd.in/ebBb-ubX #EchoNet #shareddata #opendata #StanfordAIMI
🎉 Major milestone: our EchoNet AI has received FDA clearance! Led by David Ouyang, MD and Bryan He, we started developing #echonet in 2019. Validated in 2 Nature papers w/ an RCT, super exciting to see this come to market to help clinicians and patients! 💓 Big congratulations to the InVision team and many thanks to all the wonderful collaborators along the way! Press article https://lnkd.in/gExP8mMa
-
Following our exciting Stanford Health AI week, join us for two more insightful events on ethical machine learning design and real-world perspectives on AI-generated patient communication.
During today's IBIIS-AIMI Seminar, Mildred Cho discusses how an approach to machine learning design that considers the perspectives of specific stakeholder groups can guide the development of ethical machine learning for precision medicine.
- "Facilitating Patient and Clinician Value Considerations into AI for Precision Medicine"
- Wednesday, May 22, 11 a.m. - 12 p.m. PDT
- Virtual and in-person attendance
- Details and registration: https://lnkd.in/eQiBe8p2
At Thursday's BMIR (Center for Biomedical Informatics Research) Research Colloquium, Stanford Health Care's Patricia Garcia, UCI Health's Danielle Perret Karimi, MD, and UC San Diego's Ming Tai-Seale will present "AI Generated Draft Responses to Patient Inbox Messages: Perspectives from 3 Real-World Deployments."
- Thursday, May 23, 12 - 1 p.m. PDT
- In-person and virtual attendance
- Details and registration: https://lnkd.in/eQD35KCH
We hope to see you there! #IBIIS #MachineLearning #EthicalAI #PrecisionMedicine #PatientPortals