AI at Johns Hopkins combines the intuition and understanding of human intelligence with the raw power of empirical AI to transform discovery.

Events


MINDS & CIS Seminar – Amit Singer

Clark Hall, Room 110
12:00 pm – 1:00 pm
November 29
HEMI's Extreme Tea

Malone Hall Lobby
3:30 pm – 4:30 pm
November 29
Come out and join us for HEMI's Extreme Tea! Take some time out of your day and enjoy tea, coffee, cake, and camaraderie.
When: Every Tuesday from 3:30 p.m. to 4:30 p.m.
Where: Malone Hall Lobby

LCSR Seminar: Careers in Robotics: A Panel Discussion With Experts From Industry and Academia

Hackerman B17
12:00 pm – 1:00 pm
November 30
 
Panel Speaker 1: Erin Sutton, PhD
Guidance and Control Engineer at the JHU Applied Physics Laboratory
Ph.D. Mechanical Engineering 2017, M.S. Mechanical Engineering 2016
Erin Sutton is a mechanical engineer at Johns Hopkins Applied Physics Laboratory. She received a BS in mechanical engineering from the University of Dayton and an MS and a PhD in mechanical engineering from Johns Hopkins University. She spent a year at the Naval Air Systems Command designing flight simulators before joining APL in 2019. Her primary research interest is in enhancing existing guidance and control systems with autonomy, and her recent projects range from hypersonic missile defense to civil space exploration.
 
Panel Speaker 2: Star Kim, PhD
Management Consultant at McKinsey & Company
Ph.D. Mechanical Engineering 2021
Star is an Associate at McKinsey & Company, a global business management consulting firm. At JHU, she worked on personalizing cardiac surgery by creating patient-specific vascular conduits in Dr. Axel Krieger's IMERSE lab. She built virtual reality software that lets doctors design and evaluate conduits for each patient. Her team filed a patent and founded a startup together, which received funding from the State of Maryland. Before joining JHU, she was at the University of Maryland, College Park and the U.S. Food and Drug Administration, where she developed and tested patient-specific medical devices and systems such as virtual reality mental therapy and orthopedic surgical cutting guides.
 
Panel Speaker 3: Nicole Ortega, MSE
Senior Robotics and Controls Engineer at Johnson and Johnson, Robotics and Digital Solutions
JHU MSE Robotics 2018, JHU BS in Biomedical Engineering 2016
At Johnson and Johnson, Nicole works on the Robotics and Controls team to improve the accuracy of their laparoscopic surgery platform. Before joining J&J, Nicole worked as a contractor for NASA supporting Gateway and at Think Surgical supporting their next-generation knee arthroplasty robot.
 
Panel Speaker 4: Ryan Keating, MSE
Software Engineer at Nuro
BS Mechanical Engineering 2013, MSE Robotics 2014
Bio: After finishing my degrees at JHU, I spent two and a half years working at Carnegie Robotics, where I was primarily involved in the development of a land-mine-sweeping robot and an inertial navigation system. Following a brief stint at SRI International prototyping a sandwich-making robot system (yes, really), I have been working on the perception team at Nuro for the past four and a half years. I've had the opportunity to work on various parts of the perception stack over that time, but my largest contributions have been to our backup autonomy system, our object tracking system, and the evaluation framework we use to validate changes to the perception system.

Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning”

Hackerman Hall B17 @ 3400 N. Charles Street, Baltimore, MD 21218
12:00 pm – 1:15 pm
December 2
Abstract
One of the keys to success in machine learning applications is to improve each user's personal experience via personalized models. A personalized model can also be a more resource-efficient solution than a general-purpose model, because it focuses on a particular sub-problem for which a smaller model architecture can be good enough. However, training a personalized model requires data from the particular test-time user, which are not always available due to their private nature and technical challenges. Furthermore, such data tend to be unlabeled, as they can be collected only at test time, after the system has been deployed to user devices. One could rely on the generalization power of a generic model, but such a model can be too computationally or spatially complex for real-time processing on a resource-constrained device. In this talk, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models require zero or few data samples from the test-time users, while still achieving the personalization goal. To this end, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way, it is a step toward more available and affordable AI for society.
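
The core idea in the abstract, a small personalized model adapted without any labeled data from the user, can be illustrated with a short sketch. The Python/PyTorch code below is a hypothetical toy example, not the speaker's implementation: a frozen general-purpose model supplies pseudo-clean targets for a user's unlabeled recordings, and a much smaller per-user model is fine-tuned against them. All names here (TinyDenoiser, generalist, user_batches) are invented for illustration.

# Hypothetical sketch (not the speaker's actual method): personalizing a small
# speech-enhancement model using only *unlabeled* recordings from one user.
# A large pretrained "generalist" model (assumed to exist) supplies pseudo-clean
# targets, and a compact per-user model is fine-tuned to match them, so no
# ground-truth clean speech from the user is ever needed.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Compact per-user model: small enough for real-time, on-device use."""
    def __init__(self, n_freq=257, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, n_freq), nn.Sigmoid(),  # per-bin mask in [0, 1]
        )

    def forward(self, noisy_mag):                     # (batch, frames, n_freq)
        return self.net(noisy_mag) * noisy_mag        # masked magnitudes

# Placeholders for things assumed to exist elsewhere:
#   generalist(x) -> pseudo-clean magnitudes from a large, frozen teacher model
#   user_batches  -> unlabeled noisy spectrogram batches collected on-device
generalist = lambda x: 0.8 * x
user_batches = [torch.rand(8, 100, 257) for _ in range(20)]

personal = TinyDenoiser()
opt = torch.optim.Adam(personal.parameters(), lr=1e-3)

for noisy in user_batches:
    with torch.no_grad():
        pseudo_clean = generalist(noisy)              # no labels required
    loss = nn.functional.mse_loss(personal(noisy), pseudo_clean)
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the personal model only has to handle one user's acoustic conditions, it can be far smaller than the teacher, which is the resource-efficiency argument the abstract makes.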
Biography
Minje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University, where he leads his research group, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic, consulting for Amazon Lab126. At IU, he is affiliated with various programs and labs, including Data Science, Cognitive Science, the Dept. of Statistics, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC, he worked as a researcher at ETRI, a national lab in Korea, from 2006 to 2011. Before then, he received his Master's and Bachelor's degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honors) in 2006 and 2004, respectively. He is a recipient of various awards, including the NSF CAREER Award (2021), the IU Trustees Teaching Award (2021), the IEEE SPS Best Paper Award (2020), and Google and Starkey grants for outstanding student papers at ICASSP 2013 and 2014, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal of Audio, Speech, and Music Processing and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer, program committee member, or area chair for the major machine learning and signal processing venues. He has filed more than 50 patent applications as an inventor.