AI at Johns Hopkins combines the intuition and understanding of human intelligence with the raw power of empirical AI to transform discovery.

Background

Submit Research/Proposal

coming soon

News

Events

Related Institutes
11:00 am – 12:00 pm
January 13
Zoom Link
Abstract: Machine learning algorithms are everywhere, ranging from simple data analysis and pattern recognition tools used across the sciences to complex systems that achieve superhuman performance on various tasks. Ensuring that they are safe (that they do not, for example, cause harm to humans or act in a racist or sexist way) is therefore not a hypothetical problem to be dealt with in the future, but a pressing one that we can and should address now.
In this talk I will discuss some of my recent efforts to develop safe machine learning algorithms, and particularly safe reinforcement learning algorithms, which can be responsibly applied to high-risk applications. I will focus on the article "Preventing undesirable behavior of intelligent machines," recently published in Science, describing its contributions, our subsequent extensions, and important areas of future work.
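To give a rough sense of the kind of high-probability safety guarantee the talk describes, here is a minimal sketch of a safety test in the spirit of the Seldonian framework. The function names, the simple Hoeffding bound, and the deploy/"No Solution Found" interface are illustrative assumptions, not the paper's exact method:

```python
import math

def hoeffding_lower_bound(samples, delta, value_range=1.0):
    """(1 - delta)-confidence lower bound on the mean of samples bounded
    within value_range, via Hoeffding's inequality."""
    n = len(samples)
    mean = sum(samples) / n
    return mean - value_range * math.sqrt(math.log(1 / delta) / (2 * n))

def safety_test(candidate_returns, baseline_performance, delta=0.05):
    """Deploy the candidate policy only if we are (1 - delta)-confident
    that its expected return exceeds the baseline; otherwise refuse and
    report "No Solution Found" rather than risk unsafe behavior."""
    lb = hoeffding_lower_bound(candidate_returns, delta)
    return "deploy" if lb > baseline_performance else "No Solution Found"
```

The key design choice, echoed in the abstract, is that the algorithm may decline to return a solution: when the data cannot support the guarantee, it says so instead of silently deploying an unvetted policy.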

Bio: Philip Thomas is an assistant professor at UMass. He received his PhD from UMass in 2015 under the supervision of Andy Barto, after which he worked as a postdoctoral research fellow at CMU for two years under the supervision of Emma Brunskill before returning to UMass. His research focuses on creating machine learning algorithms, particularly reinforcement learning algorithms, that provide high-probability guarantees of safety and fairness. He emphasizes that these algorithms are often applied by people who are experts in their own fields, but who may not be experts in machine learning and statistics, and so the algorithms must be easy to apply responsibly. Notable accomplishments include publication of a paper on this topic in Science, titled "Preventing Undesirable Behavior of Intelligent Machines," and testifying on this topic before the U.S. House of Representatives Task Force on Artificial Intelligence at a hearing titled "Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services."
Read More
3:00 pm – 5:00 pm
January 18
Searching the Scientific Literature: Why Google Doesn't Go Deep Enough
Stephen Stich
Librarian for Science and Engineering, Milton S. Eisenhower Library
Johns Hopkins University
January 18, 2021
3-5 PM ET
Registration link
Though Google Scholar has its merits, a comprehensive subject search for relevant scientific references requires the use of multiple scholarly resources. This workshop will cover the strengths and weaknesses of five important subscription-based databases and how to conduct comprehensive, efficient searches related to both chemistry and engineering.
The focus will be on advanced searching with Google Scholar, Compendex, REAXYS, SciFinder-n, Scopus, and Web of Science.
Instructor: Stephen Stich has been working with science and engineering faculty, staff, and students for more than 25 years. He is currently the Academic Liaison Librarian to the departments of Chemistry; Earth and Planetary Science; Chemical and Biomolecular Engineering; Civil and Systems Engineering; Environmental Health and Engineering; Materials Science and Engineering; Mechanical Engineering; and the associated Johns Hopkins Institutes. He provides library support in the research and teaching that goes on in these departments at the Homewood campus.
1:00 pm – 5:15 pm
January 20
Culmination of the Materials in Extreme Dynamic Environments Collaborative Research Alliance (MEDE CRA).
This is an invitation-only event.
11:00 am – 12:00 pm
February 15
ABSTRACT: Let us consider a difficult computer vision challenge. Would you want an algorithm to determine whether you should get a biopsy, based on an x-ray? That's usually a decision made by a radiologist, based on years of training. We know that algorithms haven't worked perfectly for a multitude of other computer vision applications, and biopsy decisions are harder than just about any other application of computer vision that we typically consider. The interesting question is whether it is possible that an algorithm could be a true partner to a physician, rather than making the decision on its own. To do this, at the very least, we would need an interpretable neural network that is as accurate as its black-box counterparts. In this talk, I will discuss two approaches to interpretable neural networks: (1) case-based reasoning, where parts of images are compared to other parts of prototypical images for each class, and (2) neural disentanglement, using a technique called concept whitening. The case-based reasoning technique is strictly better than saliency maps, and the concept whitening technique provides a strict advantage over the post hoc use of concept vectors. Here are the papers I will discuss:
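To make the case-based reasoning idea concrete, here is a minimal sketch of a "this looks like that" classification step in the spirit of ProtoPNet: each image patch's feature vector is compared against learned class prototypes, and small distances map to large activations. The function names, toy feature vectors, and this simplified scoring are illustrative assumptions, not the full model:

```python
import math

def prototype_similarity(patch_feature, prototype, eps=1e-4):
    """Map the squared L2 distance between a patch's feature vector and
    a class prototype to a similarity score: distance near zero gives a
    large activation, large distance gives a score near zero."""
    d = sum((a - b) ** 2 for a, b in zip(patch_feature, prototype))
    return math.log((d + 1) / (d + eps))

def classify(patch_features, prototypes_by_class):
    """Score each class by its best 'this looks like that' match: the
    maximum similarity over all patches and that class's prototypes.
    Returns the winning label and the per-class scores, so the evidence
    (which prototype matched) remains inspectable."""
    scores = {
        label: max(prototype_similarity(f, p)
                   for f in patch_features for p in protos)
        for label, protos in prototypes_by_class.items()
    }
    return max(scores, key=scores.get), scores
```

The interpretability comes from the structure itself: the prediction is justified by pointing at the specific prototype patch that activated, rather than by a saliency map computed after the fact.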



This Looks Like That: Deep Learning for Interpretable Image Recognition. NeurIPS (spotlight), 2019. https://arxiv.org/abs/1806.10574
IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography. 2021. https://arxiv.org/abs/2103.12308
Concept Whitening for Interpretable Image Recognition. Nature Machine Intelligence, 2020. https://rdcu.be/cbOKj
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 2019. https://rdcu.be/bBCPd
Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges. 2021. https://arxiv.org/abs/2103.11251



BIO: Coming Soon