Abstract

Neuromorphic computing represents an emerging form of synthetic biological intelligence that aims to model the cognitive capacities of sentient beings. By integrating biological components with artificial intelligence, these ‘biocomputers’ are expected to advance our understanding of human cognition and provide new pathways toward sustainable high-performance computing[1]. Yet entities with human-level cognitive performance may call for moral patiency, a form of moral status whereby researchers and other agents become responsible for their wellbeing[2]. Ethicists debate which entities could acquire such moral status[3]. Empirical studies of the public’s moral intuitions about human brain organoids, for example, support conflicting theories about the relevance of consciousness for moral consideration[4, 5]. We hypothesized that an underlying psychology of consciousness, reflected in variations in an individual’s attribution of consciousness to various entities, influences the formation of attitudes concerning the moral status of biocomputers. We investigated the interaction between psychological attributions of consciousness and attitudes toward the capacities and moral consideration of biocomputers using an online survey that yielded N = 190 high-quality responses. Respondents were asked to assess the degree to which various humans (adults, newborns), non-humans (monkeys, cats, plants), and entities that are not “whole” organisms (e.g. AI, human neurons, brain organoids) are conscious. Respondents were then given a short description of biocomputers, followed by questions assessing the perceived cognitive capacities and moral attributes of biocomputers. Using principal component analysis and k-means clustering, we discovered a latent structure in public attributions of consciousness.
Participant views toward consciousness could be clustered into three groups: i) substrate-independent (most beings are conscious), ii) spectral (consciousness exists on a spectrum), and iii) binary or substrate-dependent (beings either are or are not conscious, with only biological organisms being conscious). These cluster identities were highly predictive of attitudes toward the capacities, moral attributes, and utility of biocomputers. We also found that the perceived benefits of using biocomputers, relative to ethical concerns, uniformly increased as respondents perceived biocomputers to be more human-like, while concern for their wellbeing remained unchanged. These findings demonstrate how the psychology of consciousness acts to foreground attitudes regarding the ethical treatment of neuromorphic entities.
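The analysis pipeline named in the abstract (principal component analysis of per-entity consciousness ratings, followed by k-means partitioning of respondents into three clusters) can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's actual code: the entity list, the 7-point rating scale, the choice of two principal components, and the synthetic data are all assumptions.

```python
# Illustrative sketch: PCA on respondents' consciousness ratings, then
# k-means (Lloyd's algorithm) to recover three attitude clusters.
# All data are synthetic; the instrument details are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings: 190 respondents x 8 entities on an assumed 1-7 scale.
entities = ["adult", "newborn", "monkey", "cat", "plant",
            "AI", "human neurons", "brain organoid"]
X = rng.integers(1, 8, size=(190, len(entities))).astype(float)

# Standardize each entity's ratings, then PCA via singular value decomposition.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt[:2].T  # project respondents onto the first two components

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, recompute."""
    r = np.random.default_rng(seed)
    centroids = points[r.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, then nearest assignment.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = centroids.copy()
        for j in range(k):
            members = points[labels == j]
            if len(members):  # keep the old centroid if a cluster empties
                new[j] = members.mean(axis=0)
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

labels, centroids = kmeans(scores, k=3)
```

With real survey data, the resulting `labels` would partition respondents into the three groups described above; the loadings in `Vt` indicate which entities drive each component.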