We're witnessing the development of a new frontier in computing, moving away from AI back to where it all started: the human brain.
Everyone is freaking out over artificial intelligence systems and their potential to disrupt, well, everything really. But this tunnel vision shouldn’t distract us from what can be achieved by tapping into natural intelligence, which is orders of magnitude more capable in some areas of computing than the biggest, meanest AIs and supercomputers.
Now, imagine how extraordinary it would be if we could somehow combine the raw computing power and precision of silicon-based computers with the cognitive abilities of the human brain. But is such a thing even possible? Indeed it may be, according to an international group of leading scientists who outlined their plan for so-called “organoid intelligence” (OI) enabled by biocomputers that use actual human brain cells rather than transistors to store, retrieve, and process information.
Let’s start by recognizing that there are many similarities between the architecture of the brain and that of a computer: both consist of largely separate circuits for input, output, central processing, and memory. This is by design, as the pioneers of computing modeled their artificial thinking machines on the human brain. For instance, the stored-program architecture that the brilliant John von Neumann laid out in the 1940s, later distilled in his brief but profound book The Computer and the Brain (published posthumously in 1958), is still the basis of most modern computers.
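That four-part separation is easiest to see in a toy example. The sketch below is illustrative only: a made-up instruction set for a minimal stored-program machine, showing one memory holding both program and data, a fetch-decode-execute loop acting as the central processor, data words standing in for input, and a PRINT instruction standing in for output. It is not anything from the OI research itself, just the classic von Neumann layout the paragraph above refers to.

```python
# Toy stored-program (von Neumann) machine: memory, CPU loop, input data, output.
# The opcodes and program here are invented for demonstration purposes.

MEMORY = [
    # program: load the value at address 8, add the value at address 9,
    # store the result at address 10, print it, then halt
    ("LOAD", 8),
    ("ADD", 9),
    ("STORE", 10),
    ("PRINT", 10),
    ("HALT", None),
    None, None, None,
    2,      # address 8: first operand ("input" data)
    3,      # address 9: second operand ("input" data)
    None,   # address 10: result is written here
]

def run(memory):
    """Fetch-decode-execute loop: the 'central processing' part."""
    accumulator = 0       # single working register
    program_counter = 0   # points at the next instruction in memory
    while True:
        opcode, operand = memory[program_counter]   # fetch + decode
        program_counter += 1
        if opcode == "LOAD":
            accumulator = memory[operand]
        elif opcode == "ADD":
            accumulator += memory[operand]
        elif opcode == "STORE":
            memory[operand] = accumulator
        elif opcode == "PRINT":                     # the "output" circuit
            print("result:", memory[operand])
        elif opcode == "HALT":
            break

run(MEMORY)   # prints: result: 5
```

The key point of the design, and the reason the brain comparison keeps coming up, is that instructions and data live in the same memory and a single processing unit shuttles between them.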
At the current stage of this research, there should be no moral concerns over using these organoids. There’s essentially zero chance that these 3-D blobs of neurons are conscious, especially since, as far as neuroscientists currently know, consciousness is predicated on having sensory input. A bodiless brain organoid (which isn’t actually a brain yet), with only very limited sensory information relayed by a few electrodes, cannot achieve consciousness. Nevertheless, Thomas Hartung of Johns Hopkins University, who leads the effort, is actively involving ethicists at every step of the process, an approach he calls ‘embedded ethics’.
If Hartung’s vision is ever realized, the prospect of dish-grown ‘tiny brains’ becoming conscious, though still remote, will become increasingly plausible.
Hartung mentions that there is now a fledgling OI community, thanks to initiatives like a 2022 Johns Hopkins workshop that led to the Baltimore Declaration toward OI, which will be published shortly.
Nevertheless, as advances in the structural and functional complexity of OI systems begin to recapitulate aspects of human neurobiological (sub)processes, such as learning and cognition, researchers will inevitably encounter the Greely Dilemma: a situation whereby incremental successes in modelling aspects of the human brain will raise the same kind of ethical concerns that originally motivated their development (223). Sufficient advances in OI will raise questions about the moral status of these entities and concerns for their welfare.
Frameworks have been proposed to address these ethical concerns in research practices (224, 225), but it remains unknown whether these proposals adequately attend to moral concerns held by the public. For example, harm reduction policies are often unsuccessful in gaining public support when the underlying attitude is based on a moral conviction (226), with implications for public discourse (227).
Comprehensive ethical analysis of OI will require input from diverse public and relevant stakeholder groups (228), in order to (i) prevent misunderstandings from creating unintended moral appraisals, and (ii) foster trust, confidence, and inclusion through responsible public engagement. Notably, moral attitudes toward OI may depend less on the epistemological concerns mentioned above, such as the role of specific cognitive capacities in assessments of moral status, and more on ontological arguments about what constitutes a human being. Perceptions of (re)creating ‘human-like’ entities in the lab are likely to evoke concerns about infringing on human dignity that could reflect secular or theological beliefs about the "essential" nature of the human being (229, 230). Our approach to embedded ethics in OI will seek to identify and attend to these ethical concerns by informing future public engagement and deliberation on OI.
originally posted by: Timber13
If it's bio, that means you'll have to change out the mini brain? Shorter life expectancy of the "computer"? Do you need to feed it? How would it get its vitamins and minerals to live? Wouldn't the tissue need some sort of vascular system to survive? If it's a bio brain, maybe it will suddenly become sentient and this could go way different than we think?
Scientists say biocomputers made from tiny ‘brains’ are the future.