
From lab to ledger: Human keys secure scientific integrity | Opinion

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

The speed at which AI is outgrowing regulation poses a risk to the verification of data, identity, and reputation and, if left unchecked, may increase the prevalence of misinformation and slow scientific innovation. The march toward super-intelligent AI is portrayed by its most fervent proponents as a push toward a scientific golden age. However, it also raises an existential risk: that society hits a degradative tech plateau, where widescale adoption of immature AI technology caps, and over time erodes, human creativity and innovation.

This take runs contrary to that of most accelerationists. AI is supposed to help us work faster and synthesize greater amounts of information. However, AI cannot replace inductive reasoning or the experimental process. Today, anyone can use AI to manufacture a scientific hypothesis and use it as input to generate a scientific paper. The output of products like Aithor often appears authoritative on the surface and may even pass peer review. This is a serious problem: AI-generated texts are already being curated as legitimate scientific findings, often supported by bogus fabricated data. Young researchers have a strong incentive to use whatever means are available to compete for a limited number of academic jobs and funding opportunities. The current incentive system in academia rewards those who publish the most papers, whether or not those papers describe legitimate findings; they just need to pass peer review and accumulate enough citations.

Academic content with unverified authorship will also pose a significant issue for industries dependent on basic science to power their research and development, the very R&D that keeps our society functioning and maintains the quality of life for a growing global population. As a result, well-funded R&D can only trust the research it is able to perform and replicate on its own, increasing the value of trade secrets and dealing a devastating blow to open science and access to meaningful information. 

Expensive replication efforts can address misinformation to a degree; however, the problem is much bigger than that. Today, we are facing an erosion of trust in the very foundations of knowledge, where unverifiable claims and ambiguous attributions undermine scientific advancements and threaten the scientific community. There is a pressing need to establish a truth-based economy that can authenticate content and data reliably.

AI systems are as powerful as the data they are trained on 

Large language models are excellent tools for generating convincing content; however, they are only as informative as the data on which they are trained. Their ability to extrapolate outside of the training set remains limited. The role of science isn't just to synthesize existing knowledge but to create new informative artifacts that increase the entropy of the collective corpus of knowledge amassed by humanity. Over time, as more people use AI to generate content and fewer people generate original content, we'll face a "low-entropy bloat" that introduces no new information to the world but merely recombines past knowledge. Primary sources will become lost as new "knowledge" is built on self-referential AI-generated content, unless we build a resilient provenance and verified-attribution layer into AI tools used for serious research.

This "lobotomization" of the intellectual depth of the collective human corpus will have lasting impacts on medical, economic, and academic research, as well as on the arts and creative pursuits. Unverified data can influence studies, skewing outcomes and leading to policy or technology failures that erode the authority of scientific research. The risks of AI-generated "science" are multifarious. The mundane operation of normal science will stall on authorship debates, allegations of plagiarism, and impaired peer review. We will need to devote more time and energy to dealing with the many consequences of declining quality and accuracy in scientific research.

AI is a useful tool for provoking ideas, structuring thoughts, and automating repetitive tasks; it must remain a complement to human-created content and not a replacement. It should not be used to author scientific papers that propose original findings without doing the work but rather as an aid to increase the efficiency and accuracy of human-led efforts. For example, AI can be helpful in running simulations on existing data with already-known methods and automating this work to help discover new research directions. However, the experimental protocol and human creativity required for scientific inquiry cannot be easily replaced.

Building a truth-based economy

A truth-based economy establishes a framework with systems and standards to ensure information and data authenticity, integrity, transparency, and traceability. It addresses the need to establish trust and verifiability across technological society, allowing individuals and organizations to rely on the accuracy of shared knowledge. Value is rooted in the veracity of claims and the authenticity of observations and primary sources. A truth-based economy will make digital knowledge "hard" in the way that Bitcoin made digital money hard. This is the promise of the decentralized science movement.

How do we get there? We need to start with the most important element in the scientific world: the individual researcher and their work. Current web standards for scientific identity fall short of verifying claims of identity and proof of work. Current practice makes it very easy to manufacture a profile with a passable reputation; peer reviews are also at risk of bias and collusion. Without verification of the metadata that accompanies a scientific claim, a truth-based economy for science cannot be established.

Improvement to academic identity standards can begin with a simple cross-platform login powered by privacy-preserving identity verification technology. Users should be able to sign in to any site with their credentials, prove their authenticity, and selectively disclose facts about their reputation or data to other agents or users.
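To make the idea of selective disclosure concrete, here is a minimal sketch using plain hash commitments. This is an illustrative simplification, not Holonym's actual protocol: real privacy-preserving systems use zero-knowledge proofs, and all field names and values below are hypothetical.

```python
import hashlib
import secrets

def commit(value: str, nonce: bytes) -> str:
    """Hash commitment to one credential field; hides the value until the
    holder chooses to reveal it along with the nonce."""
    return hashlib.sha256(nonce + value.encode()).hexdigest()

# An issuer commits to each credential field separately, so the holder can
# later reveal some fields while keeping the others hidden.
fields = {"degree": "PhD, Neuroscience", "institution": "Example University"}
nonces = {k: secrets.token_bytes(16) for k in fields}
credential = {k: commit(v, nonces[k]) for k, v in fields.items()}

# Selective disclosure: the holder reveals only "degree" plus its nonce;
# a verifier recomputes the commitment and checks it against the credential.
revealed = "degree"
assert commit(fields[revealed], nonces[revealed]) == credential[revealed]
```

A zero-knowledge scheme goes further than this sketch: it can prove a statement about a hidden field (e.g., "holds a PhD") without revealing the field at all.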

An identity layer rooted in a verifiable researcher reputation is the foundation of DeSci. A complete on-chain scientific economy will allow public and anonymous participation in massively online coordination of research activities. Research labs and decentralized autonomous organizations can create permissionless systems and bounty programs that can't be gamed by fraudulent reputation or identity claims. A universal scientific registry secured on a blockchain with identity claims would provide a frame of reference for autonomous organizations built to amass verifiable scientific knowledge and test falsifiable hypotheses.
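The registry idea above can be sketched as a hash-chained, append-only log that binds a researcher identifier to a content hash of their work. This is a toy model under stated assumptions (no real blockchain, signatures, or consensus; the `did:example:` identifiers are hypothetical), meant only to show why tampering with past records is detectable.

```python
import hashlib
import json

class ScientificRegistry:
    """Minimal append-only registry: each record links a researcher
    identifier to the hash of an artifact, and records are chained so
    history cannot be silently rewritten."""

    def __init__(self):
        self.chain = []

    def register(self, researcher_id: str, artifact: bytes) -> dict:
        prev_hash = self.chain[-1]["record_hash"] if self.chain else "0" * 64
        record = {
            "researcher_id": researcher_id,
            "artifact_hash": hashlib.sha256(artifact).hexdigest(),
            "prev_hash": prev_hash,
        }
        # Hash the record body deterministically and append it to the chain.
        body = json.dumps(record, sort_keys=True).encode()
        record["record_hash"] = hashlib.sha256(body).hexdigest()
        self.chain.append(record)
        return record

    def verify_chain(self) -> bool:
        """Recompute every link; any edit to a past record breaks the chain."""
        prev = "0" * 64
        for rec in self.chain:
            body = {k: v for k, v in rec.items() if k != "record_hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev_hash"] != prev or digest != rec["record_hash"]:
                return False
            prev = rec["record_hash"]
        return True
```

In a production system the chain would live on a blockchain and each record would carry a signature from the researcher's verified identity; the point here is only that chained hashes make the registry's history auditable.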

Safeguarding the future of human progress

We need to establish the foundations of truth through information transparency and rigorous verification to avoid a breakdown of trust within expert research fields. Whether our collective progress continues for hundreds more years, unlocking successive scientific revolutions in materials science, biotechnology, neuroscience, and complexity science, will hinge on the curation of quality research and sound data. It will determine whether a future society advances beyond us as far as we have advanced beyond pre-Enlightenment societies. Otherwise, we will have to accept that this is as smart as we get as a species, and that we will only get dumber. It's not clear whether DeSci will save us, but there is limited time to get things right.

Shady El Damaty

Shady El Damaty is the co-founder of the Holonym Foundation, seeking a solution for universal personhood and safe digital access with a decentralized identity protocol built on the magic of zero-knowledge proofs. In 2020, he built OpSci, the first decentralized science, or DeSci for short, organization. Prior to his career in crypto, Shady earned his PhD in neuroscience from Georgetown University, Washington, D.C., United States.

This article first appeared at crypto.news
