Too much hyperbole, and not enough transparency, public education, and outreach. #ai
Bradley Greger’s Post
More Relevant Posts
-
A recent, convincing, and depressing paper found that the pace of invention is dropping in every field, from agriculture to cancer research: more researchers are required to advance the state of the art, and the speed of innovation appears to be dropping by 50 percent every 13 years, slowing economic growth. Part of the issue appears to be a growing problem with scientific research itself: there is too much of it. The burden of knowledge is increasing, in that there is too much to know before a new scientist has enough expertise to start doing research themselves. This is also why half of all pioneering contributions in science now happen after age forty, when it used to be that younger scientists were the ones who achieved breakthroughs. Similarly, start-up rates of STEM PhDs are down 38 percent in the last 20 years. The nature of science is growing so complex that PhD founders now need large teams and administrative support to make progress, so they go to big firms instead. Above quotes are from the book "Co-Intelligence: Living and Working with AI" by Ethan Mollick. https://lnkd.in/gMQGcWpJ
Co-Intelligence: Living and Working with AI
amazon.co.uk
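A back-of-the-envelope sketch (not from the book) of what the quoted "50 percent every 13 years" figure implies as a steady yearly rate, assuming a constant compound decline:

```python
# If research productivity halves every 13 years at a constant compound
# rate, the implied yearly multiplier r satisfies r**13 = 0.5.
def implied_annual_factor(halving_years: float) -> float:
    """Constant yearly multiplier consistent with a 50% drop over `halving_years` years."""
    return 0.5 ** (1.0 / halving_years)

factor = implied_annual_factor(13)
annual_decline_pct = (1.0 - factor) * 100  # roughly a 5% decline per year
```

In other words, a halving every 13 years corresponds to losing only about five percent of research productivity per year, which is why the slowdown is easy to miss in the short term.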
-
CHANGEMAKER. Business Strategy Consultant, Researcher on Consumer Entertainment, Marketing and Strategic Planning
Effective science communication is key to raising awareness and expanding the possibilities for creating new fields of knowledge. Tracking the number and level of replication studies provides insight into how to understand and manage the so-called "Replication Crisis". An interesting and easy-to-read article. #appliedscience #AIuse #research #analytics
How AI Can Help Researchers Navigate the “Replication Crisis”
insight.kellogg.northwestern.edu
-
Read the article by Dr. PRABINA RAJIB, Director, #BIMTECH, published in ETGovernment, in which she delves into the pressing subject of the moment: "Union Budget 2024: Investing in minds for new-age skills in AI and other emerging sciences." Full Article: https://lnkd.in/gwHqd7WD #artificialintelligence #unionbudget2024 #newagelearning #datascience
Union Budget 2024: Investing in minds for new-age skills in AI and other emerging sciences - ET Government
government.economictimes.indiatimes.com
-
Generative AI’s emergence in scientific publishing is a double-edged sword. While its potential is vast, its indiscriminate and broad approach can lead to subpar work and potential misuse. It’s essential to recognize that tools like ChatGPT, akin to a multitool on a job site, aren’t honed for specific tasks like scientific paper writing or peer review. Their versatility, while admirable, isn’t equivalent to specialized proficiency. Detecting AI-authored content seems a Herculean challenge: true detection may only be plausible if one tracks the entire workflow, which raises privacy concerns and isn’t foolproof. The idea of leveraging LLMs for catalog searches is indeed novel, but without meticulous refinement we run the risk of AI ‘hallucinations.’ I strongly advocate for a consortium in which publishers collaboratively adapt existing open-source models. By focusing on their unique data and requirements, they can sculpt a specialized model that better ensures reliability, accuracy, and ethical alignment. A united front might be the way to harness AI’s potential responsibly and effectively.
It was fascinating talking to the Springer Nature Group reporter Gemma Conroy about the impact of #LargeLanguageModels & such on scientific publishing; especially on exacerbating existing disparities & creating new ones. This is a particularly comprehensive piece that delves deep into not only their current & likely use by researchers and publishers, but also their impact on scientific rigor & integrity, authentic versus fake information, equity & inequity, and peer-review processes, all while focusing on ethical concerns in light of the anticipated transformation. Glad to be quoted here along with great perspectives & insights from global researchers, legal scholars, computational biologists, publishers, & editorial executives including Domenico Mastrodicasa, University of Washington - School of Medicine; Michael Eisen, University of California, Berkeley & eLife; Laura Feetham-Walker, IOP Publishing; Daniel Hook, Digital Science; Prof Sandra Wachter, University of Oxford; Giovanni Cacciamani MD, FEBU, University of Southern California; Bernd Pulverer, EMBO; Tatsuya Amano, The University of Queensland; Irene Li, The University of Tokyo; Christoph Steinbeck, Friedrich Schiller University Jena; Mohammad Hosseini, Northwestern University; Neal Woodbury, The University of Newcastle Office of Knowledge Exchange and Enterprise; Iris van Rooij, Radboud University; Gemma Derrick, University of Bristol; Patrick Mineault, Mila - Quebec Artificial Intelligence Institute. #ScholarlyPublishing American Society for Investigative Pathology (ASIP), American Journal Of Pathology, The Journal of Molecular Diagnostics Read full article for free: https://lnkd.in/gET3gtHt
How ChatGPT and other AI tools could disrupt scientific publishing
nature.com
-
How to Write AI-Powered Literature Reviews: Balancing Speed, Depth, and Breadth in Academic Research
Conducting #literaturereviews is a cornerstone of graduate research. The initial literature search and synthesis process can be exponentially faster with emerging AI-powered tools like Consensus, scite, Elicit, Litmaps, and SciSpace. HOWEVER --- a significant portion of their data is extracted from Semantic Scholar. Although Semantic Scholar is a comprehensive and widely respected database, it hosts an uneven distribution of research papers across disciplines, with a predominant focus on medicine and related fields compared to art and philosophy. This disparity in representativeness can have profound implications for the outcomes of our research queries. When using these AI search engines, we may encounter a bias towards topics with more substantial representation in the database, such as the medical sciences, which could inadvertently influence the direction and depth of our research inquiries and results. 🔗 Link to full post in comments, along with the source.
-
📚 Almost 50% of all scientific publications are open-access! 🌐 🚀 Open science principles and initiatives have gained popularity since COVID-19, increasing the amount of scientific and scholarly content that is openly available, facilitating collaborations and interdisciplinary research, increasing the reproducibility and reuse of research results, and accelerating scientific discovery. 🔬 Recognizing the value and benefits of open science, #ELOQUENCE will adopt and implement the following open science practices as an integral part of its methodology: 📖 Open publications, 📊 Open data, 💻 Open software. 🔗 Explore more about the project at https://eloquenceai.eu/ #LLMs #ArtificialIntelligence #AI #HorizonEurope
-
The quest for publication in high-profile journals like Nature, Science, or Cell becomes a pivotal, make-or-break moment shaping the trajectory of young careers in the academic world. It’s unfortunate to see that winner-takes-all stakes have become all too common in academic science, as evidenced by recent cases from Stanford, Harvard, and Tilburg, among others. When brilliant minds act unethically to win, or disregard research results, imagine what we can expect with the rise of super-intelligent machines trained by some of us! We face significant concerns about ethical practices and potential risks. #ethics #scandal #academicpublication #ai #superintelligence
Opinion | The Research Scandal at Stanford Is More Common Than You Think
https://www.nytimes.com
-
It is essential that research policy is based on facts and the best available knowledge. Fredrik Heintz, Professor, and Director of the WASP Graduate School, is one of several WASP researchers involved in AI research policies and strategies at the EU level. "To be able to work on AI related issues as a researcher, broad engagement in various networks and a willingness to take on indirectly research-related assignments are required", says Heintz. #WASP #artificialintelligence #ai #airesearch #research #policy #eu #europe #europeancommission #horizoneurope
WASP participation in the development of EU's research policy
wasp-sweden.org
-
I'm excited to host a terrific event today on AI, genetic data, and surveillance. The panel discussion is in honor of Yves Moreau, the latest Einstein Foundation Award recipient, for his dedication to ethical standards in DNA data use and privacy in AI. 🕒 Time: March 14, 2-4 PM CET 📍 Place: Robert Koch Forum, Berlin; or online 🔗 Register for in-person attendance: https://lnkd.in/eu3iGgRX Livestream: https://lnkd.in/eZKKaz_g Yves's commitment sets the stage for an exciting dialogue with our distinguished speakers: – Susanne Schreiber (Einstein Prof of Theoretical Neurophysiology at HU Berlin; Vice Chair of the German Ethics Council) – Helena Mihaljevic (Prof of Data Science, HTW Berlin) – Vince Madai (Research Lead of the Responsible Algorithms team at the QUEST Center for Responsible Research at Charité) The panel discussion on "The Pitfalls of Bad Practices in Genetic Big Data and AI" will delve into the ethical standards necessary for handling personal data in AI, highlighting the urgent need for responsible research practices. We will touch on crucial points for the future of privacy in our societies, and aim to pinpoint strategies to harness the vast potential of AI-driven analysis in healthcare while avoiding undue surveillance and political pressures. Yves has done an amazing job of pushing the boundaries in this field, and he has also been a very vocal researcher calling out numerous high-profile studies that have worked with genetic data obtained in obscure and illegitimate ways, particularly from vulnerable populations. Registration for both livestream and in-person attendance is still open. Do not miss this opportunity to contribute to a pivotal conversation. For more specific information, please refer to the event's page: https://lnkd.in/eu3iGgRX #EthicsInAI #GeneticData #BigData #AIForGood #DataPrivacy #EinsteinFoundation #Scientificquality #AIResearch #DigitalEthics #PrivacyPreservingAI cc: European New School of Digital Studies Einstein Foundation Berlin
The Pitfalls of Bad Practices in Genetic Big Data and AI
einsteinfoundation.de