Topic Editors

Headingley Campus, Leeds Beckett University, Leeds LS6 3QS, UK
Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, 42100 Reggio Emilia, Italy
School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China
Department of Computer Science & Engineering (DISI), University of Bologna, 40136 Bologna, Italy
Biomedical Artificial Intelligence Research Unit (BMAI), Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan
Department of English Language & Applied Linguistics, University of Reading, Reading RG6 6AH, UK

AI Chatbots: Threat or Opportunity?

Abstract submission deadline
closed (29 February 2024)
Manuscript submission deadline
closed (30 April 2024)
Viewed by
56214

Topic Information

Dear Colleagues,

ChatGPT, based on GPT-3.5, was launched by OpenAI in November 2022. On their website it is described as ‘a language model … designed to respond to text-based queries and generate natural language responses. It is part of the broader field of artificial intelligence known as natural language processing (NLP), which seeks to teach computers to understand and interpret human language’. More significantly, it is stated that ‘One of the main applications of ChatGPT is in chatbots, where it can be used to provide automated customer service, answer FAQs, or even engage in more free-flowing conversations with users. However, it can also be used in other NLP applications such as text summarization, language translation, and content creation. Overall, ChatGPT represents a significant advancement in the field of NLP and has the potential to revolutionize the way we interact with computers and digital systems’.
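To make the applications mentioned above concrete, the short sketch below shows how such a model might be called programmatically for text summarization. It is a minimal, illustrative example only: it assumes the OpenAI Python SDK (v1.x) with an API key set in the environment, and the model name is a placeholder rather than a recommendation.

    # Minimal sketch: text summarization via a chat-completion API.
    # Assumes the OpenAI Python SDK (v1.x) is installed and the
    # OPENAI_API_KEY environment variable is set; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    document = (
        "AI chatbots such as ChatGPT are built on large language models and are "
        "used for customer service, question answering, summarization, and translation."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any available chat model
        messages=[
            {"role": "system", "content": "You summarize text in one sentence."},
            {"role": "user", "content": f"Summarize the following text: {document}"},
        ],
    )

    print(response.choices[0].message.content)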

These claims, although couched in relatively innocuous terms, have been seen by many as potentially ominous, with far-reaching ramifications. Teachers, already facing the issues of cut-and-paste-off-the-internet plagiarism, ghost-writing, and contract cheating, foresaw that AI chatbots such as ChatGPT, Bard, and Bing would offer students new and more powerful opportunities to produce work for assessment. For some this was not a problem, but for others it appeared to be the beginning of the end for anything other than in-person assessments, including hand-written exams and vivas.

People began to experiment with ChatGPT, using it to produce computer code, speeches, and academic papers. In some cases, users expressed their astonishment at the high quality of the outputs, but others were far more skeptical. In the meantime, OpenAI released GPT-4, which is now incorporated into ChatGPT Plus. GPT-5 is expected to become available later this year, and autonomous AI agents such as Auto-GPT and Agent-GPT are already available. These developments, and others in the general area of AI, have led to calls for a pause in such work, although others have expressed doubts that such calls will have any impact.

The issues raised by AI chatbots such as ChatGPT impact upon a range of practices and disciplines, as well as many facets of our everyday lives and interactions. Hence, this invitation to submit work comes from editors associated with a wide variety of MDPI journals, encompassing a range of inter-related perspectives on the topic. We are keen to receive submissions relating to the technologies behind the advances in these AI chatbots, as well as to the wider implications of their use in social, technical, and educational contexts.

We are open to all manner of submissions, but to give some indication of the aspects of key interest we list the following questions and issues.

  • The development of AI chatbots has been claimed to herald a new era, offering significant advances in the incorporation of technology into people’s lives and interactions. Is this likely to be the case, and if so, where are these impacts going to be the most pervasive and effective?
  • Is it possible to strike a balance regarding the impact of these technologies so that any potential harms are minimized, while potential benefits are maximized and shared?
  • How should educators respond to the challenge of AI chatbots? Should they welcome this technology and re-orient teaching and learning strategies around it, or seek to safeguard traditional practices from what is seen as a major threat?
  • There is a growing body of evidence that the design and implementation of many AI applications, and of their underlying algorithms, incorporate bias and prejudice. How can this be countered and corrected?
  • How can publishers and editors recognize the difference between manuscripts that have been written by a chatbot and "genuine" articles written by researchers? Is training to recognize the difference required? If so, who could offer such training?
  • How can the academic world and the wider public be protected against the creation of "alternative facts" by AI? Should researchers be required to submit their data with manuscripts to show that the data are authentic? What is the role of ethics committees in protecting the integrity of research?
  • Can the technology underlying AI chatbots be enhanced to guard against misuse and vulnerabilities?
  • Novel models and algorithms for using AI chatbots in cognitive computing;
  • Techniques for training and optimizing AI chatbots for cognitive computing tasks;
  • Evaluation methods for assessing the performance of AI chatbot-based cognitive computing systems;
  • Case studies and experiences in developing and deploying AI chatbot-based cognitive computing systems in real-world scenarios;
  • Social and ethical issues related to the use of AI chatbots for cognitive computing.

The potential impact of these AI chatbots on the topics covered by journals is twofold: on the one hand, there is a need for research on the technological bases underlying AI chatbots, including the algorithmic aspects behind the AI; on the other hand, there are many aspects related to the support and assistance that these AI chatbots can provide to algorithm designers, code developers and others operating in the many fields and practices encompassed by this collection of journals.

Prof. Dr. Antony Bryant, Editor-in-Chief of Informatics
Prof. Dr. Roberto Montemanni, Section Editor-in-Chief of Algorithms
Prof. Dr. Min Chen, Editor-in-Chief of BDCC
Prof. Dr. Paolo Bellavista, Section Editor-in-Chief of Future Internet
Prof. Dr. Kenji Suzuki, Editor-in-Chief of AI
Prof. Dr. Jeanine Treffers-Daller, Editor-in-Chief of Languages
Topic Editors

Keywords

  • ChatGPT
  • OpenAI
  • AI chatbots
  • natural language processing
 

Participating Journals

Journal Name                         Impact Factor   CiteScore   Year Launched   First Decision (median)   APC
AI                                   3.1             7.2         2020            17.6 Days                 CHF 1600
Algorithms                           1.8             4.1         2008            15 Days                   CHF 1600
Big Data and Cognitive Computing     3.7             7.1         2017            18 Days                   CHF 1800
Future Internet                      2.8             7.1         2009            13.1 Days                 CHF 1600
Informatics                          3.4             6.6         2014            33 Days                   CHF 1800
Information                          2.4             6.9         2010            14.9 Days                 CHF 1600
Languages                            0.9             1.4         2016            49.6 Days                 CHF 1400
Publications                         4.6             6.5         2013            35.8 Days                 CHF 1400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (13 papers)
