Background and Purpose: The present age of digitalization brings progress and new possibilities for health care in general and clinical psychology/psychotherapy in particular. Internet- and mobile-based interventions (IMIs) have been evaluated many times. Chatbots are a fully automated form of IMI: automated computer programs that are able to hold, for example, a script-based conversation with a human being. Chatbots could contribute to extending health care services. The aim of this review is to conceptualize the scope of chatbots fostering mental health and to work out the current state of the evidence. Methods: The present article is a scoping review on chatbots in clinical psychology and psychotherapy. Studies that utilized chatbots to foster mental health were included. Results: The technology of chatbots is still experimental. The available studies are mostly pilot studies; the field lacks high-quality evidence from randomized controlled trials. Results with regard to the practicability, feasibility, and acceptance of chatbots to foster mental health are promising but not yet directly transferable to psychotherapeutic contexts. Discussion: The rapidly increasing research on chatbots in the field of clinical psychology and psychotherapy requires corrective measures. Effectiveness, sustainability, and especially safety, as well as technology impact assessments, should be instituted as a corrective in future funding programs for chatbots in clinical psychology and psychotherapy.

Keywords: Software agent, chatbot, clinical psychology, psychotherapy, conversational bot

We find ourselves in the age of digitization, an age in which our work, the economy, and science make use of increasingly intelligent machines that ease our everyday life, including our private lives [World Economic Forum, 2018]. The Federal Ministry of Education and Research has declared artificial intelligence (AI) one of the key technologies of the future and the theme of science year 2019 [Bundesministerium für Bildung und Forschung, 2018]. AI systems are technical learning systems that can process problems and adapt to changing conditions. They are already being used for road traffic control and to support emergency workers [Bundesministerium für Bildung und Forschung, 2018]. AI is also creating new possibilities in the clinical/medical context: user data can be linked with up-to-date research data, which could improve medical diagnoses and treatments [Salathé et al., 2018]. Digitization trends will also affect clinical psychology and psychotherapy; chatbots, for example, could be used increasingly and gain importance as the next generation of psychological interventions. In this context, chatbots are programs that hold conversations with users – initially based on scripts (plots that steer the conversation) that should be created by psychotherapists [Dowling and Rickwood, 2013; Becker, 2018]. Such chatbots are currently closer to full-text search engines than to stand-alone AI systems [Yuan, 2018].

In Germany, only a small proportion of people diagnosed with a mental disorder have contact with the health care system regarding their mental health problems within 1 year [Jacobi et al., 2014]. Those affected do not take advantage of the available services for various reasons [Andrade et al., 2014], among them worry about stigmatization [Barney et al., 2006], limitations of time or location [Paganini et al., 2016], negative attitudes towards pharmacological and psychotherapeutic treatment options [Baumeister, 2012], negative experiences with professional caregivers [Rickwood et al., 2007], and lack of insight into their illness [Zobel and Meyer, 2018]. Psychological Internet interventions are an innovative option that has emerged from the progress of digitization. Barak et al. [2009] define a web-based intervention as “a primarily self-guided intervention program that is executed by means of a prescriptive online program operated through a website and used by consumers seeking health- and mental-health-related assistance” [Barak et al., 2009; p. 5]. Psychological Internet interventions have frequently been evaluated and are viewed as a medium independent of time and place [Carlbring et al., 2018; Ebert et al., 2018]. They might help reduce treatment barriers and expand the availability of care [Baumeister et al., 2018; Carlbring et al., 2018; Ebert et al., 2018]. Numerous studies have shown that these interventions, often using cognitive-behavioral techniques, are comparable in their effectiveness to classical face-to-face psychotherapy [Andersson et al., 2014; Carlbring et al., 2018]. Psychological problems such as anxiety and depression are already being effectively addressed in this way [Andersson et al., 2014, 2016; Carlbring et al., 2018]. When comparing the effectiveness of classical psychotherapy with Internet- and mobile-based interventions (IMIs), it should be considered that interindividual differences, for example with regard to openness to new experiences, could affect whether and how strongly people benefit from the different forms of presentation (online or classical) [Andersson et al., 2016; Carlbring et al., 2018].

Chatbots are an innovative variant of psychological IMI with the potential to effect lasting change in psychotherapeutic care, but also with substantial ethical, legal (data protection), and social implications. The key objectives of the present review are:

  1. conceptualizing the scope of chatbots in clinical psychology and psychotherapy;

  2. working through the evidence about chatbots in promoting mental health; and

  3. presenting the opportunities, limits, risks, and challenges of chatbots in clinical psychology and psychotherapy.

Chatbots are computer programs that hold a text- or speech-based dialogue with people through an interactive interface. Users thus have a conversation with a technical system [Abdul-Kader and Woods, 2015]. The program of the chatbot can imitate a therapeutic conversational style, enabling an interaction similar to a therapeutic conversation [Fitzpatrick et al., 2017]. The chatbot interacts with the user fully automatically [Abdul-Kader and Woods, 2015].

Chatbots are a special kind of human-machine interface that provides users with chat-based access to functions and data of the application itself (e.g., Internet interventions). They are currently used mainly for customer communication in online shopping [Storp, 2002], but also in teaching [Core et al., 2006] and the game industry [Gebhard et al., 2008]. Chatbots are already particularly important in the economic domain [World Economic Forum, 2018]. As demand for a certain application grows, new instances of a single chatbot can be launched with little technical effort, so that the chatbot can hold many conversations in parallel (high scalability). This frees human capacity for more complex aspects of work [Juniper Research, 2018; World Economic Forum, 2018]. The rapid progress in new technologies is also bringing about changes and new opportunities in health care in general and clinical psychology/psychotherapy in particular [Juniper Research, 2018; World Economic Forum, 2018].

Research interest in chatbots for use in clinical psychology and psychotherapy is growing by leaps and bounds, as can be seen by the increasing number of (pilot) studies in this area [Dale, 2016; Brandtzaeg and Følstad, 2017], as well as the growing number of online services offered by health care providers (e.g., health apps with chat support).

Chatbots can be systematized with regard to (1) areas of application, (2) underlying clinical-psychological/psychotherapeutic approaches, (3) performance of the chatbot, (4) goals and endpoints, and (5) technical implementation (Fig. 1).

Fig. 1. Characteristics of chatbots in clinical psychology and psychotherapy. AIML, artificial intelligence markup language.

Areas of Application

Promising areas for the use of chatbots in the psychotherapeutic context could be support for the prevention, treatment, and follow-up/relapse prevention of psychological problems and mental disorders [Huang et al., 2015; D'Alfonso et al., 2017; Bird et al., 2018]. They could be used preventively in the future, for example for suicide prevention [Martínez-Miranda, 2017]. Current research shows that suicidal ideation and/or suicidal behavior among users of social media, for example, can be detected with automated procedures [De Choudhury et al., 2016]. Chatbots could then, for example, automatically inform users of nearby psychological/psychiatric services. In the treatment of psychological problems, they might offer tools that participants could work with on their own. After the completion of classical psychotherapy, chatbots might be offered in the future to stabilize intervention effects, facilitate the transfer of therapeutic content into daily life, and reduce the likelihood of relapse [D'Alfonso et al., 2017].

Clinical-Psychological/Psychotherapeutic Approaches

The scripts used by chatbots to address clinical-psychological issues should be based on well-evaluated principles of classical face-to-face psychotherapy (e.g., cognitive behavioral therapy). On the basis of such evidence-based scripts, within which the chatbot program follows options for action (e.g., using if-then rules), conversations can be held that are similar to therapeutic discussions [Dowling and Rickwood, 2013; D'Alfonso et al., 2017; Becker, 2018]. It is becoming clear from the research on IMIs that all approaches covered by the German Psychotherapy Guidelines, such as psychodynamic or behavioral therapy, can be suitable for translation into an online format [Paganini et al., 2016]. The approaches of interpersonal therapy, acceptance and commitment therapy, and mindfulness-based therapy have also been translated into online services and evaluated for use in either guided or unguided self-help interventions [Paganini et al., 2016].

Performance of the Chatbot

The effectiveness of the chatbot may vary depending on the modality in which the conversation occurs. There are text-based chatbots (often referred to in the literature as conversational agents or chatbots) and chatbots that use natural-language, speech-based interfaces in dialogue systems (such as Apple's Siri, Amazon's Alexa, Microsoft's Cortana, and Google's Allo) [Bertolucci and Watson, 2016]. From a technical point of view, speech-based chatbots are text-based chatbots that also have functions for speech recognition and speech synthesis (machine reading aloud).
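To illustrate this layering, the following minimal sketch wraps a text-based reply function with speech recognition and speech synthesis. The packages SpeechRecognition and pyttsx3 are assumed here purely as examples, and the reply function is a hypothetical placeholder for any script-based chatbot core.

```python
# Minimal sketch: a speech-based chatbot as a text-based chatbot plus
# speech recognition (input) and speech synthesis (output).
# Assumes the third-party packages SpeechRecognition and pyttsx3 are installed;
# the text-based core below is a hypothetical placeholder.
import speech_recognition as sr
import pyttsx3

def reply(text: str) -> str:
    """Stands in for any script-based, text-only chatbot core."""
    return "You said: " + text

recognizer = sr.Recognizer()
engine = pyttsx3.init()

with sr.Microphone() as source:
    audio = recognizer.listen(source)       # record one utterance
text = recognizer.recognize_google(audio)   # speech recognition: audio -> text
answer = reply(text)                        # unchanged text-based core
engine.say(answer)                          # speech synthesis (reading aloud)
engine.runAndWait()
```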

The simpler chatbots are based mainly on recognizing certain key terms with which to steer a conversation. More powerful chatbots can analyze user input and communication patterns more comprehensively, thus responding in a more precise way and deriving contextual information, such as users' emotions [Bickmore et al., 2005a,b].
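A minimal sketch of this keyword-based principle might look as follows; the key terms and responses are invented for illustration and are not taken from any of the systems discussed in this article.

```python
# Minimal sketch of a keyword-based chatbot: scan the input for known key
# terms and use the first match to steer the conversation.
# All key terms and responses are invented for illustration.
RULES = [
    (("sad", "down", "depressed"), "I am sorry to hear that. What is weighing on you most right now?"),
    (("worried", "anxious", "afraid"), "That sounds stressful. When did you first notice this worry?"),
    (("sleep", "tired"), "How have you been sleeping over the past week?"),
]
DEFAULT = "Can you tell me more about that?"

def respond(user_input: str) -> str:
    text = user_input.lower()
    for keywords, answer in RULES:
        if any(keyword in text for keyword in keywords):
            return answer
    return DEFAULT

print(respond("I have been feeling sad lately"))
# -> "I am sorry to hear that. What is weighing on you most right now?"
```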

Relational chatbots, sometimes referred to as contextual chatbots, simulate human capabilities, including social, emotional, and relational dimensions of natural conversations [Bickmore, 2010]. Computer-generated characters, so-called avatars, are often used in the design of a chatbot identity; these mimic the key attributes of human conversations and are often studied under the label embodied conversational agents in the literature [Beun et al., 2003; Provoost et al., 2017]. The more mental attributes the chatbot has, the greater its similarity to humans (anthropomorphism) [Zanbaka et al., 2006]. Anthropomorphism refers to the extent to which the chatbot can imitate behavioral attributes of a therapist [Zanbaka et al., 2006].

In the psychotherapeutic context of promoting mental health, social attributes [Krämer et al., 2018] and the ability of the chatbot to express empathy appear to be important factors in fostering a viable basis for interaction between a person and a chatbot [Bickmore et al., 2005a; Morris et al., 2018; Brixey and Novick, 2019].

Goals and Endpoints

Chatbots could take over time-consuming psychother­apeutic interventions that do not require more complex therapeutic competences [Fitzpatrick et al., 2017]. These are often investigated under the designation microintervention.Examples of microinterventions that do not need a great deal of therapist contact and can be initiated and guided by chatbots include psychoeducation, goal-setting conversations, and behavioral activation [Fitzpatrick et al., 2017; Stieger et al., 2018]. An example of a paradigm that is currently receiving much attention in this research context is therapeutic writing [Tielman et al., 2017; Bendig et al., 2018; Ho et al., 2018]. In the future, chatbots may have the potential to convey therapeutic content [Ly et al., 2017] and to mirror therapeutic processes [Fitzpat­rick et al., 2017]. Combined with linguistic analyses such as sentiment analysis (a method for detecting moods), chatbots would be able to react to the mood of the users. This allows the selection of emotion-dependent response options and thematization of content adapted to the user's input [Ly et al., 2017] or the forwarding of relevant information about psychological variables to the practi­tioner. Regarding the possibilities that may emerge for psychotherapy, it is relevant to take up the current state of chatbot research in this context.
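As a rough sketch of how such emotion-dependent response selection could work, the following toy example scores the user's mood with a small, invented word lexicon and branches on the result; a real system would use a validated sentiment model rather than simple word counting.

```python
# Minimal sketch of mood-dependent response selection via a lexicon-based
# sentiment score. The word lists and replies are illustrative assumptions.
POSITIVE = {"good", "happy", "calm", "hopeful", "better"}
NEGATIVE = {"bad", "sad", "hopeless", "anxious", "worse"}

def sentiment(text: str) -> int:
    """Count positive minus negative words as a crude mood score."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mood_adapted_reply(text: str) -> str:
    score = sentiment(text)
    if score < 0:
        return "That sounds difficult. Would you like to look at this more closely?"
    if score > 0:
        return "I am glad to hear that. What helped you feel this way?"
    return "How are you feeling about this right now?"

print(mood_adapted_reply("I feel sad and hopeless today"))
```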

Technical Implementation

From a technical point of view, chatbots comprise various internal subcomponents [Gregori, 2017]. The component that recognizes information from the messages of the conversation partner is fundamental: first and foremost, the intentions and relevant entities must be identified from the statements of the interlocutor. This is done using natural language processing (NLP), the machine processing of natural language using statistical algorithms. NLP translates the user's input into a language that the chatbot can understand (Fig. 2). The identified user intentions and entities, in turn, serve as the foundation for the appropriate response from the chatbot, based on its components for dialogue management and response generation. Response generation draws, for example, on stored scripts, a predetermined dialogue, and/or an integrated knowledge base. The answer is made available to users via the chat-based interface, such as an Internet intervention or an app (Fig. 2).
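The sketch below strings these subcomponents together in miniature: a toy recognition step that extracts an intent and entities from the input, a script table standing in for dialogue management, and template-based response generation. All intent names, patterns, and responses are illustrative assumptions, not part of any system described in this article.

```python
# Minimal sketch of the subcomponents described above: intent/entity
# recognition, dialogue management, and script-based response generation.
import re

def recognize(message: str) -> tuple[str, dict]:
    """Toy NLP step: map the user's message to an intent plus entities."""
    text = message.lower()
    match = re.search(r"i feel (\w+)", text)
    if match:
        return "report_feeling", {"feeling": match.group(1)}
    if "help" in text:
        return "ask_for_help", {}
    return "unknown", {}

SCRIPT = {  # dialogue management: one stored response rule per intent
    "report_feeling": "You said you feel {feeling}. How long has this been going on?",
    "ask_for_help": "I can walk you through a short exercise. Shall we start?",
    "unknown": "I am not sure I understood. Could you rephrase that?",
}

def respond(message: str) -> str:
    intent, entities = recognize(message)
    return SCRIPT[intent].format(**entities)   # response generation

print(respond("I feel overwhelmed"))
# -> "You said you feel overwhelmed. How long has this been going on?"
```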

Fig. 2. Graphic representation of the technical implementation of chatbots.

Chatbots vary in their complexity of interaction. Simpler dialogue components are based exclusively on rule-based sequences (such as if-then rules). Conversations are modeled as a network of possible states. Inputs trigger the state transitions and associated responses of the bot, analogous to stimulus-response systems [Storp, 2002]. The chatbot internally follows a predefined decision tree [Chowdhury, 2003; Smola et al., 2008] (illustrated in Fig. 3). More complex chatbots also use knowledge bases or machine learning models to construct their dialogues and AI to generate possible answers and to enhance the conversational proficiency of the bot (often called mental capacities in the research literature) [Lortie and Guitton, 2011; Ireland et al., 2016]. There are dedicated programming languages for programming chatbot dialogues. Artificial intelligence markup language (AIML; http://alicebot.wikidot.com/learn-aiml) has become particularly well established [Klopfenstein et al., 2017].
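The following sketch models such a rule-based dialogue as a small network of states in which user inputs trigger state transitions and the associated bot responses, analogous to a stimulus-response system; the states and wording are invented for illustration. In AIML, each such stimulus-response pair would instead be written declaratively as a pattern-template category.

```python
# Minimal sketch of a rule-based dialogue as a network of states:
# user inputs trigger state transitions and the associated responses.
# The states and wording are purely illustrative.
STATES = {
    "greeting": {"prompt": "Hello! Would you like to write about your day? (yes/no)",
                 "yes": "writing", "no": "goodbye"},
    "writing":  {"prompt": "Please describe an event that moved you today.",
                 "*": "goodbye"},
    "goodbye":  {"prompt": "Thank you for the conversation. See you tomorrow!"},
}

def run_dialogue():
    state = "greeting"
    while True:
        print("Bot:", STATES[state]["prompt"])
        transitions = {k: v for k, v in STATES[state].items() if k != "prompt"}
        if not transitions:          # leaf of the decision tree: end of dialogue
            break
        user = input("You: ").strip().lower()
        # unknown input: fall back to the wildcard rule or re-prompt
        state = transitions.get(user, transitions.get("*", state))

if __name__ == "__main__":
    run_dialogue()
```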

Fig. 3. Simplified, schematic representation of a dialogue with the chatbot SISU [Bendig et al., 2018].

Classification of the Topic in the “Next-Generation” Clinical-Psychological/Psychotherapeutic Interventions

Through the advance of technology, online services are gaining in importance for classical psychotherapy [World Economic Forum, 2018]. Hybrid forms of online and offline psychotherapy, so-called blended care approaches, are being investigated [Baumeister et al., 2018]. Another area of research is stepped-care approaches, in which online services represent a low-threshold first step in a tiered supply model [Bower and Gilbody, 2005; Domhardt and Baumeister, 2018]. In the “next-generation” IMIs, chatbots could become increasingly important, especially as a natural interaction interface between users and innovative technology-driven forms of therapy. In the near future, chatbots will at first mainly conduct dialogues based on scripts created by psychotherapists [Dowling and Rickwood, 2013; Becker, 2018]. The dialogues can be of varying complexity depending on the content and objective. ELIZA [Weizenbaum, 1966], the first chatbot, simulated a psychotherapeutic conversation modeled on Rogerian client-centered therapy [Rogers, 1957], using a trivial script that generated set queries based on the user's inputs. The SISU chatbot simulates a conversation modeled on the paradigm of therapeutic writing, with elements of acceptance and commitment therapy, instructing participants to write about important life events [Bendig et al., 2018]. An example of such a dialogue is shown in Figure 3.

Next, we present the current state of research on chatbots that are based on a psychological/psychotherapeutic evidence-based script aimed at reducing the symptoms of psychological illness and improving mental well-being – that is, promoting mental health.

The present article is a scoping review of chatbots in clinical psychology and psychotherapy. This method is used for the quick identification of research findings in a specific field, as well as for summarizing and disseminating these findings [Peterson et al., 2017]. Scoping reviews are particularly suitable for the presentation of complex topics, as well as for generating research desiderata. The objective of the review was defined, the inclusion and exclusion criteria were set, the relevant studies were searched, and titles, abstracts, and full texts were screened (E.B. and L.S.-T.). The generalizability of the research results is not a systematic goal of the present study [Moher et al., 2015; Peterson et al., 2017].

Inclusion and Exclusion Criteria

Studies that utilized chatbots to promote mental health were included. Only chatbots based on evidence-based clinical-psychological/psychotherapeutic scripts were considered. Thus, the only studies included were those that used chatbots based on principles judged positively in classical face-to-face psychotherapy (e.g., principles and techniques covered by the psychotherapy guidelines). Populations both with and without mental and/or chronic physical conditions were included. Only studies that examined the psychological variables of depression, anxiety, stress, or psychological well-being as primary or secondary endpoints were included. Other inclusion criteria were adult age (18+) and complete automation of the investigated (micro)interventions. The latter criterion excluded so-called Wizard-of-Oz studies, in which a person, such as a member of the research team, pretends to be a chatbot. Primary research findings were included, as well as study protocols generated in the context of, for example, randomized controlled (pilot) studies, experimental designs, or pre-post studies.

Search Strategy

To find suitable studies, an iterative process of keyword search and preliminary data analysis in March 2018 led to a final, systematic search in the databases PsycArticles, All EBM Reviews, Ovid MEDLINE®, Embase, PsycINFO, Cochrane databases, and PSYNDEX, using the search string: chatterbot OR chatbot OR social bot OR conversational agent OR softbot OR virtual agent OR software agent OR conversational agent OR automated agent AND (psych* OR counseling OR mental health OR psychotherapy OR therap* OR mental* OR clinical psychology). The hits were independently screened by 2 authors (E.B. and L.S.-T.); after removal of duplicates, 148 articles remained, which were reduced in accordance with the search criteria in 2 rounds of title and abstract screening to 17 full texts (E.B. and L.S.-T.). Studies with disagreement regarding the inclusion/exclusion criteria were discussed (E.B. and L.S.-T.), resulting in 6 finally included articles. The search process is depicted in Figure 4.

Fig. 4. Search process of the literature included in the review.

Table 1 provides an overview of the study characteristics. All studies were published in the last 8 years, with half of them coming from the USA. The sample size varied between N = 28 and N = 496, with a mean of 156.8 (SD = 174.52). Participants were adults (mean age = 28.54 years, SD = 5.48), and 50% of the studies included students. Participants were either experiencing a problem that was causing distress [Bird et al., 2018], displaying symptoms of anxiety and depression [Fitzpatrick et al., 2017], or had not been screened for psychological issues before study inclusion [Ly et al., 2017; Suganuma et al., 2018]. In the study by Gardiner et al. [2017], patients from outpatient clinics were included, while in the study by Suganuma et al. [2018], working people, students, and housewives were included (Table 1). Chatbots essentially based on cognitive-behavioral scripts have been the most frequently studied (k = 5). Newer approaches from the third wave of behavioral therapy (such as mindfulness-based stress reduction) were also used (k = 1). The most commonly studied endpoints were depression, anxiety, mental well-being, and stress. The majority of participants in all the studies were female (>50%) and from nonclinical populations, with the largest proportion being students. The studies are mostly so-called feasibility or pilot studies, with the objective of assessing acceptance and making an initial assessment of effectiveness (practicability).

Table 1. Characteristics of the 6 studies included in the review.

Evidence about Chatbots to Promote Mental Health

The MYLO chatbot (Manage Your Life Online, available at https://manageyourlifeonline.org) offers a self-help program for problem solving when a person is in distress [Gaffney et al., 2014]. The MYLO script is based on method of levels therapy [Carey, 2006]. In a randomized controlled trial, participants were assigned either to the ELIZA [Weizenbaum, 1966] program, based on Rogerian client-centered therapy, or to the MYLO program [Bird et al., 2018]. Both chatbots impart problem-solving strategies and guide the user to focus on a specific problem. Through targeted questions, participants are encouraged to approach a circumscribed problem area from different perspectives. One objective of the study was to evaluate the effectiveness of MYLO in reducing problem-related distress compared to the active control group that used ELIZA. Self-reported distress improved significantly in both groups (F(2, 338) = 51.10, p < 0.001, η² = 0.23) [Bird et al., 2018]. MYLO was not superior to ELIZA in this regard, but participants rated MYLO subjectively as more helpful. This replicates the results of a smaller laboratory study on MYLO that followed the same structure [Gaffney et al., 2014].

The WOEBOT chatbot offers a self-help program to reduce anxiety and depression [Fitzpatrick et al., 2017], with a script based on cognitive-behavioral principles. The aim of the study was to evaluate the feasibility, acceptability, and effectiveness of WOEBOT in a nonclinical sample (N = 70): 34 participants were randomized to the experimental group (EG), and 36 were assigned to an active control group (CG) (an eBook on depression). The authors reported a significant reduction in depressive symptoms (PHQ-8) [Kroenke et al., 2009] in the EG (F(1, 48) = 6.03, p = 0.017). With regard to anxiety symptoms, a significant decrease was observed in both groups, but the groups did not differ significantly from each other (p = 0.58) [Fitzpatrick et al., 2017]. In terms of feasibility and acceptance, chatbot users were overall significantly more satisfied with the intervention and its content than participants in the active CG [Fitzpatrick et al., 2017].

The SHIM chatbot is a self-help program to improve mental well-being [Ly et al., 2017]. The SHIM script is based on principles of cognitive-behavioral therapy as well as elements of positive psychology. The aim of the randomized controlled pilot study was to evaluate the effectiveness of SHIM and the adherence of the 28 participants: 14 in the EG compared to 14 in the waiting list CG (nonclinical sample). Intention-to-treat analyses revealed no significant differences between the groups regarding psychological well-being (Flourishing Scale [Diener et al., 2010], Satisfaction With Life Scale [Diener et al., 1985]) and stress (Perceived Stress Scale [PSS-10] [Cohen et al., 1983]), while adherence to completion of the intervention was high (n = 13 in the EG, n = 14 in the waiting list CG), which speaks to the practicability of the chatbot. Completer analyses revealed significant effects for psychological well-being (Flourishing Scale; F(1, 27) = 5.12, p = 0.032) and perceived stress (PSS-10; F(1, 27) = 4.30, p = 0.048).

The SABORI chatbot was studied in a nonrandomized prospective pilot study [Suganuma et al., 2018]. SABORI offers a preventive self-help program for the promotion of mental health. The underlying script is based on principles of cognitive behavioral therapy and behavioral activation. Psychological well-being (WHO-5, Japanese version [Psychiatric RU and Psychiatric CNZ, 2002]), psychological distress (Kessler 10, Japanese version [Kessler et al., 2002]), and behavioral activation (Behavioral Activation for Depression Scale [Kanter et al., 2009]) were compared for 191 participants in the EG versus 263 participants in the CG. Analyses of the data showed significant effects for all outcome variables (all p < 0.05), which suggests that the chatbot was practicable as a prevention program for the sample.

The GABBY chatbot was studied in a randomized controlled pilot study [Gardiner et al., 2017]. This self-help program is a comprehensive program for behavioral change and stress management. The underlying stress reduction script is based on the principles of mindfulness-based stress reduction [Gardiner et al., 2013]. Unlike the EG (n = 31), the active CG (n = 30) received the contents of GABBY only as informational material (the contents of GABBY in written form plus accompanying meditation exercises on CD/MP3). The findings showed no significant difference between the groups in terms of perceived stress (p > 0.05), but there was a difference, for example, in stress-related alcohol consumption (p = 0.03). The results suggested the practicability of GABBY regarding the primary endpoints of feasibility (e.g., adherence, satisfaction, and proportion of participants belonging to an ethnic minority).

PEACH, a chatbot currently undergoing a randomized controlled trial [Stieger et al., 2018], is geared towards personality coaching. Based on research findings about mechanisms of action in psychotherapy, scripts were compiled for the microinterventions presented by PEACH. Among the things covered are the development of change motivation, psychoeducation, behavioral activation, self-reflection, and resource activation. Measures of acceptance and feasibility are explicitly considered in this study.

Some of the studies that did not fulfill the inclusion criteria of this review and were excluded from the full-text review (n = 11) nevertheless suggest further promising fields, which are yet to be developed in a broader context (health care in general) (Table 2).

Table 2. Characteristics of the 11 studies excluded in the screening process.

The present scoping review illustrates the growing attention being paid to psychological-psychotherapeutic (micro)interventions mediated via chatbot. The technology of chatbots can generally still be described as experimental. All 6 included studies are pilot studies mainly concerned with evaluating the feasibility and acceptance of these chatbots. The findings suggest the practicability, feasibility, and acceptance of using chatbots to promote mental health. Participants seem to benefit from the content provided by the chatbot with regard to psychological variables such as well-being, stress, and depression. In evaluating effectiveness, it should be noted as a limitation that the samples are often very small and lack sufficient statistical power for a high-quality effectiveness assessment. The majority of the studies were published during the past 2 years, underlining the current relevance of the topic and predicting a substantial increase in research activity in this area over the coming years.

The first trials of chatbots for the treatment of mental health problems and mental disorders have already been conducted, for example the study by Gaffney et al. [2014], which used chatbots to help resolve problems causing distress, and the study by Ly et al. [2017], in which chatbots were used to improve well-being and reduce perceived stress. Fitzpatrick et al. [2017], who used chatbots in a self-help program for students with symptoms of depression and anxiety, were also able to support these findings with their study.

The descriptions of the technical background of the chatbots used, as well as the descriptions of the content they convey, are altogether too brief to be assessed objectively and to enable replication of the results. Since many startups that use chatbots for clinical psychology and psychotherapy are currently being created worldwide, commercial conflicts of interest are also an issue for the research teams or their clients (e.g., SHIM).

Opportunities, Challenges, and Limitations for Clinical Psychology and Psychotherapy

Evidence-based research in this field is still quite rare and qualitatively heterogeneous, which is why the actual influence of chatbots on psychotherapy cannot yet be reliably estimated. The rapid growth of the technology has outpaced ethical deliberation and the development of necessary security procedures and framework conditions, which remain short of a desirable minimum.

For example, many health-related chatbots on the market have not been empirically validated (e.g., TESS, LARK, WYSA, and FLORENCE). There are also many hybrid forms of chatbots and other therapeutic online tools. JOYABLE (https://joyable.com/) and TALKSPACE (https://www.talkspace.com/), for example, are commercial chatbots that are offered in combination with online sessions with a therapist.

In the current public perception, the topic of new technologies in the field of mental health is fraught with reservations and fears [DGPT, 2018; World Economic Forum, 2018]. It is important to stay informed about developments in this area to ensure that future technologies do not find their way into health care through the efforts of technologically and commercially driven interests. Rather, the effectiveness, security, and acceptance of new technologies should be decisive for their implementation in our health care system. Serious consideration of the potential benefits and hazards of using chatbots is among the hitherto largely neglected topics. Questions regarding data protection and privacy attract particular attention here, given the very intimate data that users under stress may share too carelessly with dubious providers. Storage, safeguarding, and potential further processing, as well as further use, require urgent regulation to prevent misuse and to ensure the best possible security for participants. Questions also arise about the susceptibility of participants to “therapeutic” advice from chatbots whose algorithms may be nontransparent. This pertains both to the danger of potential technical sources of error that may cause harmful chatbot comments and to the targeted use of chatbots for purposes other than the well-being of the users (e.g., solicitation/in-app purchase options for other products). Finally, there is a need not only for more effective chatbots but also for chatbots with fewer undesired side effects. This refers especially to the potential of chatbots to respond appropriately in situations of crisis (e.g., suicidal communication).

Potential Opportunities and Advantages of Using Chatbots

Chatbots could be seen as an opportunity to further develop the possibilities offered by psychotherapy [Feijt et al., 2018]. An important aspect of the role of chatbots in clinical care is to address people in need of treatment who are not receiving any treatment at all because of various barriers [Stieger et al., 2018]. Chatbots could bridge the waiting time before approval of psychotherapy and provide low-threshold access to care [Grünzig et al., 2018]. Psychiatric problems or illnesses could be addressed by chatbot interventions in the future, whereby at least an improvement in symptoms could be achieved compared to no treatment at all [Andersson et al., 2016; Carlbring et al., 2018]. Chatbots can also offer new opportunities for accessibility and responsiveness: people with a physical illness and depressive symptoms, for example, could interact in the future with the chatbot when they are being discharged from the hospital to receive psychoeducation and additional assistance relating to the psychological aspects of their illness [Bickmore et al., 2010b].

Another important application for chatbots is as an adjunct to psychotherapy. Chatbots could increase treatment success by increasing adherence to cognitive-behavioral homework. It is conceivable that if chatbots take over microinterventions that require little therapist contact (such as anamnesis or psychoeducation), therapists will have time to see more patients or to provide more intensive care [Feijt et al., 2018]. Chatbots could also be used to improve the communication of therapeutic content [Feijt et al., 2018].

To advance research in this area, piloted chatbots should be studied in clinical populations. The content implemented to date is expandable (Table 1). To actually be a useful supplement to clinical psychological/psychotherapeutic services, substantial research seems necessary on the conceptual side as well. This applies, among other things, to the development of a robust database of psychotherapeutic conversational interactions, which could allow chatbots, drawing upon such elaborated databases, to achieve more effective interventions than they do at present. In addition, issues such as acceptance, sustainability, and especially security, as well as subsequent tests of the technology, are corrective elements that, amid the prevailing euphoria over digitalization, should become an integral part of future scientific funding programs.

Potential Challenges and Limitations of the Use of Chatbots

Concerning the use of chatbots to promote mental health, numerous ethical and legal (data protection) questions have to be answered. Studies on the anthropomorphization of technology (see the section Performance of the Chatbot above) show that users quickly endow technical systems with human-like attributes [Cristea et al., 2013]. It will have to be clarified whether dangers may arise if users take a chatbot to be a real person and the content expressed overwhelms the capabilities of the chatbot (for example, an unclear statement of suicidal thoughts).

In terms of data protection law, it must be recognized that some of the conversational content is highly sensitive. In the development of new technological applications in general and chatbots in particular, it is important to systematically consider aspects of data security and confidentiality of communications from the outset. The context of mental health makes the requirements of data privacy and the security of the chatbot particularly essential. Quality criteria should be developed to identify evidence-based chatbots and to distinguish them from the multitude of unverified programs. New legislation is needed to regulate the use of chatbots promoting mental health and to protect private content [Stiefel, 2018].

The German and European legal systems lack quality criteria for the implementation of IMIs in general and chatbots in particular. It is often difficult to differentiate between applications that are relevant to therapy and those that are not [Rubeis and Steger, 2019]. The certification of therapy-relevant applications as medical products could resolve this problem in the future. Medical devices receive a CE (Conformité Européenne) label. This certificate could allow therapy-relevant chatbot applications to be distinguished from those that promise no clinical benefit or display low user security [Rubeis and Steger, 2019].

Future Prospects and Research Gaps

The findings of the present review suggest that chatbots may potentially be used in the future to promote mental health in relation to psychological issues. A review of IMIs from an ethical perspective by Rubeis and Steger [2019] affirms the potentially positive effects on users' well-being, the low risk, the potential for equitable care for mental disorders, and greater self-determination for those affected [Rubeis and Steger, 2019]. In light of the limitations up to now respecting the generalizability and applicability of chatbots in psychotherapy, the present findings should be interpreted with caution. The findings of yet-to-be-performed randomized controlled trials of chatbots in the mental health context may, in the future, be integrated into the picture of research on psychological Internet- and mobile-based interventions. IMIs that promote mental health, such as to combat depression, have already proven effective [Karyotaki et al., 2017; Königbauer et al., 2017]. Some of the studies that did not meet the inclusion criteria of the present review and were excluded from the full-text review (n = 11) suggest promising fields that are yet to be developed in a broader context (health care in general) [Bickmore et al., 2010a, 2010b, 2013; Hudlicka, 2013; Ireland et al., 2016; Sebastian and Richards, 2017]. This applies, for example, to studies using chatbots to increase medication adherence [Bickmore et al., 2010b; Fadhil, 2018] and the study by DeVault et al. [2014], which used semistructured interviews to diagnose mental health problems (Table 2).

Further research is needed to improve the psychotherapeutic content of chatbots and to investigate their usefulness through clinical trials. The studies often originate from authors who mainly approach psychological issues (such as depression) from a background in information technology and begin to collaborate with clinical-psychological scientists after developing their chatbots (bottom-up). There is thus a paucity of theoretical work that initially evaluates what is needed in the clinical-psychological context (top-down). Using such analyses, assessments could be made about how certain psychological endpoints should be achieved and why.

There is a need for research into different forms of use (e.g., as an accompaniment to therapy vs. as a stand-alone intervention). Needs analyses could substantially enhance the research literature on the development of chatbots for use in therapy. What do therapists want? How can chatbots help make therapists feel effectively supported by the application? It should also be examined whether chatbots can increase overall therapeutic success and whether, for example, they prove effective after the conclusion of psychotherapy in stabilizing treatment effects. It still seems to be an open question what specific target groups require of the interface. Some studies deal with the question of which aspects of the design might be chosen for older people [Kowatsch et al., 2017] or for people with cognitive impairment [Baldauf et al., 2018]. It is important that the development of chatbots in clinical psychology and psychotherapy takes place within a protected and guideline-supported framework. Answering ethical questions about the information provided to people who live with a mental disorder is another priority task [Rubeis and Steger, 2019].

The review presents points of departure for future research in this field. Besides the need for replication of the studies in a randomized controlled design, the investigation of piloted (i.e., already proven potentially effective) chatbot interventions in clinical populations is another area still to be explored. Given that none of the previous studies are based on a true clinical sample, the transferability of the results to the psychotherapeutic context remains in question. Psychotherapeutic processes are much more complex than the conversational logic of current chatbots can convey.

The study was translated into English by Susan Welsh.

The authors declare that there are no conflicts of interest. The authors declare that the research occurred without any commercial or financial relationships that could potentially create a conflict of interest.

There were no sources of funding for this study.

The conceptualization and development of the manuscript was performed by E.B. B.E. and H.B. commented on the first draft. The comments were incorporated by E.B. The search and review of relevant literature were performed by E.B. and L.S.-T. Manuscript preparation and editing were performed by E.B., revision and finalization by E.B., B.E., and H.B.

References

1. Abdul-Kader SA, Woods J. Survey on Chatbot Design Techniques in Speech Conversation Systems. Int J Adv Comput Sci Appl. 2015;6:72–80.
2. Andersson G, Cuijpers P, Carlbring P, Riper H, Hedman E. Guided Internet-based vs. face-to-face cognitive behavior therapy for psychiatric and somatic disorders: a systematic review and meta-analysis. World Psychiatry. 2014 Oct;13(3):288–95.
3. Andersson G, Topooco N, Havik O, Nordgreen T. Internet-supported versus face-to-face cognitive behavior therapy for depression. Expert Rev Neurother. 2016;16(1):55–60.
4. Andrade LH, Alonso J, Mneimneh Z, Wells JE, Al-Hamzawi A, Borges G, et al. Barriers to mental health treatment: results from the WHO World Mental Health surveys. Psychol Med. 2014 Apr;44(6):1303–17.
5. Baldauf M, et al. Exploring requirements and opportunities of conversational user interfaces for the cognitively impaired. In: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct; 2018. p. 119–26.
6. Barak A, Klein B, Proudfoot JG. Defining internet-supported therapeutic interventions. Ann Behav Med. 2009 Aug;38(1):4–17.
7. Barney LJ, Griffiths KM, Jorm AF, Christensen H. Stigma about depression and its impact on help-seeking intentions. Aust N Z J Psychiatry. 2006 Jan;40(1):51–4.
8. Baumeister H, Grässle C, Ebert DD, Krämer LV. Blended Psychotherapy – verzahnte Psychotherapie: Das Beste aus zwei Welten. Psychother Dialog. 2018;19(04):33–8.
9. Baumeister H. Inappropriate prescriptions of antidepressant drugs in patients with subthreshold to mild depression: time for the evidence to become practice. J Affect Disord. 2012 Aug;139(3):240–3.
10. Becker D. Possibilities to Improve Online Mental Health Treatment: Recommendations for Future Research and Developments. Future of Information and Communication Conference; 2018.
11. Bendig E, Erb B, Meißner D, et al. A Software agent (Chatbot) providing a writing Intervention for Self-help to Uplift psychological wellbeing (SISU) – a pretest-posttest pilot study. DRKS; 2018. DRKS-ID: DRKS00014933.
12. Bertolucci J, Watson I. Chat bots 101: a primer for app developers. IBM; 2016. https://www.ibm.com/blogs/watson/2016/10/chat-bots-101-primer-app-developers/
13. Beun RJ, de Vos E, Witteman C. Embodied Conversational Agents: Effects on Memory Performance and Anthropomorphisation. In: Rist T, Aylett RS, Ballin D, Rickel J, editors. Intelligent Virtual Agents. IVA 2003. Berlin, Heidelberg: Springer; 2003. p. 315–9.
14. Bickmore T, Gruber A, Picard R. Establishing the computer-patient working alliance in automated health behavior change interventions. Patient Educ Couns. 2005a Oct;59(1):21–30.
15. Bickmore T. Relational Agents for chronic disease self management. In: Hayes BM, Aspray W, editors. Healing Informatics: A Patient-Centered Approach to Diabetes. Cambridge: MIT Press; 2010. p. 181–204.
16. Bickmore TW, Caruso L, Clough-Gorr K, Heeren T. “It's just like you talk to a friend”: relational agents for older adults. Interact Comput. 2005b;17(6):711–35.
17. Bickmore TW, Mitchell SE, Jack BW, Paasche-Orlow MK, Pfeifer LM, Odonnell J. Response to a relational agent by hospital patients with depressive symptoms. Interact Comput. 2010a Jul;22(4):289–98.
18. Bickmore TW, Puskar K, Schlenk EA, Pfeifer LM, Sereika SM. Maintaining reality: relational agents for antipsychotic medication adherence. Interact Comput. 2010b;22(4):276–88.
19. Bickmore TW, Schulman D, Sidner C. Automated interventions for multiple health behaviors using conversational agents. Patient Educ Couns. 2013 Aug;92(2):142–8.
20. Bird T, Mansell W, Wright J, Gaffney H, Tai S. Manage Your Life Online: A Web-Based Randomized Controlled Trial Evaluating the Effectiveness of a Problem-Solving Intervention in a Student Sample. Behav Cogn Psychother. 2018 Sep;46(5):570–82.
21. Bower P, Gilbody S. Stepped care in psychological therapies: access, effectiveness and efficiency. Narrative literature review. Br J Psychiatry. 2005 Jan;186(1):11–7.
22. Brandtzaeg PB, Følstad A. Why People Use Chatbots. In: Kompatsiaris I, Cave J, Satsiou A, et al., editors. Internet Science. Cham: Springer International Publishing; 2017. p. 377–92.
23. Brixey J, Novick D. Building Rapport with Extraverted and Introverted Agents. In: Eskenazi M, Devillers L, Mariani J, editors. Advanced Social Interaction with Agents. Lecture Notes in Electrical Engineering. Vol. 510. Cham: Springer; 2019. p. 3–13.
24. Bundesministerium für Bildung und Forschung. Richtlinie zur Förderung von Projekten im Wissenschaftsjahr 2019. Berlin; 2018.
25. Burton C, Szentagotai Tatar A, McKinstry B, Matheson C, Matu S, Moldovan R, et al. Pilot randomised controlled trial of Help4Mood, an embodied virtual agent-based system to support treatment of depression. J Telemed Telecare. 2016 Sep;22(6):348–55.
26. Carey TA. The Method of Levels: How to Do Psychotherapy Without Getting in the Way. Chicago: Living Control Systems Publishing; 2006.
27. Carlbring P, Andersson G, Cuijpers P, Riper H, Hedman-Lagerlöf E. Internet-based vs. face-to-face cognitive behavior therapy for psychiatric and somatic disorders: an updated systematic review and meta-analysis. Cogn Behav Ther. 2018 Jan;47(1):1–18.
28. Chowdhury GG. Natural language processing. Annu Rev Inform Sci Tech. 2003;37(1):51–89.
29. Cohen S, Kamarck T, Mermelstein R. Perceived stress scale. In: Measuring Stress: A Guide for Health and Social Scientists; 1994.
30. Core M, Traum D, Lane HC, Swartout W, Gratch J, van Lent M, et al. Teaching negotiation skills through practice and reflection with virtual humans. Simulation. 2006;82(11):685–701.
31. Cristea IA, Sucalǎ M, David D. Can you tell the difference? Comparing face-to-face versus computer-based interventions. The “Eliza” effect in psychotherapy. J Cogn Behav Psychother. 2013;13:291–8.
32. Dale R. The return of the chatbots. Nat Lang Eng. 2016;22(5):811–7.
33. D'Alfonso S, Santesteban-Echarri O, Rice S, Wadley G, Lederman R, Miles C, et al. Artificial Intelligence-Assisted Online Social Therapy for Youth Mental Health. Front Psychol. 2017 Jun;8:796.
34. De Choudhury M, et al. Discovering Shifts to Suicidal Ideation from Mental Health Content in Social Media. 2016. p. 2098–110. doi: 10.1145/2858036.2858207
35. DeVault D, et al. SimSensei Kiosk: a virtual human interviewer for healthcare decision support. USC Institute for Creative Technologies; 2014. p. 1061–8.
36. DGPT. Stellungnahme der Deutschen Gesellschaft für Psychoanalyse, Psychotherapie, Psychosomatik und Tiefenpsychologie (DGPT) zur „Blended Therapy“; 2018.
37. Diener E, Emmons RA, Larsen RJ, Griffin S. The satisfaction with life scale. J Pers Assess. 1985 Feb;49(1):71–5.
38. Diener E, Wirtz D, Tov W, Kim-Prieto C, Choi D, Oishi S, et al. New well-being measures: short scales to assess flourishing and positive and negative feelings. Soc Indic Res. 2010;97(2):143–56.
39. Domhardt M, Baumeister H. Psychotherapy of adjustment disorders: current state and future directions. World J Biol Psychiatry. 2018;19(sup1):S21–35.
40. Dowling M, Rickwood D. Online counseling and therapy for mental health problems: A systematic review of individual synchronous interventions using chat. J Technol Hum Serv. 2013;31(1):1–21.
41. Ebert DD, Van Daele T, Nordgreen T, Karekla M, Compare A, Zarbo C, et al. Internet- and Mobile-Based Psychological Interventions: Applications, Efficacy, and Potential for Improving Mental Health. Eur Psychol. 2018;23(2):167–87.
42. Fadhil A. A Conversational Interface to Improve Medication Adherence: Towards AI Support in Patient's Treatment. Cornell University; 2018.
43. Feijt MA, de Kort YA, Bongers IM, IJsselsteijn WA. Perceived Drivers and Barriers to the Adoption of eMental Health by Psychologists: The Construction of the Levels of Adoption of eMental Health Model. J Med Internet Res. 2018 Apr;20(4):e153.
44. Fitzpatrick KK, Darcy A, Vierhile M. Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment Health. 2017 Jun;4(2):e19.
45. Gaffney H, Mansell W, Edwards R, Wright J. Manage Your Life Online (MYLO): a pilot trial of a conversational computer-based intervention for problem solving in a student sample. Behav Cogn Psychother. 2014 Nov;42(6):731–46.
46. Gardiner P, Hempstead MB, Ring L, Bickmore T, Yinusa-Nyahkoon L, Tran H, et al. Reaching women through health information technology: the Gabby preconception care system. Am J Health Promot. 2013 Jan-Feb;27(3 Suppl):eS11–20.
47. Gardiner PM, McCue KD, Negash LM, Cheng T, White LF, Yinusa-Nyahkoon L, et al. Engaging women with an embodied conversational agent to deliver mindfulness and lifestyle recommendations: A feasibility randomized control trial. Patient Educ Couns. 2017 Sep;100(9):1720–9.
48. Gebhard P, et al. IDEAS4Games: Building expressive virtual characters for computer games. In: Prendinger H, Lester J, Ishizuka M, editors. Intelligent Virtual Agents. IVA 2008. Lecture Notes in Computer Science. Vol. 5208. Berlin, Heidelberg: Springer; 2008. p. 426–40.
50. Grünzig SD, Baumeister H, Bengel J, Ebert D, Krämer L. Effectiveness and acceptance of a web-based depression intervention during waiting time for outpatient psychotherapy: study protocol for a randomized controlled trial. Trials. 2018 May;19(1):285.
51. Ho A, Hancock J, Miner AS. Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot. J Commun. 2018 Aug;68(4):712–33.
52. Huang J, et al. TeenChat: A Chatterbot System for Sensing and Releasing Adolescents' Stress. In: Yin X, Ho K, Zeng D, et al., editors. Health Information Science. Cham: Springer International Publishing; 2015. p. 133–45.
53. Hudlicka E. Virtual training and coaching of health behavior: example from mindfulness meditation training. Patient Educ Couns. 2013 Aug;92(2):160–6.
54. Ireland D, Atay C, Liddle J, Bradford D, Lee H, Rushin O, et al. Hello Harlie: enabling speech monitoring through chat-bot conversations. Stud Health Technol Inform. 2016;227:55–60.
55. Jacobi F, Höfler M, Strehle J, Mack S, Gerschler A, Scholl L, et al. Psychische Störungen in der Allgemeinbevölkerung: Studie zur Gesundheit Erwachsener in Deutschland und ihr Zusatzmodul Psychische Gesundheit (DEGS1-MH). Nervenarzt. 2014 Jan;85(1):77–87.
56. Königbauer J, Letsch J, Doebler P, Ebert D, Baumeister H. Internet- and mobile-based depression interventions for people with diagnosed depression: A systematic review and meta-analysis. J Affect Disord. 2017 Dec;223:28–40.
57. Juniper Research. Chatbots, a game changer for banking & healthcare, saving $8 billion annually by 2022; 2018. https://www.juniperresearch.com/press/press-releases/chatbots-a-game-changer-for-banking-healthcare
58. Karyotaki E, Riper H, Twisk J, Hoogendoorn A, Kleiboer A, Mira A, et al. Efficacy of Self-guided Internet-Based Cognitive Behavioral Therapy in the Treatment of Depressive Symptoms: A Meta-analysis of Individual Participant Data. JAMA Psychiatry. 2017 Apr;74(4):351–9.
59. Kanter JW, Rusch LC, Busch AM, Sedivy SK. Validation of the Behavioral Activation for Depression Scale (BADS) in a Community Sample with Elevated Depressive Symptoms. J Psychopathol Behav Assess. 2009;31(1):36–42.
60. Kessler RC, Andrews G, Colpe LJ, Hiripi E, Mroczek DK, Normand SL, et al. Short screening scales to monitor population prevalences and trends in non-specific psychological distress. Psychol Med. 2002 Aug;32(6):959–76.
61. Klopfenstein LC, et al. The Rise of Bots. In: Proceedings of the 2017 Conference on Designing Interactive Systems (DIS '17). New York: ACM Press; 2017. p. 555–65. doi: 10.1145/3064663.3064672
62. Kowatsch T, et al. Text-based healthcare chatbots supporting patient and health professional teams: preliminary results of a randomized controlled trial on childhood obesity. In: Proceedings of the Persuasive Embodied Agents for Behavior Change (PEACH2017) Workshop, co-located with the 17th International Conference on Intelligent Virtual Agents (IVA 2017); Stockholm, Sweden; 2017.
63. Krämer NC, et al. Social snacking with a virtual agent – On the interrelation of need to belong and effects of social responsiveness when interacting with artificial entities. Int J Hum Comput Stud. 2018;109:112–21. doi: 10.1016/j.ijhcs.2017.09.001
64. Kroenke K, Strine TW, Spitzer RL, Williams JB, Berry JT, Mokdad AH. The PHQ-8 as a measure of current depression in the general population. J Affect Disord. 2009 Apr;114(1-3):163–73.
65. Lortie CL, Guitton MJ. Judgment of the humanness of an interlocutor is in the eye of the beholder. PLoS One. 2011;6(9):e25085.
66. Ly KH, Ly AM, Andersson G. A fully automated conversational agent for promoting mental well-being: A pilot RCT using mixed methods. Internet Interv. 2017;10:39–46. doi: 10.1016/j.invent.2017.10.002
67. Martínez-Miranda J. Embodied Conversational Agents for the Detection and Prevention of Suicidal Behaviour: Current Applications and Open Challenges. J Med Syst. 2017 Sep;41(9):135.
68. Moher D, Stewart L, Shekelle P. All in the Family: systematic reviews, rapid reviews, scoping reviews, realist reviews, and more. Syst Rev. 2015 Dec;4(1):183.
69. Morris RR, Kouddous K, Kshirsagar R, Schueller SM. Towards an Artificially Empathic Conversational Agent for Mental Health Applications: System Design and User Perceptions. J Med Internet Res. 2018 Jun;20(6):e10148.
70. Paganini S, Lin J, Ebert DD, Baumeister H. Internet- und mobilebasierte Intervention bei psychischen Störungen. Neurotransmitter (Houst). 2016;27(1):48–57.
71. Peterson J, Pearce PF, Ferguson LA, Langford CA. Understanding scoping reviews: Definition, purpose, and process. J Am Assoc Nurse Pract. 2017 Jan;29(1):12–6.
72. Provoost S, Lau HM, Ruwaard J, Riper H. Embodied conversational agents in clinical psychology: A scoping review. J Med Internet Res. 2017 May;19(5):e151.
73. Psychiatric RU, Psychiatric CNZ. World Health Organisation-Five Well-Being Index (WHO-5-J), Japanese version; 2002. Available from: https://www.psykiatri-regionh.dk/who-5/Documents/WHO5_Japanese.pdf
74. Rickwood DJ, Deane FP, Wilson CJ. When and how do young people seek professional help for mental health problems? Med J Aust. 2007 Oct;187(7 Suppl):S35–9.
75. Rogers CR. The necessary and sufficient conditions of therapeutic personality change. J Consult Psychol. 1957 Apr;21(2):95–103.
76. Rubeis G, Steger F. Die Implementierung internet- und mobil-gestützter Interventionen (IMIs) bei psychischen Störungen in Deutschland aus ethischer Sicht. Nervenarzt. 2019:1–6.
77. Salathé M, Wiegand T, Wenzel M. Focus Group on Artificial Intelligence for Health. 2018. https://arxiv.org/ftp/arxiv/papers/1809/1809.04797.pdf
78. Sebastian J, Richards D. Changing stigmatizing attitudes to mental health via education and contact with embodied conversational agents. Comput Human Behav. 2017;73:479–88.
79. Smola A, et al. Introduction to machine learning. 2008. http://alex.smola.org/drafts/thebook.pdf
80. Stiefel S. The Chatbot Will See You Now: Mental Health Confidentiality Concerns in Software Therapy. SSRN Electron J. 2018.
81. Stieger M, Nißen M, Rüegger D, Kowatsch T, Flückiger C, Allemand M. PEACH, a smartphone- and conversational agent-based coaching intervention for intentional personality change: study protocol of a randomized, wait-list controlled trial. BMC Psychol. 2018 Sep;6(1):43.
82. Storp M. Chatbots. Möglichkeiten und Grenzen der maschinellen Verarbeitung natürlicher Sprache. Networx; 2002. p. 25.
83. Suganuma S, Sakamoto D, Shimoyama H. An Embodied Conversational Agent for Unguided Internet-Based Cognitive Behavior Therapy in Preventative Mental Health: Feasibility and Acceptability Pilot Trial. JMIR Ment Health. 2018 Jul;5(3):e10454.
84. Tielman ML, Neerincx MA, van Meggelen M, Franken I, Brinkman WP. How should a virtual agent present psychoeducation? Influence of verbal and textual presentation on adherence. Technol Health Care. 2017 Dec;25(6):1081–96.
85. Weizenbaum J. ELIZA—a computer program for the study of natural language communication between man and machine. Commun ACM. 1966;9(1):36–45.
86. World Economic Forum. The Future of Jobs Report 2018. http://www3.weforum.org/docs/WEF_Future_of_Jobs_2018.pdf
87. Yuan M. Building Intelligent, Cross-platform, Messaging Bots. 2018. https://dl.acm.org/citation.cfm?id=3217681
88. Zanbaka C, Goolkasian P, Hodges L. Can a virtual cat persuade you? In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '06). New York: ACM Press; 2006. p. 1153. doi: 10.1145/1124772.1124945
89. Zobel A, Meyer A. Psyche und Psychosomatik. In: Bundesarbeitsgemeinschaft für Rehabilitation e.V. (BAR), editor. Rehabilitation. Berlin, Heidelberg: Springer; 2018. p. 27–36. doi: 10.1007/978-3-662-54250-7_4