Extended Abstract

Using ChatGPT in HCI Research—A Trioethnography

Published: 19 July 2023
  Abstract

    This paper explores the lived experience of using ChatGPT in HCI research through a month-long trioethnography. Our approach combines the expertise of three HCI researchers with diverse research interests to reflect on our daily experience of living and working with ChatGPT. Our findings are presented as three provocations grounded in our collective experiences and HCI theories. Specifically, we examine (1) the emotional impact of using ChatGPT, focusing on frustration and embarrassment, and (2) the absence of accountability and of consideration for future implications in design, and we raise (3) questions around bias from a Global South perspective. Our work aims to inspire critical discussion about using ChatGPT in HCI research and to advance equitable and inclusive technological development.


    Cited By

    • (2024) Challenges and Opportunities of LLM-Based Synthetic Personae and Data in HCI. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1–5. https://doi.org/10.1145/3613905.3636293. Online publication date: 11 May 2024.
    • (2024) Playing with Perspectives and Unveiling the Autoethnographic Kaleidoscope in HCI – A Literature Review of Autoethnographies. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–20. https://doi.org/10.1145/3613904.3642355. Online publication date: 11 May 2024.
    • (2024) User Experience Design Professionals' Perceptions of Generative Artificial Intelligence. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3613904.3642114. Online publication date: 11 May 2024.


      Published In

      CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces
      July 2023, 504 pages
      ISBN: 9798400700149
      DOI: 10.1145/3571884
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 19 July 2023

      Author Tags

      1. ChatGPT
      2. Large Language Models (LLMs)
      3. Situated XAI
      4. Trioethnography

      Qualifiers

      • Extended Abstract
      • Research
      • Refereed limited

      Conference

      CUI '23: ACM Conference on Conversational User Interfaces
      July 19–21, 2023, Eindhoven, Netherlands

      Acceptance Rates

      Overall acceptance rate: 34 of 100 submissions (34%)
