Interactivity

Accepted Interactivity

Interactivity Demos

Here is the list of CHI 2017 Interactivity Demos:

 

Keep in Touch: Portable Haptic Display With 192 High Speed Taxels

Presenter: Juan Zarate

Abstract: We present a portable 12x16 taxel haptic display optimized to rapidly display dynamic graphical information. Each taxel changes state (up/down) in under 5 milliseconds, allowing the entire display of 192 independent taxels to be refreshed in under 2 seconds. The user explores the 7-inch display with their sense of fine touch. We demonstrate applications in serious gaming (tactile Pong for the visually impaired), remote collaboration between sighted and visually impaired users (the remote user draws in real time on the local haptic display), and navigation scenarios. Information can be displayed as a series of static relief images, or as a static image with moving or vibrating taxels. For the navigation task, the outline of a room and its furniture is shown first as a static relief, the path to be followed is added as a moving taxel, and the user's location is shown as a vibrating taxel. The taxels latch in both the up and down states, leading to low power consumption.

 

Mapping Memory Routes: A Multisensory Interface for Sensorial Urbanism and Critical Heritage Studies

Presenter: Alda Terracciano

Abstract: This demonstration offers the opportunity to explore a multisensory digital interface as part of the wider research project ‘Mapping Memory Routes: Eliciting Culturally Diverse Memes for Digital Archives’. The interface is conceived as a tool for capturing memes rooted in the rich intangible heritage of culturally diverse communities in London, opening up a space for intercultural exchange to be used in meaningful urban design. Based on a model developed by artist and researcher Alda Terracciano for her multisensory installation ‘Streets of...7 cities in 7 minutes’, the interface is used to explore new design methods to elicit cultural memories through the use of multisensory technology. The tool aims to stimulate collective curatorial practices that democratise decision-making processes in critical heritage studies and urban planning.

 

Selection and Manipulation Methods for a Menu Widget on the Human Forearm

Presenter: Takumi Azai

Abstract: Mixed reality (MR) space merges the real and virtual worlds in real time, making it possible to present and manipulate virtual objects in the real world. However, manipulating virtual objects requires menus, and where to display menus in MR space and how to manipulate them are open problems. For example, a virtual touch menu shown in front of a user's face cannot provide the user with touch sensation and interferes with the user's sight. In this study, we propose a method that displays a menu on the user's forearm, which is always within reach of the user's hand. The user obtains tactile feedback by directly touching their own forearm. An application was developed using this menu, and an informal user study at a previous conference was successful, leaving some minor points to be improved.

 

Motion Log Skateboard: Visualizing Pressure Distribution of Skateboarding

Presenter: Hyung Kun Park

Abstract: Skateboarding is an extreme sport that consists of various trick performances. Riders control the board with various foot movements and slide over obstacles with it. In this research, we present Motion Log Skateboard, a system that visualizes the pressure distribution of the feet on the board. A customizable pressure-sensor sheet is attached to the top of the skateboard deck, and the pressure distribution is imaged and replayed alongside recorded video in a smartphone app. We focus on visualizing non-visible information about body movement, which is not easily observed but acts as an important element in sports activity. We expect that providing this kind of information through interactive technology will encourage discussion of body movement and induce people to share their body movements with others.

 

Conveyor World: Mixed Reality Game on Physically Actuated Game Stage

Presenter: Jiwoo Hong

Abstract: In this research, we develop an immersive mixed reality game environment using an actuated conveyor-belt surface as a game stage. One player creates the game environment by arranging tangible objects; these objects flow linearly and interact with a virtual character manipulated by another player. We expect that enjoyment is heightened as players become highly immersed in the mixed reality game world, and that new kinds of interaction emerge between two players with different game roles. The prototype has been pilot-tested and is planned for a future demonstration.

 

Play With Temperature: Exploring Functions of Thermal Feedback in Virtual Reality Experience

Presenter: Roshan Lalintha Peiris

Abstract: Head-mounted displays (HMDs) offer a promising opportunity to provide haptic feedback on the head for an enhanced immersive experience. In ThermoVR, we integrated five thermal feedback modules into the HMD to provide thermal feedback directly on the user's face. We conducted evaluations with 15 participants using two approaches: first, we provided simultaneously actuated thermal stimulations (hot and cold) as directional cues and evaluated recognition accuracy; second, we evaluated the overall immersion users experience when provided with thermal feedback on the face. Results indicated a recognition accuracy of approximately 89.5% for cold stimuli and 68.6% for hot stimuli. Participants also reported a higher level of immersion when all modules were simultaneously stimulated (hot and cold). The presented applications demonstrate ThermoVR's directional cueing and immersive experience.

 

HeadPhones: Ad Hoc Mobile Multi-Display Environments through Head Tracking

Presenter: Jens Grubert

Abstract: We present HeadPhones (Headtracking + smartPhones), a novel approach for the spatial registration of multiple mobile devices into an ad hoc multi-display environment. We propose to employ the user's head as an external reference frame for registering multiple mobile devices into a common coordinate system. Our approach allows devices to be dynamically repositioned at runtime without the need for external infrastructure such as separate cameras or fiducials. Specifically, our only requirements are local network connections and mobile devices with built-in front-facing cameras. This way, HeadPhones enables spatially aware multi-display applications in mobile contexts. A user study and accuracy evaluation indicate the feasibility of our approach.

 

Quietto: An Interactive Timepiece Molded in Concrete and Milled Wood

Presenter: Kyung-Ryong Lee

Abstract: We introduce Quietto: an interactive timepiece made of molded concrete and milled wood. It shows upcoming daily schedules and the time through the quiet, ambient motions of a clock hand and light shining through the concrete touch interface. The results of an in-field user observation of 10 participants over 3 days showed the possibilities of using concrete as a unique and attractive material for designing a tangible interface due to its unexpected haptic feeling. We also found that Quietto provides an intuitive and effective representation of its users' daily schedules and can be used as a private, personal device. Through its distinctive design, Quietto can provide a new way of understanding scheduling through its concrete texture and amusing interaction qualities.

 

Close the Circuit 'N Play the Electrons: Learning Electricity with an Augmented Circuit Exhibit

Presenter: Elham Beheshti

Abstract: Understanding electrical circuits can be difficult for novices of all ages. In this paper, we describe a science museum exhibit that enables visitors to make circuits on an interactive tabletop and observe a simulation of electrons flowing through the circuit. Our goal is to use multiple representations to help convey basic concepts of current and resistance. To study visitor interaction and learning, we tested the design at a popular science museum with 60 parent-child dyads in three conditions: a control condition with no electron simulation; a condition with the simulation displayed alongside the circuit on the same screen; and an augmented reality condition, with the simulation displayed on a tablet that acts as a lens to see into the circuit. Our findings show that children did significantly better on a post-test in both experimental conditions, with children performing best in the AR condition. However, analysis of session videos shows unexpected parent-child collaboration in the AR condition.

 

Providing Haptics to Walls & Heavy Objects in Virtual Reality by Means of Electrical Muscle Stimulation

Presenter: Pedro Lopes

Abstract: We explore how to add haptics to walls and other heavy objects in virtual reality. When a user tries to push such an object, our system actuates the user’s shoulder, arm, and wrist muscles by means of electrical muscle stimulation (EMS), creating a counter force that pulls the user's arm backwards. Our device accomplishes this in a wearable form factor. In our first user study, participants wearing a head-mounted display interacted with objects provided with different types of EMS effects. The repulsion design (visualized as an electrical field) and the soft design (visualized as a magnetic field) received high scores on “prevented me from passing through” as well as “realistic.” In a second study, we demonstrate the effectiveness of our approach by letting participants explore a virtual world in which all objects provide haptic EMS effects, including walls, gates, sliders, boxes, and projectiles.

 

Demonstrating IllumiPaper: Illuminated Interactive Paper

Presenter: Konstantin Klamka

Abstract: Due to their simplicity and flexibility, digital pen-and-paper solutions have a promising potential to become a part of our daily work. Unfortunately, they lack dynamic visual feedback and thereby restrain advanced digital functionalities. In this paper, we investigate new forms of paper-integrated feedback, which build on emerging paper-based electronics and novel thin-film display technologies. Our approach focuses on illuminated elements, which are seamlessly integrated into standard paper. For that, we introduce an extended design space for paper-integrated illuminations. As a major contribution, we present a systematic feedback repertoire for real-world applications including feedback components for innovative paper interaction tasks in five categories. Furthermore, we contribute a fully-functional research platform including a paper-controller, digital pen and illuminated, digitally controlled papers that demonstrate the feasibility of our techniques. Finally, we report on six interviews, where experts rated our approach as intuitive and very usable for various applications, in particular educational ones.

 

FaceDisplay: Enabling Multi-User Interaction for Mobile Virtual Reality

Presenter: Jan Gugenheimer

Abstract: We present FaceDisplay, a multi-display mobile virtual reality (VR) head-mounted display (HMD) designed to enable non-HMD users to perceive and interact with the virtual world of the HMD user. Mobile VR HMDs allow users to immerse themselves wherever and whenever they wish, enabling application scenarios in which users interact with VR in public places. However, this excludes everyone in the surroundings without an HMD, reducing them to bystanders and onlookers. FaceDisplay allows bystanders to see inside the immersed user's virtual world and to interact with it via touch. We built a prototype consisting of three additional screens and present interaction techniques and an example application that leverage the FaceDisplay design space.

 

Illumination Aesthetics: Light as a Creative Material within Computational Design

Presenter: Cesar Torres

Abstract: Recent digital fabrication tools have enabled new form-giving using a wide range of physical materials. However, light as a first-class creative material has been largely ignored within the design of our electronic objects. Our work expands the illumination design space by treating light as a physical material. We introduce a digital design tool that simulates and visualizes physical light interactions with a variety of materials for creating custom luminaires. We further develop a computational design and fabrication process for creating custom secondary optics elements (SOEs), which provide additional handles for users to physically shape and redirect light to compose, fill, and evenly diffuse planar and volumetric geometries. Through a workshop study with novice electronic designers, we show how incorporating physical techniques to shape light alters how users view the role and function of LEDs and electronics. We produce example pieces that showcase how our approach expands the electronics aesthetic and discuss how viewing light as a material can engender novel, expressive artifacts.

 

Calibration Methods for Effective Fish Tank VR in Multi-screen Displays

Presenter: Dylan Fafard

Abstract: We present cubic and spherical multi-screen fish tank virtual reality displays that use novel interactive and automatic calibration techniques to achieve convincing 3D effects. The two displays contrast the challenges and benefits of multiple projectors versus flat-panel screens, bordered versus borderless designs, and differing headtracker performance. Individuals will be able to subjectively evaluate the visual fidelity of the displays by comparing physical objects to their virtual counterparts, comparing the two displays, and changing the level of calibration accuracy. They will be able to test the first markerless, interactive, and user-dependent headtracker calibration, which promises accurate viewpoint registration without the need for manual measurements. In conjunction with an automatic screen calibration technique, the displays offer a unique and convincing 3D experience.

 

SoPhy: A Wearable Technology for Video Consultations of Physiotherapy

Presenter: Deepti Aggarwal

Abstract: Physiotherapists are increasingly using video conferencing tools for their teleconsultations. Yet, the assessment of subtle differences in body movements remains a challenge. To support lower limb assessment in video consultations, we present SoPhy, a wearable technology consisting of a pair of socks with embedded sensors for patients to wear, and a web interface that displays information about range of weight distribution, foot movement, and foot orientation for physiotherapists in real time. We conducted a laboratory study of 40 video consultations, in which postgraduate physiotherapy students assessed lower limb function, and compared assessments made with and without SoPhy. Findings show that SoPhy increased confidence in assessing the squat exercise and that fewer repetitions were required to assess patients. We discuss the significance of SoPhy for addressing the challenges of assessing bodily information over video, and present considerations for its integration with clinical practices and tools.

 

Sharing Tea over a Distance with the Messaging Kettle

Presenter: Aloha May Ambe

Abstract: This paper presents the concept of technology individuation and explores its role in design. Individuation expresses how, over time, a technology becomes personal and intimate, unique in purpose, orchestrated in place, and how people eventually come to rely on it to sustain connection with others. We articulate this concept as a critical vantage point for designing augmented everyday objects and the Internet of Things. Individuation foregrounds aspects of habituation, routines, and arrangements that through everyday practices reveal unique meaning, reflect self-identity, and support agency. The concept is illustrated through three long-term case studies of technology in use, involving tangible and embodied interaction with devices that afford communication, monitoring, and awareness in the home setting. The cases are analysed using Hornecker and Buur’s Tangible Interaction Framework. We further extend this framework to better reveal the role played by personal values, history of use, and arrangements, as they develop over time in the home setting, in shaping tangible and embodied interaction with individuated technologies.

 

VersaPen: Exploring Multimodal Interactions with a Programmable Modular Pen

Presenter: Marc Teyssier

Abstract: We introduce and demonstrate VersaPen, a modular pen for expanding input capabilities. Users can create their own digital pens by stacking different input/output modules that define both the look and feel of the customized device. VersaPen demonstrates the benefits of adaptable devices and enriches interaction by providing multimodal capabilities, allowing in-place interaction, reducing hand movements, and avoiding cluttering the interface with menus and palettes. A visual programming interface lets the device integrate seamlessly with existing software by allowing end users to connect its inputs and outputs. We present various applications to demonstrate the power of VersaPen and how it enables new interaction techniques.

 

TJBot: An Open Source DIY Cardboard Robot for Programming Cognitive Systems

Presenter: Victor Dibia

Abstract: TJBot is an open source, interactive robot designed to encourage people to build with cognitive services in a fun way. He is a paper robot, which can also be 3D printed, and comes with an initial set of recipes that bring him to life. Recipes are a combination of step-by-step instructions plus sample code that walk people through the assembly of the robot, its hardware components, and software code that connects him to Watson cognitive services. TJBot can be programmed to listen, speak, see and recognize, shine his LED, understand emotions, and wave his arm. TJBot was designed for two communities: makers, who enjoy the DIY aspects of building and programming novel devices, and students, who can learn about programming cognitive systems. At our demo, people can build their very own TJBot out of cardboard and interact with him through speech.

 

Electrick: Low-Cost Touch Sensing Using Electric Field Tomography

Presenter: Yang Zhang

Abstract: Current touch input technologies are best suited for small and flat applications, such as smartphones, tablets and kiosks. In general, they are too expensive to scale to large surfaces, such as walls and furniture, and cannot provide input on objects having irregular and complex geometries, such as tools and toys. We introduce Electrick, a low-cost and versatile sensing technique that enables touch input on a wide variety of objects and surfaces, whether small or large, flat or irregular. This is achieved by using electric field sensing in concert with an electrically conductive material, which can be easily and cheaply added to objects and surfaces. We show that our technique is compatible with commonplace manufacturing methods, such as spray/brush coating, vacuum forming, and casting/molding – enabling a huge range of possible uses and outputs. Our technique can also be used to bring touch interactivity to rapidly fabricated objects, including those that are laser cut or 3D printed. Through a series of user studies and illustrative example uses, we show that Electrick can enable new interactive opportunities on a diverse set of objects and surfaces that were previously static.

 

Synthetic Sensors: Towards General-Purpose Sensing

Presenter: Gierad Laput

Abstract: The promise of smart environments and the Internet of Things (IoT) relies on robust sensing of diverse environmental facets. Traditional approaches rely on direct and distributed sensing, most often by measuring one particular aspect of an environment with a special purpose sensor. This approach can be costly to deploy, hard to maintain, and aesthetically and socially obtrusive. In this work, we explore the notion of general purpose sensing, wherein a single enhanced sensor can indirectly monitor a large context, without direct instrumentation of objects. Further, through what we call Synthetic Sensors, we can virtualize raw sensor data into actionable feeds, whilst simultaneously mitigating immediate privacy issues. A series of structured, formative studies informed the development of our new sensor hardware and accompanying information architecture. We deployed our system across many months and environments, the results of which show the versatility, accuracy and potential utility of our approach.

 

Guidelines to Incorporate a Clinician User Experience (UX) into the Design of Patient-Operated mHealth

Presenter: Harry Tunnell

Abstract: This interactivity demonstration paper highlights how a patient-operated mHealth solution can be designed to improve clinician understanding of a patient’s health status during a first face-to-face encounter. Patients can use smartphones to retrieve difficult-to-recall personal health information, which provides an opportunity to improve patient-clinician collaboration. To explore this idea, a mixed-method study with 12 clinicians in a simulated encounter was conducted. A smartphone personal health record was prototyped and used for the experimental study. Communication, efficiency, and effectiveness were improved for clinicians who experienced the prototype. Study outcomes included a validated set of design guidelines for mHealth tools to support better patient-clinician communication.

 

Deus EM Machina: On-Touch Contextual Functionality for Smart IoT Appliances

Presenter: Robert Xiao

Abstract: Homes, offices and many other environments will be increasingly saturated with connected, computational appliances, forming the “Internet of Things” (IoT). At present, most of these devices rely on mechanical inputs, webpages, or smartphone apps for control. However, as IoT devices proliferate, these existing interaction methods will become increasingly cumbersome. Will future smart-home owners have to scroll through pages of apps to select and dim their lights? We propose an approach where users simply tap a smartphone to an appliance to discover and rapidly utilize contextual functionality. To achieve this, our prototype smartphone recognizes physical contact with uninstrumented appliances and summons appliance-specific interfaces. Our user study suggests high accuracy: 98.8% recognition among 17 appliances. Finally, to underscore the immediate feasibility and utility of our system, we built twelve example applications, including six fully functional end-to-end demonstrations.

 

The Club of The Future: Participatory Clubbing Experiences

Presenter: Thomas Röggla

Abstract: This article showcases our effort to explore the music club of the future. We present the development and results of an end-to-end system that enhances the club-going experience through the use of wearable technology. Each party guest wearing one of the wristbands actively contributes to the overall experience with their movement and location patterns. The system collects acceleration data from each attendee in real time and feeds it into a pluggable network infrastructure, which processes the data and affects the environment via data visualization or control of the light and sound system of a curated space within the club. Finally, we describe the results of a two-night deployment with 450 guests per night.

 

Calm Automaton: A DIY Toolkit for Ambient Displays

Presenter: Daniel Saakes

Abstract: The abundance of information technology in today’s society results in “alert fatigue” due to the overwhelming number of alarms and notifications that attempt to grab our attention. We introduce Calm Automaton, a customizable and programmable physical display that gently visualizes abstract data in a pleasing and meaningful way, without attracting attention. We extend the concept of calm technology with a DIY toolkit to make information and notifications comfortable, personal, and embedded in the periphery. We describe the design and implementation of the motion modules that make up the automaton and report on the experience of people using these displays.

 

Morphology Extension Kit: A Modular Robotic Platform for Customizable and Physically Capable Wearables

Presenter: Sang-Won Leigh

Abstract: Robotic and shape-changing interfaces hint at a way to incorporate physical materials as extensions of human users. However, rapidly changing environments pose a diverse set of problems that are difficult to solve with a single interface. To address this, we propose a modular hardware platform that allows users or designers to build and customize physical augmentations. Building an augmentation is as simple as connecting actuator and sensor blocks and attaching the resulting wearable to the body. The current design can easily be modified to incorporate additional electronics for desired sensing capabilities. Our universal connector designs can be extended to utilize any motors within the afforded power, size, and weight constraints.

 

Digital and Analog Metamaterial Mechanisms

Presenter: Alexandra Ion

Abstract: In this paper, we explore how to embody mechanical computation into 3D printed objects, i.e., without electronic sensors, actuators, or controllers typically used for this purpose. A key benefit of our approach is that the resulting objects can be 3D printed in one piece and thus do not require assembly. We are building on 3D printed cell structures, also known as metamaterials. We introduce a new type of cell that propagates a digital mechanical signal using an embedded bistable spring. When triggered, the embedded spring discharges and the resulting impulse triggers one or more neighboring cells, resulting in signal propagation. We extend this basic mechanism to implement simple logic functions. We demonstrate interactive objects based on this concept, such as a combination lock. We present a custom editor that allows users to model 3D objects, route signals, simulate signal flow, and synthesize cell patterns.

 

E-vita, a Tactile Feedback Based Tablet with Programmable Friction

Presenter: Yosra Rekik

Abstract: We investigate the relevance of surface haptic rendering techniques for tactile devices. We focus on the two major existing techniques and show that they have complementary benefits. The first, Surface Haptic Object (SHO), is based on finger position and is shown to be more suitable for rendering sparse textures; the second, Surface Haptic Texture (SHT), is based on finger velocity and is shown to be more suitable for dense textures and fast finger movements. We therefore propose a new rendering technique, Localized Haptic Texture (LHT), based on the concept of a taxel, an elementary unit of tactile information rendered on the screen. By using a grid of taxels to encode a texture, LHT is shown to provide consistent tactile rendering across different velocities for high-density textures, and is found to reduce user error rate by up to 77.68% compared to SHO.

 

bioSync: A Paired Wearable Device for Blending Kinesthetic Experiences

Presenter: Jun Nishida

Abstract: We present a novel, paired, wearable system for combining the kinesthetic experiences of two persons. These devices allow users to sense and combine muscle contraction and joint rigidity bi-directionally. This is achieved through kinesthetic channels based on electromyogram (EMG) measurement and electrical muscle stimulation (EMS). We developed a pair of wearable kinesthetic input-output (I/O) devices called bioSync that uses specially designed electrodes to perform biosignal measurement and stimulation simultaneously on the same electrodes. In a user study, participants successfully evaluated the strength of their partners' muscle contractions while exerting their own muscles. We confirmed that the pair of devices could help participants synchronize their hand movements through tapping, without visual and auditory feedback. The proposed interpersonal kinesthetic communication system can be used to enhance interactions such as clinical gait rehabilitation and sports training, and facilitate sharing of physical experiences with Parkinson's patients, thereby enhancing understanding of the physical challenges they face in daily life.

 

StatPlayground: Exploring Statistics through Visualizations

Presenter: Krishna Subramanian

Abstract: Statistical analysis is a crucial part of many research fields: it is used by the researcher to validate her hypothesis and to communicate her findings to the community. Null Hypothesis Significance Testing (NHST), a commonly used statistical analysis approach in many research fields, has been criticized over the years due to the prevalence of misconceptions and improper practice. We introduce StatPlayground, an exploratory tool that addresses the root problem of statistical illiteracy. StatPlayground allows users to control data characteristics (e.g., mean, variance of distributions) by directly manipulating visualizations (e.g., box plots) to see the effect on the resulting inferential statistics (e.g., p-value, effect size) and vice versa. We believe that StatPlayground has the potential to help users improve certain statistical literacy skills. We elaborate on the motivation behind this tool and demonstrate its features through use cases.

 

Bitbarista: Exploring Perceptions of Data Transactions in the Internet of Things

Presenter: Larissa Pschetz

Abstract: We are surrounded by a proliferation of connected devices performing increasingly complex data transactions. Traditional design methods tend to simplify or conceal this complexity to improve ease of use. However, the hidden nature of data is causing increasing discomfort. This paper presents BitBarista, a coffee machine designed to explore perceptions of data processes in the Internet of Things. BitBarista reveals social, environmental, qualitative and economic aspects of coffee supply chains. It allows people to choose a source of future coffee beans, situating their choices within the pool of decisions previously made. In doing so, it attempts to engage them in the transactions that are required to produce coffee. Initial studies of BitBarista with 42 participants reveal challenges of designing for connected systems, particularly in terms of perceptions of data gathering and sharing, as well as assumptions generated by current models of consumption. A discussion is followed by a series of suggestions for increasing positive attitudes towards data use in interactive systems.

 

Scientific Outreach with Teegi, a Tangible EEG Interface to Talk about Neurotechnologies

Presenter: Jeremy Frey

Abstract: Teegi is an anthropomorphic and tangible avatar exposing a user's brain activity in real time. It is connected to a device sensing the brain by means of electroencephalography (EEG). Teegi moves its hands and feet and closes its eyes along with the person being monitored. It also displays the associated EEG signals on its scalp, thanks to a semi-spherical display made of LEDs. Attendees can interact directly with Teegi -- e.g. move its limbs -- to discover the underlying brain processes by themselves. Teegi can be used for scientific outreach to introduce neurotechnologies in general and brain-computer interfaces (BCI) in particular.

 

A Collaborative 3D Manipulation Challenge

Presenter: Jerônimo Grandi

Abstract: Object manipulation in 3D virtual environments demands the combined coordination of rotations, translations, and scales, as well as camera control to change the user's viewpoint. For many manipulation tasks, it would therefore be advantageous to share the interaction complexity among team members. In this paper we propose a novel 3D manipulation interface based on a collaborative action coordination approach. Our technique uses a smartphone -- its touchscreen and inertial sensors -- as the input interface, enabling several users to collaboratively manipulate the same virtual object with their own devices. We first assessed our interface design on docking and obstacle-crossing tasks with teams of two users. We then conducted a study with 60 users to understand the influence of group size on collaborative 3D manipulation, evaluating teams of one, two, three, and four participants. Experimental results show that teamwork increases accuracy compared with a single user. The accuracy increase is correlated with the number of individuals in the team and their work-division strategy.

 

Para: Expressive Procedural Art Creation through Direct Manipulation

Presenter: Jennifer Jacobs

Abstract: Computation is a powerful artistic medium. Artists with experience in programming have demonstrated the unique creative opportunities of using code to make art. Currently, manual artists interested in using procedural techniques must undergo the difficult process of learning to program, and must adopt tools and practices far removed from those to which they are accustomed. We hypothesize that, through the right direct manipulation interface, we can enable accessible and expressive procedural art creation. To explore this, we developed Para, a digital illustration tool that supports the creation of declarative constraints in vector artwork. Para's constraints enable procedural relationships while facilitating live manual control and non-linear editing. Constraints can be combined with duplication behaviors and ordered collections of artwork to produce complex, dynamic compositions. We use the results of two open-ended studies with professional artists and designers to provide guidelines for accessible tools that integrate manual and procedural expression.

 

Sparkle: Tactile Feedback with Touchable Electric Arcs

Presenter: Deepak Sahoo

Abstract: Many finger sensing input devices now support proximity input, enabling users to perform in-air gestures. While near-surface interactions increase the input vocabulary, they lack tactile feedback, making it hard for users to perform gestures or to know when the interaction takes place. Sparkle stimulates the fingertip with touchable electric arcs above a hover sensing device to give users in-air tactile or thermal feedback that is sharper and more perceptible than acoustic mid-air haptic feedback. We present the design of a high voltage resonant transformer with a low-loss soft ferrite core and self-tuning driver circuit, with which we create electric arcs 6 mm in length, and combine this technology with infrared proximity sensing in two proof-of-concept devices with form factor and functionality similar to a button and a touchpad. We provide design guidelines for Sparkle devices and examples of stimuli in application scenarios, and report the results of a user study on the perceived sensations. Sparkle is the first step towards providing a new type of hover feedback, and it does not require users to wear tactile stimulators.

 

Crafting Tools for Textile Electronic Making

Presenter: Irene Posch

Abstract: This abstract introduces a set of tools for electronic textile crafts. The goal is to combine established textile craft routines with electronic functionalities while avoiding being limited to either crafting or engineering qualities. A discipline’s toolset is decisive in what material can be handled and what output can be produced as well as the skills needed to handle it. New tools presented here allow for new interactions with materials and make new routines accessible to the processes of electronic making and textile crafting, potentially being inclusive to new user groups in the respective fields. Here, traditional needlecraft tools are adapted to integrate electric engineering needs. Functionally, these tools augment processes of textile crafting with information about electric properties of the artifact in production. More broadly, they are a chance to reflect on the roles and cultural assumptions of the technologies, and indeed crafts, that these tools enable.

 

Changing the Appearance of Real-World Objects by Modifying Their Surroundings

Presenter: David Lindlbauer

Abstract: We present an approach to alter the perceived appearance of physical objects by controlling their surrounding space. Many real-world objects cannot easily be equipped with displays or actuators in order to change their shape. While common approaches such as projection mapping enable changing the appearance of objects without modifying them, certain surface properties (e.g. highly reflective or transparent surfaces) can make employing these techniques difficult. In this work, we present a conceptual design exploration on how the appearance of an object can be changed by solely altering the space around it, rather than the object itself. In a proof-of-concept implementation, we place objects onto a tabletop display and track them together with users to display perspective-corrected 3D graphics for augmentation. This enables controlling properties such as the perceived size, color, or shape of objects. We characterize the design space of our approach and demonstrate potential applications. For example, we change the contour of a wallet to notify users when their bank account is debited. We envision our approach to gain in importance with increasing ubiquity of display surfaces.

 

Interactivity Installations

Here is the list of CHI 2017 Interactivity Installations:

 

Tea with Crows: Towards Socially Engaging Digital Interaction

Presenter: Young Suk Lee

Abstract: "Tea with Crows" introduces an innovative design concept that plays a transformational role. Actively engaging with a meaning-making process, where the viewer can create a variable form and function, deconstructs the traditional vision of art.

 

HoloARt: Painting with Holograms in Mixed Reality

Presenter: Judith Amores

Abstract: HoloARt is a system that allows users to virtually spray and splatter hologram paint on top of physical objects and surfaces, as well as paint in the air, using only their hands.

 

The Living Net: A Haptic Experience of Personal Data

Presenter: Jessica Rajko

Abstract: "The Living Net" encourages discourse about where we archive our daily lives. Haptic transducers and participants' personal objects are woven into a hand-crocheted net that people can touch.

 

Audiovisual Playground: A Music Sequencing Tool for 3D Virtual Worlds

Presenter: Annie Kelly

Abstract: VR has given developers the power to redesign existing 2D interfaces as immersive 3D interfaces. Musical interfaces adapted to 3D worlds may result in increased levels of user engagement.

 

Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters

Presenter: Pascal Knierim

Abstract: We propose quadcopters to provide tactile stimulation in VR. While the user is visually and acoustically immersed in VR, small quadcopters simulate bumblebees, arrows, and other objects hitting the user.

 

Game of Light: Modeling Diversity Through Participatory Interaction

Presenter: Clement Zheng

Abstract: This project explores the role diversity plays in ecosystems through participatory design and modular paper lanterns. Building on Conway's Game of Life, the lanterns light up in patterns that reflect the changing diversity of the system.

 

Virtual Interactive Human Anatomy: Dissecting the Domain, Navigating the Politics, Creating the Impossible

Presenter: Weiquan Lu

Abstract: This project is an early exploration into designing active learning interactions for learning human anatomy using commercial off-the-shelf Virtual Reality hardware.

 

FusePrint: A DIY 2.5D Printing Technique for Good-fit Fabrication with Daily Objects

Presenter: Kening Zhu

Abstract: FusePrint is a stereolithography-based 2.5D rapid prototyping technique that allows high-precision fabrication without high-end modeling tools. By mixing everyday artifacts with photo-reactive resin, it facilitates the creation of new objects that fit existing daily objects.

 

Holograms without Headsets: Projected Augmented Reality with the RoomAlive Toolkit

Presenter: Andrew Wilson

Abstract: The RoomAlive Toolkit is an open source SDK that enables developers to create interactive projection mapping applications. We demonstrate a RoomAlive installation using three projectors and four cameras.

 

Demonstrating TrussFab: Fabricating Sturdy Large-Scale Structures on Desktop 3D Printers

Presenter: Robert Kovacs

Abstract: TrussFab is an end-to-end fabrication system that assists users in building structurally sound node-link structures from 3D-printed hubs combined with plastic bottles.

 

Quick Facts

Important Dates:

  • Installations
    • Submission deadline: 12 October 2016 (20:00 EDT)
    • Notification: 12 December 2016
  • Demonstrations
    • Submission deadline: 11 January 2017 (20:00 EST)
    • Notification: 10 February 2017
    • Publication-ready deadline: 15 February 2017

 

Submission Details:

  • Online Submission: PCS Submission System
  • Template: Extended Abstracts Format
  • Submission Format: 4-page Extended Abstract, video preview, still image, and a supplemental PDF describing what attendees will experience as well as technical and space requirements.
  • Submissions are not anonymous and should include all author names, affiliations, and contact information.

 

Selection Process: Curated

 

Chairs: Ido Guy and Stephen Voida (Interactivity Demonstrations: demos@chi2017.acm.org); Adrian Friday and Shengdong Zhao (Interactivity Installations: interactivity@chi2017.acm.org)

 

At the Conference: Accepted Interactivity will be presented during the conference in the exhibits area.

 

Archives: Interactivity descriptions will be published in the Extended Abstracts and in the ACM Digital Library.

 

Message from the Interactivity Chairs

Interactivity is a high-visibility, high-impact forum of the Technical Program that allows you to present hands-on demonstrations, share novel interactive technologies, and stage interactive experiences. We encourage submissions from any area of human-computer interaction, games, entertainment, digital and interactive art, and design. Interactivity promotes and provokes discussion of novel technologies and invites contributions from industry, research, startups, maker communities, the arts, and design. The Interactivity track showcases this year's most exciting interactive technologies and installations. If you have an interesting prototype, device, system, exhibit or installation, we want to know about it. Sharing hands-on experiences of your work is often the best way to communicate your creation.

 

This year we are dividing Interactivity into two tracks with an early and a late deadline. Interactivity Installations are for complex exhibits that require special spaces, equipment, or lighting conditions. Interactivity Demonstrations are for self-contained exhibits with logistically simple requirements.

 

Ido Guy, Yahoo!

Stephen Voida, CU-Boulder

demos@chi2017.acm.org

 

Adrian Friday, Lancaster, UK

Shengdong Zhao, NUS

interactivity@chi2017.acm.org

 

Interactivity: Installations

Working on something that makes a big impact? We are looking for artworks, design experiences, and inspirational technologies that the audience can engage with intellectually, imaginatively, and physically. Interactivity’s Installations track is for projects that need non-standard space, projection, special lighting conditions, or other equipment and setup needs to fully showcase their contributions.

 

Please give your requirements using the Installations Supplement.

 

Deadline: 12 October 2016 (20:00 EDT)

 

Interactivity: Demonstrations

Working on something exciting but self-contained? We want cutting-edge prototypes and demonstrations that will excite the CHI audience. Interactivity’s Demonstrations track is for exhibits that can be shown in a standard booth space. Demonstration presenters can also request a 24” LCD display, a 50” plasma display, speakers, and Wi-Fi. If you have needs beyond these items, you must submit to the Installations track.

 

Please give your requirements using the Demonstration Supplement.

 

Deadline: 11 January 2017 (20:00 EST)

 

Preparing and Submitting your Interactivity Submission

Previously published work will be accepted into the Interactivity track, on condition that the publication and presentation history is clearly outlined in the submission. The Interactivity track encourages authors of submitted CHI Papers or Notes to submit an extended abstract for Interactivity, although there is no formal association between Interactivity and accepted papers and notes.

 

Submission deadlines:

  • Installations: 12 October 2016
  • Demonstrations: 11 January 2017

 

Submissions must have the following components:

 

1. Extended Abstract

The extended abstract is a 4-page short paper in the Extended Abstracts Format. It should be self-contained and clearly describe the novelty and distinguishing ideas of your project, even for readers who are not able to view the related demonstration at the conference or associated videos. Your abstract should include:

  • A description of the system, installation, exhibit or performance and the problem it addresses. Where relevant, discuss the broader context and questions that your work promotes reflection upon.
  • A description of the audience the work intends to serve.
  • A description of the relevance of the work to the immediate CHI conference community, as well as to the broader CHI community, emphasizing its novelty, uniqueness, and rationale.

 

2. Video

A video is a good way to communicate interactive projects to the reviewers and provides an archive of the work. You must submit a video in addition to your written documentation. The video must be no longer than 5 minutes, and all uploaded content (PDF(s) + image + video) must be less than 100 MB. Please make sure that your video is playable on standard PC and Macintosh computers. We recommend that you encode your video as an MP4 using the H.264 codec. Most video editing software provides an MP4/H.264 export option, for example iMovie, Adobe Premiere, and Final Cut Pro. If you prefer to use free software, x264 can encode any video into H.264. Alternatively, you can try uploading the video to YouTube and downloading the encoded result. Submitted videos will be used for review purposes. The videos may also be displayed at the Interactivity site and possibly on websites previewing CHI content (see, for example, the CHI 2010 Madness videos on YouTube).

 

3. Still Image

You will also need to upload a still image of at least 1500 x 1200 px that represents your work. The image is required for publications and conference publicity.

 

4. Supplement

This supplement is mandatory for all Interactivity submissions and must include technical set-up and space requirements. It is useful for describing anything that does not fit or is not appropriate for the extended abstract and is used to determine how to organise Interactivity exhibits. Abstracts that do not provide a complete supplement using the template will not be accepted. Supplement materials are for review and planning purposes only and will not be published. This information is used to determine spatial, technical, lighting, power, and similar requirements for the demonstration, exhibit, or installation. The supplement should be no longer than 4 pages. Like all other materials, the supplement must be submitted through the PCS submission system, and the total of PDF(s), still image, video, and supplement cannot exceed 100 MB.

 

You must use the appropriate template.

 

Interactivity Selection Process

The CHI 2017 Interactivity forum will be curated and will include work that may have been invited or selected from submissions. The selection process includes reviews by the Interactivity program committee, and will account for feasibility, available space at the conference and other relevant information. Our intention is to ensure that the Interactivity track represents the range of projects being undertaken across diverse CHI and related communities and that these projects can be presented appropriately at the conference.

 

Submissions should not contain sensitive, private, or proprietary information that cannot be disclosed at publication time.  Submissions should NOT be anonymous. However, confidentiality of submissions will be maintained during the review process. All rejected submissions will be kept confidential in perpetuity. All submitted materials for accepted submissions will be kept confidential until the start of the conference, with the exception of title and author information which will be published on the website prior to the conference.

 

At the Conference

If accepted to Interactivity as an Installation or a Demonstration, you will be assigned a booth in the Interactivity space or at another location in the conference venue. Support for building on-site and moving large/heavy exhibits in and out is only provided before the conference starts and after it ends. At CHI Interactivity you will have a space for your work, but you are responsible for bringing and setting up most other equipment required for presenting it.

 

For accepted interactivity exhibits, we can provide help with projectors, plasma displays, etc. only when these are absolutely required to enable the Interactivity.  Please note that Demonstrations can only request limited equipment.  If you need anything additional, you must apply to the Installation track.  Please provide these details using the supplement PDF for the appropriate track. Note that although student volunteers will be present in the Interactivity space at all times, CHI will not be able to provide anyone to run your demonstration.

 

After the Conference

Accepted Interactivity extended abstracts and videos will be distributed in the CHI Conference Extended Abstracts USB and placed in the ACM Digital Library. Extended abstracts that are associated with accepted Notes and Papers will link to the associated archival publication.

 

 

 
