Abstract
Typically, people perform visual data analysis using mouse and touch interactions. While such interactions are often easy to use, they can be inadequate for expressing complex information and may require many steps to complete a task. Recently, natural language interaction has emerged as a promising technique for supporting exploration with visualization, as the user can express a complex analytical question more easily. In this paper, we investigate how to synergistically combine language and mouse-based direct manipulation so that the weakness of one modality can be complemented by the other. To this end, we have developed a novel system, named Multimodal Interactions System for Visual Analysis (MIVA), that allows the user to provide input using both natural language (e.g., through speech) and direct manipulation (e.g., through mouse or touch) and presents the answer accordingly. To answer the current question in the context of past interactions, the system incorporates previous utterances and direct manipulations made by the user within a finite-state model. We tested the applicability of MIVA on several dashboards, including a COVID-19 dashboard that visualizes coronavirus cases around the globe. Our demonstration provides initial indication that MIVA enhances the flow of visual analysis by enabling fluid, iterative exploration and refinement of data in a dashboard with multiple coordinated views.
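The abstract states that MIVA answers the current question in the context of past interactions by tracking previous utterances and direct manipulations in a finite-state model. A minimal, hypothetical sketch of such a context model follows; this is not the authors' actual implementation, and all class, method, and state names are invented for illustration. The idea is that a mouse/touch selection advances the state and records an entity, so a later spoken query with a deictic reference ("here", "this country") can be resolved against that selection.

```python
# Hypothetical sketch (not MIVA's actual code): a finite-state context
# model that combines direct-manipulation events with spoken queries so
# deictic references resolve against the most recent selection.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class InteractionContext:
    state: str = "idle"                  # idle -> selected -> querying
    selection: Optional[str] = None      # entity set by the last mouse/touch action
    history: list = field(default_factory=list)

    def on_direct_manipulation(self, entity: str) -> None:
        """A click/tap on a view selects an entity and advances the state."""
        self.selection = entity
        self.state = "selected"
        self.history.append(("select", entity))

    def on_utterance(self, query: str) -> str:
        """Resolve deictic words in a spoken query using the current selection."""
        self.state = "querying"
        self.history.append(("speak", query))
        q = query.lower()
        if self.selection and ("here" in q or "this" in q):
            # Substitute the selected entity for the deictic reference.
            q = q.replace("this country", self.selection)
            q = q.replace("here", self.selection)
        return q


ctx = InteractionContext()
ctx.on_direct_manipulation("Italy")           # e.g., user taps Italy on the map view
print(ctx.on_utterance("How many cases here?"))  # "how many cases Italy?"
```

Because both modalities append to the same history and drive the same state machine, a follow-up question can be interpreted relative to whichever interaction, spoken or manual, happened last, which is one plausible way to support the fluid back-and-forth the paper describes.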
Original language | English
---|---
Title of host publication | 24th International Conference Information Visualisation (IV)
Place of Publication | United States
Publisher | IEEE, Institute of Electrical and Electronics Engineers
Pages | 674-677
Number of pages | 4
ISBN (Electronic) | 9781728191348
ISBN (Print) | 9781728191355
DOIs |
Publication status | Published - 2020
Event | 24th International Conference Information Visualisation: IV2020 - online, 07 Sept 2020 → 11 Sept 2020, http://iv.csites.fct.unl.pt/at/
Conference

Conference | 24th International Conference Information Visualisation
---|---
Period | 07/09/20 → 11/09/20
Internet address |
Fingerprint

Dive into the research topics of 'MIVA: Multimodal interactions for facilitating visual analysis with multiple coordinated views'. Together they form a unique fingerprint.

Activities
- 1 Visiting an external organisation

- York University
  Kabir, A. (Visiting researcher)
  04 Nov 2019 → 08 Nov 2019
  Activity: Visiting an external institution › Visiting an external organisation › Academic