Submissions are now closed. See below for the solicitation.
We are soliciting position papers for the workshop around five grand challenges: problems that, if we make even modest progress on them, will significantly increase the effectiveness, usability, and value of AI in personal informatics systems. We expect position papers to promote a vision or strategy for addressing one of these challenges. We encourage you to draw on your empirical work designing, studying, or evaluating personal informatics systems with AI, but expect position papers to primarily discuss the implications of that work for these grand challenges.
We are specifically interested in position papers that speak to:
- Forms of support and interaction paradigms: One of the main tasks of AI-powered systems is to inform human judgments, decisions, and actions. However, questions remain as to the specific forms of support that can be of value to the person(s) whose data is being collected, and how those forms of support might complement or augment existing support systems. For example, what forms of output (inferences, predictions, recommendations) are appropriate and useful in different contexts? Are there tensions between more direct support for action (predictions, recommendations) and the support for self-awareness and learning typical of personal informatics, and how can AI-powered systems support both? Correspondingly, different forms of support may require different types of interaction, including embodied agents, voice assistants, chatbots, and graphical interfaces. What forms of interaction are appropriate for AI-driven personal informatics systems? How does the utility of each interaction vary based on the form of output, personal preferences, context, goals, and length of engagement (e.g., days versus years)? What opportunities exist for multimodal interactions with AI-powered systems?
- Limitations of self-tracking data in AI models: Some of the greatest breakthroughs in ML came from models that require vast datasets (e.g., deep learning). However, such datasets are not always feasible in personal informatics, where collecting personal data is often burdensome because it involves self-report or manual logging (e.g., diet tracking or mood tracking). What ML approaches can help to arrive at robust inferences with sparse, individual, and/or short-term records? Furthermore, passive tracking imposes less burden but may produce noisy datasets, and it often requires mapping between what is being captured (galvanic skin response, steps) and what investigators wish to capture (stress levels, physical activity). How do we create useful AI-driven interactions that make these limitations more visible and intelligible to users? Are there opportunities to triangulate between passively and actively collected data to get closer to the "truth"? What ML approaches allow us to overcome the limitations of learning from a single individual's data while still providing personalized, relevant support? How can AI-based systems be useful--and thus support ongoing engagement--before there is sufficient data to enable accurate inferences and predictions?
- Representativeness of AI models: Most AI models are trained on populations with majority identities (e.g., people in high-income countries, who tend to be younger, urban, and healthier). It is well known that personal informatics systems are more widely used by these majority groups, and their use often deepens or surfaces disparities in the ability to interpret complex representations of personal data, perform physical activity, access healthy food, and more. What social justice issues need to be considered when these AI models are implemented widely in personal informatics interventions? Would people with minority identities (racial or ethnic minorities; LGBTQIA+ people; older adults and children; people with disabilities; socio-economically disadvantaged individuals) be marginalized, and if so, in what ways? What approaches to building models and adapting interventions should we take to mitigate these disparities?
- Personalization: Leveraging AI to personalize the delivery of recommendations or advice based on PI data is a frequent topic of interest, such as in Just-In-Time Adaptive Interventions (JITAIs). Open questions remain about when and how to personalize the delivery of these interventions, and whether personalization may undermine an intervention's adherence to population-level guidelines for health and wellbeing. For example, if a system consistently reduces physical activity goals because the user's calendar indicates that they are busy, this may lead to lower overall activity levels than if the system did no such tailoring of support. What opportunities exist for personalizing interventions, such as their timing, form, and tone, and in what situations can personalization harm or marginalize users rather than benefit them? More broadly, what are the tradeoffs of personalizing support versus not doing so in different contexts and across different types of PI systems? Similarly, interactive ML or machine teaching approaches can support fine-tuning and personalizing the underlying model of a person by combining personal data with explicit input from that person. What contexts would benefit from these teachable moments, and how can technology support this process? In addition, in such hybrid, human-in-the-loop systems, how do different user actions (provision of information about the current state, reactions to provided support, etc.) enhance the functioning of the algorithms in AI-based PI systems?
- The role of the human: One of the defining characteristics of personal informatics systems is support for user agency: PI users typically have full control over data collection and use. However, many contemporary AI-powered solutions leave humans out of the data collection loop and often view them only as data contributors and receivers of inferences, predictions, and recommendations. Are there opportunities to leverage the intelligence of new generations of AI-powered personal informatics systems without losing user agency, autonomy, and control? For example, what techniques can help users control not only data collection (selection of data streams and variables) but also the learning process, and transition from machine learning to machine teaching, for example through increased visibility of the model's internal representations of the user? Furthermore, what is the long-term impact of increased reliance on AI-powered systems on people's self-management and self-awareness goals, which are at the heart of personal informatics systems? Finally, with the increasing popularity of AI-powered personal informatics systems in health, how do these systems impact human-human relationships in this context? For instance, what impact would recommendations, insights, plans, or procedures driven by AI systems leveraging personal informatics data have on the therapeutic alliance between a mental health patient and their clinician, or on the collaborative efforts between members of a treatment team comprising clinicians and caregivers?
Workshop position papers should be no more than 3 pages (excluding references) in ACM's template. Please submit by emailing your position paper to Lena Mamykina ([email protected]) by March 4th, 2022 (extended!). Each submission will be reviewed by at least one workshop organizer and will receive light feedback.