Improving Steering and Verification in AI-Assisted Data Analysis
Microsoft Research UK - as Research Intern
LLM-powered tools like ChatGPT Data Analysis have the potential to help users tackle the challenging task of data analysis programming, which requires expertise in data processing, programming, and statistics. However, our formative study (n=15) uncovered serious challenges in verifying AI-generated results and in steering the AI (i.e., guiding the system to produce the desired output). We developed two contrasting approaches to address these challenges. The first (Stepwise) decomposes the problem into step-by-step subgoals, each a pair of editable assumptions and code, until the task is complete; the second (Phasewise) decomposes the entire problem into three editable, logical phases: structured input/output assumptions, an execution plan, and code. A controlled, within-subjects experiment (n=18) compared both systems against a conversational baseline. Users reported significantly greater control with the Stepwise and Phasewise systems and found intervention, correction, and verification easier than with the baseline. The results suggest design guidelines and trade-offs for AI-assisted data analysis tools.
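To make the contrast concrete, here is a minimal sketch of how the Phasewise decomposition might be modeled in code. This is a hypothetical illustration rather than the systems' actual implementation; PhasewiseTask, run_llm, and the prompt wording are all assumptions:

    from dataclasses import dataclass

    @dataclass
    class PhasewiseTask:
        # Hypothetical model of the Phasewise decomposition: each phase is
        # editable text the user can verify and correct before code runs.
        goal: str
        assumptions: str = ""  # structured input/output assumptions
        plan: str = ""         # execution plan
        code: str = ""         # generated analysis code

    def run_llm(prompt: str) -> str:
        # Placeholder for a call to an LLM; not a real API.
        raise NotImplementedError

    def generate(task: PhasewiseTask) -> PhasewiseTask:
        # Each phase is conditioned on the goal plus the (possibly
        # user-edited) earlier phases, so corrections propagate forward.
        task.assumptions = run_llm(f"State input/output assumptions for: {task.goal}")
        task.plan = run_llm(f"Plan the analysis of: {task.goal}\n{task.assumptions}")
        task.code = run_llm(f"Write Python code for this plan:\n{task.plan}")
        return task

The Stepwise variant would instead loop, generating one editable assumptions-code pair per subgoal and pausing for user intervention after each.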
Improving Steering and Verification in AI-Assisted Data Analysis with Interactive Task Decomposition
UIST 2024 · ACM Symposium on User Interface Software and Technology
Deploying an LLM-based Coding Assistant in the Classroom
University of Toronto - as PhD Student
We developed CodeAid, an LLM-powered programming assistant designed to offer students timely, personalized feedback without directly revealing code solutions. The tool answers conceptual questions, generates pseudo-code, and suggests corrections for incorrect code. We deployed CodeAid in a large class of 700 students and conducted a thematic analysis of its 8,000 usages, supplemented by weekly surveys and student interviews; further feedback was obtained from eight programming educators. Results showed that most students used CodeAid to understand concepts and debug their code, though some directly asked for code solutions. While educators valued its educational merits, they raised concerns about occasional inaccuracies and the potential for students to over-rely on tools like ChatGPT.
We then concluded with four key design considerations for AI assistants in educational contexts, centered around four main stages of a student's help-seeking process:
- Exploiting Unique Advantages of AI: Decision to use the AI tool, emphasizing the unique advantages of AI over other resources.
- Designing the AI Querying Interface: Query formulation, providing context, and balancing user-friendliness with meta-cognitive engagement.
- Balancing the Directness of AI Responses: Nature of AI responses, managing directness, scaffolding type, and learning engagement.
- Supporting Trust, Transparency, and Control: Post-response actions, ensuring accuracy, trust, transparency, and control.
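As a loose illustration of how the directness consideration above can be enforced at the prompt level, the sketch below pins a system instruction that forbids complete solutions. This is a hypothetical sketch, not CodeAid's actual prompts or architecture, and query_llm is a placeholder:

    def query_llm(system: str, user: str) -> str:
        # Placeholder for an LLM chat-completion call; not a real API.
        raise NotImplementedError

    def answer_student(question: str, student_code: str) -> str:
        # Steer the model toward scaffolding (concepts, pseudo-code, and
        # targeted fix suggestions) rather than complete solutions.
        system = (
            "You are a teaching assistant for an introductory programming "
            "course. Never write complete, runnable solution code. Explain "
            "concepts, give pseudo-code, or point out the specific line "
            "that is wrong and why."
        )
        return query_llm(system, f"Question: {question}\nStudent code:\n{student_code}")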
CodeAid: Evaluating a Classroom Deployment of an LLM-based Programming Assistant that Balances Student and Educator Needs
CHI 2024 · ACM Conference on Human Factors in Computing Systems
Studying LLM-Based Code Generators in K-12 Computing Education
University of Toronto - as PhD Student
We studied the impact of Large Language Model (LLM)-based code generators, such as OpenAI Codex, on novice programmers (ages 10-17). In a controlled experiment involving 69 novices working on 45 Python tasks, we found that using Codex led to a 1.15x increase in code-authoring completion rate and 1.8x higher scores, without diminishing manual code-modification abilities. Interestingly, participants with prior Codex exposure performed slightly better in evaluations a week later. A deeper dive into data from the 33 participants who used Codex revealed various ways they interacted with the tool, and we identified four coding strategies: AI Single Prompt, AI Step-by-Step, Hybrid, and Manual coding. The AI Single Prompt strategy yielded the highest correctness in code-authoring but struggled in code-modification tasks. Our findings highlight both the potential and the pitfalls of LLMs in educational settings, emphasizing the need for balanced integration and curriculum development.
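To make the first two strategies concrete, here are hypothetical prompts of the kind a learner might write (illustrative only, not the study's materials):

    # AI Single Prompt: one request for the entire task.
    single_prompt = (
        "Write a Python program that reads ten numbers "
        "and prints their average."
    )

    # AI Step-by-Step: the same task decomposed into smaller requests,
    # letting the learner check each generated piece before the next.
    step_prompts = [
        "Write a loop that reads ten numbers into a list.",
        "Compute the average of the numbers in the list.",
        "Print the average.",
    ]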
Article · Learning to code with and without AI
Austin Henley's Blog · March 2024
Article · Using an AI code generator with school-age beginner programmers
Raspberry Pi Foundation · March 2024
Article · AI Code Generators Could Make Learning to Code Easier for Young Students
UofT · CS Department · March 2023
Studying the effect of AI Code Generators on Supporting Novice Learners in Introductory Programming
CHI 2023 · ACM Conference on Human Factors in Computing Systems
How Novices Use LLM-Based Code Generators to Solve CS1 Coding Tasks in a Self-Paced Learning Environment
Koli Calling 2023 · ACM Koli Calling International Conference on Computing Education Research
From Blocks to Text-based Programming
University of Toronto - as PhD Student
We designed CodeStruct, an intermediary programming environment that helps learners transition from block-based programming, like Scratch, to text-based languages such as Python. CodeStruct bridges the learning curve between the two paradigms, with design features that significantly reduced completion times and help requests compared to a direct transition. In a study with 26 high school students, those using CodeStruct had a smoother transition, with fewer data-type and syntax issues, especially when initially aided by a structured editor. Once they moved to an unstructured editor, their rate of syntax errors increased, though they still outperformed peers who transitioned directly.
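For a sense of the paradigm gap CodeStruct targets, compare a Scratch-style script with its Python equivalent (an illustrative example, not CodeStruct's interface):

    # Scratch (drag-and-drop blocks, so syntax errors are impossible):
    #   when green flag clicked
    #   set count to 0
    #   repeat 10
    #     change count by 1
    #     say (join "count is " count)

    # Equivalent Python, where syntax and data types suddenly matter:
    count = 0
    for _ in range(10):
        count += 1
        print("count is " + str(count))  # omitting str() is a typical data-type error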
CodeStruct: Design and Evaluation of an Intermediary Programming Environment for Novices to Transition from Scratch to Python
IDC 2022 · ACM Conference on Interaction Design and Children
Scaffolding Progress: How Structured Editors Shape Novice Errors When Transitioning from Blocks to Text
SIGCSE 2023 · ACM Technical Symposium on Computer Science Education
Embedded Programming Development Environment
University of California, Berkeley - as Visiting Graduate Researcher
A key challenge in developing and debugging custom embedded systems is understanding their behavior, particularly at the boundary between hardware and software. Bifröst automatically instruments and captures the progress of the user's code, variable values, and the electrical and bus activity occurring at the interface between the processor and the circuit it operates in. This data is displayed in a linked visualization that allows navigation through time and program execution, enabling comparisons between variables in code and signals in circuits.
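A minimal sketch of the underlying idea, aligning a software trace with captured pin samples on a shared timeline so code-level state can be compared with circuit-level signals (illustrative data structures only; Bifröst's instrumentation is far more involved):

    from bisect import bisect_right

    # (timestamp_us, variable, value) events emitted by instrumented code
    code_trace = [(100, "led_state", 1), (900, "led_state", 0)]

    # (timestamp_us, level) samples captured on the LED pin
    pin_samples = [(0, 0), (130, 1), (935, 0)]

    def pin_level_at(t_us):
        # Return the last captured pin level at or before t_us.
        i = bisect_right([t for t, _ in pin_samples], t_us)
        return pin_samples[i - 1][1] if i else 0

    # Compare what the code believed with what the wire actually did.
    for t, var, value in code_trace:
        print(f"t={t}us {var}={value}, pin={pin_level_at(t)}")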
Bifröst: Visualizing and Checking Behavior of Embedded Systems across Hardware and Software
UIST 2017 · ACM Symposium on User Interface Software Technology
Programming by Demonstration for Kids
Microsoft Research, Redmond - as Research Intern
GestureBlocks integrates a demonstrate-edit-review machine learning pipeline for authoring sensor-based gestures into Microsoft MakeCode, allowing novices to program behaviors using both data-driven and conventional paradigms.
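As a toy sketch of the demonstrate-edit-review idea (hypothetical, and far simpler than the MakeCode integration), gestures can be authored by demonstration and recognized by nearest-example matching:

    import math

    # Demonstrations: label -> recorded windows of (x, y, z) accelerometer samples
    demos = {
        "shake": [[(0.9, 0.1, 0.0), (-0.8, 0.0, 0.1)]],
        "tilt":  [[(0.1, 0.9, 0.2), (0.2, 0.8, 0.1)]],
    }

    def distance(a, b):
        # Mean Euclidean distance between two equal-length windows.
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    def classify(window):
        # Return the label of the closest demonstrated gesture.
        return min(
            ((label, distance(window, d)) for label, ds in demos.items() for d in ds),
            key=lambda item: item[1],
        )[0]

    print(classify([(0.85, 0.05, 0.0), (-0.7, 0.1, 0.1)]))  # -> shake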
GestureBlocks: A Gesture Recognition Toolkit for Children
ICER 2017 Workshop · Workshop on Learning about Machine Learning
Interactive Wearables using Tangible Programming
University of Maryland - as MSc Student
Wearable construction kits have shown promise in attracting underrepresented groups to STEM and empowering users to create personally meaningful computational designs. These kits, however, require programming, circuitry, and manual craft skills. To lower these barriers of entry and help young children create interactive wearables, I led a two-year iterative design process that included participatory design sessions with children, design probe sessions with STEM educators, and iterative building and pilot testing of prototypes with children.
Informed by these experiences, we built MakerWear, a modular, wearable construction kit focused on enabling children to leverage the richness of wearability: their changing environments, their body movements, and their social interactions. Our novel approach enabled children to program complex trigger-action behaviors using tangible modules. Evaluations of MakerWear at multi-session workshops showed that children (ages 5-10) were able to create a wide variety of wearable designs and to actively apply computational thinking.
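A rough software analogy for the tangible trigger-action model (illustrative only): each MakerWear module transforms a signal and passes it along, just as the physical modules do when plugged together:

    # Each module maps an input signal to an output signal.
    def light_sensor(_):              # sensor module: ambient light (stubbed)
        return 0.2                    # dark

    def threshold(level):             # logic module: fires when input is low
        def module(x):
            return 1.0 if x < level else 0.0
        return module

    def vibration_motor(x):           # actuator module: buzzes when driven
        return "buzz" if x > 0 else "off"

    # Chaining functions mirrors physically plugging modules together:
    signal = None
    for module in [light_sensor, threshold(0.5), vibration_motor]:
        signal = module(signal)
    print(signal)  # -> buzz (it is dark, so the wearable vibrates)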
MakerWear: A Tangible Approach to Interactive Wearable Creation for Children
CHI 2017 · ACM Conference on Human Factors in Computing Systems · Best Paper Award
ReWear: Early Explorations of a Modular Wearable Construction Kit for Young Children
CHI 2016 LBW · ACM Conference on Human Factors in Computing Systems · Best LBW Paper Award
MakerShoe: Towards a Wearable E-Textile Construction Kit to Support Creativity, Playful Making, and Self-Expression
IDC 2015 Demo · ACM Conference on Interaction Design and Children
Exploring Example Code Usage by Programmers
Sharif University of Technology - as Undergraduate Researcher
When programmers face a new framework, they usually rely on example code to learn its API and accomplish their tasks. This work investigates and analyzes the activities programmers perform when using such sample code to complete their tasks.
Activities performed by programmers while using framework examples as a guide
SAC 2014 · ACM Symposium on Applied Computing
Carl Ma
University of Toronto
Justin Chow
University of Toronto
Viktar Chyhir
University of Toronto
Jason McPeak
University of Maryland
Alexander Jiao
University of Maryland
Katie Wang
University of Maryland