Projects

GridBook

More often than not, people doing data analysis are experts in a domain that is not CS (e.g., a salesperson writing spreadsheets to analyze sales data, or a biologist cobbling together R code to analyze experimental data). They lack programming expertise, but program they must to accomplish their tasks. Along the way, they face challenges setting up their environments, identifying the right libraries and APIs for the task, composing them, and then dealing with the burden of syntax and debugging when things go wrong. What if we could offload all of this burden to the computer, and the user expressed intent in natural language (just as they would, in some detail, to a programmer)? With advancements like Codex, this is no longer science fiction, but it brings user interaction challenges such as making the NL system intelligible to the user, communicating ambiguities (if any) and providing affordances to fix them, and letting users override the intelligence’s behavior and preserving those overrides. Our recent paper (accepted to IUI’2022) explores some of these concerns. Various flavors of my research on intelligence x spreadsheets are currently shipping as part of various features in Microsoft Excel. (I can’t wait for an official announcement!)

Spreadsheet (mis)comprehension

Spreadsheets are the most widely used programming environments, and the spreadsheet formula language is a Turing-complete, fully functional programming language. But this language is often programmed by people with little to no programming expertise, and the programs are notoriously buggy. Decades of research have looked into everything from formal specifications to test cases to anomaly detection in spreadsheets. But what if we take a human approach? Turns out, among the top classes of problems with spreadsheets is their miscomprehension. We think miscomprehension might also be the root cause underlying other manifest errors (e.g., formula errors, data errors, errors inherited from reuse). This study explores the role of (mis)comprehension in spreadsheet errors, and more broadly the challenges in spreadsheet comprehension (so we can make the lives of 500+ million spreadsheet users better!).
Turns out, users simply must guess their way through spreadsheets (that their colleagues send, for example), and building on top of assumption after assumption is the problem. Also, can you believe that in the most widely used programming environments, programmers must use addressing such as #12EA65? (A spreadsheet’s A1:A234 is nothing but this!) Paper at CHI’21.

TweakIt

Stack Overflow-driven development is the order of the day, especially for end-user programmers. When an end-user data analyst must accomplish a task (e.g., split each comma-separated value in a column into multiple rows), they look for an example on the internet that does the same thing or something close, copy-paste the code into their IDE, and then tweak and adapt it to their data. This is not easy, though, particularly for end-user programmers and novice programmers who must understand what is going on and what they should change to make something work. Such comprehension is also essential for debugging. TweakIt aims to empower novice and end-user programmers to inspect what each step of the code is doing to their input, and which bit of code does exactly what, so they can isolate the parts of the code to tweak. At its core, it is a live programming environment that lets programmers hover over each function call and compare its output to the output of the previous function call. This way, the code example stops being a black-box blob, and a programmer can see what is going on in each little step along the way. CHI’21 Paper. This was led by Sam Lau, an intern at MSR.
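To make the hovering idea concrete, here is a minimal Python/pandas sketch of that same comma-splitting example (this is an illustration of the idea, not TweakIt’s actual implementation): each call in a copied chain is evaluated separately so its intermediate output can be compared against the previous step’s output.

```python
# Sketch of step-by-step inspection: evaluate a pandas chain one call at a
# time and show how each step transforms the data, so a copied example stops
# being a black box. (Illustrative only; not TweakIt's code.)
import pandas as pd

df = pd.DataFrame({"order": [1, 2], "items": ["apple,banana", "cherry"]})

# The kind of snippet an analyst might copy from the web, as one opaque chain:
#   df.assign(items=df["items"].str.split(",")).explode("items")

steps = [
    ("split into lists", lambda d: d.assign(items=d["items"].str.split(","))),
    ("one row per item", lambda d: d.explode("items")),
]

current = df
print("input:\n", current, "\n")
for label, step in steps:
    current = step(current)
    # In TweakIt this before/after comparison is shown on hover;
    # here we simply print the intermediate output after each call.
    print(f"after `{label}`:\n", current, "\n")
```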

Tool design for software history

There is a lot of information sitting in a project’s version control history: every single line committed over the years is available. But do programmers ever go back to it? If so, for what? How far back do they go? How easy or hard is it for them to do? Turns out, there are three classes of information that people look at, in three distinct areas of software history (hot changes, recent changes, and ancient changes). We argue that each of these requires a very different set of lenses for the programmer to find stuff and do whatever it is they need to do. We call this the three-lens model for software history tools. See our ICSME 2015 paper for more. It won an IEEE Distinguished Paper Award.

Given that developers’ needs with software history were almost always about information seeking, and developers faced a ton of challenges with it, we asked ourselves: how can we systematically improve existing tools? We turned to a theory, Information Foraging Theory, that has been very influential in information design (web design, web search design, visualization design). We used the same theory as an analytical lens to evaluate existing version control tools, and proposed a framework for evaluating and fixing information-seeking issues in existing tools. This paper also reveals a bunch of more fundamental problems with existing version control tools, and the need to rethink version control from first principles. Applying a theoretical lens also showed a paucity of human theories for information creation, which is the starting point of how information comes into version control. It’s an interesting intellectual exercise to build out the factors and tradeoffs in creating information in a certain way. TSE 2019. (I think building out that theory also holds the key to why developers are always complaining about their code!)

Theory of variations foraging

One approach to designing good versioning tools is to fix existing tools or build new tools and features. The downside is that fundamental limitations of these tools (e.g., saving changes as text) get in the way. The root cause is that the tools were built without deeply understanding people and their behaviors. What if we go back to first principles and think deeply about how versioning tools should ideally work? What do people think of when they think of versioning, and how can tools reflect those mental models? That is what my Ph.D. dissertation was all about (CHI 2016, CHI 2017, Ph.D. thesis). I developed the theory of variations foraging, a variant of Information Foraging Theory for variations in information.
The cool thing here is this: Information Foraging Theory is all about how people find stuff (e.g., among different pages of a website, or different methods of a program). Often, such information is dissimilar. But with variations, everything can look similar (e.g., copies of a file, a program, design mockups, different ML models, a presentation deck). Does the theory still hold? Turns out, the answer is yes, but there are a lot of quirks to foraging among variants. One is that it is very comparison-centric! The other cool thing is that people build stories in their heads (often incomplete and wrong) and then use them to make sense of what is going on. I believe it would be interesting to explore the role of stories in sensemaking more broadly! This work received an ACM SIGCHI Best Paper Award.

Others

1. An ongoing collaboration with Dr. Sandeep Kuttal and her student at the University of Tulsa explores how we can learn the costs and values of programmer-relevant information on the internet, so we can deliver high-value, low-cost content to developers in IDEs.

2. In the summer of 2018, I built a debugging + formula comprehension/auditing interface for Calculation View. Calculation View shows spreadsheet formulas to users top-down in some order (just like a program). I took advantage of the program-like layout to build a tool that rendered the program slice for the execution of a cell’s formula in various logical orders (see the sketch after this list for the flavor of that slicing). It’s a partly failed project, and I haven’t gotten around to salvaging it (yet!), so it never got published. But it is a piece of engineering I’m proud of.

3. I led a team of user researchers and designers to build a set of data-driven personas for spreadsheet formula authors. The intent is to put the focus on the diversity of users in all roles and phases of product development.

4. Long ago, I collaborated with Andrew Head and Daniel Lin on an unpublished project on developers’ web search behaviors. The project was abandoned for various reasons, but I enjoyed the time, and I’ve learnt valuable lessons from it. [I call it my best failed project.]

5. When I started my Ph.D., I was in Danny Dig’s lab. I spent a quarter there dabbling with program transformations for parallelism in JavaScript (given what Mozilla was then doing with ParallelJS). We (and Mozilla, which was then heavily invested in JS parallelism) were getting nowhere, so we abandoned the project. [Mozilla eventually invested in Rust, precisely to deal with the challenges of parallelism!]
Fun fact: it was also around this time that Dr. Margaret Burnett’s and Dr. Carlos Jensen’s courses exposed me to the fascinating space that is human-computer interaction. At the end of the year, I jumped to an HCI-heavy lab, and I am glad I did 🙂 I couldn’t be having more fun!
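For flavor, here is a minimal, hypothetical sketch of the slicing idea from project 2 above (not the actual Calculation View prototype), assuming formulas have already been parsed into a map from each cell to the cells its formula references:

```python
# Hypothetical sketch of formula slicing: given a cell -> referenced-cells map,
# compute the backward slice for a target cell and list it in one "logical
# order" (each cell after the cells it reads). Illustrative only.
from graphlib import TopologicalSorter

# Toy workbook: each cell maps to the cells its formula reads (empty = input).
reads = {
    "A1": [], "A2": [],
    "B1": ["A1", "A2"],   # e.g. =A1+A2
    "B2": ["A1"],         # e.g. =A1*2
    "C1": ["B1", "B2"],   # e.g. =B1-B2
}

def slice_for(target: str) -> list[str]:
    """All cells the target's value depends on, in dependency order."""
    seen, stack = set(), [target]
    while stack:
        cell = stack.pop()
        if cell not in seen:
            seen.add(cell)
            stack.extend(reads[cell])
    sub = {c: [d for d in reads[c] if d in seen] for c in seen}
    return list(TopologicalSorter(sub).static_order())

print(slice_for("C1"))  # e.g. ['A1', 'A2', 'B2', 'B1', 'C1'] (inputs first)
```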
