PhD in Computer Science and Engineering · HCI + ML + Quantitative UX Research

Adaptive interfaces, telemetry, and AI evaluation before those threads converged.

I study how technology can learn from context, respect human judgment, and support better decisions. This site brings together my doctoral work in adaptive user interfaces, selected publications, and public-safe summaries of talks spanning product telemetry, choice modeling, simulation-based evaluation, and trust in AI systems.

  • August 2008: PhD thesis on context-aware adaptive desktop applications.
  • May 2016: ACM CHI publication on data-driven personas from telemetry.
  • 2020-2022: Talks spanning Verily, Wing / Google X, startup UX simulation, and AI trust evaluation.

About

Rigorous quantitative research with an HCI backbone.

I’m a researcher with a PhD in Computer Science and Engineering whose work sits at the intersection of human-computer interaction, machine learning, behavioral data, and product decision-making. My early work focused on adaptive user interfaces that learn from user context. Later work extended into large-scale telemetry, data-driven personas, jobs-to-be-done framing, survey measurement, choice modeling, and trust evaluation for AI-assisted systems.

I’m especially interested in the moment where research stops being descriptive and starts becoming operational: when evidence can shape product direction, clarify tradeoffs, or help teams deploy intelligent systems more responsibly. The values underneath this work are service, rigor, and human dignity, expressed in ways that travel comfortably across academic, product, and consulting settings.

What this site includes

  • My doctoral thesis on adaptive user modeling and context-aware personalization.
  • A peer-reviewed ACM CHI publication on constructing personas from clickstreams and telemetry.
  • Selected talk summaries from March 2020 through 2022 on AI evaluation, simulation, health research, and choice methods.
  • Methods I use to connect statistical depth with product strategy impact.

PhD Thesis

Adaptive user interfaces before the current AI-product wave.

August 2008 · University of Nevada, Reno · Doctor of Philosophy, Computer Science and Engineering

SYCOPHANT: A Context Based Generalized User Modeling Framework for Desktop Applications

This dissertation developed a context-aware framework for learning user-preferred actions in desktop applications. The core idea was simple and still feels timely: software becomes more useful when it can sense relevant context, learn individual preferences, and adapt its behavior accordingly.

Why it matters now

Years before today’s AI assistants normalized context-aware interaction, this work explored interface-level intelligence: systems that predict preferred actions instead of making users repeatedly reconfigure the same behavior.

What the system did

Sycophant combined keyboard, mouse, speech, and motion signals to build user-context features, then used machine learning to predict preferred actions for applications such as Google Calendar and Winamp.
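The prediction step can be sketched as learning from logged (context, action) pairs and voting among the most similar past contexts. This is a minimal illustration, not the thesis implementation: the feature names and the k-nearest-neighbour rule are assumptions standing in for Sycophant's richer sensing and learning pipeline.

```python
from collections import Counter

def predict_action(history, context, k=3):
    """Predict a preferred action from logged (context, action) pairs
    using a simple k-nearest-neighbour vote over binary context features."""
    def distance(a, b):
        # Hamming distance over the shared feature keys.
        return sum(a[key] != b[key] for key in a)

    nearest = sorted(history, key=lambda pair: distance(pair[0], context))[:k]
    votes = Counter(action for _, action in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical features; the real system fused keyboard, mouse,
# speech, and motion signals into a larger context vector.
history = [
    ({"typing": 1, "speech": 0, "motion": 1}, "pause_playback"),
    ({"typing": 1, "speech": 0, "motion": 0}, "pause_playback"),
    ({"typing": 0, "speech": 1, "motion": 0}, "resume_playback"),
    ({"typing": 0, "speech": 1, "motion": 1}, "resume_playback"),
]
print(predict_action(history, {"typing": 1, "speech": 0, "motion": 1}))
```

The point of the sketch is the shape of the problem: context features in, preferred application action out, with no manual reconfiguration by the user.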

Evidence base

Four user studies tested generalizability across participants, applications, and long-term use. The results showed that removing user-context features materially degraded predictive performance.

Research lineage

The dissertation also led to conference publications at IEEE venues, IUI, and GECCO, making the thesis part of a broader early program of adaptive-interface research.

The linked PDF is preserved as an archival document and retains the original publication byline.

Selected Publications

Research that ties behavioral evidence to product-facing models.

May 2016 · ACM CHI · Peer-reviewed

Data-driven Personas: Constructing Archetypal Users with Clickstreams and User Telemetry

Xiang Zhang, Hans-Frederick Brown, and Anil Gypsy Wild (archival publication record preserved in the PDF).

This paper proposed a bottom-up quantitative method for persona construction using observed product behavior instead of relying only on interviews or self-report. The work aggregated 3.5 million clicks from roughly 2,400 users into 39,000 clickstreams, organized them into ten workflows via hierarchical clustering, and used mixed models to derive five representative personas grounded in actual product use.
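The clustering step can be illustrated in miniature. This is a toy single-linkage agglomerative pass over a handful of clickstreams, with made-up event names and a Jaccard distance chosen for simplicity; the paper's actual pipeline operated on 39,000 clickstreams and its own distance and linkage choices.

```python
def jaccard_distance(a, b):
    """Distance between two clickstreams treated as sets of UI events."""
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

def agglomerate(streams, n_clusters):
    """Toy single-linkage agglomerative clustering: repeatedly merge the
    two closest clusters until n_clusters remain."""
    clusters = [[s] for s in streams]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(jaccard_distance(x, y)
                        for x in clusters[i] for y in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Hypothetical clickstreams standing in for telemetry sessions.
streams = [
    ["open", "search", "export"],
    ["open", "search", "filter", "export"],
    ["login", "settings", "logout"],
    ["login", "settings", "profile"],
]
workflows = agglomerate(streams, 2)
print(len(workflows))  # → 2
```

Once streams are grouped into workflows, per-workflow usage statistics become the raw material for the mixed-model persona step.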

Telemetry · Clustering · Mixed models · Persona systems · Behavioral UX research

The linked PDF preserves the original conference publication record exactly as published.

Selected Talks

Public-safe summaries from method and strategy talks.

March-May 2020

Simulation-based UX evaluation for complex workflows

A talk built around CogTool and cognitive modeling to estimate skilled-user performance from prototypes when participant access, budget, or timing made traditional studies difficult. The practical frame was startup reality: move quickly, generate defensible estimates, compare workflows, and identify UX risk before a larger study is feasible.
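CogTool's predictions build on cognitive modeling (KLM/ACT-R); a back-of-envelope version of the same idea uses the published Keystroke-Level Model operator times. The two workflows below are hypothetical, purely to show how the comparison works before any participant is recruited.

```python
# Standard KLM operator times in seconds (Card, Moran & Newell):
# K = keystroke, P = point with mouse, B = button press,
# H = home hands between devices, M = mental preparation.
KLM = {"K": 0.2, "P": 1.1, "B": 0.1, "H": 0.4, "M": 1.35}

def estimate_seconds(sequence):
    """Estimate skilled-user task time for an operator sequence like 'MPB'."""
    return round(sum(KLM[op] for op in sequence), 2)

# Hypothetical comparison: four point-and-click steps vs. one click
# followed by a six-character keyboard shortcut path.
print(estimate_seconds("MPB" * 4))         # → 10.2
print(estimate_seconds("MPBH" + "K" * 6))  # → 4.15
```

Even this crude model supports the talk's practical frame: it ranks competing workflows and flags UX risk quickly, with a proper study reserved for the designs that survive.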

May 2020

Research strategy for AI and health-product systems at Verily

This material connected adaptive-interface thinking, jobs-to-be-done research, telemetry verification, and evidence-based product stages. A central theme was that technology should adapt to human needs instead of forcing clinicians and domain experts to adapt to brittle workflows.

2021-2022

Choice methods for high-stakes product decisions at Wing / Google X

A methods-forward talk on choice experiments, MaxDiff, Kano, and attribute-based preference modeling. The emphasis was not only on collecting stated preferences, but on structuring tradeoffs so product teams could make better decisions around prioritization, hiring, capability transfer, and AI-system requirements.
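As a rough illustration of the MaxDiff piece, a count-based score (best picks minus worst picks, normalized by exposures) is the usual first pass before fitting a proper logit model. The attribute names and responses below are invented for the example.

```python
from collections import defaultdict

def maxdiff_scores(responses):
    """Count-based MaxDiff scoring: (# best picks - # worst picks) per item,
    divided by the number of times the item was shown."""
    best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
    for items, best_pick, worst_pick in responses:
        for item in items:
            shown[item] += 1
        best[best_pick] += 1
        worst[worst_pick] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical attributes; each response is (items shown, best, worst).
responses = [
    (("speed", "price", "range"), "speed", "range"),
    (("speed", "price", "noise"), "speed", "noise"),
    (("price", "range", "noise"), "price", "noise"),
]
scores = maxdiff_scores(responses)
print(max(scores, key=scores.get))  # → speed
```

The structuring matters more than the arithmetic: forcing best and worst picks in each set surfaces tradeoffs that plain rating scales blur.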

2021-2022

Quantifying distrust in AI-assisted reasoning systems

This work framed distrust as a measurable construct rather than a vague sentiment. Using survey design and ordinal modeling, the project examined how people evaluate a reasoning platform and how locus of control, suspicion, and system confidence can shape responsible adoption of AI in high-consequence settings.
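The ordinal-modeling step can be sketched with a proportional-odds (cumulative logit) model, the standard tool for ordered survey scales. The coefficient and cutpoints below are hypothetical, chosen only to show how a predictor such as suspicion shifts probability mass across distrust categories.

```python
import math

def ordinal_probs(x, beta, cutpoints):
    """Proportional-odds model: P(Y <= k) = logistic(c_k - beta * x).
    Returns per-category probabilities for an ordered response scale."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - beta * x) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical fitted values: beta > 0 means higher suspicion shifts
# mass toward the higher-distrust end of a four-point scale.
beta, cutpoints = 1.2, [-1.0, 0.0, 1.0]
low_suspicion = ordinal_probs(0.0, beta, cutpoints)
high_suspicion = ordinal_probs(2.0, beta, cutpoints)
print(round(high_suspicion[-1], 2))  # → 0.8
```

Treating distrust this way turns "people seem wary" into an estimable quantity with confidence intervals, which is what makes it actionable for adoption decisions.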

Methods

Statistical depth paired with product-facing breadth.

Behavioral telemetry · Workflow analysis · Adaptive interfaces · User modeling · Survey design · Latent constructs · Choice experiments · MaxDiff · Kano · Segmentation · Clustering · Classification · Mixed models · Ordinal regression · Experimentation · Simulation-based evaluation · Trust and distrust measurement · Jobs to be done

How I think about methods

The strongest quantitative UX work does more than report descriptive metrics. It turns behavior into structure: segmentations that sharpen design decisions, models that forecast tradeoffs, and measurements that make fuzzy product concerns testable. That orientation runs from my dissertation through later work in telemetry, simulated-user analysis, and AI-system evaluation.