MSSISS 2024 Invited Presentations

Adaptive Decision Tree Methods

Abstract: This talk will discuss different theoretical and practical aspects of adaptive decision tree methodology and its applications to causal inference and decision-making. Both positive and negative results will be presented, illustrating the broad applicability and limitations of these methods in data science.

The talk is based on the following three papers:

LLMs, Productivity and Decision Making

Abstract: In this seminar, I will discuss parts of several recent papers exploring LLMs' impact on productivity and decision-making. In these papers, we provide a framework for thinking about how LLMs affect our cognition and decision-making, and we explore several examples, such as conversion journeys for durable goods and studying for exams. We do not just catalog the effects on productivity (speed and quality) but also consider the effort of the experience, and how the co-evolution of models, tools, and users can better optimize for the key objectives of all stakeholders. I will conclude with examples of my own use of LLMs in surveying public opinion and in exploring the production, consumption, and impact of news.

Harnessing Geometric Signatures in Causal Representation Learning

Abstract: Causal representation learning aims to extract high-level latent causal factors from low-level sensory data. Many existing methods identify these factors by assuming they are statistically independent. In practice, however, the factors are often correlated, causally connected, or arbitrarily dependent. In this talk, we explore how one might identify such dependent causal latent factors from data, whether passive observations, interventional experiments, or multi-domain datasets. The key observation is that, despite correlations, the causal connections (or lack thereof) among factors leave geometric signatures in the latent factors' support – the ranges of values each can take. Leveraging these signatures, we show that observational data alone can identify the latent factors up to coordinate transformations if they bear no causal links. When causal connections do exist, interventional data can provide geometric clues sufficient for identification. In the most general case of arbitrary dependencies, multi-domain data can separate stable factors from unstable ones. Taken together, these results showcase the unique power of geometric signatures in causal representation learning.

The talk is based on the following three papers:
