Jake Bowers, University of Illinois at Urbana-Champaign

Franz Hall 2258A

"Rules of Engagement in Evidence-Informed Policy: Practices and Norms of Statistical Science in Government"

Abstract: Collaboration between statistical scientists (data scientists, behavioral and social scientists, statisticians) and policy makers promises to improve government and the lives of the public. And the data and design challenges arising from governments offer academics new opportunities to improve our understanding of both extant methods and behavioral and social science theory. However, the practices that ensure the integrity of statistical work in the academy — such as transparent sharing of data and code — do not translate neatly or directly into work with governmental data and for policy ends. This paper proposes a set of practices and norms that academics and practitioners can agree on before launching a partnership so that science can advance and the public can be protected while policy is improved. This work is at an early stage. The aim is a checklist, statement of principles, or memorandum of understanding that can serve as a template for the wide variety of ways that statistical scientists collaborate with governmental actors.

Erin Hartman, University of California Los Angeles

CCPR Seminar Room 4240 Public Affairs Building, Los Angeles, CA, United States

Covariate Selection for Generalizing Experimental Results

Researchers are often interested in generalizing the average treatment effect (ATE) estimated in a randomized experiment to non-experimental target populations. Researchers can estimate the population ATE without bias if they adjust for a set of variables affecting both selection into the experiment and treatment heterogeneity. Although this separating set has a simple mathematical representation, it is often unclear how to select this set in applied contexts. In this paper, we propose a data-driven method to estimate a separating set. Our approach has two advantages. First, our algorithm relies only on the experimental data. As long as researchers can collect a rich set of covariates on experimental samples, the proposed method can inform which variables they should adjust for. Second, we can incorporate researcher-specific data constraints. When researchers know certain variables are unmeasurable in the target population, our method can select a separating set subject to such constraints, if one is feasible. We validate our proposed method using simulations, including naturalistic simulations based on real-world data.
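
A rough sketch of the identification idea, in our notation rather than the paper's: let S indicate selection into the experiment and W a candidate separating set. If effect heterogeneity is independent of selection given W, conditional effects estimated in the experiment can be averaged over the target population's distribution of W:

\[
\{Y_i(1) - Y_i(0)\} \;\perp\!\!\!\perp\; S_i \mid W_i
\;\;\Longrightarrow\;\;
\tau_{\text{pop}} \;=\; \mathbb{E}_{\text{pop}}\big[\, \mathbb{E}\{Y_i(1) - Y_i(0) \mid W_i,\, S_i = 1\} \,\big],
\]

so finding a W that plausibly satisfies the left-hand condition is exactly the covariate-selection problem the paper addresses.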

Co-sponsored with the Center for Social Statistics

Adrian Raftery, University of Washington

CCPR Seminar Room 4240 Public Affairs Building, Los Angeles, CA, United States

Bayesian Population Projections with Migration Uncertainty

The United Nations recently issued official probabilistic population projections for all countries for the first time, using a Bayesian hierarchical modeling framework developed by our group at the University of Washington. These projections account for uncertainty about future fertility and mortality, but not about international migration. We propose a Bayesian hierarchical autoregressive model for obtaining joint probabilistic projections of migration rates for all countries, broken down by age and sex. Joint trajectories for all countries are constrained to satisfy the requirement of zero global net migration. We evaluate our model using out-of-sample validation and compare point projections to the projected migration rates from a persistence model similar to the UN's current method for projecting migration, and also to a state-of-the-art gravity model. We also resolve an apparently paradoxical discrepancy between growth trends in the proportion of the world population migrating and the average absolute migration rate across countries. This is joint work with Jonathan Azose and Hana Ševčíková.
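
A stylized sketch of the kind of hierarchical autoregressive structure described above, in our notation and not the authors' exact specification: country c's migration rate follows an autoregressive process around a country-specific mean,

\[
r_{c,t} \;=\; \mu_c + \phi_c\,\big(r_{c,t-1} - \mu_c\big) + \varepsilon_{c,t},
\qquad \varepsilon_{c,t} \sim N(0, \sigma_c^2),
\]

with the country-level parameters \((\mu_c, \phi_c, \sigma_c^2)\) drawn from world-level distributions, and joint trajectories adjusted so that population-weighted net migration, \(\sum_c P_{c,t}\, r_{c,t}\), equals zero in every period, where \(P_{c,t}\) is the population of country c.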

Co-sponsored with the Center for Social Statistics 

Rocio Titiunik, University of Michigan

CCPR Seminar Room 4240 Public Affairs Building, Los Angeles, CA, United States

Internal vs. external validity in studies with incomplete populations

Researchers working with administrative data rarely have access to the entire universe of units they need to estimate effects and make statistical inferences. Examples are varied and come from different disciplines. In social program evaluation, it is common to have data on all households who received the program, but only partial information on the universe of households who applied or could have applied for the program. In studies of voter turnout, information on the total number of citizens who voted is usually complete, but data on the total number of voting-eligible citizens is unavailable at low levels of aggregation. In criminology, information on arrests by race is available, but the overall population that could have potentially been arrested is typically unavailable. And in studies of drug overdose deaths, we lack complete information about the full population of drug users.

In all these cases, a reasonable strategy is to study treatment effects and descriptive statistics using the information that is available. This strategy may lack the generality of a full-population study, but may nonetheless yield valuable information for the included units if it has sufficient internal validity. However, the distinction between internal and external validity is complex when the subpopulation of units for which information is available is not defined according to a reproducible criterion and/or when this subpopulation itself is defined by the treatment of interest. When this happens, a useful approach is to consider the full range of conclusions that would be obtained under different possible scenarios regarding the missing information. I discuss a general strategy based on partial identification ideas that may be helpful for assessing the sensitivity of the partial-population study under weak (non-parametric) assumptions, when information about the outcome variable is known with certainty for a subset of the units. I discuss extensions such as the inclusion of covariates in the estimation model and different strategies for statistical inference.
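
To fix ideas, here is the simplest version of the partial-identification logic (an illustration in our notation, not the talk's estimator): if a bounded outcome Y in [0, 1] is observed for a share p of the relevant population and missing for the remainder, then without further assumptions

\[
p\,\bar{Y}_{\text{obs}} \;\le\; \mathbb{E}[Y] \;\le\; p\,\bar{Y}_{\text{obs}} + (1 - p),
\]

the bounds being obtained by filling in the missing outcomes with their logical minimum and maximum. Covariates and assumptions about how the observed subpopulation was selected can then tighten these bounds.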

Co-sponsored with the Political Science Department, Statistics Department and the Center for Social Statistics 

Kosuke Imai, Harvard University

CCPR Seminar Room 4240 Public Affairs Building, Los Angeles, CA, United States

Matching Methods for Causal Inference with Time-Series Cross-Section Data

Matching methods aim to improve the validity of causal inference in observational studies by reducing model dependence and offering intuitive diagnostics. While they have become part of the standard toolkit for empirical researchers across disciplines, matching methods are rarely used when analyzing time-series cross-section (TSCS) data, which consist of a relatively large number of repeated measurements on the same units.

We develop a methodological framework that enables the application of matching methods to TSCS data. In the proposed approach, we first match each treated observation with control observations from other units in the same time period that have an identical treatment history up to a pre-specified number of lags. We use standard matching and weighting methods to further refine this matched set so that the treated observation has outcome and covariate histories similar to those of its matched control observations. The quality of matches is assessed by examining covariate balance. After the refinement, we estimate both short-term and long-term average treatment effects using the difference-in-differences estimator, accounting for a time trend. We also show that the proposed matching estimator can be written as a weighted linear regression estimator with unit and time fixed effects, providing model-based standard errors. We illustrate the proposed methodology by estimating the causal effects of democracy on economic growth, as well as the impact of inter-state war on inheritance tax. Open-source software is available for implementing the proposed matching methods.
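
In stylized notation (ours, and simplified relative to the paper), the difference-in-differences quantity behind such an estimator compares each treated observation's change in the outcome with the weighted change among its matched controls:

\[
\hat{\delta}(F, L) \;=\; \frac{1}{\sum_{i,t} D_{it}} \sum_{i,t} D_{it}
\Big[ \big(Y_{i,t+F} - Y_{i,t-1}\big) \;-\; \sum_{i' \in \mathcal{M}_{it}} w_{it}^{i'} \big(Y_{i',t+F} - Y_{i',t-1}\big) \Big],
\]

where D_it flags treated observations with a non-empty refined matched set M_it, L is the number of lags used to define identical treatment histories, F is the lead at which the effect is evaluated, and the w's are the refinement weights.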

Co-sponsored with the Political Science Department, Statistics Department and the Center for Social Statistics

Workshop: Merging Entities – Deterministic, Approximate, & Probabilistic

4240 Public Affairs Bldg

Instructor: Michael Tzen
Title: Merging Entities: Deterministic, Approximate, & Probabilistic
Date: January 31, 2019, 2:00-3:00 PM
Location: 4240 Public Affairs Building, CCPR Seminar Room
Content: Combining information from different groups is a fundamental procedure in the data analysis pipeline. Using NBA and NCAA data, we will walk through deterministic, approximate, and probabilistic methods to merge entities […]
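
As a taste of the first two flavors, here is a minimal Python sketch with made-up player tables (not the workshop's materials): a deterministic merge links only rows whose keys agree exactly, while an approximate merge tolerates small spelling differences via a string-similarity threshold.

```python
import pandas as pd
from difflib import SequenceMatcher

# Made-up player-level tables whose name spellings do not quite agree
nba = pd.DataFrame({"player": ["Jane A. Smith", "Bob Jones"], "ppg": [21.3, 9.8]})
ncaa = pd.DataFrame({"player": ["Jane Smith", "Bob Jones"], "school": ["UCLA", "Duke"]})

# Deterministic merge: only rows whose keys agree exactly are linked
exact = nba.merge(ncaa, on="player", how="inner")  # keeps Bob Jones only

# Approximate merge: link each name to its most similar counterpart,
# provided the similarity clears a chosen threshold
def best_match(name, candidates, threshold=0.8):
    scored = [(c, SequenceMatcher(None, name.lower(), c.lower()).ratio()) for c in candidates]
    cand, score = max(scored, key=lambda pair: pair[1])
    return cand if score >= threshold else None

nba["ncaa_name"] = nba["player"].apply(lambda n: best_match(n, ncaa["player"]))
approx = nba.merge(ncaa, left_on="ncaa_name", right_on="player",
                   how="left", suffixes=("_nba", "_ncaa"))
```

Probabilistic record linkage goes further, modeling the probability that two records refer to the same entity given agreement patterns across several fields.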

Adeline Lo, Princeton University

1434A Physics and Astronomy Building

Covariate screening in high dimensional data: applications to forecasting and text data

High dimensional (HD) data, where the number of covariates and/or meaningful covariate interactions might exceed the number of observations, is increasingly used in prediction in the social sciences. An important question for the researcher is how to select the most predictive covariates among all the available covariates. Common covariate selection approaches use ad hoc rules to remove noise covariates, or select covariates through the criterion of statistical significance or by using machine learning techniques. These can suffer from lack of objectivity, choosing some but not all predictive covariates, and failing reasonable standards of consistency that are expected to hold in most high-dimensional social science data. The literature offers few statistics that directly evaluate covariate predictivity. We address these issues by proposing a variable screening step prior to traditional statistical modeling, in which we screen covariates for their predictivity. We propose the influence (I) statistic to evaluate covariates in the screening stage, showing that the statistic is directly related to predictivity and can help screen out noisy covariates and discover meaningful covariate interactions. We illustrate how our screening approach can remove noisy phrases from U.S. Congressional speeches and rank important ones to measure partisanship. We also show improvements to out-of-sample forecasting in a state failure application. Our approach is implemented in an open-source software package.
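
For readers unfamiliar with influence-type scores, one common form (a sketch; the paper's exact definition and scaling may differ) partitions the sample into cells defined by the joint values of a candidate covariate subset and rewards subsets whose cell-level outcome means deviate strongly from the overall mean:

\[
I \;=\; \frac{1}{n\,\hat{\sigma}^2} \sum_{j \in \Pi} n_j^{\,2}\,\big(\bar{Y}_j - \bar{Y}\big)^2,
\]

where \(\Pi\) is the partition induced by the covariate subset, \(n_j\) and \(\bar{Y}_j\) are the size and outcome mean of cell j, and \(\hat{\sigma}^2\) is the sample variance of the outcome. Covariate subsets with low scores behave like noise and are screened out before modeling.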

Workshop: Grad Student Panel Discussing the Causal Toolkit

4240 Public Affairs Bldg

Title: Grad Student Panel Discussing the Causal Toolkit
Date: February 27, 2019, 2:00-3:30 PM
Location: 4240 Public Affairs Building, CCPR Seminar Room
Content: Focusing on the uses of the causal toolkit, several grad students will share a-ha moments and lessons learned from their own applied research. The target audience is grad students and researchers who wish […]

Lan Liu, University of Minnesota at Twin Cities

"Parsimonious Regressions for Repeated Measure Analysis"

Abstract: Longitudinal data with repeated measures frequently arise in various disciplines. The standard methods typically impose a mean outcome model as a function of individual features, time, and their interactions. However, the validity of the estimators relies on the correct specifications […]

Eloise Kaizar, Ohio State University

Randomized controlled trials are often thought to provide definitive evidence on the magnitude of treatment effects. But because treatment modifiers may have a different distribution in a real-world population than among trial participants, trial results may not directly reflect the average treatment effect that would follow real-world adoption […]

Susan Athey, Stanford University

CCPR Seminar Room 4240 Public Affairs Building, Los Angeles, CA, United States

Estimating Heterogeneous Treatment Effects and Optimal Treatment Assignment Policies

This talk will review recently developed methods for estimating conditional average treatment effects and optimal treatment assignment policies in experimental and observational studies, including settings with unconfoundedness or instrumental variables. Multi-armed bandits for learning treatment assignment policies will also be considered.

Co-sponsored with the Center for Social Statistics

Brandon Stewart, Princeton University

CCPR Seminar Room 4240 Public Affairs Building, Los Angeles, CA, United States

How to Make Causal Inferences Using Texts

Texts are increasingly used to make causal inferences, with the document serving as either the treatment or the outcome. We introduce a new conceptual framework to understand all text-based causal inferences, demonstrate fundamental problems that arise when manual or computational approaches are applied to text for causal inference, and provide solutions to the problems we raise. We demonstrate that all text-based causal inferences depend upon a latent representation of the text, and we provide a framework to learn the latent representation. Estimating this latent representation, however, creates new risks: we may unintentionally create a dependency across observations or create opportunities to fish for large effects. To address these risks, we introduce a train/test split framework and apply it to estimate causal effects from an experiment on immigration attitudes and a study on bureaucratic responsiveness. Our work provides a rigorous foundation for text-based causal inferences, connecting two previously disparate literatures. (Joint work with Egami, Fong, Grimmer, and Roberts.)
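
The train/test split logic can be illustrated in a few lines (a minimal sketch with a generic topic model and toy data, not the authors' estimator or software): the latent representation is learned only on training documents, and effects are estimated only on held-out documents, so the discovery step cannot leak into inference.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny made-up corpus standing in for open-ended responses (text as the outcome)
docs = np.array([
    "border security jobs economy", "welcome refugees family community",
    "jobs wages economy taxes", "asylum family humanitarian welcome",
    "economy border enforcement taxes", "community refugees support family"])
treat = np.array([1, 0, 1, 0, 1, 0])  # hypothetical randomized treatment indicator

docs_train, docs_test, treat_train, treat_test = train_test_split(
    docs, treat, test_size=0.5, random_state=0, stratify=treat)

# Discovery step: learn the latent representation (topics) on the training split only
vectorizer = CountVectorizer()
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(vectorizer.fit_transform(docs_train))

# Estimation step: apply the now-fixed representation to held-out documents and
# compare topic prevalence across arms there, so topic discovery cannot be
# tuned (even inadvertently) to manufacture large effects
topics_test = lda.transform(vectorizer.transform(docs_test))
effect_by_topic = topics_test[treat_test == 1].mean(axis=0) - topics_test[treat_test == 0].mean(axis=0)
print(effect_by_topic)
```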

Co-sponsored with the Center for Social Statistics

Workshop: Getting All Your Research Computing Tools for Summer and Beyond – Hardware and Software

4240 Public Affairs Bldg

Title: Getting All Your Research Computing Tools for Summer and Beyond - Hardware and Software
Date: May 22, 2019, 12:00-1:30 PM
Location: 4240 Public Affairs Building, CCPR Seminar Room
Instructors: Matt Lahmann & Mike Tzen
Content: We'll get CCPR researchers all the computing tools for a productive summer of data science exploration. We'll get you […]

Workshop: Getting The Data Yourself – A Web Scraping Code Through

4240 Public Affairs Bldg

Title: Getting The Data Yourself: A Web Scraping Code Through
Date: May 29, 2019, 12:00-1:30 PM
Location: 4240 Public Affairs Building, CCPR Seminar Room
Instructors: Chad Pickering & Mike Tzen
Content: We'll empower CCPR researchers to get the domain-relevant data they want.
Materials: slides, exercise
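
As a taste of the topic, a minimal sketch (not the workshop's code; the URL is a placeholder): fetch a page over HTTP and pull structured pieces out of its HTML.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL; substitute the page that actually holds the data you need
url = "https://example.com/table-of-interest"
response = requests.get(url, timeout=30)
response.raise_for_status()

# Parse the HTML and pull out, e.g., table cells and hyperlinks
soup = BeautifulSoup(response.text, "html.parser")
cells = [td.get_text(strip=True) for td in soup.find_all("td")]
links = [a["href"] for a in soup.find_all("a", href=True)]
print(cells[:10], links[:10])
```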

Summer Institute in Computational Social Science

CCPR Seminar Room 4240 Public Affairs Building, Los Angeles, CA, United States

The purpose of the Summer Institute is to bring together graduate students, postdoctoral researchers, and early career faculty interested in computational social science. The Summer Institute is open to both social scientists (broadly conceived) and data scientists (broadly conceived).

Summer Institute in Computational Social Science Panel Presentation

Luskin Conference Center Laureate Room

Summer Institute in Computational Social Science Panel Presentation

Friday June 21, 2019 2:00pm – 5:00pm
Reception 5:00pm – 6:00pm
Luskin Conference Center Laureate Room
• 2:00pm – 3:15pm Digital Demography
Prof. Dennis Feehan, UC Berkeley and Prof. Ka-Yuet Liu, UCLA
• 3:30pm – 4:45pm Computational Causal Inference
Prof. Judea Pearl, UCLA and Prof. Sam Pimentel, UC Berkeley

Big Data for Big Social Issues

UCLA Neuroscience Research Building Auditorium (NRB 132)

Summer Institute in Computational Social Science Panel: 1:00pm – 2:45pm
Prof. John Friedman, Brown University: "Income Inequality and Social Mobility: What Can We Learn from Big Data?" 3:00pm – 5:00pm
Reception: 5:00pm – 6:00pm

A defining feature of the American Dream is upward income […]

Summer Institute in Computational Social Science

4240 Public Affairs Bldg

June 15 – 26, 2020, 4240 Public Affairs Building, CCPR

The purpose of the Summer Institute is to bring together graduate students, postdoctoral researchers, and early career faculty interested in computational social science. The Summer Institute is open to both social scientists (broadly conceived) and data scientists (broadly conceived).

SICSS Conference 2023

From June 20 to June 30, 2023, the University of California, Los Angeles (UCLA) Division of Social Sciences and the California Center for Population Research will sponsor the Summer Institute in Computational Social Science, to be held at UCLA. For more information about the event, visit https://sicss.io/2023/ucla/

Development workshop, 2/13 at 3pm “Scientific Accountability and Data Production”

4240A Public Affairs Bldg

A panel discussion about open science, ethical risks, and potential drawbacks for certain forms of knowledge production, with Irene Bloemraad (UC Berkeley Sociology), Cecilia Menjivar (UCLA Sociology), Zachary Steinert-Threlkeld (UCLA Luskin School of Public Affairs), and Jennifer Wagman (UCLA Fielding School of Public Health).