Event Studies

Simple Panel Data Approaches with Binary Treatment

Vladislav Morozov

Introduction

Lecture Info

Learning Outcomes

This lecture is about a simple kind of causal study with panel data — comparing outcomes from before and after treatment


By the end, you should be able to

  • Write down event study estimators with two or more periods of data
  • Define an appropriate causal framework
  • Prove consistency of event study estimators under an assumption of no trends

References

Empirical Motivation

Event

March 10, 2023: Silicon Valley Bank (SVB) collapses — the third-largest bank failure in the US


  • Bank collapses are painful for depositors and creditors
  • Collapses also raise fears of broader financial contagion — danger to other financial institutions

Empirical Question

How did the collapse of the SVB affect stock prices of other US financial institutions in the next 10 days?

  • Causal question: comparing prices after collapse with prices without collapse
  • How to do this comparison? What assumptions do we need?
Expand for list of institutions of interest
tickers = [
    "ALLY",  # Ally Financial Inc.
    "AMTB",  # Amerant Bancorp Inc.
    "ABCB",  # Ameris Bancorp
    "ASB",   # Associated Banc-Corp
    "AUB",   # Atlantic Union Bankshares Corporation
    "AX",    # Axos Financial Inc.
    "BANC",  # Banc of California Inc.
    "BK",    # Bank of New York Mellon Corporation
    "BAC",   # Bank of America Corporation
    "BOH",   # Bank of Hawaii Corporation
    "BKU",   # BankUnited Inc.
    "BHB",   # Bar Harbor Bankshares Inc.
    "BHLB",  # Berkshire Hills Bancorp Inc.
    "BRBS",  # Blue Ridge Bankshares Inc.
    "CADE",  # Cadence Bank
    "COF",   # Capital One Financial Corporation
    "C",     # Citigroup Inc.
    "CFG",   # Citizens Financial Group Inc.
    "CMA",   # Comerica Incorporated
    "CFR",   # Cullen/Frost Bankers Inc. 
    "FNB",   # F.N.B. Corporation
    "FBK",   # FB Financial Corporation
    "FITB",  # Fifth Third Bancorp
    "HBAN",  # Huntington Bancshares Incorporated
    "KEY",   # KeyCorp
    "MTB",   # M&T Bank Corporation
    "PNC",   # PNC Financial Services Group Inc.
    "RF",    # Regions Financial Corporation
    "STT",   # State Street Corporation
    "SYF",   # Synchrony Financial
    "USB",   # U.S. Bancorp
    "WFC",   # Wells Fargo & Company
    "SCHW",  # Charles Schwab Corporation
    "AXP",   # American Express Company
    "DFS",   # Discover Financial Services
    "NTB",   # Bank of N.T. Butterfield & Son Limited
    "EWBC",  # East West Bancorp Inc.
    "WAL",   # Western Alliance Bancorporation
    "SSB",   # SouthState Corporation
    "WBS",   # Webster Financial Corporation
    "FHN",   # First Horizon Corporation
    "PNFP",  # Pinnacle Financial Partners Inc.
    "HOMB",  # Home BancShares Inc.
    "HTH",   # Hilltop Holdings Inc.
    "GBCI",  # Glacier Bancorp Inc.
    "BOKF",  # BOK Financial Corporation
    "ZION",  # Zions Bancorporation
    "TCBI",  # Texas Capital Bancshares Inc.
    "CIVB",  # Civista Bancshares Inc.
    "CFFI",  # C&F Financial Corporation
    "BANF",  # BancFirst Corporation
    "FULT",  # Fulton Financial Corporation
    "ONB",   # Old National Bancorp
    "PB",    # Prosperity Bancshares Inc.
    "UBSI",  # United Bankshares Inc.
    "VLY",   # Valley National Bancorp
    "TRMK",  # Trustmark Corporation 
    "CASH",  # Meta Financial Group Inc.
    "CUBI",  # Customers Bancorp Inc.
    "CFFN",  # Capitol Federal Financial Inc.
    "FFIN",  # First Financial Bankshares Inc. 
    "SFBS",  # ServisFirst Bancshares Inc.
    "TBBK",  # The Bancorp Inc. 
    "WSBC",  # WesBanco Inc.
    "WTFC",  # Wintrust Financial Corporation
]

Two Periods

Estimator

Simple Event Study Setting

Begin with the simplest possible panel setting with binary treatment:

  • Two periods with \(N\) units:
    • No treatment in period 1
    • All units treated in period 2
  • Data: outcomes \((Y_{i1}, Y_{i2})\).

Object of interest: “average effect of treatment”

Simple Estimator — Average Change

Simplest approach: compute average change in \(Y_{it}\) across periods \[ \widehat{AE}_{ES} = \dfrac{1}{N}\sum_{i=1}^N (Y_{i2}- Y_{i1}). \tag{1}\]

Estimator (1) — simplest example of event study estimators (see Freyaldenhoven et al. 2021; Miller 2023)
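
As a quick illustration, here is a minimal sketch of estimator (1) on simulated two-period data; the constant effect of 2 and the normal noise are illustrative assumptions:

# Minimal sketch of estimator (1) on simulated two-period data;
# the true effect (2.0) and the normal noise are illustrative assumptions
import numpy as np

rng = np.random.default_rng(0)
N = 1_000
y1 = rng.normal(size=N)             # untreated period-1 outcomes
y2 = y1 + 2.0 + rng.normal(size=N)  # treated period-2 outcomes

ae_es = (y2 - y1).mean()            # average change across periods
print(ae_es)                        # close to the true effect of 2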

Example Framework

Possible empirical framework

  • Units \(i\): firms that make phones
  • Outcome \(Y_{it}\): their stock price
  • Periods:
    1. One week before Apple announces the iPhone
    2. One week after the announcement

Effect of interest: change in stock prices due to the announcement of iPhone

What Does (1) Do?

Proposition 1 (Asymptotics for \(\widehat{AE}_{ES}\)) Let

  • (Cross-sectional random sampling): \((Y_{i1}, Y_{i2})\) be independent and identically distributed (IID)
  • (Finite first moments): \(\E[\abs{Y_{it}}]<\infty\)

Then \[ \widehat{AE}_{ES} \xrightarrow{p} \E[Y_{i2} - Y_{i1}]. \]

Causal Analysis

Causal Framework

Is \(\E[Y_{i2} - Y_{i1}]\) interesting (=causal)?

Need a causal framework to talk about causal effects!

Again work in the familiar potential outcomes framework:

  • \(Y_{it}^0\) — outcome for \(i\) in period \(t\) if not treated
  • \(Y_{it}^1\) — outcome for \(i\) in period \(t\) if treated
  • Treatment effect for \(i\) in \(t\): \(Y_{it}^1- Y_{it}^0\)

For short, use \(Y_{it}^d\) where \(d=0, 1\)

Limit of ES Estimator and Causality

Potential and realized outcomes are connected as \[ Y_{i2} = Y_{i2}^1, \quad Y_{i1} = Y_{i1}^0. \]

It follows that \[ \widehat{AE}_{ES} \xrightarrow{p} \E[Y_{i2}^1- Y_{i1}^0]. \]

\(\E[Y_{i2}^1- Y_{i1}^0]\) is not necessarily a treatment effect — mixes effect of treatment and effects of time!

Example

Context: again consider the iPhone example. Then

  • \(Y_{i2}^1 - Y_{i2}^0\) — treatment effect, change in price because of the iPhone announcement
  • \(Y_{i2}^0 - Y_{i1}^0\) — change in a world without the iPhone announcement


Realized difference = combination of both changes \[ Y_{i2} - Y_{i1} = Y_{i2}^1- Y_{i1}^0 = [Y_{i2}^1- Y_{i2}^0] + [Y_{i2}^0 - Y_{i1}^0] \]

Solution: Restrict Changes over Time

Simple solution: rule out changes over time

Assumption: no variation in potential outcomes over time \[ Y_{i2}^d= Y_{i1}^d, \quad d=0, 1 \]

Then \(\widehat{AE}_{ES}\) is estimating a causal parameter — average effects: \[ \begin{aligned} \widehat{AE}_{ES} & \xrightarrow{p} \E[Y_{i1}^1- Y_{i1}^0] = \E[Y_{i2}^1- Y_{i2}^0] \end{aligned} \]

Summary so Far

Regression Interpretation

Regression Setting

Can also connect \(\widehat{AE}_{ES}\) and OLS

Consider regressing \(Y_{it}\) on \((1, D_{it})\) where \[ \begin{aligned} Y_{it} & = \beta_0 + \beta_1 D_{it} + U_{it}, \\ D_{it} & = \begin{cases} 1, & t = 2, \\ 0, & t = 1, \end{cases} \end{aligned} \tag{2}\] and where we simply treat \((Y_{i1}, D_{i1})\) and \((Y_{i2}, D_{i2})\) as separate observations

Event Studies and OLS

Can use all results developed for OLS for \(\widehat{AE}_{ES}\)
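
As a quick check of this equivalence (a sketch reusing the simulated y1, y2, and N from the earlier sketch), the OLS slope on the stacked data reproduces \(\widehat{AE}_{ES}\) exactly:

# Sketch: stack both periods as separate observations and regress on (1, D);
# reuses the simulated y1, y2, and N from the earlier sketch
import numpy as np
import statsmodels.api as sm

y = np.concatenate([y1, y2])                   # stacked outcomes
d = np.concatenate([np.zeros(N), np.ones(N)])  # D_it = 1 in period 2
two_period_ols = sm.OLS(y, sm.add_constant(d)).fit()

print(two_period_ols.params[1], (y2 - y1).mean())  # identical up to rounding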

Event Study and Regression I

A way to think about regression in causal settings:

  • Write down the regression in terms of parameters of interest: e.g. let \[ \begin{aligned} Y_{it} & = \beta_0 + \beta_1 D_{it} + U_{it}, \\ \beta_0 & = \E[Y_{i1}^0], \quad \beta_1 = \E[Y_{i2}^1- Y_{i2}^0] \end{aligned} \]
  • Connect regression to potential outcomes: what is \(U_{it}\) in terms of potential outcomes? (exercise)

Event Study and Regression II

Let \(\bX_{it} = (1, D_{it})'\). Then

  • By properties of OLS know that OLS is consistent for \(\bbeta = (\beta_0, \beta_1)\) if
    • \(\E[\bX_{it}U_{it}] =0\)
    • \(\E[\bX_{it}\bX_{it}']\) invertible
  • So just need to check if this \(U_{it}\) satisfies \(\E[\bX_{it}U_{it}] =0\)
  • If yes, OLS can estimate average effects of interest

Remember: the OLS “model” and the underlying causal model are separate things! Here the causal model is “nonparametric”

Multiple Periods

Estimation and Causal Framework

Multiple Period Framework

  • Often have more than 2 periods of data
  • Want to use that data

New framework:

  • \(T\) periods in total
  • Treatment starts in period \(t_0\)
  • We see \(Y_{it}^0\) for \(t<t_0\) and \(Y_{it}^1\) for \(t\geq t_0\)

Expanded Regression

New variables for treatment: \[\small D_{it, \tau} = \begin{cases} 1, & t= \tau, \\ 0, & t\neq \tau \end{cases} \]

Can try a similar regression: \[\small Y_{it} = \beta_0 + \sum_{\tau = t_0}^{T} \beta_\tau D_{it, \tau} + u_{it} \tag{3}\] Let \(\hat{\bbeta}=(\hat{\beta}_0, \hat{\beta}_{t_0}, \dots, \hat{\beta}_{T})\) be the OLS estimator

OLS Estimator Expression

Fairly easy to show that \[ \begin{aligned} \hat{\beta}_{\tau} & = \dfrac{1}{N} \sum_{i=1}^N Y_{i\tau} - \dfrac{1}{N(t_0-1)} \sum_{i=1}^N\left[ Y_{i1} + \dots + Y_{it_0-1} \right] \\ & \xrightarrow{p} \E\left[Y_{i\tau}^1 - \dfrac{1}{t_0-1}(Y_{i1}^0+ \dots + Y_{it_0-1}^0) \right] \end{aligned} \] A more general version of the simple two-period estimator from before
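
To make this concrete, a sketch that checks the closed-form expression against OLS on simulated data; \(T\), \(t_0\), and the constant effect of 1.5 are illustrative assumptions:

# Sketch: verify the closed-form expression for beta_hat_tau against OLS
# on simulated multi-period data (T, t0, and the effect size are assumptions)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
N, T, t0 = 500, 5, 3
Y = rng.normal(size=(N, T))
Y[:, t0 - 1:] += 1.5  # constant treatment effect from period t0 on

# Stack into long form and build the post-treatment dummies D_{it,tau}
y_long = Y.reshape(-1)                    # unit-major: (i=1, t=1..T), ...
t_idx = np.tile(np.arange(1, T + 1), N)   # period label of each observation
D = np.column_stack([(t_idx == tau).astype(float) for tau in range(t0, T + 1)])
ols = sm.OLS(y_long, sm.add_constant(D)).fit()

# Closed form: mean outcome in period tau minus average pre-treatment outcome
tau = t0
closed_form = Y[:, tau - 1].mean() - Y[:, :t0 - 1].mean()
print(ols.params[1], closed_form)  # coefficient on D_{it,t0}; should coincide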

Dynamic Treatment Effects?

If \(\beta_{\tau}\) is the average effect in period \(\tau\), then model (3) seems to allow for dynamic effects


Dynamic effects are realistic: the effect of treatment may change over time. Example: impact of job training on earnings:

  • Disappearing: you forget the training over time
  • Increasing: job training lets you jump to a higher position and gain experience quicker for the rest of your life

Recovering Dynamic Average Treatment Effects

Under the assumption of no trends in the baseline (\(\E[Y_{it}^0]\) does not depend on \(t\)): \[ \E\left[Y_{i\tau}^1 - \dfrac{1}{t_0-1}(Y_{i1}^0+ \dots + Y_{it_0-1}^0) \right] = \E[Y_{i \tau}^1 - Y_{i\tau}^0] \]


The right-hand side is the average treatment effect in period \(\tau\)

Asymptotic Properties

Consistency in the Multivariate Case

Unbiasedness of OLS

Moreover, under no trends in the baseline \[ \E[\hat{\beta}_{\tau}] = \E[Y_{i \tau}^1 - Y_{i\tau}^0] \]


In other words, the OLS estimator is unbiased

Asymptotic Distribution

Asymptotic Variance

Need variance \(V\) to construct confidence intervals for \(\E[Y_{i \tau}^1 - Y_{i\tau}^0]\).

\[ \begin{aligned} \hat{\beta}_{\tau} & = \dfrac{1}{N} \sum_{i=1}^N Z_i\\ Z_i & = Y_{i\tau} - \dfrac{1}{(t_0-1)}\left[ Y_{i1} + \dots + Y_{it_0-1} \right] \end{aligned} \]

From the central limit theorem, \(\sqrt{N}(\hat{\beta}_{\tau} - \beta_{\tau}) \Rightarrow N(0, V)\) with \[ V = \var(Z_i) \]

Estimator for Asymptotic Variance

Can estimate \(V\) with \[ \hat{V} = \widehat{\var}(Z_i) = \dfrac{1}{N}\sum_{i=1}^N\left(Z_i - \dfrac{1}{N}\sum_{j=1}^N Z_j \right)^2 \]

Estimated standard error of \(\hat{\beta}_{\tau}\): \[ \widehat{se}(\hat{\beta}_{\tau}) = \sqrt{ \dfrac{\hat{V}}{N} } \]

Inference on Average Effects I

Can now construct confidence intervals and hypothesis tests about \(\E[Y_{i \tau}^1 - Y_{i\tau}^0]\). E.g. an asymptotic 95% confidence interval (\(\alpha = 0.05\)): \[ \widehat{CI}_{95\%} = \left[\hat{\beta}_{\tau} - z_{1-\alpha/2}\widehat{se}(\hat{\beta}_{\tau}), \hat{\beta}_{\tau} + z_{1-\alpha/2}\widehat{se}(\hat{\beta}_{\tau}) \right] \] where the critical value \(z_{1-\alpha/2}\) comes from the standard normal distribution \[ z_{1-\alpha/2}= \Phi^{-1}\left(1 - \dfrac{\alpha}{2} \right) \]
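
A sketch of these formulas in code, reusing the simulated Y, tau, t0, and N from the multi-period sketch above:

# Sketch: plug-in variance, standard error, and 95% CI for beta_hat_tau,
# computed on the simulated multi-period data from the earlier sketch
import numpy as np
from scipy.stats import norm

Z = Y[:, tau - 1] - Y[:, :t0 - 1].mean(axis=1)  # Z_i for the chosen tau
beta_hat = Z.mean()
V_hat = Z.var()                     # (1/N) sum of squared deviations
se_hat = np.sqrt(V_hat / N)

z_crit = norm.ppf(1 - 0.05 / 2)     # approximately 1.96
print(beta_hat - z_crit * se_hat, beta_hat + z_crit * se_hat)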

Inference on Average Effects II

But what if we want to test the joint hypothesis \[ H_0: \beta_{\tau} = 0, \quad \tau = t_0, \dots, T \]

  • This \(H_0\) is a hypothesis about a vector of effects
  • To write down a Wald test we need the joint asymptotic distribution of \(\hat{\bbeta}\)

Joint Asymptotic Distribution

Proposition 6 (Joint Asymptotics for Estimated Effects) Let

  • Assumption of no trends in the baseline hold
  • \((Y_{i1}, Y_{i2}, \dots, Y_{iT})\) be IID (over \(i\)) with finite second moments

Then
\[ \small \sqrt{N}(\hat{\bbeta} -\bbeta) \Rightarrow N(0, \avar(\hat{\bbeta})) \]

Joint Inference on Average Effects: Wald Test

Can use Proposition 6 to create a Wald test:

  • Write \(H_0\) as \(H_0: \bR\bbeta = \bq\) for \[ \small \bR = \begin{pmatrix} \mathbf{0} & \bI_{T-t_0+1} \end{pmatrix}, \quad \bq = \mathbf{0} \]

  • Write down Wald statistic: \[ \small W = N(\bR\hat{\bbeta}-\bq)'(\bR\widehat{\avar}(\hat{\bbeta})\bR')^{-1}(\bR\hat{\bbeta}-\bq) \]

  • Compare to \((1-\alpha)\)th quantile of \(\chi^2_{T-t_0+1}\) distribution
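
Putting these steps together, a sketch of the Wald test computed by hand on the simulated multi-period fit from before; note that cov_params() already includes the \(1/N\) scaling, so the explicit \(N\) factor in the statistic is absorbed:

# Sketch: Wald test of H0: beta_tau = 0 for all tau, computed by hand
# from the simulated multi-period OLS fit of the earlier sketch
import numpy as np
from scipy.stats import chi2

k = T - t0 + 1                                 # number of treated periods
R = np.hstack([np.zeros((k, 1)), np.eye(k)])   # H0: R beta = 0
b_hat = ols.params
# Heteroskedasticity-robust covariance of beta_hat (includes 1/N scaling)
cov_hat = ols.get_robustcov_results("HC0").cov_params()

diff = R @ b_hat
W = diff @ np.linalg.solve(R @ cov_hat @ R.T, diff)
print(W, chi2.sf(W, df=k))                     # statistic and p-value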

Empirical Application

Formalizing the Application

Back to Empirical Application

Now let’s answer our empirical question — the impact of the SVB collapse on stock returns of US financial institutions


We need

  • Outcome variables (which returns?)
  • Time context \(t\)
  • How to define the treatment

Some Finance Background: Abnormal Returns

In finance, usually work with abnormal returns \(AR_{it}\) (MacKinlay 1997)


  • Each stock \(i\) assumed to have a “normal” or “expected” return \(ER_{it}\) given market conditions on day \(t\) — everything “expected” by the market
  • \(AR_{it}\) — differences between actual return \(R_{it}\) and \(ER_{it}\): \[ \small R_{it} = ER_{it} + AR_{it} \]

Interest: Impact on Abnormal Returns

Outcome of interest: \(AR_{it}\)


Potential outcomes:

  • \(d=0\): only idiosyncratic day-to-day shifts
  • \(d=1\): impact of SVB collapse + idiosyncratic movements


Assume that \(\E[AR_{it}^0] = 0\) and \(AR_{it}^0\) uncorrelated with broader market characteristics

Computing Expected and Abnormal Returns

Expected returns usually given by some model:

  • Simple mean model: let \(R_{it}\) be return of asset \(i\) on \(t\). Expected return \(ER_{it} = \E[R_{it}]\)
  • Factor models: \[\small ER_{it} = f_i\left( \text{Some market characteristics on }t \right) \] Example: \(ER_{it} = \bbeta_i'\bx_{t}\) where \(\bx_{t}\) includes market return on \(t\) (CAPM), or also small minus big, high minus low (Fama-French 3 factor)

Estimating Model Parameters

  • Can compute \(ER_{it}\) (and \(AR_{it}\)) if know model parameters
  • But how to compute parameters?


Select estimation window with \(D_{it}=0\). Then \[ R_{it} = \bbeta_i'\bx_t + AR_{it}^0, \quad \E[AR_{it}^0\bx_t] =0 \] Can consistently estimate \(\bbeta_i\) by regressing \(R_{it}\) on \(\bx_t\) (why? Think of \(AR_{it}^0\) as measurement error in the dependent variable)

Computing Abnormal Returns

Work with estimated abnormal returns: \[ \widehat{AR}_{it} = R_{it} - \widehat{ER}_{it} = R_{it} - \hat{\bbeta}'\bx_t \]


One issue: \(\widehat{AR}_{it}\) has measurement error. But

  • Measurement error in dependent variable not a problem if uncorrelated with covariates
  • If estimation window for \(\hat{\bbeta}\) large, measurement error likely small

Preparing Data

Define Time Frames

When does treatment turn on? What \(T\) to take?

  • Key public announcement — March 8, 2023 (our \(t_0\))
  • Look at the 10 days around it (\(T=20\))
event_date = pd.Timestamp("2023-03-08")
event_window = pd.Timedelta(days=10)  


For estimation, use long window before SVB collapse:

estimation_window = pd.Timedelta(days=500) 
start_date = event_date - event_window - estimation_window 

Obtaining Data

Necessary data:

  • Returns on tickers of interest
  • Market data for estimating \(ER_{it}\) — will use 3-factor Fama-French model

Can obtain ticker data from Yahoo Finance directly with yfinance package (abbreviated as yf)

Expand for data call and data preparation
# Download the tickers and rename columns to only retain ticker names
stock_data = yf.download(
    tickers, 
    start=start_date, 
    end=event_date + 4*event_window,
    progress=False,
)
stock_data = stock_data.iloc[:, 0:len(tickers)]
stock_data.columns = (
    stock_data.columns.map(lambda col: re.sub(r"[()\s']", "", col[1]))
)

# Calculate daily returns
returns = stock_data.pct_change().dropna()*100

# Read in FF 3 factor daily data
ff_path = Path() / 'data' / 'fama-french-3.csv'
fama_french_data = pd.read_csv(ff_path).iloc[:, :-1]
# Read the Date column as a date and set it as the index
fama_french_data["Date"] = pd.to_datetime(
    fama_french_data["Date"], 
    format="%Y-%m-%d", 
)
fama_french_data = fama_french_data.set_index("Date")

# Merge returns and FF data, drop unnecessary data
merged_data = pd.concat(
    [returns, fama_french_data], 
    axis=1
).dropna(axis=0)

Estimating Abnormal Returns

Estimate outcome variable \(AR_{it}\) using the Fama-French 3-factor model:

  • Separately regress returns of each \(i\) on the Fama-French factors over the estimation window
  • Obtain fitted values (\(\widehat{ER}_{it}\))
  • Compute \(\widehat{AR}_{it}\) and store

For contrast, also estimate the mean model

Expand for estimation of abnormal returns
# Create arrays to hold abnormal returns   
abnormal_returns_factor = pd.DataFrame()
abnormal_returns_mean = pd.DataFrame()    

# Estimate abnormal returns for each 
for ticker in tickers:
    # Estimate the 3-factor abnormal returns
    X = merged_data.loc[
        merged_data.index < event_date - event_window, 
        ["Mkt-RF", "SMB", "HML"]
    ]   
    y = merged_data.loc[
        merged_data.index < event_date - event_window, 
        ticker
    ] 
    X = sm.add_constant(X) 
    model = sm.OLS(y, X).fit()   

    # Compute expected returns during the window
    X_event = (
        merged_data.loc[
            ((merged_data.index >= event_date - event_window) & 
            (merged_data.index <= event_date + event_window)), 
            ["Mkt-RF", "SMB", "HML"]
        ]
    )
    X_event = sm.add_constant(X_event)
    expected_returns = model.predict(X_event)

    # Extract realized returns during the event
    event_data = (
        merged_data.loc[
            ((merged_data.index >= event_date - event_window) & 
            (merged_data.index <= event_date + event_window)), 
            ticker
        ]
    )

    # Abnormal returns are residuals
    abnormal_returns_factor[ticker] = event_data - expected_returns
    abnormal_returns_mean[ticker] = event_data - y.mean()

Visualizing Average Abnormal Returns

Figure 1: Averages of abnormal returns with 95% pointwise confidence intervals

Looking For Pretrends

Figure 1 suggests a check:

  • Cannot check the no-trends assumption for all periods (why?)
  • But can check pretreatment periods for trends in means


Figure 1: zero mean for \(t\) before March 8 — supports no trends
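
A minimal sketch of such a check, assuming the abnormal_returns_factor frame and event_date from the earlier cells: test whether the cross-sectional mean abnormal return is zero on each pre-event day (this treats tickers as an independent sample and ignores cross-sectional correlation, so it is only a rough check):

# Sketch: test of zero mean abnormal return on each pre-event day;
# assumes abnormal_returns_factor and event_date from the earlier cells
import numpy as np
from scipy.stats import norm

pre_event = abnormal_returns_factor.loc[
    abnormal_returns_factor.index < event_date
]
for day, day_returns in pre_event.iterrows():
    mean_ar = day_returns.mean()
    se_ar = day_returns.std(ddof=0) / np.sqrt(day_returns.count())
    p_value = 2 * norm.sf(abs(mean_ar / se_ar))
    print(day.date(), f"mean={mean_ar:.3f}", f"p={p_value:.3f}")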

Estimation and Inference

Preparing Data for Event Study

Now need to create \(\bX\) and \(\by\) matrices to apply event study OLS


Key challenge: what do \(\bX\) and \(\by\) look like?

  • Regressors: indicators \(D_{it, \tau}\) of every day after treatment start
  • Express in tabular form: each row in \(\bX\) contains data on one stock \(i\) on one day \(t\), columns — \(\widehat{AR}_{it}\) and \(D_{it, \tau}\)

Generate Event Study Data

Expand for construction of data matrices
# Create an array for day dummies
dummy_df = (
    pd.get_dummies(
        np.maximum(
            (abnormal_returns_mean.index - pd.to_datetime(event_date)).days+1, 
            0,
        ), 
        prefix='day',
    )
)

# Set the index again
dummy_df.index = abnormal_returns_factor.index
# Drop the column of days preceding the event
dummy_df = dummy_df.drop("day_0", axis=1)

# Melt abnormal returns data
ar_factor_long = abnormal_returns_factor.reset_index().melt(id_vars="Date")
ar_factor_long.head()

# Merge data
ar_factor_days = pd.merge(
    ar_factor_long, 
    dummy_df, 
    how='left', 
    left_on='Date', 
    right_index=True,
)
# Split into two
exog = ar_factor_days.filter(regex="day*", axis=1).astype('float64')
exog = sm.add_constant(exog)
endog = ar_factor_days.loc[:, "value"]

# View
ar_factor_days.head()
Date variable value day_1 day_2 day_3 day_6 day_7 day_8 day_9 day_10
0 2023-02-27 ALLY -2.330220 False False False False False False False False
1 2023-02-28 ALLY 2.108663 False False False False False False False False
2 2023-03-01 ALLY 0.247488 False False False False False False False False
3 2023-03-02 ALLY -0.029412 False False False False False False False False
4 2023-03-03 ALLY -1.057665 False False False False False False False False

Event Study OLS

Can now apply the event study estimator

factor_model = sm.OLS(endog, exog).fit(cov_type="HC0")
print(factor_model.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                  value   R-squared:                       0.253
Model:                            OLS   Adj. R-squared:                  0.247
Method:                 Least Squares   F-statistic:                     25.25
Date:                Tue, 20 May 2025   Prob (F-statistic):           1.50e-35
Time:                        21:48:24   Log-Likelihood:                -2430.0
No. Observations:                 975   AIC:                             4878.
Df Residuals:                     966   BIC:                             4922.
Df Model:                           8                                         
Covariance Type:                  HC0                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const         -0.3726      0.063     -5.946      0.000      -0.495      -0.250
day_1          0.1577      0.130      1.212      0.225      -0.097       0.413
day_2         -2.2013      0.255     -8.617      0.000      -2.702      -1.701
day_3         -0.3894      0.417     -0.933      0.351      -1.208       0.429
day_6         -5.3649      0.966     -5.551      0.000      -7.259      -3.471
day_7          1.7937      0.510      3.516      0.000       0.794       2.794
day_8          1.5680      0.420      3.734      0.000       0.745       2.391
day_9          1.6477      0.349      4.715      0.000       0.963       2.333
day_10        -1.5878      0.254     -6.240      0.000      -2.087      -1.089
==============================================================================
Omnibus:                      714.350   Durbin-Watson:                   1.914
Prob(Omnibus):                  0.000   Jarque-Bera (JB):            46544.049
Skew:                          -2.701   Prob(JB):                         0.00
Kurtosis:                      36.414   Cond. No.                         5.88
==============================================================================

Notes:
[1] Standard Errors are heteroscedasticity robust (HC0)

Cumulative Effects

There are (at least) two kinds of parameters of interest:

  • Coefficients \(\beta_{\tau}\)
  • Cumulative effects up to each given day

Can compute cumulative effects and their variances as linear combinations of \(\beta_{\tau}\)
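
Concretely, the cumulative effect up to day \(\tau\) is a linear combination of the coefficients, so its variance follows from the estimated covariance matrix of \(\hat{\bbeta}\): \[ \widehat{CE}_{\tau} = \sum_{s=t_0}^{\tau} \hat{\beta}_s = \mathbf{a}_{\tau}'\hat{\bbeta}, \qquad \widehat{\var}(\widehat{CE}_{\tau}) = \mathbf{a}_{\tau}'\, \widehat{\var}(\hat{\bbeta})\, \mathbf{a}_{\tau}, \] where \(\mathbf{a}_{\tau}\) has ones in the positions of \(\hat{\beta}_{t_0}, \dots, \hat{\beta}_{\tau}\) and zeros elsewhere — exactly what indicator_vector implements in the code below.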

Computing Cumulative Effects

Expand to see computation of cumulative effects
# Extract coefs and SEs of daily indicators
dummy_coefficients = factor_model.params.iloc[1:]  
dummy_cov_matrix = factor_model.cov_params().iloc[1:, 1:]  

# Compute the cumulative effects
cumulative_effects = dummy_coefficients.cumsum()

# Compute the standard errors of the cumulative effects
cumulative_se = np.zeros(len(cumulative_effects))
for t in range(1, len(cumulative_effects) + 1):
    # Create a vector of ones up to day t and zeros afterwards
    indicator_vector = np.concatenate(
        [np.ones(t), np.zeros(len(cumulative_effects) - t)]
    )

    # Compute the SE of the cumulative effect
    cumulative_se[t - 1] = np.sqrt(
        indicator_vector @ dummy_cov_matrix @ indicator_vector
    )

# Create a DataFrame to store the results
results = pd.DataFrame(
    {
        "Day": factor_model.params.index[1:].map(lambda x: x[4:]),
        "Cumulative Effect": cumulative_effects,
        "SE": cumulative_se,
    }
)

# Prepend day 0 as a reference
day0_df = pd.DataFrame(
    {
        "Day": 0,
        "Cumulative Effect": 0,
        "SE": 0,
    },
    index = np.array(['day_0'])
)
results = pd.concat([day0_df, results])

# Construct confidence intervals
results["ci_upper"] = results["Cumulative Effect"] + 1.96 * results["SE"]
results["ci_lower"] = results["Cumulative Effect"] - 1.96 * results["SE"]

# Set index
results.index = abnormal_returns_mean.index[6:]

# Print the results
print(results)
           Day  Cumulative Effect        SE  ci_upper  ci_lower
Date                                                           
2023-03-07   0           0.000000  0.000000  0.000000  0.000000
2023-03-08   1           0.157665  0.130041  0.412546 -0.097216
2023-03-09   2          -2.043602  0.300032 -1.455538 -2.631665
2023-03-10   3          -2.433045  0.529120 -1.395970 -3.470119
2023-03-13   6          -7.797969  1.112404 -5.617657 -9.978280
2023-03-14   7          -6.004278  1.236561 -3.580619 -8.427937
2023-03-15   8          -4.436278  1.320873 -1.847366 -7.025189
2023-03-16   9          -2.788578  1.383458 -0.077000 -5.500156
2023-03-17  10          -4.376390  1.426062 -1.581308 -7.171473

Visualizing Cumulative Effects

Testing Joint Zero Effect

Can also formally test the joint null \(H_0: \beta_{\tau}=0\) for all \(\tau\)


Use Wald test:

R = np.concatenate( (np.zeros((8, 1)), np.eye(len(factor_model.params)-1)), axis=1)
wald_results = factor_model.wald_test(R,  scalar=True)
print(wald_results)
<Wald test (chi2): statistic=201.99417148348502, p-value=2.428069144484446e-39, df_denom=8>

Empirical Conclusions

So what is the final conclusion?


We find that the collapse of SVB had significant negative effects on returns over the following 10 days

Recap and Conclusions

Recap

In this lecture we

  1. Introduced event studies — the simplest panel causal estimator
  2. Discussed causal properties under various assumptions of no trends
  3. Considered dynamic treatment effects
  4. Expressed event studies as regression
  5. Did a detailed study on the collapse of the SVB

Next Questions

  • What if the assumption of no trends is unreasonable?
  • What if the treatment is not binary?

References

Fama, Eugene F. 2014. “Two Pillars of Asset Pricing.” American Economic Review 104 (6): 1467–85. https://doi.org/10.1257/aer.104.6.1467.
Freyaldenhoven, Simon, Christian Hansen, Jorge Pérez Pérez, and Jesse Shapiro. 2021. “Visualization, Identification, and Estimation in the Linear Panel Event-Study Design.” w29170. Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.3386/w29170.
Huntington-Klein, Nick. 2025. The Effect: An Introduction to Research Design and Causality. S.l.: Chapman and Hall/CRC.
MacKinlay, A. Craig. 1997. “Event Studies in Economics and Finance.” Journal of Economic Literature 35 (1): 13–39.
Miller, Douglas L. 2023. “An Introductory Guide to Event Study Models.” Journal of Economic Perspectives 37 (2): 203–30. https://doi.org/10.1257/jep.37.2.203.