Overview
The wide availability of panel data presents unique opportunities for researchers. To take full advantage of panel data, it is essential to understand the intuition behind, and the implications of, the estimators and tests currently available. This course will focus on core panel data techniques, building a strong foundation before moving on to recently developed, cutting-edge methods. The class will also discuss in detail the implementation of panel data estimators and tests in R, a popular open-source statistical software environment. The class will take place over five days to ensure thorough coverage of the core topics.
Objectives
Participants should leave this course able to understand the nuances of panel data estimators and the empirical implications that follow from them. Further, participants should be able to import their data into R, construct appropriate panel data models, estimate them, conduct inference, and interpret the results rigorously to provide sound policy insights. All methods discussed will be accompanied by corresponding R code, data, and references to the wider literature, making it easy for participants to follow along in class and to check their own work once the class has ended and they are engaged in their own analysis.
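To make that workflow concrete, the sketch below shows one way to read a dataset into R, declare its panel structure, and estimate a fixed effects model. It uses the plm package, a widely used choice for panel data in R that is assumed here rather than prescribed by the course; the file name and variable names are hypothetical.

    library(plm)                                       # widely used R panel data package (assumed here)
    dat  <- read.csv("mydata.csv")                     # hypothetical file with columns id, year, y, x
    pdat <- pdata.frame(dat, index = c("id", "year"))  # declare the individual and time dimensions
    fe   <- plm(y ~ x, data = pdat, model = "within")  # fixed effects (within) estimator
    summary(fe)                                        # coefficient estimates and inference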
Course Outline
Day 1:
- Introduction to R using the basic linear regression model (all notes, data and examples will be provided)
- Advantages of panel data in applied work
- Introduction of heterogeneity in the panel framework
- The one-way error component model
- Individual effects
- The fixed effects framework
- The random effects framework
- Fixed versus random effects
- Tests of poolability
- Computer Tutorial (see the R sketch below)
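As a taste of the Day 1 computer tutorial, the sketch below runs a basic linear regression and then fits pooled, fixed effects, and random effects specifications and tests poolability. It assumes the plm package and its bundled Grunfeld investment data; the course materials may use different packages and datasets.

    library(plm)
    data("Grunfeld", package = "plm")                                        # classic firm-level investment panel
    ols  <- lm(inv ~ value + capital, data = Grunfeld)                       # basic linear regression with lm()
    pool <- plm(inv ~ value + capital, data = Grunfeld, model = "pooling")   # pooled OLS
    fe   <- plm(inv ~ value + capital, data = Grunfeld, model = "within")    # fixed effects
    re   <- plm(inv ~ value + capital, data = Grunfeld, model = "random")    # random effects
    pFtest(fe, pool)                                                         # F test of poolability (individual effects)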
Day 2:
- Fixed versus random effects
- The Hausman test
- Test for existence of unobserved effects
- Computer Tutorial (see the R sketch below)
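A sketch of the Day 2 tests, again assuming the plm package and the Grunfeld data: the Hausman test compares the fixed and random effects estimates, while the LM and Wooldridge tests check for the existence of unobserved effects.

    library(plm)
    data("Grunfeld", package = "plm")
    fe   <- plm(inv ~ value + capital, data = Grunfeld, model = "within")
    re   <- plm(inv ~ value + capital, data = Grunfeld, model = "random")
    pool <- plm(inv ~ value + capital, data = Grunfeld, model = "pooling")
    phtest(fe, re)                                  # Hausman test: fixed versus random effects
    plmtest(pool, type = "bp")                      # Breusch-Pagan LM test for unobserved individual effects
    pwtest(inv ~ value + capital, data = Grunfeld)  # Wooldridge's test for unobserved effects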
Day 3:
- The two-way error components model
- The fixed effects framework
- The random effects framework
- Unbalanced panel data estimation
- Computer Tutorial (see the R sketch below)
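The two-way error components model adds time effects alongside individual effects, and unbalanced panels are handled by most R panel estimators without special arguments. A minimal sketch, again assuming plm and the Grunfeld data:

    library(plm)
    data("Grunfeld", package = "plm")
    fe2 <- plm(inv ~ value + capital, data = Grunfeld,
               model = "within", effect = "twoways")   # two-way fixed effects
    re2 <- plm(inv ~ value + capital, data = Grunfeld,
               model = "random", effect = "twoways")   # two-way random effects (balanced panel here;
                                                       # availability of methods can depend on the plm version)
    summary(fe2)
    grun_unbal <- Grunfeld[-c(1, 50, 99), ]            # drop a few rows to create an unbalanced panel
    plm(inv ~ value + capital, data = grun_unbal, model = "within")  # estimation proceeds as before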
Day 4:
- System estimation with panel data
- Hausman-Taylor estimation
- Instrumental variables estimation
- Threshold panel data estimation and inference
- Computer Tutorial (see the R sketch below)
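Instrumental variables estimation in panel settings is typically specified with a two-part formula, with the instrument set written after the vertical bar. The sketch below simulates a small panel with an endogenous regressor and fits a fixed effects IV model with plm; the data and all variable names are illustrative. Hausman-Taylor estimation is also available in plm through its random effects interface.

    library(plm)
    set.seed(42)
    n <- 100; t <- 5                                   # 100 individuals observed for 5 periods
    id <- rep(1:n, each = t); year <- rep(1:t, times = n)
    alpha <- rep(rnorm(n), each = t)                   # unobserved individual effect
    z  <- rnorm(n * t)                                 # outside instrument
    u  <- rnorm(n * t)                                 # structural error
    x1 <- z + 0.5 * u + rnorm(n * t)                   # endogenous regressor (correlated with u)
    x2 <- rnorm(n * t)                                 # exogenous regressor
    y  <- 1 + 2 * x1 + x2 + alpha + u
    pdat <- pdata.frame(data.frame(id, year, y, x1, x2, z), index = c("id", "year"))
    iv_fe <- plm(y ~ x1 + x2 | x2 + z, data = pdat, model = "within")  # fixed effects IV
    summary(iv_fe)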
Day 5:
- Dynamic panel data models
- The Arellano and Bond estimator
- Too many instruments
- The Bazzi and Clemens critique
- A primer on kernel smoothing
- Nonparametric panel data estimation
- Nonparametric estimation of unobserved heterogeneity
- Computer Tutorial (see the R sketch below)
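For the dynamic panel material, the sketch below fits an Arellano-Bond style difference GMM model using plm's pgmm() and the EmplUK data bundled with the package (this mirrors the example shipped with plm). The instrument set after the vertical bar uses all available lags of the dependent variable; restricting that lag range is one simple way of addressing the "too many instruments" problem.

    library(plm)
    data("EmplUK", package = "plm")                   # UK firm panel used by Arellano and Bond (1991)
    ab <- pgmm(log(emp) ~ lag(log(emp), 1:2) + lag(log(wage), 0:1) +
                 log(capital) + lag(log(output), 0:1) | lag(log(emp), 2:99),
               data = EmplUK, effect = "twoways", model = "twosteps")
    summary(ab)                                       # reports Sargan and serial correlation tests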
Pre-Requisites
The course level is appropriate for participants with a background in economics, statistics, mathematics, and/or public policy. A strong background in quantitative analysis is required. Basic knowledge of the statistical software R is desirable. General fluency in statistical and econometric terminology at the (post-)doctoral level is required (ideally acquired in a discipline other than statistics or econometrics).
Software Requirements
This course will rely heavily on implementation in R, a powerful statistical software environment that is freely available. R possesses the facilities to implement an impressive array of panel data methods. Moreover, R's real strength is that users can readily construct their own estimators and tests rather than relying on canned routines, allowing them to stand on their own two feet when conducting empirical research.
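As a small illustration of building an estimator by hand rather than relying on a canned routine, the sketch below codes the within (fixed effects) estimator directly by demeaning the data and checks it against plm's built-in version using the Grunfeld data; the plm package and dataset are assumptions for the example, not course requirements.

    library(plm)
    data("Grunfeld", package = "plm")
    demean <- function(v, g) v - ave(v, g)                 # subtract group (firm) means
    y  <- demean(Grunfeld$inv,     Grunfeld$firm)
    x1 <- demean(Grunfeld$value,   Grunfeld$firm)
    x2 <- demean(Grunfeld$capital, Grunfeld$firm)
    own_fe <- lm(y ~ x1 + x2 - 1)                          # OLS on demeaned data = within estimator
    coef(own_fe)
    coef(plm(inv ~ value + capital, data = Grunfeld, model = "within"))  # coefficients should match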
Online Application
In order to apply for this course, AGRODEP members must complete the following by July 20, 2015:
If you would like to practice using Stata before taking the proficiency test, please review the modules below. Information included covers Stata use for beginners, linear regressions, bivariate regressions, and panel data. You will need to know this information to successfully complete the test.
- Training Module 1: Introduction to Stata
- Training Module 2: Basic Data Management, Graphs, and Log-Files
- Training Module 3: Linear Regressions
- Training Module 4: Bivariate Regressions
- Training Module 5: Panel Data Regressions
Instructor