Discussion questions
We have covered three model evaluation metrics: the residual standard error (RSE), \(r^2\), and the F-test. In what contexts would each metric be preferred over the others? Provide one example for each of the three metrics.
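Before discussing, it may help to compute all three metrics on the same fit so their differences are concrete. The sketch below uses simulated data (the slope, intercept, and noise level are illustrative choices, not from the text) and fits a simple linear regression with NumPy's least-squares solver:

```python
import numpy as np

# Illustrative data: y = 2x + 1 plus Gaussian noise (values chosen arbitrarily)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=1.0, size=x.size)

# Fit OLS via least squares on a design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

n, p = X.shape                      # p counts the intercept
rss = np.sum((y - y_hat) ** 2)      # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)   # total sum of squares

rse = np.sqrt(rss / (n - p))                        # residual standard error
r2 = 1 - rss / tss                                  # coefficient of determination
f_stat = ((tss - rss) / (p - 1)) / (rss / (n - p))  # overall F-statistic

print(f"RSE = {rse:.3f}, r^2 = {r2:.3f}, F = {f_stat:.1f}")
```

Note how the three numbers answer different questions: RSE is in the units of \(y\), \(r^2\) is a unitless proportion of variance explained, and the F-statistic supports a hypothesis test against the intercept-only model.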
OLS chooses parameters that minimize squared error. Maximum likelihood chooses parameters that make the observed data most probable. Why do these two completely different ideas give the same answer in linear regression? What assumption about reality must be hiding underneath?
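For checking your answer after discussion, the following derivation sketches the connection, under the assumption of i.i.d. Gaussian errors \(\epsilon_i \sim \mathcal{N}(0, \sigma^2)\):

```latex
\ell(\beta, \sigma^2)
  = \sum_{i=1}^{n} \log \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\!\left(-\frac{(y_i - x_i^\top \beta)^2}{2\sigma^2}\right)
  = -\frac{n}{2}\log\left(2\pi\sigma^2\right)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n} \left(y_i - x_i^\top \beta\right)^2
```

Only the second term depends on \(\beta\), and it is a negative constant times the residual sum of squares, so maximizing the log-likelihood over \(\beta\) is exactly minimizing squared error.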