Epidemiology is a cumulative discipline, so while the emphasis for Exam 2 is on the material covered since the first exam, you are still expected to know the earlier concepts that this material builds on, such as rates.
Epidemiology Calculations/Measures of Association
Odds Ratios and their Variance (covered in biostats)
Matched Odds Ratios
Relative Risks and their Variance (covered in biostats)
Confidence Intervals (covered in biostats)
Stratum-specific odds ratios (or stratum-specific relative risks)
Adjusted odds ratios (or adjusted relative risks) [Conceptual understanding, not calculation of the MH adjusted measure]
Attributable Risk Measures (in the exposed)
Understand the distinction between relative measures of association and absolute measures
(A brief computational sketch of the odds ratio, relative risk, and their confidence intervals follows this list.)
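As a refresher on the mechanics (not part of the course materials), here is a minimal sketch, using an invented 2x2 table, of how the odds ratio, the relative risk, and their 95% confidence intervals can be computed with the usual log-scale variance approximations (Woolf's method for the OR):

```python
import math

# Minimal sketch: odds ratio, relative risk, and 95% confidence intervals
# from a single 2x2 table. All counts are invented for illustration.
#
#                  disease     no disease
a, b = 30, 70      # exposed
c, d = 15, 85      # unexposed

odds_ratio = (a * d) / (b * c)                    # (30*85)/(70*15) ~ 2.43
relative_risk = (a / (a + b)) / (c / (c + d))     # 0.30 / 0.15 = 2.0

z = 1.96  # standard normal value for a 95% confidence interval

# Variance of ln(OR) (Woolf's method) and the resulting CI
var_ln_or = 1/a + 1/b + 1/c + 1/d
or_ci = (math.exp(math.log(odds_ratio) - z * math.sqrt(var_ln_or)),
         math.exp(math.log(odds_ratio) + z * math.sqrt(var_ln_or)))

# Variance of ln(RR) and the resulting CI
var_ln_rr = 1/a - 1/(a + b) + 1/c - 1/(c + d)
rr_ci = (math.exp(math.log(relative_risk) - z * math.sqrt(var_ln_rr)),
         math.exp(math.log(relative_risk) + z * math.sqrt(var_ln_rr)))

print(f"OR = {odds_ratio:.2f}, 95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f}")
print(f"RR = {relative_risk:.2f}, 95% CI {rr_ci[0]:.2f}-{rr_ci[1]:.2f}")
```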
Knowledge you have gained for the 2nd exam: study designs and their primary limitations and strengths, including cohort studies (retrospective and prospective) and case-control studies; experimental studies (RCTs) are not important for this exam. Of course, understanding cross-sectional and ecologic studies is important as well.
Exam preparation
⢠  Re-read Gordis and answer questions at the end of each chapter
⢠  Re-read all PowerPoints. Review the additional content posted as a practice/exercise/discussion.
⢠  Re-do homeworks with the book closed! Understand what you got wrong and Why
⢠  Work though practice questions in the Midterm 2 Review tab on Bb
Why is epidemiology important?
To answer this, remember back to Week 1 when you learned the definition of epidemiology ("the study of the distribution and determinants of health-related states or events in specified populations and the application of this study to control health problems"). Further, you learned that disease or health events are not randomly distributed in the population; thus, what we learn from using epidemiologic methods gives us the opportunity to intervene through primary, secondary, or tertiary prevention approaches.
We use epidemiologic reasoning to: (1) describe a disease or health event in terms of person, place, and time; (2) determine whether there is an association between an exposure (or characteristic) and a disease or health event; and (3) determine whether the association is causal. You learned that to describe disease or health events in terms of person, place, or time you needed to understand ratios, rates, and/or proportions. You learned about prevalence, cumulative incidence, and incidence density measures. Each of these measures helps you describe disease or health events.
Prevalence gives us a good idea about disease burden in a population, but it tells us nothing about disease risk.
Incidence measures tell us about disease risk (or the probability of disease development). Cumulative incidence is a "pure risk" measure, with new cases of disease in the numerator and, in the denominator, the population at risk of developing the disease who are observed during the entire defined period. Incidence density also has new cases of disease in the numerator, but its denominator is the sum of the units of time that each person at risk is observed; as such, it is a "pure" rate in that it has a time component in the denominator (person-time).
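To make these definitions concrete, here is a minimal sketch with invented counts and person-time (hypothetical numbers, not course data):

```python
# Minimal sketch of prevalence, cumulative incidence, and incidence density.
# All counts and person-time below are hypothetical, for illustration only.

existing_cases = 50          # people with the disease at a point in time
total_population = 1_000     # population examined at that point in time
prevalence = existing_cases / total_population            # 0.05 (5%)

new_cases = 20               # new cases during the follow-up period
population_at_risk = 400     # disease-free at the start, observed the whole period
cumulative_incidence = new_cases / population_at_risk     # 0.05 = risk over the period

person_years = 3_500         # sum of the time each at-risk person was actually observed
incidence_density = new_cases / person_years              # cases per person-year

print(f"Prevalence: {prevalence:.3f}")
print(f"Cumulative incidence (risk): {cumulative_incidence:.3f}")
print(f"Incidence density: {incidence_density:.4f} cases per person-year")
```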
Some examples of these measures are:
•  Prevalence of hypertension in a community.
•  Mortality rates (crude, age-specific, cause-specific, etc.).
•  Attack rates (including secondary attack rates).
•  Incidence of cardiovascular disease in a population known to be free of cardiovascular disease at the beginning of the period of study.
•  Incidence rate of mesothelioma in a cohort of workers in a benzene factory, where one can obtain person-time exposed to benzene and determine the development of this disease.
You then began to investigate whether a disease/outcome was associated with some exposure or characteristic. Here is where you began the process of comparing (a fundamental aspect of epidemiology). You learned that while disease rates might look different when comparing across groups (City A to City B), sometimes those apparent differences are due to an outside variable that differs between the groups (e.g., age distribution). This was really your first exposure to confounding. To eliminate the effects of the differential age distribution between the two cities, you learned to do rate adjustment.
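As a reminder of what rate adjustment does mechanically, here is a minimal sketch of direct age standardization; the age groups, age-specific rates, city populations, and standard population are all invented for illustration:

```python
# Minimal sketch of direct age adjustment. All populations and age-specific
# death rates below are invented for illustration.

age_groups = ["<40", "40-64", "65+"]

# Both cities have the SAME age-specific death rates (per person per year)...
rates = {"<40": 0.001, "40-64": 0.005, "65+": 0.030}

# ...but different age distributions (City B is older).
city_a_pop = {"<40": 60_000, "40-64": 30_000, "65+": 10_000}
city_b_pop = {"<40": 30_000, "40-64": 30_000, "65+": 40_000}

standard_pop = {"<40": 500_000, "40-64": 300_000, "65+": 200_000}

def crude_rate(rates, population):
    deaths = sum(rates[g] * population[g] for g in age_groups)
    return deaths / sum(population.values())

def adjusted_rate(rates, standard):
    # Apply the city's age-specific rates to a common standard population.
    expected_deaths = sum(rates[g] * standard[g] for g in age_groups)
    return expected_deaths / sum(standard.values())

print("Crude:", crude_rate(rates, city_a_pop), "vs", crude_rate(rates, city_b_pop))
print("Age-adjusted:", adjusted_rate(rates, standard_pop), "for both cities")
# The crude rates differ only because of the age distributions; once the same
# standard population is used, the apparent difference between the cities disappears.
```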
Then you began to study the various measures of association and the analytical study designs that epidemiologists use to assess the relationship between an exposure (or a characteristic) and a health outcome of some kind.
Ecologic studies are often a first step in assessing the relationship between two factors. An example of this is comparing the mortality rate from breast cancer across countries with the number of prescriptions for hormone replacement therapy in those countries. These studies are not based on individuals but on countries (or other such groups). A significant problem that can occur with this design is the ecologic fallacy: correlation between exposure and disease at the group level is used to infer an association between exposure and disease at the individual level (individual-level risk). However, these kinds of studies are often used as hypothesis-generating studies. The measure of association here is simply a statistical correlation.
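If it helps to see the arithmetic, here is a minimal sketch of a group-level (ecologic) correlation; the country-level values, and the direction of the correlation, are entirely invented:

```python
# Minimal sketch of an ecologic (group-level) correlation.
# Country-level values below are invented for illustration only.

# One pair per country: (HRT prescriptions per 1,000 women,
#                        breast cancer mortality per 100,000 women)
countries = [(12, 18.0), (25, 21.5), (40, 24.0), (55, 27.5), (70, 30.0)]

n = len(countries)
xs = [x for x, _ in countries]
ys = [y for _, y in countries]
mean_x, mean_y = sum(xs) / n, sum(ys) / n

cov = sum((x - mean_x) * (y - mean_y) for x, y in countries)
var_x = sum((x - mean_x) ** 2 for x in xs)
var_y = sum((y - mean_y) ** 2 for y in ys)

pearson_r = cov / (var_x * var_y) ** 0.5
print(f"Country-level correlation: r = {pearson_r:.2f}")
# A strong group-level correlation does NOT tell us whether the individual women
# who used HRT were the ones who developed the disease (the ecologic fallacy).
```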
Cross-sectional studies are those in which the information on disease and exposure (or other characteristics) is obtained at the same time (e.g., in a survey). Data on disease are in the form of prevalence measures. This study design is another step in hypothesis generation, but the major concern here is that since exposure and disease are measured at the same time, one cannot conclude which came first, so no causal associations can be determined. A common way to test for a simple association here is the chi-square (χ²) test, but measures of association can also be used with cross-sectional data (e.g., the prevalence odds ratio).
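As an illustration (invented survey counts, not course data), here is a minimal sketch of the chi-square statistic and the prevalence odds ratio from a cross-sectional 2x2 table:

```python
# Minimal sketch: chi-square test and prevalence odds ratio from a cross-sectional
# 2x2 table. The counts are hypothetical survey data.

#                  disease (prevalent)   no disease
a, b = 40, 160     # exposed
c, d = 20, 280     # unexposed

# Prevalence odds ratio: odds of disease among exposed / odds among unexposed
prevalence_or = (a * d) / (b * c)   # (40*280)/(160*20) = 3.5

# Chi-square statistic (1 df, no continuity correction): sum of (observed - expected)^2 / expected
n = a + b + c + d
expected = [
    (a + b) * (a + c) / n, (a + b) * (b + d) / n,
    (c + d) * (a + c) / n, (c + d) * (b + d) / n,
]
observed = [a, b, c, d]
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(f"Prevalence OR = {prevalence_or:.2f}, chi-square = {chi_square:.2f}")
# With 1 df, a chi-square above the critical value of 3.84 is significant at alpha = 0.05.
```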
Case-control studies are a frequently used study design in epidemiology. You should review the strengths and weaknesses of this design, but you will remember that we can't calculate incidence rates, so we cannot estimate risk. The measure of association calculated with these data is the odds ratio. These studies are "ideal" for rare diseases.
Cohort studies are the closest observational study to an experimental study. We can generate incidence rates, which is why we "like" these studies, and we can estimate relative risk. They have major limitations in that they are expensive and can take a lot of time. These studies are "ideal" for rare exposures.
IGNORE: Experimental studies (randomized clinical trials) are not observational studies but experimental studies. The researcher is imposing some kind of intervention (a drug or a placebo, for example) and waiting to observe results. This is the "gold standard" for assessing cause and effect. However, most of epidemiology is not clinical trials but observational studies. (NOTE: we will cover RCTs after the midterm, so this design will not be included in Midterm 2.)
Let's pause for a minute and remember that we rely on statistical theory and statistics to generate an estimate of the association between an exposure (or characteristic) and a disease (or health event). Thus, with each design (except the ecologic study design) we generate an estimated effect (we are estimating what is happening at the population level using the "sample" of individuals in our study).
We generate odds ratios, relative risks, and rate ratios, and then their respective confidence intervals. A measure of association greater than 1.0 is suggestive of a risk factor, while a measure of association less than 1.0 is suggestive of a protective factor. A measure of association equal to 1.0 means there is no association.
Statistical significance for these measures is obtained in two ways. From an epi perspective, the chi-square (χ²) test tells us whether the data we observed differ from what is expected. If the value of the χ² statistic exceeds the appropriate critical value, then we can say that the association we found is statistically significant (as it differs from what is expected). A second method for assessing statistical significance is generating confidence intervals. However, it is important to realize that the utility of the confidence interval is in determining the precision of the point estimate (i.e., the measure of association).
If the confidence interval contains the value 1.0, this is an indication that the association is not statistically significant, because the null value (no association) cannot be ruled out. If the confidence interval does not contain 1.0, this is an indication that the association is statistically significant. Further, we evaluate the width of a confidence interval to help us determine how "precise" our measure of association is. A wide confidence interval indicates low precision, which is most likely due to a small sample size. If the interval does not include 1.0, however, this would still indicate statistical significance.
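Putting the last few points together, here is a minimal sketch of how a ratio measure and its confidence interval are read; the second example's numbers are invented, while the first reuses the OR from the example at the end of this review:

```python
# Minimal sketch of interpreting a ratio measure of association and its 95% CI.

def interpret(point_estimate, ci_lower, ci_upper):
    if point_estimate > 1.0:
        direction = "suggests a risk factor"
    elif point_estimate < 1.0:
        direction = "suggests a protective factor"
    else:
        direction = "suggests no association"

    # Significant (at the 5% level) when the CI excludes the null value 1.0
    significant = not (ci_lower <= 1.0 <= ci_upper)
    width = round(ci_upper - ci_lower, 2)   # wider interval = less precise estimate

    return direction, significant, width

print(interpret(0.68, 0.43, 0.92))   # ('suggests a protective factor', True, 0.49)
print(interpret(1.20, 0.85, 1.70))   # ('suggests a risk factor', False, 0.85)
```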
You learned about various types of bias, confounding, effect modification, and causal criteria.
•  Bias is an error in study design or conduct, and the result is that the observed measure of association is invalid.
•  Confounding, while a true phenomenon (not an error), distorts the observed measure of association of an exposure-outcome relationship.
•  Interaction (effect modification) means that the effect of the exposure on the outcome differs depending on whether or not the effect modifier is present (or depending upon levels of the effect modifier).
We adjust for confounding (i.e., we report the adjusted measure of association). When we detect interaction, we report the stratum-specific measures of association. (Be sure to review the slides on this topic, Week 10.)
A couple of other important notes here:
1) If you have an indication that effect modification exists, then reporting an adjusted odds ratio (or relative risk) is not appropriate. The underlying theory about adjusted odds ratios is that the stratum-specific measures are not that different from one another (see the sketch after these notes);
2) Effect modification (interaction) can occur in different ways (remember that there are different models of interaction: additive and multiplicative).
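To make the distinction concrete, here is the sketch referred to above: it computes a crude odds ratio and stratum-specific odds ratios from invented counts. Roughly, stratum-specific estimates that are similar to each other but differ from the crude estimate point toward confounding (report an adjusted measure), while stratum-specific estimates that clearly differ from each other point toward effect modification (report the stratum-specific measures):

```python
# Minimal sketch: crude vs stratum-specific odds ratios. Counts are invented.

def odds_ratio(a, b, c, d):
    """2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# Stratum 1 and stratum 2 of a hypothetical third variable (e.g., age group)
stratum_1 = (10, 40, 20, 160)    # OR = (10*160)/(40*20) = 2.0
stratum_2 = (60, 120, 25, 100)   # OR = (60*100)/(120*25) = 2.0

# Crude (collapsed) table: add the cell counts across strata
crude = tuple(x + y for x, y in zip(stratum_1, stratum_2))

print("Crude OR:", round(odds_ratio(*crude), 2))        # ~2.53
print("Stratum 1 OR:", round(odds_ratio(*stratum_1), 2))
print("Stratum 2 OR:", round(odds_ratio(*stratum_2), 2))
# Similar stratum-specific ORs that differ from the crude OR suggest confounding,
# so an adjusted OR is appropriate; clearly different stratum-specific ORs suggest
# effect modification, so report the stratum-specific ORs instead.
```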
As to the issue of bias, there are many forms of bias, as illustrated in your readings, and some are related to a particular study design. It is important to remember that the reason we need to understand bias is that we are trying to explain the associations we observe. We need to think about why the results we obtain are "good" or not: did we have differential misclassification? Is there the potential for recall bias? Were our controls selected in a biased manner? These are important issues to consider because we are doing "observational studies," not experiments (for the most part).
An example to consider: Some ecologic studies suggested that rates of colon cancer in women in various countries were negatively correlated with prescriptions for hormone replacement therapy. Subsequent cross-sectional studies demonstrated the same results. Researchers decided to conduct a case-control study on this association in women. A case-control study approach was used because access to women with colon cancer could be obtained from a cancer registry. Controls were selected from the spouses of males with colon cancer.
The initial analysis generated an odds ratio of 0.68 (95% Confidence Interval 0.43-0.92) between ever use of hormone replacement therapy and colon cancer. What is your interpretation of this? The next step was to consider whether this association was confounded by anything. How would you do this? Consider other potential confounding variables (age, race, smoking, other health conditions, diet, etc.). Consider the potential for effect modification. How would you do this?
It turns out that several variables were confounders and one variable appeared to be an effect modifier (but it was not statistically significant). Regression analyses were conducted to control for confounding and a final result was obtained: OR = 0.78 (95% Confidence Interval 0.67-0.93). Now what do you conclude? What kinds of biases are possible here?