Chapter 4 - Summarizing Numerical Data
15.075 Cynthia Rudin

Here are some ways we can summarize data numerically.

• Sample Mean:
      x̄ := (1/n) Σ_{i=1}^n x_i.
  Note: in this class we will work with both the population mean µ and the sample
  mean x̄. Do not confuse them! Remember, x̄ is the mean of a sample taken from the
  population and µ is the mean of the whole population.

• Sample median: order the data values x_(1) ≤ x_(2) ≤ · · · ≤ x_(n). Then
      median := x_((n+1)/2)                    if n is odd,
      median := (1/2)[x_(n/2) + x_(n/2+1)]     if n is even.
  Mean and median can be very different: {1, 2, 3, 4, 500}, where 500 is an outlier.
  The median is more robust to outliers.

• Quantiles/Percentiles: Order the sample, then find x̃_p so that it divides the data
  into two parts where:
  – a fraction p of the data values are less than or equal to x̃_p and
  – the remaining fraction (1 − p) are greater than x̃_p.
  That value x̃_p is the p-th quantile, or 100p-th percentile.

• 5-number summary:
      {x_min, Q1, Q2, Q3, x_max},
  where Q1 = x̃_.25, Q2 = x̃_.5, Q3 = x̃_.75.

• Range: x_max − x_min, measures dispersion.

• Interquartile Range: IQR := Q3 − Q1, a range resistant to outliers.
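These summary statistics can be sketched in a few lines of Python (a minimal sketch using numpy, reusing the {1, 2, 3, 4, 500} outlier example from above; note np.quantile's default 'linear' interpolation is one of several conventions for computing quantiles from a finite sample):

```python
import numpy as np

# The outlier example from the notes: 500 dominates the mean but not the median.
x = np.array([1, 2, 3, 4, 500])

mean = x.mean()                                  # (1/n) * sum of x_i
median = np.median(x)                            # robust to the outlier
q1, q2, q3 = np.quantile(x, [0.25, 0.5, 0.75])   # default 'linear' interpolation
five_number = (x.min(), q1, q2, q3, x.max())     # {x_min, Q1, Q2, Q3, x_max}
rng = x.max() - x.min()                          # range
iqr = q3 - q1                                    # IQR, resistant to the outlier

print(mean, median)  # 102.0 3.0 -- the outlier pulls the mean far from the median
```

The gap between 102.0 and 3.0 is exactly the robustness point the notes make: one outlier moved the mean by two orders of magnitude but left the median at the middle of the bulk of the data.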

• Sample Variance s² and Sample Standard Deviation s:
      s² := 1/(n−1) Σ_{i=1}^n (x_i − x̄)²
  (we will see why the denominator is n−1 later).
  Remember, for a large sample from a normal distribution, 95% of the sample falls in
  [x̄ − 2s, x̄ + 2s].
  Do not confuse s² with σ², which is the variance of the population.

• Coefficient of variation (CV): s/x̄, dispersion relative to the size of the mean.

• z-score:
      z_i := (x_i − x̄)/s.
  – It tells you where a data point lies in the distribution, that is, how many standard
    deviations above/below the mean. E.g. z_i = 3 where the distribution is N(0, 1).
  – It allows you to compute percentiles easily using the z-scores table, or a command
    on the computer.

Now some graphical techniques for describing data.

• Bar chart/Pie chart - good for summarizing data within categories
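As a quick check of these definitions, here is a minimal sketch on a hypothetical five-point sample (numpy's ddof=1 option uses the same n−1 denominator as the sample variance defined above):

```python
import numpy as np

x = np.array([10.0, 12.0, 9.0, 11.0, 13.0])  # hypothetical sample

n = len(x)
xbar = x.mean()
s2 = ((x - xbar) ** 2).sum() / (n - 1)  # sample variance, n-1 denominator
s = np.sqrt(s2)                         # sample standard deviation
cv = s / xbar                           # coefficient of variation
z = (x - xbar) / s                      # z-scores

# numpy's ddof=1 matches the n-1 definition
assert np.isclose(s2, x.var(ddof=1))
# z-scores always have mean 0 and sample standard deviation 1
assert np.isclose(z.mean(), 0.0) and np.isclose(z.std(ddof=1), 1.0)
```

The last assertion is the reason z-scores make observations comparable across distributions: after standardizing, every sample is on the same (mean 0, sd 1) scale.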

• Pareto chart - a bar chart where the bars are sorted

• Histogram

• Boxplot and normplot

• Scatterplot for bivariate data

• Q-Q Plot for 2 independent samples

Hans Rosling
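For the histogram, the bin counts behind the plot can be computed directly (a sketch with numpy's np.histogram on hypothetical data; the counts are the bar heights a histogram of this sample would draw):

```python
import numpy as np

data = np.array([1, 1, 2, 2, 2, 3, 7, 8, 8, 9])  # hypothetical sample

# Four equal-width bins over [1, 9]; counts are the histogram bar heights.
counts, edges = np.histogram(data, bins=4)

print(counts)  # [5 1 0 4]
```

The empty middle bin already hints at a bimodal shape, which is exactly the kind of feature a histogram reveals and a mean or median alone would hide.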

Chapter 4.4: Summarizing bivariate data

Two Way Table

Here's an example:

                   Respiratory Problem?
                   yes    no    row total
  smokers           25    25       50
  non-smokers        5    45       50
  column total      30    70      100

Question: If this example is from a study with 50 smokers and 50 non-smokers, is it
meaningful to conclude that in the general population:
a) 25/30 ≈ 83% of people with respiratory problems are smokers?
b) 25/50 = 50% of smokers have respiratory problems?

Simpson's Paradox

• Deals with aggregating smaller datasets into larger ones.
• Simpson's paradox is when conclusions drawn from the smaller datasets are the opposite
  of conclusions drawn from the larger dataset.
• Occurs when there is a lurking variable and uneven-sized groups being combined.

E.g. Kidney stone treatment (Source: Wikipedia)

Which treatment is more effective?

                 Treatment A       Treatment B
                 78% (273/350)     83% (289/350)

Including information about stone size, now which treatment is more effective?

                   Treatment A               Treatment B
  small stones     group 1: 93% (81/87)      group 2: 87% (234/270)
  large stones     group 3: 73% (192/263)    group 4: 69% (55/80)
  both             78% (273/350)             83% (289/350)

What happened!?
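The reversal in the kidney-stone table can be checked directly; this sketch just recomputes the success rates from the counts given above:

```python
# Success counts and totals from the kidney stone table in the notes.
a_small, a_large = (81, 87), (192, 263)   # Treatment A: small, large stones
b_small, b_large = (234, 270), (55, 80)   # Treatment B: small, large stones

def rate(successes, total):
    return successes / total

# Within each stone size, Treatment A has the higher success rate...
assert rate(*a_small) > rate(*b_small)  # 93% vs 87%
assert rate(*a_large) > rate(*b_large)  # 73% vs 69%

# ...yet aggregated over both sizes, Treatment B looks better:
a_all = rate(81 + 192, 87 + 263)  # 273/350 = 78%
b_all = rate(234 + 55, 270 + 80)  # 289/350 ~ 83%
assert a_all < b_all
```

The reversal happens because the group sizes are uneven: Treatment A was given mostly to the harder large-stone cases, so its aggregate rate is dragged down by the lurking variable (stone size).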

Continuing with bivariate data:

• Correlation Coefficient - measures the strength of a linear relationship between two
  variables:
      sample correlation coefficient r := S_xy / (S_x S_y),
  where
      S_xy = 1/(n−1) Σ_{i=1}^n (x_i − x̄)(y_i − ȳ),
      S_x² = 1/(n−1) Σ_{i=1}^n (x_i − x̄)².
  This is also called the "Pearson Correlation Coefficient."
  – If we rewrite
        r = 1/(n−1) Σ_{i=1}^n [(x_i − x̄)/S_x] [(y_i − ȳ)/S_y],
    you can see that (x_i − x̄)/S_x and (y_i − ȳ)/S_y are the z-scores of x_i and y_i.
  – r ∈ [−1, 1], and r = ±1 only when the data fall along a straight line
  – sign(r) indicates the slope of the line (do y_i's increase as x_i's increase?)
  – always plot the data before computing r to ensure it is meaningful
  – Correlation does not imply causation, it only implies association (there may be
    lurking variables that are not recognized or controlled)
    For example: There is a correlation between declining health and increasing wealth.

• Linear regression (in Ch 10):
      (y − ȳ)/S_y = r · (x − x̄)/S_x.
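The definition of r, and its equivalent z-score form, can be sketched directly (hypothetical data; np.corrcoef is numpy's built-in Pearson correlation):

```python
import numpy as np

# Hypothetical bivariate sample
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 5.0])

n = len(x)
s_xy = ((x - x.mean()) * (y - y.mean())).sum() / (n - 1)  # sample covariance
s_x = x.std(ddof=1)
s_y = y.std(ddof=1)
r = s_xy / (s_x * s_y)

# Equivalent form: average product of z-scores (with the n-1 convention)
zx, zy = (x - x.mean()) / s_x, (y - y.mean()) / s_y
r_z = (zx * zy).sum() / (n - 1)

assert np.isclose(r, r_z)
assert np.isclose(r, np.corrcoef(x, y)[0, 1])  # matches numpy's Pearson r
```

Here r ≈ 0.77: a fairly strong positive linear association, though as the notes warn, a plot of the data should precede any interpretation of the number.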

Chapter 4.5: Summarizing time-series data

• Moving averages. Calculate the average over a window of previous timepoints:
      MA_t = (x_{t−w+1} + · · · + x_t)/w,
  where w is the size of the window. Note that we make the window smaller at the
  beginning of the time series, when t < w.

  Example

  To use moving averages for forecasting, given x_1, . . . , x_{t−1}, let the predicted
  value at time t be x̂_t = MA_{t−1}. Then the forecast error is:
      e_t = x_t − x̂_t = x_t − MA_{t−1}.

• The Mean Absolute Percent Error (MAPE) is:
      MAPE = [1/(T−1) Σ_{t=2}^T |e_t/x_t|] · 100%.
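The moving-average forecast and MAPE can be sketched as follows (hypothetical series, window w = 3; the window shrinks at the start of the series as described above):

```python
import numpy as np

x = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])  # hypothetical series
w = 3

def moving_average(series, t, w):
    """MA_t: mean of the last min(w, t+1) values ending at index t (0-based)."""
    return series[max(0, t - w + 1):t + 1].mean()

# Forecast x_hat_t = MA_{t-1}; errors e_t = x_t - MA_{t-1} for t = 1..T-1
errors = [x[t] - moving_average(x, t - 1, w) for t in range(1, len(x))]

# MAPE: mean of |e_t / x_t| over the forecasted points, as a percentage
mape = 100 * np.mean([abs(e / x[t]) for t, e in enumerate(errors, start=1)])
```

For this series the forecast errors are (2, 0, 2, 0, 2) and the MAPE comes out just under 9.3%; a flatter series would give a smaller MAPE, a noisier one a larger MAPE.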

The MAPE looks at the forecast error e_t as a fraction of the measurement value x_t.
Sometimes as measurement values grow, errors grow too; the MAPE helps to even this
out. For MAPE, x_t can't be 0.

• Exponentially Weighted Moving Averages (EWMA).
  – It doesn't completely drop old values:
        EWMA_t = ωx_t + (1 − ω)EWMA_{t−1},
    where EWMA_0 = x_0 and 0 ≤ ω ≤ 1 is a smoothing constant.

  Example

  – here ω controls the balance of recent data to old data
  – called "exponentially" from the recursive formula:
        EWMA_t = ω[x_t + (1 − ω)x_{t−1} + (1 − ω)²x_{t−2} + . . . ] + (1 − ω)^t EWMA_0
  – the forecast error is thus:
        e_t = x_t − x̂_t = x_t − EWMA_{t−1}
  – HW? Compare MAPE for MA vs EWMA

• Autocorrelation coefficient. Measures correlation between the time series and a
  lagged version of itself. The k-th order autocorrelation coefficient is:
      r_k := Σ_{t=k+1}^T (x_{t−k} − x̄)(x_t − x̄) / Σ_{t=1}^T (x_t − x̄)².

  Example
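Both the EWMA recursion and the autocorrelation coefficient are short to implement; a sketch on a hypothetical series with ω = 0.5:

```python
import numpy as np

x = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])  # hypothetical series

def ewma(series, omega):
    """EWMA_t = omega*x_t + (1-omega)*EWMA_{t-1}, with EWMA_0 = x_0."""
    out = [series[0]]
    for xt in series[1:]:
        out.append(omega * xt + (1 - omega) * out[-1])
    return np.array(out)

def autocorr(series, k):
    """k-th order autocorrelation coefficient r_k (k >= 1)."""
    d = series - series.mean()
    return (d[k:] * d[:-k]).sum() / (d ** 2).sum()

smoothed = ewma(x, omega=0.5)  # [10, 11, 11, 12, 12, 13]
r1 = autocorr(x, 1)            # lag-1 autocorrelation
```

With ω = 0.5 each smoothed value splits its weight evenly between the new observation and the running average, so the zig-zag in x is visibly damped; the negative r1 for this series reflects that same alternation (an up-step tends to be followed by a down-step).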

MIT OpenCourseWare / ESD.07J Statistical Thinking and Data AnalysisFall 2011For information about citing these materials or our Terms of Use, visit:
