Recommended reference: Numerical Recipes Ch. 14.
These code examples (the same ones presented in lecture) are relevant to this exercise.
Plot a histogram of the samples, using Scott's rule of thumb for the histogram bin size (dx=2.7*gamma/n^(1/3)). Estimate the sampling errors in your bins using Poisson statistics (even just the simple analytical form sqrt(n_i) for bin i) and confirm by random trials that the error estimates are realistic.
Try this same exercise with other bin sizes. What is accomplished by adopting Scott's rule of thumb?
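A minimal sketch of the histogram-plus-error exercise. It assumes a Gaussian sample and takes gamma to be the sample standard deviation (the exercise's gamma convention may differ); the seed, sample size, and number of trials are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
x = rng.standard_normal(n)  # assumed Gaussian sample for illustration

# Bin width from the rule quoted in the exercise, dx = 2.7*gamma/n**(1/3),
# taking gamma as the sample standard deviation (an assumption).
gamma = np.std(x)
dx = 2.7 * gamma / n ** (1 / 3)
nbins = int(np.ceil((x.max() - x.min()) / dx))
edges = x.min() + dx * np.arange(nbins + 1)
counts, _ = np.histogram(x, bins=edges)

# Poisson error estimate per bin: sqrt(n_i)
err = np.sqrt(counts)

# Confirm by random trials: the empirical scatter of each bin's count
# across many fresh realizations should track sqrt(mean count).
trials = np.array([np.histogram(rng.standard_normal(n), bins=edges)[0]
                   for _ in range(200)])
empirical_scatter = trials.std(axis=0)
imax = np.argmax(counts)
ratio = empirical_scatter[imax] / np.sqrt(trials.mean(axis=0)[imax])
```

For well-populated bins the ratio should come out close to 1, which is what "the error estimates are realistic" means in practice.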
Try a different experiment, this time with a more general Cauchy distribution, with PDF of 1/(pi*gam)/(1+((x-x0)/gam)**2) and CDF equal to 1/pi*arctan((x-x0)/gam)+0.5. You can generate random samples using x=np.random.standard_cauchy(n); x*=gam; x+=x0. How well can you discriminate between the true "gam" and a model one using the KS test?
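A sketch of the KS comparison, using the sampling recipe and CDF given above. The particular values of x0, gam, n, the seed, and the 1.5*gam "wrong" model are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x0, gam, n = 1.0, 2.0, 2000  # illustrative true parameters

# Generate Cauchy samples as in the exercise
x = rng.standard_cauchy(n)
x *= gam
x += x0

# CDF of the general Cauchy distribution, as given in the exercise
def cauchy_cdf(t, x0, gam):
    return np.arctan((t - x0) / gam) / np.pi + 0.5

# One-sample KS test against the true gam and against a wrong one
D_true, p_true = stats.kstest(x, lambda t: cauchy_cdf(t, x0, gam))
D_wrong, p_wrong = stats.kstest(x, lambda t: cauchy_cdf(t, x0, 1.5 * gam))
```

With n this large, a 50% error in gam is rejected decisively (p_wrong is tiny), while the true model is not; shrinking n or the gam offset makes the discrimination progressively harder.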
Finally, run your sample through the Shapiro-Wilk test for Gaussianity. (It is unclear why I am asking you to do this.)
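The Shapiro-Wilk test lives in scipy as `stats.shapiro`. A quick sketch, comparing a Gaussian and a Cauchy sample of assumed size 500 (seed and size are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gauss = rng.standard_normal(500)
cauchy = rng.standard_cauchy(500)

# Shapiro-Wilk returns the W statistic and a p-value under the
# null hypothesis that the data are Gaussian
W_g, p_g = stats.shapiro(gauss)
W_c, p_c = stats.shapiro(cauchy)
```

The Cauchy sample's heavy tails drag W well below 1 and the p-value to essentially zero, while the Gaussian sample stays consistent with the null.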
Try with one Gaussian and one Cauchy variate with zero median and comparable FWHM. How big an N do you need to discriminate between them? What about the case of two Gaussian (or two Cauchy) samples with similar (but not exact) medians and widths (FWHM or std dev)?
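A sketch of the matched-FWHM comparison with the two-sample KS test. It uses the standard relations FWHM = 2*sqrt(2 ln 2)*sigma for a Gaussian and FWHM = 2*gam for a Cauchy; the sample sizes scanned and the seed are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Match the FWHM of the two distributions:
# Gaussian FWHM = 2*sqrt(2 ln 2)*sigma, Cauchy FWHM = 2*gam
sigma = 1.0
gam = np.sqrt(2 * np.log(2)) * sigma

def ks_p(n):
    """Two-sample KS p-value for matched-FWHM Gaussian vs Cauchy samples."""
    g = rng.standard_normal(n) * sigma
    c = rng.standard_cauchy(n) * gam
    return stats.ks_2samp(g, c).pvalue

# Scan N to see roughly where the test starts to discriminate
pvals = {n: ks_p(n) for n in (50, 200, 1000)}
```

Because both samples have zero median and the same FWHM, the discrimination rests entirely on the tails; by N ~ 1000 the rejection is decisive, while at small N single realizations can easily look consistent.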
Try also the Anderson-Darling test on pairs of Gaussian and Cauchy distributed samples.
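For pairs of samples the relevant scipy routine is the k-sample Anderson-Darling test, `stats.anderson_ksamp`. A sketch, reusing the matched-FWHM scaling from above (sample size and seed are arbitrary; note scipy caps the returned significance level to the range 0.001-0.25):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500
g = rng.standard_normal(n)
c = rng.standard_cauchy(n) * np.sqrt(2 * np.log(2))  # matched FWHM

# k-sample Anderson-Darling test on a same-distribution pair
# and on the Gaussian/Cauchy pair
res_same = stats.anderson_ksamp([rng.standard_normal(n),
                                 rng.standard_normal(n)])
res_diff = stats.anderson_ksamp([g, c])
```

The Anderson-Darling statistic weights the tails more heavily than KS does, so it tends to separate a Gaussian from a Cauchy at smaller N; comparing `res_same.statistic` with `res_diff.statistic` makes the contrast obvious.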