Empiricism works in the material domain of physical reality. While it can be done by anyone, in practice most empirical inquiries are expensive and require specialized equipment, and so are usually only conducted by trained scientists working at institutions with laboratories set up for the purpose of enabling their inquiries. Example disciplines include physics, chemistry, biology, volcanology, and pharmacology.
Empiricism works by constructing, embellishing, and re-evaluating an understanding of physical reality that's composed of scientific theories. Theories start off as plausible explanations, which are then decomposed into specific, testable experimental hypotheses. After the experiment is conducted, the results are analyzed to determine whether a reasonable default assumption, the null hypothesis, can be rejected.
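To make the chain from theory to verdict concrete, here is a minimal sketch in Python. The scenario (a suspicion that a die is loaded toward sixes) and every number in it are invented for illustration; the only real machinery is scipy's exact binomial test.

```python
from scipy.stats import binomtest

# Theory: this die is weighted toward sixes.
# Testable hypothesis: the probability of rolling a six is greater than 1/6.
# Null hypothesis: the die is fair, so P(six) = 1/6.

rolls = 120   # hypothetical experiment: roll the die 120 times
sixes = 31    # hypothetical observation: 31 sixes (a fair die expects about 20)

result = binomtest(sixes, rolls, p=1/6, alternative="greater")
print(f"p-value: {result.pvalue:.4f}")

# If the p-value falls below a threshold chosen *before* the experiment,
# the null hypothesis (a fair die) is rejected in favour of the theory.
alpha = 0.05
print("reject null" if result.pvalue < alpha else "fail to reject null")
```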
Much of empiricism is also subject to peer review, in which third-party scientists with relevant background knowledge evaluate the quality and results of an experiment. Furthermore, because the experiment design is published along with the results, the experiment can be re-run by other groups of scientists to confirm or refute the results.
Empirical science is founded on a number of assumptions:
That the results being examined are objective, or subject-invariant. This means that any scientist (with the appropriate resources) should be able to duplicate any experiment, and that the results of the experiment should be applicable to everyone.
That the universe is isotropic and homogeneous, i.e., that the process behind physical reality is generally the same in all places, directions, and times. Essentially, this says that scientific experiments don't intrinsically depend on where they are performed, which direction or orientation they are performed along, or when they are conducted [1].
[1]Note that modern physics bends this rule somewhat: relativistic and quantum effects mean that physical results begin to act differently at high speeds, low temperatures, and small scales. Nevertheless, if one controls for these variables, one reclaims the assumption of isotropy.
Empirical truth operates within paradigms [2], or broader consensuses in the scientific community. The general pattern of empirical progress is that a paradigm will lay out a way of thinking about the world, and the scientific community explores the limits of the paradigm's application in small, incremental advances. Eventually, however, the limits become too constraining, the failure cases too persistent [3], and the workarounds too clunky, at which point the community becomes open to considering a newer, more powerful paradigm to resolve these contradictions. Once a new paradigm emerges that appears to explain the existing empirical phenomena as well as the contradictions that plagued the old paradigm, the consensus will rapidly shift to the new paradigm, and then proceed in the slow, incremental fashion again from there.
[2]Fun fact: the word 'paradigm' originally referred to what we might today call "the canonical example". All that changed in the wake of Thomas Kuhn's Structure of Scientific Revolutions, which introduced the term paradigm shift to the world.
[3]Consider the beta-amyloid hypothesis for Alzheimer's, which says the disease is caused by a build-up of protein plaques in the brain. For all the years and money that have been put into research for treatments, there have been zero successes [3-1].
[3-1]It's actually an excellent example of ideological failure.
The key observation here is that the body of empirical knowledge is not monolithic, and its progress is not constant. Despite its assumption of objectivity, it is very much a social phenomenon.
Most empirical science depends on the scientific method:
See an interesting phenomenon that you would like to learn more about.
Develop a theory about how the phenomenon works, and then select a testable hypothesis about the phenomenon that would distinguish a world in which your theory is true from one in which it isn't.
Design an experiment that tests this hypothesis.
Select a null hypothesis, a statement that reflects the state of the world as though the phenomenon did not exist.
Determine exactly what evidence would need to be gathered in order to test the hypothesis.
Determine the significance threshold of the experiment, which is how strong the evidence would need to be in order to satisfy you that the null hypothesis should be rejected (and, relatedly, the statistical power: the probability of detecting the effect if it is really there). Note that this threshold is arbitrary, though an unfortunate number of researchers grab p=0.05 and run with it blindly [5].
Conduct the experiment.
Set up your environment to control for extraneous influences [4].
Collect your data in a manner that prevents you, as the experimenter, from subtly messing with the results.
Analyze the data according to your original test plan, and draw conclusions based on that plan (a sketch of the full sequence in code follows this list).
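To tie the steps together, here is a minimal sketch of the whole sequence in Python. The scenario (testing whether a hypothetical fertilizer increases crop yield), the simulated measurements, and the pre-registered threshold are all invented for illustration; the analysis itself is a standard two-sample t-test from scipy.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Phenomenon: plots treated with a (hypothetical) fertilizer seem to yield more.
# Theory: the fertilizer increases yield.
# Hypothesis: mean yield of treated plots > mean yield of untreated plots.
# Null hypothesis: the fertilizer makes no difference to mean yield.

# Pre-registered analysis plan: one-sided two-sample t-test, alpha chosen up front.
alpha = 0.01

# "Conduct the experiment" -- here simulated data stands in for field measurements.
control   = rng.normal(loc=100.0, scale=10.0, size=40)   # untreated plots
treatment = rng.normal(loc=108.0, scale=10.0, size=40)   # treated plots

# Analyze according to the original plan; no peeking or switching tests afterwards.
stat, p_value = ttest_ind(treatment, control, alternative="greater")

print(f"t = {stat:.2f}, p = {p_value:.4g}")
if p_value < alpha:
    print("Reject the null hypothesis: the data favour the fertilizer theory.")
else:
    print("Fail to reject the null hypothesis: no convincing effect was seen.")
```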
[4]Different disciplines will adapt and modify the scientific method to suit their particular area of study. For example, some disciplines (like volcanology) aren't capable of setting up controlled experiments, and so must draw observations as they appear naturally, beyond the control of the empiricist.
[5]
Q:
What does p-value mean?
A:
The p-value is the probability that an experiment would see a result at least as extreme as the one actually observed, provided that the null hypothesis were true. Repeat after me: p-value is not a margin of confidence!
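That definition can be checked directly by simulation. The sketch below uses invented numbers (60 heads in 100 flips of a coin presumed fair under the null) and estimates the p-value as the fraction of simulated null-hypothesis experiments that come out at least as extreme as the observation, then compares it against scipy's exact test.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(1)

n_flips = 100          # hypothetical experiment: 100 coin flips
observed_heads = 60    # hypothetical observation
expected_heads = 50    # what the null hypothesis (a fair coin) predicts on average

# Simulate many experiments in a world where the null hypothesis is true...
simulated_heads = rng.binomial(n=n_flips, p=0.5, size=200_000)

# ...and count how often they are at least as extreme as what we actually saw.
extreme = np.abs(simulated_heads - expected_heads) >= abs(observed_heads - expected_heads)
print(f"simulated p-value: {extreme.mean():.4f}")

# The exact two-sided binomial test gives roughly the same number.
print(f"exact p-value:     {binomtest(observed_heads, n_flips, p=0.5).pvalue:.4f}")
```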
Q:
What are some other common p-values?
A:
That depends on the discipline. As the p-value threshold gets smaller, the amount of data required goes up, and with it the cost of the experiment. Physics is generally the most rigorous discipline: particle physicists typically demand a five-sigma result (p ≈ 3 × 10⁻⁷, one-sided) before claiming a discovery.
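For a feel for how such thresholds compare, the snippet below converts a few commonly quoted significance levels between "number of sigmas" and one-sided p-values using the normal distribution; the particular levels listed are just illustrative.

```python
from scipy.stats import norm

# One-sided tail probability for a given number of standard deviations ("sigmas").
for sigma in (1.645, 2, 3, 5):
    print(f"{sigma:>5} sigma  ->  p = {norm.sf(sigma):.2e}")

# 1.645 sigma corresponds to the ubiquitous p = 0.05 (one-sided);
# 5 sigma, the particle-physics discovery standard, is about p = 2.9e-7.
```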
Q:
How do I choose the right p-value?
A:
The p-value threshold should be chosen by the key decision-maker. This is whoever will depend on the results of the experiment, and it is their job to choose a value that trades off the expense of the experiment (they're also typically the one bank-rolling it) against the amount of risk they are willing to accept of getting an incorrect result.
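That trade-off is easy to quantify. The sketch below uses the standard normal-approximation formula for a two-sample comparison of means to show how the required sample size per group (a proxy for cost) grows as the threshold is tightened; the assumed effect size and power are arbitrary illustration values.

```python
from math import ceil
from scipy.stats import norm

def samples_per_group(effect_size: float, alpha: float, power: float) -> int:
    """Approximate n per group for a two-sided, two-sample comparison of means,
    via the usual normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

effect_size = 0.3   # assumed standardized effect (Cohen's d), purely illustrative
power = 0.8         # assumed chance of detecting the effect if it really exists

for alpha in (0.05, 0.01, 0.001):
    n = samples_per_group(effect_size, alpha, power)
    print(f"alpha = {alpha:<6} ->  about {n} subjects per group")
```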
Q:
Do I have to have a p-value?
A:
No. p-values are a figment of using frequentist statistical analysis to evaluate results. If you are using Bayesian analysis, you will want a credible interval instead. And if you don't care about rigor or empirical correctness at all, you can ignore it altogether.
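As a sketch of the contrast, the following compares a frequentist confidence interval with a Bayesian credible interval for the same invented data (37 successes in 120 trials); the uniform Beta(1, 1) prior is an arbitrary choice made purely for illustration.

```python
from math import sqrt
from scipy.stats import beta, norm

successes, trials = 37, 120          # invented data
p_hat = successes / trials

# Frequentist: 95% (Wald) confidence interval around the observed proportion.
z = norm.ppf(0.975)
margin = z * sqrt(p_hat * (1 - p_hat) / trials)
print(f"frequentist 95% confidence interval: ({p_hat - margin:.3f}, {p_hat + margin:.3f})")

# Bayesian: with a uniform Beta(1, 1) prior, the posterior is
# Beta(1 + successes, 1 + failures); take its central 95% credible interval.
lo, hi = beta.interval(0.95, 1 + successes, 1 + (trials - successes))
print(f"Bayesian 95% credible interval:      ({lo:.3f}, {hi:.3f})")
```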