Formalization is the process by which a model [1] is made objective. We are used to maintaining models in our heads as toys or playthings to reason about the world, but it is when we put them down on paper or code them up on a computer that they become something that can be more readily shared. A formalized model can be passed around, reasoned about by others, and used as the basis for calculation, prediction, or verification. Models can be formalized in many different ways. If you can prove that two formalizations are isomorphic to each other (i.e. that there is a structure-preserving 1-to-1 mapping between them), then you can demonstrate that they are in fact equivalent. If they are not isomorphic, then technically they are not the same model. Nevertheless, depending on what you are trying to do with them, they may still be effectively interchangeable.
[1]I assume that most of you are familiar with modeling. As such, rather than use this article to describe what modeling is, I'm going to use it more as a grab-bag of perspectives, nuances, observations, and theories about the process of modeling.
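To make the isomorphism idea concrete, here is a minimal sketch in Python. The traffic-light model and every name in it are invented purely for illustration: the same three-state model is formalized two different ways, and a 1-to-1 mapping between the two state spaces is checked to commute with the dynamics.

```python
from enum import Enum

# Formalization A: a traffic light written down as a plain transition table.
LIGHT_A = {"green": "yellow", "yellow": "red", "red": "green"}

# Formalization B: the same model written down as an Enum plus a step function.
class Light(Enum):
    GREEN = 0
    YELLOW = 1
    RED = 2

def step(state: Light) -> Light:
    return Light((state.value + 1) % 3)

# The isomorphism: a 1-to-1 mapping between the two state spaces.
iso = {"green": Light.GREEN, "yellow": Light.YELLOW, "red": Light.RED}

# The mapping commutes with the dynamics: stepping in A and then mapping to B
# gives the same state as mapping to B and then stepping.
for a_state, b_state in iso.items():
    assert iso[LIGHT_A[a_state]] == step(b_state)
print("the two formalizations are equivalent (for this toy model)")
```

If the assertion held only for some states, the two formalizations would not be the same model, though they might still be interchangeable for the states you actually care about.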
Descriptive models
merely seek to describe some aspect of reality. The purpose of these models is to facilitate structuralization, the process of looking at the continuous flow of reality and separating or deconstructing it into parts. This process involves the following general steps (a toy sketch in code follows the list):
Segmentation: The process of separating bits and pieces of reality from each other. Ex:"This image is composed of lines and colors like so".
Recognition: The process of categorizing these bits and pieces against prototypes for which some body of knowledge already exists. Ex:"These lines and colors make up a collection of cars".
Synthesis: The process of assembling these bits and pieces, using the knowledge of their prototypes, into a larger understanding of the corner of reality being structuralized. Ex:"I'm looking at a traffic jam".
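Here is a minimal Python sketch of the three steps, using the traffic-jam example. The scene data, the prototype table, and the three-car threshold are all invented for illustration; real segmentation and recognition are of course far messier.

```python
# A toy "scene": each dict is a blob of lines and colors carved out of an image.
raw_scene = [
    {"shape": "box", "color": "red"},
    {"shape": "box", "color": "blue"},
    {"shape": "box", "color": "white"},
]

# Segmentation: separate the continuous input into discrete pieces.
def segment(scene):
    # Real segmentation would carve pixels into regions; here the pieces
    # are already discrete, so this step is trivial.
    return list(scene)

# Recognition: categorize each piece against prototypes we already know about.
PROTOTYPES = {"box": "car"}

def recognize(pieces):
    return [PROTOTYPES.get(piece["shape"], "unknown") for piece in pieces]

# Synthesis: assemble the recognized pieces into a larger understanding.
def synthesize(labels):
    return "traffic jam" if labels.count("car") >= 3 else "light traffic"

print(synthesize(recognize(segment(raw_scene))))  # -> traffic jam
```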
Descriptive modeling makes up the bulk of rational thinking, and the majority of it happens well beneath human awareness. We tend to think about things at the highest available level [2], revisiting and revising lower-level models only when the task at hand requires it, or to explore anomalies or correct errors.
[2]While it's tempting to reach for "universal" models when describing the world, it's important to recognize that suitability is dictated by the local context. For example, airplanes are the fastest practical way to get around the world, but one would not take an airplane to get from the couch to the bathroom.
Descriptive models can be reasoned about by their accuracy, or how closely they match the portion of reality being described. A big part of learning about the world is simply finding more and more accurate models for understanding it. Note that there is typically no a priori "winner" within a collection of competing models; which one is most accurate is a function of what you want to do with it.
Prescriptive models
are simply models that are used as the basis for action. Ex:"When your taxi driver tells you to buy stock X, you sell it instead". Prescriptive models extend descriptive models (which merely say what things are, not what you should do about them).
Once you have a prescriptive model, you can start to measure its accuracy [3]. Accuracy is simply a measure of how closely the model matches reality (and can be used to compare models' suitabilities). This is done by making predictions, acting on the predictions, and comparing the expected results against the actual results.
[3]Models do not strictly have to be prescriptive for you to measure their accuracy. However, since prescriptive models are the ones being used as the basis for action, they are the ones whose accuracy you are more likely to care about: being wrong now starts to mean making mistakes. Additionally, testing descriptive models often amounts to just waiting and seeing, which takes a lot of the experimental control out of your hands.
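As a sketch of what this measurement can look like, here is a tiny Python example; the prediction and outcome records are made up for illustration.

```python
# Made-up records: what the model predicted each day, and what actually happened.
predicted = ["snow", "no snow", "no snow", "snow", "no snow"]
actual    = ["snow", "no snow", "snow",    "snow", "no snow"]

def accuracy(predicted, actual):
    """Fraction of predictions that matched reality."""
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

print(f"accuracy = {accuracy(predicted, actual):.0%}")  # 80% on this made-up data
```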
At the heart of modeling is what I call the fundamental modeling tradeoff. All models are subject to this tradeoff, and this tradeoff is often the overriding concern in the selection of a model. The tradeoff states:
The maximum accuracy of a model is bound by its complexity.
Cast into another light, it roughly says that more accurate models tend to be more complex. This does not mean that more complex models are necessarily more accurate; it's very easy to take a model and add unnecessary and useless complexity to it.
The tradeoff has two extreme cases, both sketched in code after the list:
Trivial Simplicity: When a model's complexity is reduced to the absolute minimum, it trivializes the model. A trivial model always makes the exact same prediction, or gives the exact same description, and thus the best it can do is match the most common case. Ex:"Our model for predicting whether it will snow tomorrow is to assume 'no'."
Perfect Simulation: When a model exactly matches the thing it is trying to model. In that case, the model must be at least as complex as the thing itself, and it often makes more sense to not use a model at all. Ex:"Our model for predicting whether it will snow tomorrow is to wait and see."
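A toy Python sketch of both extremes, assuming a made-up year of weather in which it snows on roughly 10% of days (the rate and the scoring are arbitrary choices for illustration):

```python
import random

random.seed(0)
# A made-up year of weather: True means it snowed that day (about 10% of days).
history = [random.random() < 0.1 for _ in range(365)]

# Trivial simplicity: always predict the most common case.
def trivial_model(day):
    return False  # "assume it will not snow"

# Perfect simulation: "wait and see" -- the model is just reality itself.
def perfect_simulation(day):
    return history[day]

def score(model):
    return sum(model(day) == history[day] for day in range(365)) / 365

print(f"trivial:            {score(trivial_model):.0%}")       # right whenever it doesn't snow
print(f"perfect simulation: {score(perfect_simulation):.0%}")  # 100%, but only after the fact
```

The trivial model scores surprisingly well simply because "no snow" is the common case; the perfect simulation is only right because it refuses to predict anything in advance.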
Almost all models fall somewhere between these extremes, and there are often various levels between them that can be chosen. Ex:"Modeling the human body at the level of organs, cells, molecules, or subatomic particles" [4].
[4]Reminds me of a joke definition.
Expert (noun):
A person who knows more and more about less and less, until they know absolutely everything about nothing.