Following the AIC readings, I can’t help but feel the same level of insecurity regarding just how much our models can tell us about the world around us. Even though AIC allows us to choose the best model from a set of candidates, this measure is still limited by our a priori understanding, and the old adage “garbage in, garbage out” applies once more. AIC can only tell us the best relative model; if we fail to include any good models, it will only tell us which of our bad models comes closest to analyzing the data effectively. This is sci-existentially terrifying. Incredibly, even with this fancy new tool, we can only be so certain that the particular model and/or method we use to analyze our data is correct. I’m sure that my thoughts on the subject will change over the course of next week, although I suppose my insecurity over choosing the right models to use with AIC will remain.
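To make that “relative only” point concrete, here’s a minimal sketch (my own toy example, not from the readings): the data come from an exponential process, but the candidate set contains only polynomials. AIC happily ranks the polynomials against each other, yet it can never flag that the true model is missing from the set. I’m using the common least-squares form of AIC, n·ln(RSS/n) + 2k, with constant terms dropped.

```python
import numpy as np

# Toy data: the true process is exponential, but we will only
# "offer" AIC a set of polynomial candidates.
rng = np.random.default_rng(0)
x = np.linspace(0, 3, 60)
y = np.exp(x) + rng.normal(scale=0.5, size=x.size)

def aic_least_squares(y, y_hat, k):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k (constants dropped)."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Candidate set: polynomials of degree 1-3 (none is the true model).
candidates = {}
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    candidates[f"poly deg {degree}"] = aic_least_squares(y, y_hat, degree + 1)

# AIC ranks the candidates we supplied -- nothing more.
for name, aic in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {aic:.1f}")
```

The higher-degree polynomial will win here, but that only means it is the least-bad option among the candidates we happened to include.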
On a completely unrelated note, I came across an interesting example of the usefulness of simulations for revealing new insights into scientific phenomena. I’ll leave a link to the article below, but in short, a group of astrophysicists and special effects artists generated a physically and mathematically correct simulation of a black hole that turned out to be the most accurate visualization of one to date. In fact, some aspects of the visualization that were initially thought to be bugs in the program turned out to make sense in a physics context, and led to new insights into the appearance of black holes. Overall, it’s definitely an interesting article that I would recommend checking out.