A challenge in functional MRI is figuring out what's in our data. Our raw data are like 3D movies: they have three spatial dimensions, plus time; they are moving pictures of the brain. But in one sense they are worse than a real movie, in that a single frame or still from a real movie carries information about the movie, while any one of the images we acquire carries no information about brain function. That's because the information about brain function is contained solely in how our images change over time. The challenge, then, is to analyze these weird brain movies to get, y'know, fMRI activation maps, the familiar (color!) glamour pix.
How do we do this? Mostly by asking, "When you do something in the scanner – like tapping your fingers – where in the brain does the MRI signal time course resemble the time course of that activity?" By the way, this approach is usually called the General Linear Model (GLM). It works. It may be fair to say that it was the predominant approach ten years ago, when the Plurality & Resemblance paper came out, and it may be fair to say that it is still the predominant approach to fMRI data analysis today.
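To make the "resemblance" idea concrete, here's a minimal numpy sketch of the GLM approach: regress each voxel's time course on a task regressor. The block design, noise levels, and "voxels" are all invented for illustration; real fMRI GLMs also convolve the task with a hemodynamic response function and add nuisance regressors, which this toy skips.

```python
import numpy as np

# Toy GLM illustration: does each voxel's time course resemble the task?
# (All numbers here are made up; no HRF convolution, no nuisance terms.)

n_timepoints = 100
# Hypothetical block design: 10 volumes rest, 10 volumes finger-tapping, repeated.
task = np.tile([0] * 10 + [1] * 10, 5).astype(float)

# Fake data: three "voxels" -- one tracking the task, two pure noise.
rng = np.random.default_rng(0)
voxels = np.column_stack([
    2.0 * task + rng.normal(size=n_timepoints),  # task-responsive voxel
    rng.normal(size=n_timepoints),               # noise voxel
    rng.normal(size=n_timepoints),               # noise voxel
])

# Design matrix: the task regressor plus an intercept column.
X = np.column_stack([task, np.ones(n_timepoints)])

# Least-squares fit for all voxels at once; betas[0] is the task effect per voxel.
betas, *_ = np.linalg.lstsq(X, voxels, rcond=None)
print(betas[0])  # largest for the first (task-responsive) voxel
```

The per-voxel beta (scaled by its standard error) is what ends up thresholded and painted onto the glamour pix.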
The GLM is, as you said, like, "putting a frame down on the landscape." It's seeing things in one way, and asking, simply, how much of that one way there is at each brain location. The problem with this is that it does not tell us how well that one way matches or describes what's really going on at each brain location. Fitting a model does not validate it.
What to do? Why not try different models, and also try approaches that are, say, more loosely framed in the first place, or more exploratory in nature? Why not try a bunch of different approaches, and compare results? In their paper, Lange et al. did just that! They provide:
"... comparisons of results across methods includ[ing] a voxel-specific concordance correlation coefficient for reproducibility, and a resemblance measure... These measures can assist researchers by identifying groups of models producing similar and dissimilar results, and thereby help to validate, consolidate, and simplify reports of statistical findings. "
Ten years later, it's still a great paper.