Not too long ago, IT embraced the pattern language concepts of Christopher Alexander. An architect of the more traditional variety, Alexander based his ideas on creating spaces in which people felt good, even if they didn't comprehend exactly why. Architected spaces need to express multiple qualities, including being alive, whole, comforting, free, exact, egoless, and eternal. The more fully those qualities were embodied, the more desirable people found the space.
In concrete terms, these patterns explored how many feet of counter space in a kitchen made the homeowner feel at ease, and highlighted the tipping point at which more counter space started making the home feel less homey and more commercial. Young architects were instructed to drive out to the location where a building was to be erected and probe the view from all angles, figuratively touch the earth, then close their eyes and imagine what shapes and placements best pleased their psyches.
Applying these pattern language notions to IT systems development rarely involves those qualities, or that approach, even metaphorically. It would be hard to argue that a data user has any feeling at all about a batch background process. The user relates to the layout of an interface, or the speed of a response, but not to the inner harmony of the code working behind it. Certainly, the business community has a stake in things being done quickly, accurately, and inexpensively. And business would be much happier relating to the IT function overall if IT were a business partner rather than a necessary evil. But even where those attitudes are coaxed out, I am hard pressed to believe the use of a pattern language is the catalyst.
For data modeling, one could use a super-type/sub-type scenario as input for a pattern: a parent entity and one or more child entities. Semantically, the attributes of the parent apply to all of the child types, while the attributes of each child are unique to that child.
In implementation, consider three different patterns/approaches that might be used. First, a table may be created for each entity, parent and children alike. Second, a table might be established for each child entity only; each child table also carries all of the attributes associated with the parent entity, so the parent entity is not created as a separate table at all. Third, a single table is established, carrying all attributes from the parent plus the attributes of every child type, usually with a column flagging which sub-type each row represents. Arguments about the suitability of each approach generally focus on the use of NULL columns. The third solution effectively forces all of the sub-type-related columns to allow NULLs, because a row of one sub-type has no values for the columns belonging to the other sub-types. The second option avoids requiring NULLs, assuming that there are no super-type instances that are not also at least one of the sub-types. Arguments against the first option usually come down to someone feeling the need to avoid extra joins. But I find it hard to imagine a DBA or developer arguing that one approach is more comforting, freeing, or eternal than another.
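The three options can be sketched concretely. The following is a minimal SQLite sketch using a hypothetical Party super-type with Person and Organization sub-types; all table and column names here are invented for illustration, not taken from any particular model.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.executescript("""
-- Option 1: one table per entity; each sub-type row joins back to its parent.
CREATE TABLE party        (party_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE person       (party_id INTEGER PRIMARY KEY REFERENCES party,
                           birth_date TEXT NOT NULL);
CREATE TABLE organization (party_id INTEGER PRIMARY KEY REFERENCES party,
                           tax_id TEXT NOT NULL);

-- Option 2: roll the parent's attributes down into each sub-type table;
-- there is no separate parent table, and no NULLs are needed, assuming
-- every super-type instance is also at least one of the sub-types.
CREATE TABLE person_only       (party_id INTEGER PRIMARY KEY,
                                name TEXT NOT NULL, birth_date TEXT NOT NULL);
CREATE TABLE organization_only (party_id INTEGER PRIMARY KEY,
                                name TEXT NOT NULL, tax_id TEXT NOT NULL);

-- Option 3: one table for everything, with a discriminator column; every
-- sub-type-specific column must allow NULL.
CREATE TABLE party_flat (party_id   INTEGER PRIMARY KEY,
                         name       TEXT NOT NULL,
                         party_type TEXT NOT NULL,  -- 'person' or 'organization'
                         birth_date TEXT,           -- NULL for organizations
                         tax_id     TEXT);          -- NULL for persons
""")

# A person row in option 3 necessarily leaves the organization-only column empty.
cur.execute("INSERT INTO party_flat VALUES (1, 'Ada', 'person', '1815-12-10', NULL)")
tax_id = cur.execute("SELECT tax_id FROM party_flat WHERE party_id = 1").fetchone()[0]
print(tax_id)  # -> None
con.close()
```

Note how the NULL trade-off falls out of the shapes themselves: option 1 pushes it into joins, option 2 into duplicated parent columns, and option 3 into nullable sub-type columns.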
Regardless of the appeal of a pattern language for programming, or a pattern language for data modeling, the qualities associated with a pattern language are hard, if not impossible, to apply to software. However, reuse and consistency, in and of themselves, have always been traits of quality software solutions, and they aid development endeavors in many ways.
Having a preplanned approach, a template, for often-encountered complex situations is useful for speeding up solution delivery. Describing design components with a "pattern language" veneer may be gilding the lily somewhat, but what's wrong with a little flowery speech among friends?