Agile Principles, Patterns, and Practices. Notes

Agile development is the ability to develop software quickly, in the face of rapidly changing requirements. In order to achieve this agility, we need to use practices that provide the necessary discipline and feedback.

Section I. Agile Development

1. Agile Principles

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software
  2. Welcome changing requirements, even late in development
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter time scale
  4. Businesspeople and developers must work together daily throughout the project
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation
  7. Working software is the primary measure of progress
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely
  9. Continuous attention to technical excellence and good design enhances agility
  10. Simplicity — the art of maximizing the amount of work not done — is essential
  11. The best architectures, requirements, and designs emerge from self-organizing teams
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly

2. Overview of Extreme Programming

  • Whole Team
  • User Stories: A user story is a mnemonic token of an ongoing conversation about a requirement. A user story is a planning tool that the customer uses to schedule the implementation of a requirement, based on its priority and estimated cost
  • Short Cycles
    • Iteration plan
    • Release plan
  • Pair Programming
  • TDD
  • Collective Ownership
  • Continuous Integration
  • Sustainable Pace
  • Open Workspace
  • The Planning Game: The essence of the planning game is the division of responsibility between business and development. The businesspeople — customers — decide how important a feature is, and the developers decide how much that feature will cost to implement
  • Simple Design:
    1. Consider the simplest thing that could possibly work
    2. You aren’t going to need it
    3. Once and only once
  • Refactoring
  • Metaphor

Extreme Programming is a set of simple and concrete practices that combine into an agile development process. XP is a good general-purpose method for developing software.

3. Planning

Initial Exploration

At the start of the project, the developers and customers have conversations about the new system in order to identify all the significant features that they can. However, they don’t try to identify all features. As the project proceeds, the customers will continue to discover more features. The flow of features will not shut off until the project is over.

Spiking, Splitting, and Velocity

Stories that are too large or too small are difficult to estimate. Developers tend to underestimate large stories and overestimate small ones. Any story that is too big should be split into pieces that aren’t too big. Any story that is too small should be merged with other small stories.

Release Planning

The developers and customers agree on a date for the first release of the project. This is usually a matter of 2–4 months in the future. The customers pick the stories they want implemented within that release and the rough order they want them implemented in. The customers cannot choose more stories than will fit according to the current velocity.

Iteration Planning

Next, the developers and customers choose an iteration size: typically, 1 or 2 weeks. Once again, the customers choose the stories that they want implemented in the first iteration but cannot choose more stories than will fit according to the current velocity.

Defining ‘DONE’

A story is not done until all its acceptance tests pass. Those acceptance tests are automated. They are written by the customer, business analysts, quality assurance specialists, testers, and even programmers, at the very start of each iteration. These tests define the details of the stories and are the final authority on how the stories behave. We’ll have more to say about acceptance tests in the next chapter.

Task Planning

At the start of a new iteration, the developers and customers get together to plan. The developers break the stories down into development tasks. A task is something that one developer can implement in 4–16 hours. The stories are analyzed, with the customers’ help, and the tasks are enumerated as completely as possible.

A list of the tasks is created on a flip chart, whiteboard, or some other convenient medium. Then, one by one, the developers sign up for the tasks they want to implement, estimating each task in arbitrary task points.

Halfway through the iteration, the team holds a meeting. At this point, half of the stories scheduled for the iteration should be complete. If half the stories aren’t complete, the team tries to reapportion tasks and responsibilities to ensure that all the stories will be complete by the end of the iteration. If the developers cannot find such a reapportionment, the customers need to be told. The customers may decide to pull a task or story from the iteration. At the very least, they will name the lowest-priority tasks and stories so that developers avoid working on them.

Conclusion

From iteration to iteration and release to release, the project falls into a predictable and comfortable rhythm. Everyone knows what to expect and when to expect it. Stakeholders see progress frequently and substantially. Rather than being shown notebooks full of diagrams and plans, stakeholders are shown working software that they can touch, feel, and provide feedback on.

Developers see a reasonable plan, based on their own estimates and controlled by their own measured velocity. Developers choose the tasks they feel comfortable working on and keep the quality of their workmanship high.

Managers receive data every iteration. They use this data to control and manage the project. They don’t have to resort to pressure, threats, or appeals to loyalty to meet an arbitrary and unrealistic date.

4. Testing

Test-Driven Development

  1. Don’t write any production code until you have written a failing unit test
  2. Don’t write more of a unit test than is sufficient to fail or fail to compile
  3. Don’t write any more production code than is sufficient to pass the failing test

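For example, a minimal cycle in C# with NUnit (the Adder class is invented here for illustration, not taken from the book):

using NUnit.Framework;

[TestFixture]
public class AdderTests
{
    // Rule 1: this test existed, and failed, before Adder was written.
    [Test]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        Assert.AreEqual(5, new Adder().Add(2, 3));
    }
}

// Rule 3: only enough production code to make the failing test pass.
public class Adder
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}
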
Callable / Testable / Decoupling / Documentation

Acceptance Tests

Unit tests are necessary but insufficient as verification tools. Unit tests verify that the small elements of the system work as they are expected to, but they do not verify that the system works properly as a whole. Unit tests are white box tests that verify the individual mechanisms of the system. Acceptance tests are black box tests that verify that the customer requirements are being met.

Acceptance tests are the ultimate documentation of a feature. Once the customer has written the acceptance tests that verify that a feature is correct, the programmers can read those acceptance tests to truly understand the feature. So, just as unit tests serve as compilable and executable documentation for the internals of the system, acceptance tests serve as compilable and executable documentation of the features of the system. In short, the acceptance tests become the true requirements document.
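
For illustration, such a test might look like this in C# with NUnit, written against a hypothetical Register API (every name here is invented):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class CheckOutItemAcceptanceTests
{
    // Black box: the test exercises the system through its customer-visible
    // behavior, not its internal mechanisms.
    [Test]
    public void ScanningAnItemDisplaysPriceAndSubtotal()
    {
        var register = new Register();
        register.Scan("012345678905");
        Assert.AreEqual(2.99m, register.DisplayedPrice);
        Assert.AreEqual(2.99m, register.Subtotal);
    }
}

// Stub of the system under test, just enough to make the example compile.
public class Register
{
    private static readonly Dictionary<string, decimal> prices =
        new Dictionary<string, decimal> { { "012345678905", 2.99m } };

    public decimal DisplayedPrice { get; private set; }
    public decimal Subtotal { get; private set; }

    public void Scan(string upc)
    {
        DisplayedPrice = prices[upc];
        Subtotal += DisplayedPrice;
    }
}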

5. Refactoring

In Refactoring, his classic book, Martin Fowler defines refactoring as “the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.” But why would we want to improve the structure of working code? What about “If it’s not broken, don’t fix it!”?

Every software module has three functions:

  • First is the function it performs while executing. This function is the reason for the module’s existence.
  • The second function of a module is to afford change. Almost all modules will change in the course of their lives, and it is the responsibility of the developers to make sure that such changes are as simple as possible to make. A module that is difficult to change is broken and needs fixing, even though it works.
  • The third function of a module is to communicate to its readers. Developers who are not familiar with the module should be able to read and understand it without undue mental gymnastics. A module that does not communicate is broken and needs to be fixed.

What does it take to make a module easy to read and easy to change? Much of this book is dedicated to principles and patterns whose primary goal is to help you create modules that are flexible and adaptable. But it takes something more than just principles and patterns to make a module that is easy to read and change. It takes attention. It takes discipline. It takes a passion for creating beauty.
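
A tiny invented illustration of what that attention buys:

// Before: it works, but it neither communicates nor affords change.
public static class Calendar0
{
    public static bool Chk(int d)
    {
        return d == 1 || d == 7;
    }
}

// After: the same external behavior with better internal structure.
public static class Calendar
{
    private const int Sunday = 1;
    private const int Saturday = 7;

    public static bool IsWeekend(int dayOfWeek)
    {
        return dayOfWeek == Sunday || dayOfWeek == Saturday;
    }
}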

Conclusion

Refactoring is like cleaning up the kitchen after dinner. The first time you skip cleaning up, you are done with dinner sooner. But the lack of clean dishes and clear working space makes dinner take longer to prepare the next day. This makes you want to skip cleaning again. Indeed, you can always finish dinner faster today if you skip cleaning. But the mess builds and builds. Eventually, you are spending an inordinate amount of time hunting for the right cooking utensils, chiseling the encrusted dried food off the dishes, scrubbing them down so they are suitable to cook with, and so on. Dinner takes forever. Skipping the cleanup does not really make dinner go more quickly.

The goal of refactoring, as depicted in this chapter, is to clean your code every day, every hour, and every minute. We don’t want the mess to build. We don’t want to have to chisel and scrub the encrusted bits that accumulate over time. We want to be able to extend and modify our systems with a minimum of effort. The most important enabler of that ability is the cleanliness of the code.

Section II. Agile Design

In an agile team, the big picture evolves along with the software. With each iteration, the team improves the design of the system so that it is as good as it can be for the system as it is now. The team does not spend very much time looking ahead to future requirements and needs. Nor does it try to build in today the infrastructure to support the features that may be needed tomorrow. Rather, the team focuses on the current structure of the system, making it as good as it can be.

This is not an abandonment of architecture and design. Rather, it is a way to incrementally evolve the most appropriate architecture and design for the system. It is also a way to keep that design and architecture appropriate as the system grows and evolves over time. Agile development makes the process of design and architecture continuous.

How do we know whether the design of a software system is good? Chapter 7 enumerates and describes symptoms of poor design. Such symptoms, or design smells, often pervade the overall structure of the software:

  • Rigidity. The design is difficult to change
  • Fragility. The design is easy to break
  • Immobility. The design is difficult to reuse
  • Viscosity. It is difficult to do the right thing
  • Needless complexity. Overdesign
  • Needless repetition. Mouse abuse
  • Opacity. Disorganized expression

Chapters 8–12 describe object-oriented design principles that help developers eliminate the symptoms of poor design—design smells—and build the best designs for the current set of features.

The principles are:

  • Chapter 8: The Single-Responsibility Principle (SRP)
  • Chapter 9: The Open/Closed Principle (OCP)
  • Chapter 10: The Liskov Substitution Principle (LSP)
  • Chapter 11: The Dependency-Inversion Principle (DIP)
  • Chapter 12: The Interface Segregation Principle (ISP)

7. What is Agile Design?

The design of a software project is an abstract concept. It has to do with the overall shape and structure of the program, as well as the detailed shape and structure of each module, class, and method. The design can be represented by many different media, but its final embodiment is source code. In the end, the source code is the design.

Design Smells

You know that the software is rotting when it starts to exhibit any of the following odors:

  • Rigidity: Rigidity is the tendency for software to be difficult to change, even in simple ways
  • Fragility: Fragility is the tendency of a program to break in many places when a single change is made
  • Immobility: A design is immobile when it contains parts that could be useful in other systems, but the effort and risk involved with separating those parts from the original system are too great
  • Viscosity: Viscosity comes in two forms: viscosity of the software and viscosity of the environment.
    • When faced with a change, developers usually find more than one way to make that change. Some of the ways preserve the design; others do not (i.e., they are hacks). When the design-preserving methods are more difficult to use than the hacks, the viscosity of the design is high. It is easy to do the wrong thing but difficult to do the right thing. We want to design our software such that the changes that preserve the design are easy to make
    • Viscosity of environment comes about when the development environment is slow and inefficient. For example, if compile times are very long, developers will be tempted to make changes that don’t force large recompiles, even though those changes don’t preserve the design. If the source code control system requires hours to check in just a few files, developers will be tempted to make changes that require as few check-ins as possible, regardless of whether the design is preserved.
  • Needless complexity: A design smells of needless complexity when it contains elements that aren’t currently useful
  • Needless repetition: Cut and paste may be useful text-editing operations, but they can be disastrous code-editing operations
  • Opacity: Opacity is the tendency of a module to be difficult to understand

Why Software Rots

In nonagile environments, designs degrade because requirements change in ways that the initial design did not anticipate. Often, these changes need to be made quickly and may be made by developers who are not familiar with the original design philosophy. So, though the change to the design works, it somehow violates the original design. Bit by bit, as the changes continue, these violations accumulate until malignancy sets in.

Conclusion

In short, the agile developers knew what to do because they followed these steps.

  1. They detected the problem by following agile practices
  2. They diagnosed the problem by applying design principles
  3. They solved the problem by applying an appropriate design pattern

So, what is agile design? Agile design is a process, not an event. It’s the continuous application of principles, patterns, and practices to improve the structure and readability of the software. It is the dedication to keep the design of the system as simple, clean, and expressive as possible at all times.

8. The Single-Responsibility Principle (SRP)

A class should have only one reason to change.

Why was it important to separate these two responsibilities into separate classes? The reason is that each responsibility is an axis of change. When the requirements change, that change will be manifest through a change in responsibility among the classes. If a class assumes more than one responsibility, that class will have more than one reason to change.

If a class has more than one responsibility, the responsibilities become coupled. Changes to one responsibility may impair or inhibit the class’s ability to meet the others. This kind of coupling leads to fragile designs that break in unexpected ways when changed.

Defining a Responsibility

In the context of the SRP, we define a responsibility to be a reason for change. If you can think of more than one motive for changing a class, that class has more than one responsibility. This is sometimes difficult to see. We are accustomed to thinking of responsibility in groups.
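
The book illustrates this with a modem interface, roughly as follows (sketched here with C# naming conventions):

// Two responsibilities in one interface: connection management and
// data communication. Each is a separate axis of change.
public interface IModem
{
    void Dial(string phoneNumber);
    void Hangup();
    void Send(char c);
    char Recv();
}

// Separated, each interface now has exactly one reason to change.
public interface IConnection
{
    void Dial(string phoneNumber);
    void Hangup();
}

public interface IDataChannel
{
    void Send(char c);
    char Recv();
}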

Conclusion

The Single-Responsibility Principle is one of the simplest of the principles but one of the most difficult to get right. Conjoining responsibilities is something that we do naturally. Finding and separating those responsibilities is much of what software design is really about. Indeed, the rest of the principles we discuss come back to this issue in one way or another.

9. The Open/Closed Principle (OCP)

Software entities (classes, modules, functions, etc.) should be open for extension but closed for modification.

When a single change to a program results in a cascade of changes to dependent modules, the design smells of rigidity. OCP advises us to refactor the system so that further changes of that kind will not cause more modifications. If OCP is applied well, further changes of that kind are achieved by adding new code, not by changing old code that already works. This may seem like motherhood and apple pie — the golden, unachievable ideal — but in fact, there are some relatively simple and effective strategies for approaching that ideal.

Description of OCP

Modules that conform to OCP have two primary attributes.

  1. They are open for extension. This means that the behavior of the module can be extended. As the requirements of the application change, we can extend the module with new behaviors that satisfy those changes. In other words, we are able to change what the module does
  2. They are closed for modification. Extending the behavior of a module does not result in changes to the source, or binary, code of the module. The binary executable version of the module—whether in a linkable library, a DLL, or a .EXE file—remains untouched

How is it possible that the behaviors of a module can be modified without changing its source code? Without changing the module, how can we change what a module does?

The answer is abstraction. In C# or any other object-oriented programming language (OOPL), it is possible to create abstractions that are fixed and yet represent an unbounded group of possible behaviors. The abstractions are abstract base classes, and the unbounded group of possible behaviors are represented by all the possible derivative classes.
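
Sketched in C# (the shape example is a staple of this chapter; the details here are mine):

using System;
using System.Collections.Generic;

public abstract class Shape
{
    public abstract void Draw();
}

public class Circle : Shape
{
    public override void Draw() { Console.WriteLine("circle"); }
}

public class Triangle : Shape
{
    public override void Draw() { Console.WriteLine("triangle"); }
}

public static class Painter
{
    // Closed for modification: supporting a new Shape derivative
    // means adding a class, not changing this method.
    public static void DrawAll(IEnumerable<Shape> shapes)
    {
        foreach (Shape shape in shapes)
            shape.Draw();
    }
}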

Conclusion

In many ways, the Open/Closed Principle is at the heart of object-oriented design. Conformance to this principle is what yields the greatest benefits claimed for object-oriented technology: flexibility, reusability, and maintainability. Yet conformance to this principle is not achieved simply by using an object-oriented programming language. Nor is it a good idea to apply rampant abstraction to every part of the application. Rather, it requires a dedication on the part of the developers to apply abstraction only to those parts of the program that exhibit frequent change. Resisting premature abstraction is as important as abstraction itself.

10. The Liskov Substitution Principle (LSP)

Subtypes must be substitutable for their base types.

When considering whether a particular design is appropriate, one cannot simply view the solution in isolation. One must view it in terms of the reasonable assumptions made by the users of that design.

Consider a function g that takes a Rectangle, sets its width and height independently, and then asserts the resulting area. Is a Square an acceptable argument? Not as far as the author of g is concerned! A square might be a rectangle, but from g’s point of view, a Square object is definitely not a Rectangle object. Why? Because the behavior of a Square object is not consistent with g’s expectation of the behavior of a Rectangle object. Behaviorally, a Square is not a Rectangle, and it is behavior that software is really all about. LSP makes it clear that in OOD, the IS-A relationship pertains to behavior that can be reasonably assumed and that clients depend on.
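
A sketch of that situation (member names are mine; the logic follows the book’s discussion):

using System;

public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int Area { get { return Width * Height; } }
}

public class Square : Rectangle
{
    // Keeps the sides equal, silently breaking g's assumption.
    public override int Width
    {
        set { base.Width = value; base.Height = value; }
    }

    public override int Height
    {
        set { base.Height = value; base.Width = value; }
    }
}

public static class Client
{
    // g's reasonable assumption: Width and Height vary independently.
    public static void g(Rectangle r)
    {
        r.Width = 5;
        r.Height = 4;
        if (r.Area != 20)
            throw new InvalidOperationException("Bad area!"); // thrown for a Square
    }
}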

Conclusion

The Open/Closed Principle is at the heart of many of the claims made for object-oriented design. When this principle is in effect, applications are more maintainable, reusable, and robust. The Liskov Substitution Principle is one of the prime enablers of OCP. The substitutability of subtypes allows a module, expressed in terms of a base type, to be extensible without modification.

That substitutability must be something that developers can depend on implicitly. Thus, the contract of the base type has to be well and prominently understood, if not explicitly enforced, by the code.

The term IS-A is too broad to act as a definition of a subtype. The true definition of a subtype is substitutable, where substitutability is defined by either an explicit or implicit contract.

11. The Dependency-Inversion Principle (DIP)

A. High-level modules should not depend on low-level modules. Both should depend on abstractions
B. Abstractions should not depend upon details. Details should depend upon abstractions

Over the years, many have questioned why I use the word inversion in the name of this principle. The reason is that more traditional software development methods, such as structured analysis and design, tend to create software structures in which high-level modules depend on low-level modules and in which policy depends on detail. Indeed, one of the goals of these methods is to define the subprogram hierarchy that describes how the high-level modules make calls to the low-level modules.

Consider the implications of high-level modules that depend on low-level modules. It is the high-level modules that contain the important policy decisions and business models of an application. These modules contain the identity of the application. Yet when these modules depend on the lower-level modules, changes to the lower-level modules can have direct effects on the higher-level modules and can force them to change in turn.

This predicament is absurd! It is the high-level, policy-setting modules that ought to be influencing the low-level detailed modules. The modules that contain the high-level business rules should take precedence over, and be independent of, the modules that contain the implementation details. High-level modules simply should not depend on low-level modules in any way.

Moreover, it is high-level, policy-setting modules that we want to be able to reuse. We are already quite good at reusing low-level modules in the form of subroutine libraries. When high-level modules depend on low-level modules, it becomes very difficult to reuse those high-level modules in different contexts. However, when the high-level modules are independent of the low-level modules, the high-level modules can be reused quite simply. This principle is at the very heart of framework design.

Layering

According to Booch, “all well structured object-oriented architectures have clearly-defined layers, with each layer providing some coherent set of services through a well-defined and controlled interface.” A naive interpretation of this statement might lead a designer to produce a structure similar to Figure 11-1.

[Figure 11-1. Naive layering scheme: Policy uses Mechanism, which uses Utility]

In this diagram, the high-level Policy layer uses a lower-level Mechanism layer, which in turn uses a detailed-level Utility layer. Although this may look appropriate, it has the insidious characteristic that the Policy layer is sensitive to changes all the way down in the Utility layer. Dependency is transitive. The Policy layer depends on something that depends on the Utility layer; thus, the Policy layer transitively depends on the Utility layer. This is very unfortunate.

[Figure 11-2. Inverted layers: each layer depends on abstract interfaces declared by the layer above]

Figure 11-2 shows a more appropriate model. Each upper-level layer declares an abstract interface for the services it needs. The lower-level layers are then realized from these abstract interfaces. Each higher-level class uses the next-lowest layer through the abstract interface. Thus, the upper layers do not depend on the lower layers. Instead, the lower layers depend on abstract service interfaces declared in the upper layers. Not only is the transitive dependency of PolicyLayer on UtilityLayer broken; so too is the direct dependency of the PolicyLayer on MechanismLayer.
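
A C# sketch of the inverted form, with placeholder names for the service interfaces:

// The upper layer owns the interface it needs...
public interface IMechanismService
{
    void DoSomething();
}

public class PolicyLayer
{
    private readonly IMechanismService mechanism;

    public PolicyLayer(IMechanismService mechanism)
    {
        this.mechanism = mechanism;
    }

    public void Execute()
    {
        mechanism.DoSomething();
    }
}

// ...and the lower layers depend upward by implementing it.
public interface IUtilityService
{
    void LowLevelDetail();
}

public class MechanismLayer : IMechanismService
{
    private readonly IUtilityService utility;

    public MechanismLayer(IUtilityService utility)
    {
        this.utility = utility;
    }

    public void DoSomething()
    {
        utility.LowLevelDetail();
    }
}

public class UtilityLayer : IUtilityService
{
    public void LowLevelDetail() { /* the detail */ }
}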

Dependence on Abstractions

A somewhat more naive, yet still very powerful, interpretation of DIP is the simple heuristic: “Depend on abstractions.” Simply stated, this heuristic recommends that you should not depend on a concrete class; rather, all relationships in a program should terminate on an abstract class or an interface.

  • No variable should hold a reference to a concrete class.
  • No class should derive from a concrete class.
  • No method should override an implemented method of any of its base classes.

Of course, these heuristics are usually violated at least once in every program: somebody has to create the instances of the concrete classes, and there is little harm in depending on concrete classes that are stable, such as string. This is the reason that the heuristic is a bit naive. If, on the other hand, we take the longer view that the client modules or layers declare the service interfaces that they need, the interface will change only when the client needs the change. Changes to the classes that implement the abstract interface will not affect the client.

Conclusion

Traditional procedural programming creates a dependency structure in which policy depends on detail. This is unfortunate, since the policies are then vulnerable to changes in the details. Object-oriented programming inverts that dependency structure such that both details and policies depend on abstraction, and service interfaces are often owned by their clients.

Indeed, this inversion of dependencies is the hallmark of good object-oriented design. It doesn’t matter what language a program is written in. If its dependencies are inverted, it has an OO design. If its dependencies are not inverted, it has a procedural design.

12. The Interface Segregation Principle (ISP)

Clients should not be forced to depend on methods they do not use

This principle deals with the disadvantages of “fat” interfaces. Classes whose interfaces are not cohesive have “fat” interfaces. In other words, the interfaces of the class can be broken up into groups of methods. Each group serves a different set of clients. Thus, some clients use one group of methods, and other clients use the other groups.

Interface Pollution

Interface pollution is a syndrome common in statically typed languages, such as C#, C++, and Java. In the book’s example, the interface of Door has been polluted with a method that it does not require; it has been forced to incorporate this method solely for the benefit of one of its subclasses. If this practice is pursued, every time a derivative needs a new method, that method will be added to the base class. This will further pollute the interface of the base class, making it “fat.”

Separate Clients Mean Separate Interfaces

When clients are forced to depend on methods they don’t use, those clients are subject to changes to those methods. This results in an inadvertent coupling between all the clients. Said another way, when a client depends on a class that contains methods that the client does not use but that other clients do use, that client will be affected by the changes that those other clients force on the class. We would like to avoid such couplings where possible, and so we want to separate the interfaces.
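
A generic sketch (the printer/scanner split is my illustration, not the book’s Door example):

// Fat: every client depends on both method groups.
public interface IMultiFunctionDevice
{
    void Print(string document);
    void Scan(string document);
}

// Segregated: each client depends only on the group it actually calls.
public interface IPrinter
{
    void Print(string document);
}

public interface IScanner
{
    void Scan(string document);
}

// The concrete class can still implement all the client-specific interfaces.
public class MultiFunctionMachine : IPrinter, IScanner
{
    public void Print(string document) { }
    public void Scan(string document) { }
}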

Class Interfaces Versus Object Interfaces

  • Separation Through Delegation
  • Separation Through Multiple Inheritance

Conclusion

Fat classes cause bizarre and harmful couplings between their clients. When one client forces a change on the fat class, all the other clients are affected. Thus, clients should have to depend only on methods that they call. This can be achieved by breaking the interface of the fat class into many client-specific interfaces. Each client-specific interface declares only those functions that its particular client or client group invoke. The fat class can then inherit all the client-specific interfaces and implement them. This breaks the dependence of the clients on methods that they don’t invoke and allows the clients to be independent of one another.

13. Overview of UML for C# Programmers

The Unified Modeling Language (UML) is a graphical notation for drawing diagrams of software concepts. One can use it for drawing diagrams of a problem domain, a proposed software design, or an already completed software implementation. Fowler describes these three levels as conceptual, specification, and implementation. This book deals with the last two.

Specification- and implementation-level diagrams have a strong connection to source code. Indeed, it is the intent for a specification-level diagram to be turned into source code. Likewise, it is the intent for an implementation-level diagram to describe existing source code. As such, diagrams at these levels must follow certain rules and semantics. Such diagrams have very little ambiguity and a great deal of formality.

On the other hand, diagrams at the conceptual level are not strongly related to source code. Rather, they are related to human language. They are a shorthand used to describe concepts and abstractions that exist in the human problem domain. Since they don’t follow strong semantic rules, their meaning can be ambiguous and subject to interpretation.

UML has three main kinds of diagrams.

  • Static diagrams describe the unchanging logical structure of software elements by depicting classes, objects, and data structures and the relationships that exist among them
  • Dynamic diagrams show how software entities change during execution, depicting the flow of execution, or the way entities change state
  • Physical diagrams show the unchanging physical structure of software entities, depicting physical entities, such as source files, libraries, binary files, data files, and the like, and the relationships that exist among them

Class / Object / Sequence / Collaboration / State Diagram

14. Working with Diagrams

Why Model?

Why do engineers build models? Why do aerospace engineers build models of aircraft? Why do structural engineers build models of bridges? What purposes do these models serve?

These engineers build models to find out whether their designs will work. Aerospace engineers build models of aircraft and then put them into wind tunnels to see whether they will fly. Structural engineers build models of bridges to see whether they will stand. Architects build models of buildings to see whether their clients will like the way they look. Models are built to find out whether something will work.

This implies that models must be testable. It does no good to build a model if you cannot apply criteria to that model in order to test it. If you can’t evaluate the model, the model has no value.

Why don’t aerospace engineers simply build the plane and try to fly it? Why don’t structural engineers simply build the bridge and then see whether it stands? Very simply, airplanes and bridges are a lot more expensive than the models. We investigate designs with models when the models are much cheaper than the real thing we are building.

Why Build Models of Software?

We make use of UML when we have something definitive we need to test and when using UML to test it is cheaper than using code to test it. For example, let’s say that I have an idea for a certain design. I need to test whether the other developers on my team think that it is a good idea. So I write a UML diagram on the whiteboard and ask my teammates for their feedback.

Make Effective Use of UML

  • Communicating with others: Good for design ideas, Bad for complex logic (even simple algorithms)
  • Creating road maps: UML can be useful for creating road maps of large software structures. Such road maps give developers a quick way to find out which classes depend on which others and provide a reference to the structure of the whole system
  • Back-end documentation
  • What to keep and what to throw away

Iterative Refinement

  • Behavior first
  • Check the structure
  • Envisioning the code

When and How to Draw Diagrams

Draw diagrams when:

  • Several people need to understand the structure of a particular part of the design because they are all going to be working on it simultaneously. Stop when everyone agrees that they understand
  • You want team consensus, but two or more people disagree on how a particular element should be designed. Put the discussion into a time box, then choose a means for deciding, such as a vote or an impartial judge. Stop at the end of the time box or when the decision can be made. Then erase the diagram
  • You want to play with a design idea, and the diagrams can help you think it through. Stop when you can finish your thinking in code. Discard the diagrams
  • You need to explain the structure of some part of the code to someone else or to yourself. Stop when the explanation would be better done by looking at code
  • It’s close to the end of the project, and your customer has requested them as part of a documentation stream for others

Do not draw diagrams:

  • Because the process tells you to
  • Because you feel guilty not drawing them or because you think that’s what good designers do. Good designers write code. They draw diagrams only when necessary
  • To create comprehensive documentation of the design phase prior to coding. Such documents are almost never worth anything and consume immense amounts of time
  • For other people to code. True software architects participate in the coding of their designs

Conclusion

A few folks at a whiteboard can use UML to help them think through a design problem. Such diagrams should be created iteratively, in very short cycles. It is best to explore dynamic scenarios first and then determine their implications on the static structure. It is important to evolve the dynamic and static diagrams together, using very short iterative cycles on the order of five minutes or less.

15. State Diagrams

UML has a rich set of notations for describing finite state machines (FSMs). In this chapter, we’ll look at the most useful bits of that notation. FSMs are an enormously useful tool for writing all kinds of software. I use them for GUIs, communication protocols, and any other type of event-based system.

States / Transitions / Superstates / Events / Initial/Final Pseudostates / State Transition Table

Conclusion

Finite state machines are a powerful concept for structuring software. UML provides a very powerful notation for visualizing FSMs. However, it is often easier to develop and maintain an FSM by using a textual language rather than diagrams.
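
For example, a textual rendering of a turnstile FSM in C# (a classic FSM example; the names and actions here are illustrative):

using System;

public enum TurnstileState { Locked, Unlocked }
public enum TurnstileEvent { Coin, Pass }

public class Turnstile
{
    private TurnstileState state = TurnstileState.Locked;

    // The state-transition table, expressed as code instead of a diagram.
    public void Handle(TurnstileEvent e)
    {
        switch (state)
        {
            case TurnstileState.Locked:
                if (e == TurnstileEvent.Coin) { state = TurnstileState.Unlocked; Unlock(); }
                else Alarm();
                break;
            case TurnstileState.Unlocked:
                if (e == TurnstileEvent.Pass) { state = TurnstileState.Locked; Lock(); }
                break;
        }
    }

    private void Lock() { Console.WriteLine("lock"); }
    private void Unlock() { Console.WriteLine("unlock"); }
    private void Alarm() { Console.WriteLine("alarm"); }
}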

16. Object Diagrams

Sometimes, it can be useful to show the state of the system at a particular time. Like a snapshot of a running system, a UML object diagram shows the objects, relationships, and attribute values that obtain at a given instant.

  • Active Objects
  • Snapshots

Conclusion

Object diagrams provide a snapshot of the state of the system at a particular time. This can be a useful way to depict a system, especially when the system’s structure is built dynamically instead of imposed by the static structure of its classes.

However, one should be leery of drawing many object diagrams. Most of the time, they can be inferred directly from corresponding class diagrams and therefore serve little purpose.

17. Use Cases

Use cases are a wonderful idea that has been vastly overcomplicated. Over and over again, I have seen teams sitting and spinning in their attempts to write use cases. Typically, such teams thrash on issues of form rather than substance. They argue and debate over preconditions, postconditions, actors, secondary actors, and a bevy of other things that simply don’t matter.

The real trick to use cases is to keep them simple. Don’t worry about use case forms; simply write them on blank paper or on a blank page in a simple word processor or on blank index cards. Don’t worry about filling in all the details. Details aren’t important until much later. Don’t worry about capturing all the use cases; that’s an impossible task.

The one thing to remember about use cases is: Tomorrow, they are going to change. No matter how diligently you capture them, no matter how fastidiously you record the details, no matter how thoroughly you think them through, no matter how much effort you apply to exploring and analyzing the requirements: Tomorrow, they are going to change.

If something is going to change tomorrow, you don’t need to capture its details today. Indeed, you want to postpone the capture of the details until the last possible moment. Think of use cases as just-in-time requirements.

Writing Use Cases

We write use cases; we don’t draw them. Use cases are not diagrams. Use cases are textual descriptions of behavioral requirements, written from a certain point of view.

A use case is a description of the behavior of a system. That description is written from the point of view of a user who has just told the system to do something in particular. A use case captures the visible sequence of events that a system goes through in response to a single user stimulus.

A visible event is one that the user can see. Use cases do not describe hidden behavior at all. They don’t discuss the hidden mechanisms of the system. They describe only those things that a user can see.

Primary Courses

Typically, a use case is broken up into two sections. The first is the primary course. Here, we describe how the system responds to the stimulus of the user and assume that nothing goes wrong.

For example, here is a typical use case for a point-of-sale system.

Check Out Item:

  1. Cashier swipes product over scanner; scanner reads UPC code
  2. Price and description of item, as well as current subtotal, appear on the display facing the customer. The price and description also appear on the cashier’s screen
  3. Price and description are printed on receipt
  4. System emits an audible “acknowledgment” tone to tell the cashier that the UPC code was correctly read

How can you estimate a use case if you don’t record its detail? You talk to the stakeholders about the detail, without necessarily recording it. This will give you the information you need to give a rough estimate. Why not record the detail if you’re going to talk to the stakeholders about it? Because tomorrow, the details are going to change. Won’t that change affect the estimate? Yes, but over many use cases, those effects integrate out. Recording the detail too early just isn’t cost-effective.

Alternate Courses

Some of those details will concern things that can go wrong. During the conversations with the stakeholders, you’ll want to talk over failure scenarios. Later, as it gets closer and closer to the time when the use case will be implemented, you’ll want to think through more and more of those alternative courses. They become addenda to the primary course of the use case. They can be written as follows.

  • UPC Code Not Read
  • No UPC Code

Diagramming Use Cases

Of all the diagrams in UML, use case diagrams are the most confusing and the least useful. I recommend that you avoid them entirely, with the exception of the system boundary diagram.

Conclusion

This was a short chapter. That’s fitting because the topic is simple. That simplicity must be your attitude toward use cases. If once you proceed down the dark path of use case complexity, forever will it dominate your destiny. Use the force, and keep your use cases simple.

18. Sequence Diagrams

Sequence diagrams are the most common of the dynamic models drawn by UML users. As you might expect, UML provides lots and lots of goodies to help you draw truly incomprehensible diagrams. In this chapter, we describe those goodies and try to convince you to use them with great restraint.

The Basics

  • Objects, Lifelines, Messages, and Other Odds and Ends
  • Creation and Destruction
  • Simple loops (use with caution)
  • Cases and Scenarios (don’t draw sequence diagrams with too many elements)

Conclusion

As we have seen, sequence diagrams are a powerful way to communicate the flow of messages in an object-oriented application. We’ve also hinted that they are easy to abuse and easy to overdo.

An occasional sequence diagram on the whiteboard can be invaluable. A very short paper with five or six sequence diagrams denoting the most common interactions in a subsystem can be worth its weight in gold. On the other hand, a document filled with a thousand sequence diagrams is not likely to be worth the paper it’s printed on.

19. Class Diagrams

UML class diagrams allow us to denote the static contents of—and the relationships between—classes. In a class diagram, we can show the member variables and member functions of a class. We can also show whether one class inherits from another or whether it holds a reference to another. In short, we can depict all the source code dependencies between classes.

The Basics

  • Classes (+/-/#)
  • Association (1 to 1, 1 to N, …)
  • Inheritance (inheritance / implementation)

The Details

  • «interface»
  • «utility»
  • {abstract}
  • Aggregation / Composition
  • Multiplicity
  • Association Stereotypes
  • Association Classes/Qualifiers

Conclusion

UML has lots of widgets, adornments, and whatchamajiggers. There are so many that you can spend a long time becoming a UML language lawyer, enabling you to do what all lawyers can: write documents nobody else can understand.

In this chapter, I have avoided most of the arcana and byzantine features of UML. Rather, I have shown you the parts of UML that I use. I hope that along with that knowledge, I have instilled within you the values of minimalism. Using too little of UML is almost always better than using too much.

Section III. The Payroll Case Study

In the next several chapters, we explore the design and implementation of a batch payroll system, a rudimentary specification of which follows. As part of that design and implementation, we will make use of several design patterns: COMMAND, TEMPLATE METHOD, STRATEGY, SINGLETON, NULL OBJECT, FACTORY, and FACADE. These patterns are the topic of the next several chapters. In Chapter 26, we work through the design and implementation of the payroll problem.

21. Command and Active Object: Versatility and Multitasking

Of all the design patterns that have been described over the years, COMMAND impresses me as one of the simplest and most elegant. But as we shall see, the simplicity is deceptive. The range of uses that COMMAND may be put to is probably without bound.

The simplicity of COMMAND, as shown below, is almost laughable. The code doesn’t do much to dampen the levity. It seems absurd that we can have a pattern that consists of nothing more than an interface with one method.

public interface Command
{
    void Execute();
}

Simple Commands

By encapsulating the notion of a command, this pattern allowed us to decouple the logical interconnections of the system from the devices that were being connected. This was a huge benefit.
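
For example, in the spirit of the book’s relay and motor commands (the IO details here are placeholders):

// The event loop that calls Execute() neither knows nor cares
// which device this command drives.
public class RelayOnCommand : Command
{
    private readonly int relayAddress;

    public RelayOnCommand(int relayAddress)
    {
        this.relayAddress = relayAddress;
    }

    public void Execute()
    {
        // write "on" to the relay's IO address
    }
}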

Transactions

  • Physical and Temporal Decoupling
  • Undo Method
  • Active Object

Conclusion

The simplicity of the COMMAND pattern belies its versatility. COMMAND can be used for a wonderful variety of purposes, ranging from database transactions to device control to multithreaded nuclei to GUI do/undo administration.

It has been suggested that the COMMAND pattern breaks the OO paradigm by emphasizing functions over classes. That may be true, but in the real world of the software developer, usefulness trumps theory. The COMMAND pattern can be very useful.

22. Template Method and Strategy: Inheritance versus Delegation

Both TEMPLATE METHOD and STRATEGY solve the problem of separating a generic algorithm from a detailed context. We frequently see the need for this in software design. We have an algorithm that is generically applicable. In order to conform to the Dependency-Inversion Principle (DIP), we want to make sure that the generic algorithm does not depend on the detailed implementation. Rather, we want the generic algorithm and the detailed implementation to depend on abstractions.
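
Sketched with this chapter’s bubble-sort flavor (compressed and adapted; treat the details as illustrative):

// TEMPLATE METHOD: the generic algorithm lives in a base class and
// defers the details to abstract methods supplied by derivatives.
public abstract class BubbleSorter
{
    protected int length;

    protected void Sort()
    {
        for (int nextToLast = length - 2; nextToLast >= 0; nextToLast--)
            for (int index = 0; index <= nextToLast; index++)
                if (OutOfOrder(index))
                    Swap(index);
    }

    protected abstract bool OutOfOrder(int index);
    protected abstract void Swap(int index);
}

// STRATEGY: the same generic algorithm holds a reference to an
// interface, and the details are plugged in from outside.
public interface ISortHandle
{
    int Length { get; }
    bool OutOfOrder(int index);
    void Swap(int index);
}

public class BubbleSorterStrategy
{
    public void Sort(ISortHandle handle)
    {
        for (int nextToLast = handle.Length - 2; nextToLast >= 0; nextToLast--)
            for (int index = 0; index <= nextToLast; index++)
                if (handle.OutOfOrder(index))
                    handle.Swap(index);
    }
}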

Conclusion

TEMPLATE METHOD is simple to write and simple to use but is also inflexible. STRATEGY is flexible, but you have to create an extra class, instantiate an extra object, and wire the extra object into the system. So the choice between TEMPLATE METHOD and STRATEGY depends on whether you need the flexibility of STRATEGY or can live with the simplicity of TEMPLATE METHOD. Many times, I have opted for TEMPLATE METHOD simply because it is easier to implement and use. For example, I would use the TEMPLATE METHOD solution to the bubble sort problem unless I was very sure that I needed different sort algorithms.

23. Facade and Mediator

The two patterns discussed in this chapter have a common purpose: imposing some kind of policy on another group of objects. FACADE imposes policy from above; MEDIATOR, from below. The use of FACADE is visible and constraining; that of MEDIATOR, invisible and enabling.

Facade

The FACADE pattern is used when you want to provide a simple and specific interface onto a group of objects that have a complex and general interface.
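
A sketch of the shape of a facade, along the lines of the book’s DB example (bodies elided):

// Clients see a simple, specific interface; the complex, general
// machinery (connections, commands, parameter binding) stays hidden.
public class Db
{
    public void Store(Product product)
    {
        // open connection, build command, bind parameters, execute, close
    }

    public Product Retrieve(string sku)
    {
        // query and map the row back into an object
        return new Product { Sku = sku };
    }
}

public class Product
{
    public string Sku;
    public string Description;
}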

Mediator

The MEDIATOR pattern also imposes policy. However, whereas FACADE imposes its policy in a visible and constraining way, MEDIATOR imposes its policies in a hidden and unconstraining way.

Conclusion

Imposing policy can be done from above, using FACADE, if that policy needs to be big and visible. On the other hand, if subtlety and discretion are needed, MEDIATOR may be the more appropriate choice. FACADEs are usually the focal point of a convention. Everyone agrees to use the FACADE instead of the objects beneath it. MEDIATOR, on the other hand, is hidden from the users. Its policy is a fait accompli rather than a matter of convention.

24. Singleton and Monostate

Usually, there is a one-to-many relationship between classes and instances. You can create many instances of most classes. The instances are created when they are needed and are disposed of when their usefulness ends. They come and go in a flow of memory allocations and deallocations.

But some classes should have only one instance. That instance should appear to have come into existence when the program started and should be disposed of only when the program ends. Such objects are sometimes the roots of the application. From the roots, you can find your way to many other objects in the system. Sometimes, these objects are factories, which you can use to create the other objects in the system. Sometimes, these objects are managers, responsible for keeping track of certain other objects and driving them through their paces.

Whatever these objects are, it is a severe logic failure if more than one of them is created. If more than one root is created, access to objects in the application may depend on a chosen root. Programmers, not knowing that more than one root exists, may find themselves looking at a subset of the application objects without knowing it. If more than one factory exists, clerical control over the created objects may be compromised. If more than one manager exists, activities that were intended to be serial may become concurrent.

It may seem that mechanisms to enforce the singularity of these objects are overkill. After all, when you initialize the application, you can simply create one of each and be done with it. In fact, this is usually the best course of action. Such a mechanism should be avoided when there is no immediate and significant need. However, we also want our code to communicate our intent. If the mechanism for enforcing singularity is trivial, the benefit of communication may outweigh the cost of the mechanism.

Singleton

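The canonical form, sketched in C# (the names match those discussed under Costs below; note that this naive version is not thread-safe):

public class Singleton
{
    private static Singleton theInstance;

    private Singleton() { } // private constructor: no one else can construct one

    public static Singleton Instance
    {
        get
        {
            if (theInstance == null) // lazy evaluation; see the Efficiency cost below
                theInstance = new Singleton();
            return theInstance;
        }
    }
}
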
Benefits:

  • Cross-platform: Using appropriate middleware (e.g., Remoting), SINGLETON can be extended to work across many CLRs (Common Language Runtime) and many computers
  • Applicable to any class: You can change any class into a SINGLETON simply by making its constructors private and adding the appropriate static functions and variable
  • Can be created through derivation: Given a class, you can create a subclass that is a SINGLETON
  • Lazy evaluation: If the SINGLETON is never used, it is never created

Costs:

  • Destruction undefined: There is no good way to destroy or decommission a SINGLETON. If you add a decommission method that nulls out theInstance, other modules in the system may still be holding a reference to the SINGLETON. Subsequent calls to Instance will cause another instance to be created, causing two concurrent instances to exist. This problem is particularly acute in C++, in which the instance can be destroyed, leading to possible dereferencing of a destroyed object
  • Not inherited: A class derived from a SINGLETON is not a SINGLETON. If it needs to be a SINGLETON, the static function and variable need to be added to it
  • Efficiency: Each call to Instance invokes the if statement. For most of those calls, the if statement is useless
  • Nontransparent: Users of a SINGLETON know that they are using it, because they must invoke the Instance method

Monostate

The MONOSTATE pattern is another way to achieve singularity. It works through a completely different mechanism.

No matter how many instances of Monostate you create, they all behave as though they were a single object. You can even destroy or decommission all the current instances without losing the data.
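
A minimal sketch:

public class Monostate
{
    // All state is static, so every instance shares it.
    private static int x;

    public int X
    {
        get { return x; }
        set { x = value; }
    }
}

// new Monostate().X = 42;
// new Monostate().X now reads 42 as well: two objects, one state.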

Benefits:

  • Transparency: Users do not behave differently from users of a regular object. The users do not need to know that the object is monostate
  • Derivability: Derivatives of a monostate are monostates. Indeed, all the derivatives of a monostate are part of the same monostate. They all share the same static variables
  • Polymorphism: Since the methods of a monostate are not static, they can be overridden in a derivative. Thus, different derivatives can offer different behavior over the same set of static variables
  • Well-defined creation and destruction: The variables of a monostate, being static, have well-defined creation and destruction times

Costs:

  • No conversion: A nonmonostate class cannot be converted into a monostate class through derivation
  • Efficiency: Because it is a real object, a monostate may go through many creations and destructions. These operations are often costly
  • Presence: The variables of a monostate take up space, even if the monostate is never used
  • Platform local: You can’t make a monostate work across several CLR instances or across several platforms

Conclusion

It is often necessary to enforce a single instantiation for a particular object. This chapter has shown two very different techniques. SINGLETON makes use of private constructors, a static variable, and a static function to control and limit instantiation. MONOSTATE simply makes all variables of the object static.

SINGLETON is best used when you have an existing class that you want to constrain through derivation and don’t mind that everyone will have to call the Instance() method to gain access. MONOSTATE is best used when you want the singular nature of the class to be transparent to the users or when you want to use polymorphic derivatives of the single object.

25. Null Object

Conclusion

Those of us who have been using C-based languages for a long time have grown accustomed to functions that return null or 0 on some kind of failure. We presume that the return value from such functions needs to be tested. The NULL OBJECT pattern changes this. By using this pattern, we can ensure that functions always return valid objects, even when they fail. Those objects that represent failure do “nothing.”
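
A sketch of the pattern’s shape, modeled on the book’s Employee example:

using System;

public abstract class Employee
{
    public abstract bool IsTimeToPay(DateTime payDate);
    public abstract void Pay();

    // The "failure" object: a valid Employee that safely does nothing.
    public static readonly Employee NULL = new NullEmployee();

    private class NullEmployee : Employee
    {
        public override bool IsTimeToPay(DateTime payDate) { return false; }
        public override void Pay() { }
    }
}

// A lookup that fails returns Employee.NULL instead of null, so callers
// can write, for example, if (e.IsTimeToPay(today)) e.Pay(); with no null test.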

Section IV. Packaging the Payroll System

28. Principles of Package and Component Design

As software applications grow in size and complexity, they require some kind of high-level organization. Classes are a convenient unit for organizing small applications but are too finely grained to be used as the sole organizational unit for large applications. Something “larger” than a class is needed to help organize large applications. That something is called a package, or a component.

Packages and Components

As vitally important elements of large software systems, components allow such systems to be decomposed into smaller binary deliverables. If the dependencies between the components are well managed, it is possible to fix bugs and add features by redeploying only those components that have changed.

More important, the design of large systems depends critically on good component design, so that individual teams can focus on isolated components instead of worrying about the whole system.

This raises a large number of questions.

  1. What are the principles for allocating classes to components?
  2. What design principles govern the relationships between components?
  3. Should components be designed before classes (top down)? Or should classes be designed before components (bottom up)?
  4. How are components physically represented? In C#? In the development environment?
  5. Once created, to what purpose will we put these components?

Principles of Component Cohesion: Granularity

The principles of component cohesion help developers decide how to partition classes into components. These principles depend on the fact that at least some of the classes and their interrelationships have been discovered. Thus, these principles take a bottom-up view of partitioning.

The Reuse/Release Equivalence Principle (REP)

The granule of reuse is the granule of release.

What do you expect from the author of a class library that you are planning to reuse? Certainly, you want good documentation, working code, well-specified interfaces, and so on. But there are other things you want, too.

First, to make it worth your while to reuse this person’s code, you want the author to guarantee to maintain it for you.

Second, you are going to want the author to notify you of any changes planned to the interface and functionality of the code. But notification is not enough. The author must give you the choice to refuse to use any new versions. After all, the author might introduce a new version just as you are entering a severe schedule crunch or might make changes to the code that are simply incompatible with your system.

In either case, should you decide to reject that version, the author must guarantee to support your use of the old version for a time. Perhaps that time is as short as 3 months or as long as a year; that is something for the two of you to negotiate. But the author can’t simply cut you loose and refuse to support you. If the author won’t agree to support your use of older versions, you may have to seriously consider whether you want to use that code and be subject to the author’s capricious changes.

This issue is primarily political. It has to do with the clerical and support effort that must be provided if other people are going to reuse code. But those political and clerical issues have a profound effect on the packaging structure of software. In order to provide the guarantees that reusers need, authors organize their software into reusable components and then track those components with release numbers.

Thus, REP states that the granule of reuse, a component, can be no smaller than the granule of release. Anything that we reuse must also be released and tracked. It is not realistic for a developer to simply write a class and then claim that it is reusable. Reusability comes only after a tracking system is in place and offers the guarantees of notification, safety, and support that the potential reusers will need.

REP gives us our first hint at how to partition our design into components. Since reusability must be based on components, reusable components must contain reusable classes. So, at least some components should comprise reusable sets of classes.

It may seem disquieting that a political force would affect the partitioning of our software, but software is not a mathematically pure entity that can be structured according to mathematically pure rules. Software is a human product that supports human endeavors. Software is created by humans and is used by humans. And if software is going to be reused, it must be partitioned in a manner that humans find convenient for that purpose.

What does this tell us about the internal structure of a component? One must consider the internal contents from the point of view of potential reusers. If a component contains software that should be reused, it should not also contain software that is not designed for reuse. Either all the classes in a component are reusable, or none of them are.

The Common Reuse Principle (CRP)

The classes in a component are reused together. If you reuse one of the classes in a component, you reuse them all.

I want to make sure that when I depend on a component, I depend on every class in that component. To say this another way, I want to make sure that the classes that I put into a component are inseparable, that it is impossible to depend on some and not the others. Otherwise, I will be revalidating and redeploying more than is necessary and will be wasting significant effort.

CRP tells us more about what classes shouldn’t be together than what classes should be together. CRP says that classes that are not tightly bound to each other with class relationships should not be in the same component.

The Common Closure Principle (CCP)

The classes in a component should be closed together against the same kinds of changes. A change that affects a component affects all the classes in that component and no other components.

This is the Single-Responsibility Principle (SRP) restated for components. Just as SRP says that a class should not contain multiple reasons to change, CCP says that a component should not have multiple reasons to change.

Principles of Component Coupling: Stability

The Acyclic Dependencies Principle (ADP)

Allow no cycles in the component dependency graph.

Have you ever worked all day, gotten some stuff working, and then gone home, only to arrive the next morning to find that your stuff no longer works? Why doesn’t it work? Because somebody stayed later than you and changed something you depend on! I call this “the morning-after syndrome.”

Over the past several decades, two solutions to this problem have evolved: the weekly build and ADP. Both solutions have come from the telecommunications industry.

The weekly build

The weekly build is common in medium-sized projects. It works like this: For the first 4 days of the week, all the developers ignore one another. They all work on private copies of the code and don’t worry about integrating with one another. Then, on Friday, they integrate all their changes and build the system. This has the wonderful advantage of allowing the developers to live in an isolated world for four days out of five. The disadvantage, of course, is the large integration penalty that is paid on Friday.

Eliminating dependency cycles

The solution to this problem is to partition the development environment into releasable components. The components become units of work that can be the responsibility of a developer or a team of developers. When developers get a component working, they release it for use by the other developers. They give it a release number, move it into a directory for other teams to use, and continue to modify their component in their own private areas. Everyone else uses the released version.

As new releases of a component are made, other teams can decide whether to immediately adopt the new release. If they decide not to, they simply continue using the old release. Once they decide that they are ready, they begin to use the new release.

This is a very simple and rational process and is widely used. However, to make it work, you must manage the dependency structure of the components. There can be no cycles. If there are cycles in the dependency structure, the morning-after syndrome cannot be avoided.

Breaking the cycle:

  1. Apply the Dependency-Inversion Principle (see the sketch below)
  2. Create a new component that both ComponentA and ComponentB depend on
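As a minimal Java sketch of option 1 (Reporter, Authorizer, and ReportSink are invented names, not the book's): suppose Reporter in ComponentA calls Authorizer in ComponentB, and Authorizer calls back into Reporter, forming a cycle. Declaring an interface next to Authorizer and inverting the callback removes ComponentB's dependency on ComponentA:

// In ComponentB: the abstraction that Authorizer depends on.
interface ReportSink {
    void report(String message);
}

class Authorizer {
    private final ReportSink sink; // no longer depends on ComponentA

    Authorizer(ReportSink sink) {
        this.sink = sink;
    }

    boolean authorize(String user) {
        sink.report("authorization attempt by " + user);
        return !user.isEmpty();
    }
}

// In ComponentA: still depends on ComponentB, but not vice versa.
class Reporter implements ReportSink {
    public void report(String message) {
        System.out.println("REPORT: " + message);
    }
}

Option 2 differs only in where ReportSink lives: it moves into a new component that both ComponentA and ComponentB depend on.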

The Stable-Dependencies Principle (SDP)

Depend in the direction of stability.

Designs cannot be completely static. Some volatility is necessary if the design is to be maintained. We accomplish this by conforming to CCP. Using this principle, we create components that are sensitive to certain kinds of changes. These components are designed to be volatile; we expect them to change.

Any component that we expect to be volatile should not be depended on by a component that is difficult to change! Otherwise, the volatile component will also be difficult to change.

Stability is related to the amount of work required to make a change.

Stability = Ca / (Ca + Ce)

where Ca (afferent couplings) is the number of incoming dependencies on the component, and Ce (efferent couplings) is the number of its outgoing dependencies.
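For example, if three other components depend on component X (Ca = 3) and X itself depends on one component (Ce = 1), then Stability = 3 / (3 + 1) = 0.75. X is fairly stable: a change to it forces three dependent components to be revalidated, so it should be changed rarely and deliberately.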

If all the components in a system were maximally stable, the system would be unchangeable. This is not a desirable situation. Indeed, we want to design our component structure so that some components are unstable and some are stable.

The changeable components are on top and depend on the stable components at the bottom. Putting the unstable components at the top of the diagram is a useful convention, since any arrow that points up is violating SDP.

The Stable-Abstractions Principle (SAP)

A component should be as abstract as it is stable.

This principle sets up a relationship between stability and abstractness. It says that a stable component should also be abstract so that its stability does not prevent it from being extended. On the other hand, it says that an unstable component should be concrete, since its instability allows the concrete code within it to be easily changed.

Thus, if a component is to be stable, it should also consist of abstract classes so that it can be extended. Stable components that are extensible are flexible and do not overly constrain the design.

Combined, SAP and SDP amount to DIP for components. This is true because the SDP says that dependencies should run in the direction of stability, and SAP says that stability implies abstraction. Thus, dependencies run in the direction of abstraction.

Abstractness = Number of Abstract Classes / Number of Classes

Instability (I) versus Abstractness (A) graph.

It seems clear that we’d like our volatile components to be as far from both zones of exclusion as possible. The locus of points that is maximally distant from each zone is the line that connects (1,0) and (0,1). This line is known as the main sequence.

A component that sits on the main sequence is not “too abstract” for its stability; nor is it “too unstable” for its abstractness. It is neither useless nor particularly painful. It is depended on to the extent that it is abstract, and it depends upon others to the extent that it is concrete.
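The chapter also derives a distance metric from these two values: D = |A + I − 1|, where A is abstractness and I is instability (1 − Stability as computed above). A component with A = 0.3 and I = 0.7 has D = 0 and sits exactly on the main sequence; a completely concrete, maximally stable component (A = 0, I = 0) has D = 1, as far from the main sequence as possible.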

Conclusion

The dependency management metrics described in this chapter measure the conformance of a design to a pattern of dependency and abstraction that I think is a “good” pattern. Experience has shown that certain dependencies are good and others are bad. This pattern reflects that experience. However, a metric is not a god; it is merely a measurement against an arbitrary standard. It is certainly possible that the standard chosen in this chapter is appropriate only for certain applications and not for others. It may also be that far better metrics can be used to measure the quality of a design.

29. Factory

As we saw in the FACTORY example, static typing can lead to dependency knots that force modifications to source files for the sole purpose of maintaining type consistency. In our case, we have to change the ShapeFactory interface whenever a new derivative of Shape is added. These changes can force rebuilds and redeployments that would otherwise be unnecessary. We solved that problem when we relaxed type safety and depended on our unit tests to catch type errors; we gained the flexibility to add new derivatives of Shape without changing ShapeFactory.
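As a rough sketch of that relaxed, string-keyed factory (Circle and Square are stand-ins for whatever Shape derivatives the system actually has):

interface Shape {
    void draw();
}

interface ShapeFactory {
    Shape make(String shapeName);
}

// Adding a new Shape derivative means adding a branch here, but the
// ShapeFactory interface itself never changes.
class ShapeFactoryImplementation implements ShapeFactory {
    public Shape make(String shapeName) {
        if (shapeName.equals("Circle"))
            return new Circle();
        else if (shapeName.equals("Square"))
            return new Square();
        throw new IllegalArgumentException("Unknown shape: " + shapeName);
    }
}

class Circle implements Shape {
    public void draw() { System.out.println("Circle.draw"); }
}

class Square implements Shape {
    public void draw() { System.out.println("Square.draw"); }
}

A misspelled shape name now fails at runtime rather than at compile time; that is exactly the type safety we traded away, and the unit tests are what catch such errors.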

A strict interpretation of DIP would insist on using factories for every volatile class in the system. What’s more, the power of the FACTORY pattern is seductive. These two factors can sometimes lure developers into using factories by default. This is an extreme that I don’t recommend.

I don’t start out using factories. I put them into the system only when the need for them becomes great enough. For example, if it becomes necessary to use the PROXY pattern, it will probably become necessary to use a factory to create the persistent objects. Or, if through unit testing, I come across situations in which I must spoof the creator of an object, I will likely use a factory. But I don’t start out assuming that factories will be necessary.

Factories are a complexity that can often be avoided, especially in the early phases of an evolving design. When they are used by default, factories dramatically increase the difficulty of extending the design. In order to create a new class, one may have to create as many as four new classes: the two interface classes that represent the new class and its factory, and the two concrete classes that implement those interfaces.

Conclusion

Factories are powerful tools. They can be of great benefit in conforming to DIP. They allow high-level policy modules to create instances of objects without depending on the concrete implementations of those objects. Factories also make it possible to swap in completely different families of implementations for a group of classes. However, factories are a complexity that can often be avoided. Using them by default is seldom the best course of action.

31. Composite

Of course, not all 1:many relationships can be converted to 1:1 by using COMPOSITE. Only those in which every object in the list is treated identically are candidates. For example, if you maintained a list of employees and searched through that list for employees whose paydate is today, you probably shouldn’t use the COMPOSITE pattern, because you wouldn’t be treating all the employees identically.

Conclusion

Quite a few 1:many relationships qualify for conversion to COMPOSITE. The advantages are significant. Instead of duplicating the list management and iteration code in each of the clients, that code appears only once in the composite class.
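A minimal Java sketch of the conversion, using invented Command classes rather than the book’s example:

import java.util.ArrayList;
import java.util.List;

interface Command {
    void execute();
}

// The client still holds exactly one Command (1:1). The list management
// and iteration live here, once, instead of in every client.
class CompositeCommand implements Command {
    private final List<Command> commands = new ArrayList<>();

    void add(Command command) {
        commands.add(command);
    }

    public void execute() {
        for (Command command : commands)
            command.execute(); // every element is treated identically
    }
}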

32. Observer: Evolving into a Pattern

This chapter serves a special purpose. In it, I describe the OBSERVER pattern, but that is a minor objective. The primary objective of this chapter is to demonstrate how your design and code can evolve to use a pattern.

OBSERVER is one of those patterns that, once you understand it, you see uses for it everywhere. The indirection is very cool. You can register observers with all kinds of objects rather than writing those objects to explicitly call you. Although this indirection is a useful way to manage dependencies, it can easily be taken to extremes. Overuse of OBSERVER tends to make systems difficult to understand and trace.
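A minimal sketch of that registration mechanism, with invented names:

import java.util.ArrayList;
import java.util.List;

interface Observer {
    void update();
}

// The subject knows only the Observer interface; it never names the
// concrete objects that want to hear about its changes.
class Subject {
    private final List<Observer> observers = new ArrayList<>();

    public void registerObserver(Observer observer) {
        observers.add(observer);
    }

    protected void notifyObservers() {
        for (Observer observer : observers)
            observer.update();
    }
}

class Clock extends Subject {
    void tick() {
        // ... advance the time, then let the observers find out indirectly
        notifyObservers();
    }
}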

Conclusion

If you are familiar with design patterns, an appropriate pattern will very likely pop into your mind when you’re faced with a design problem. The question, then, is whether to implement that pattern directly or instead to evolve it into place through a series of small steps. This chapter showed what the second option is like. Rather than simply leaping to the conclusion that OBSERVER was the best choice for the problem at hand, I slowly maneuvered the code in that direction.

At any point during that evolution, I could have found that my problem was solved and stopped evolving. Or, I might have found that I could solve the problem by changing course and going in a different direction.

33. Abstract Server, Adapter, and Bridge

The ADAPTER solution is simple and direct. It keeps all the dependencies pointing in the right direction, and it’s very simple to implement. The BRIDGE solution is quite a bit more complex. I would not suggest embarking down that road until you had very strong evidence that you needed to completely separate the connection and communication policies and that you needed to add new connection policies.
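For contrast, a bare-bones Java sketch of the ADAPTER side; the Switch/Light names follow the spirit of the chapter’s example, but the details here are assumptions:

// The high-level policy owns the abstraction it depends on.
interface Switchable {
    void turnOn();
}

class Switch {
    private final Switchable device;

    Switch(Switchable device) {
        this.device = device;
    }

    void engage() {
        device.turnOn();
    }
}

// A class we cannot, or do not want to, make implement Switchable.
class Light {
    void turnOnLight() {
        System.out.println("Light on");
    }
}

// The adapter keeps the dependency pointing from detail to abstraction.
class LightAdapter implements Switchable {
    private final Light light = new Light();

    public void turnOn() {
        light.turnOnLight();
    }
}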

The lesson here, as always, is that a pattern is something that comes with both costs and benefits. You should find yourself using the ones that best fit the problem at hand.

34. Proxy and Gateway: Managing Third-Party APIs

There are many barriers in software systems. When we move data from our program into the database, we are crossing the database barrier. When we send a message from one computer to another, we are crossing the network barrier.

Crossing these barriers can be complicated. If we aren’t careful, our software will be more about the barriers than about the problem to be solved. The PROXY pattern helps us cross such barriers while keeping the program centered on the problem to be solved.

It is very tempting to anticipate the need for PROXY long before the need exists. This is almost never a good idea. I recommend that you start with TABLE DATA GATEWAY or some other kind of FACADE and then refactor as necessary. You’ll save yourself time and trouble if you do.
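A sketch of what starting simple might look like; EmployeeGateway and its methods are hypothetical, not an API from the book:

import java.util.HashMap;
import java.util.Map;

class Employee {
    final int id;
    final String name;

    Employee(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

// The application sees only this interface. All the barrier-crossing
// code stays behind it and can be replaced (even by a PROXY) later.
interface EmployeeGateway {
    Employee findById(int id);
    void insert(Employee employee);
}

// In-memory stand-in; a real implementation would issue SQL here.
class InMemoryEmployeeGateway implements EmployeeGateway {
    private final Map<Integer, Employee> rows = new HashMap<>();

    public Employee findById(int id) {
        return rows.get(id);
    }

    public void insert(Employee employee) {
        rows.put(employee.id, employee);
    }
}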

35. Visitor

The VISITOR family allows new methods to be added to existing hierarchies without modifying the hierarchies. The patterns in this family are:

  • VISITOR
  • ACYCLIC VISITOR
  • DECORATOR
  • EXTENSION OBJECT

Visitor

The Modem interface contains the generic methods that all modems can implement. Three derivatives are shown: one that drives a Hayes modem, one that drives a Zoom modem, and one that drives the modem card produced by Ernie, one of our hardware engineers. How can we configure these modems for UNIX without putting the ConfigureForUnix method in the Modem interface? We can use a technique called dual dispatch, the mechanism at the heart of the VISITOR pattern.

Having built this structure, new operating system configuration functions can be added by adding new derivatives of ModemVisitor without altering the Modem hierarchy in any way. So the VISITOR pattern substitutes derivatives of ModemVisitor for methods in the Modem hierarchy.
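A compact Java sketch of the dual dispatch, using the chapter’s names (the configuration details are assumptions):

interface Modem {
    void accept(ModemVisitor visitor);
}

interface ModemVisitor {
    void visit(HayesModem modem);
    void visit(ZoomModem modem);
    void visit(ErnieModem modem);
}

class HayesModem implements Modem {
    String configuration;
    // First dispatch: polymorphic on the concrete modem type.
    public void accept(ModemVisitor visitor) { visitor.visit(this); }
}

class ZoomModem implements Modem {
    String configuration;
    public void accept(ModemVisitor visitor) { visitor.visit(this); }
}

class ErnieModem implements Modem {
    String configuration;
    public void accept(ModemVisitor visitor) { visitor.visit(this); }
}

// UNIX configuration added without touching the Modem hierarchy.
// Second dispatch: overload resolution on the concrete modem type.
class UnixModemConfigurator implements ModemVisitor {
    public void visit(HayesModem modem) { modem.configuration = "unix-hayes"; }
    public void visit(ZoomModem modem)  { modem.configuration = "unix-zoom"; }
    public void visit(ErnieModem modem) { modem.configuration = "unix-ernie"; }
}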

Acyclic Visitor

Note that the base class of the visited (Modem) hierarchy depends on the base class of the visitor hierarchy (ModemVisitor). Note also that the base class of the visitor hierarchy has a function for each derivative of the visited hierarchy. This cycle of dependencies ties all the visited derivatives—all the modems—together, making it difficult to compile the visitor structure incrementally or to add new derivatives to the visited hierarchy.

The VISITOR pattern works well in programs in which the hierarchy to be modified does not need new derivatives very often. If Hayes, Zoom, and Ernie were the only Modem derivatives that were likely to be needed or if the incidence of new Modem derivatives was expected to be infrequent, VISITOR would be appropriate.

On the other hand, if the visited hierarchy is highly volatile, such that many new derivatives will need to be created, the visitor base class (e.g., ModemVisitor) will have to be modified and recompiled along with all its derivatives every time a new derivative is added to the visited hierarchy. ACYCLIC VISITOR can be used to solve these problems.

Uses of Visitor
  • Report generation: The VISITOR pattern is commonly used to walk large data structures and to generate reports

Each new report can be written as a new visitor. We write the Accept function of Assembly to visit the visitor and also call Accept on all the contained Part instances. Thus, the entire tree is traversed. For each node in the tree, the appropriate Visit function is called on the report. The report accumulates the necessary statistics. The report can then be queried for the interesting data and presented to the user.

In general, the VISITOR pattern can be used in any application having a data structure that needs to be interpreted in various ways. Compilers often create intermediate data structures that represent syntactically correct source code. These data structures are then used to generate compiled code. One could imagine visitors for each processor and/or optimization scheme. One could also imagine a visitor that converted the intermediate data structure into a cross-reference listing or even a UML diagram.

Decorator

Once again, the Common Closure Principle (CCP) comes into play. We want to separate those things that change for different reasons. We can also invoke the Single-Responsibility Principle (SRP), since the need to dial loudly has nothing to do with the intrinsic functions of Modem and should therefore not be part of Modem.

DECORATOR solves the issue by creating a completely new class: LoudDialModem. LoudDialModem derives from Modem and delegates to a contained instance of Modem, catching the Dial function and setting the volume high before delegating.
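A sketch of that structure; the Modem methods shown here are assumptions:

interface Modem {
    void dial(String phoneNumber);
    void setSpeakerVolume(int volume);
}

// Derives from (implements) Modem and delegates to a contained Modem,
// intercepting dial() to set the volume high first.
class LoudDialModem implements Modem {
    private final Modem inner;

    LoudDialModem(Modem inner) {
        this.inner = inner;
    }

    public void dial(String phoneNumber) {
        inner.setSpeakerVolume(10); // the decoration: dial loudly
        inner.dial(phoneNumber);
    }

    public void setSpeakerVolume(int volume) {
        inner.setSpeakerVolume(volume);
    }
}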

Extension Object

Still another way to add functionality to a hierarchy without changing it is to use the EXTENSION OBJECT pattern. This pattern is more complex than the others but is also much more powerful and flexible. Each object in the hierarchy maintains a list of special extension objects. Each object also provides a method that allows the extension object to be looked up by name. The extension object provides methods that manipulate the original hierarchy object.
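A minimal Java sketch of the lookup mechanism, with hypothetical Part classes and a CSV-report extension:

import java.util.HashMap;
import java.util.Map;

interface PartExtension {}

abstract class Part {
    private final Map<String, PartExtension> extensions = new HashMap<>();

    void addExtension(String name, PartExtension extension) {
        extensions.put(name, extension);
    }

    // Extensions are looked up by name, so the Part hierarchy never
    // needs to change when new functionality is added.
    PartExtension getExtension(String name) {
        return extensions.get(name);
    }
}

class PiecePart extends Part {
    final String partNumber;

    PiecePart(String partNumber) {
        this.partNumber = partNumber;
    }
}

interface CsvExtension extends PartExtension {
    String toCsv();
}

class PiecePartCsvExtension implements CsvExtension {
    private final PiecePart part;

    PiecePartCsvExtension(PiecePart part) {
        this.part = part;
        part.addExtension("CSV", this); // the extension installs itself
    }

    public String toCsv() {
        return "PiecePart," + part.partNumber;
    }
}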

The fact that the extension objects can be loaded into the object creates a great deal of flexibility. Certain extension objects can be inserted or deleted from objects depending upon the state of the system. It would be easy to get carried away with this flexibility. For the most part, you probably won’t find it necessary.

Conclusion

The VISITOR family of patterns provides us with a number of ways to modify the behavior of a hierarchy of classes without having to change them. Thus, they help us maintain the Open/Closed Principle. They also provide mechanisms for segregating various kinds of functionality, keeping classes from getting cluttered with many different functions. As such, they help us maintain the Common Closure Principle. It should be clear that LSP and DIP are also applied to the structure of the VISITOR family.

The VISITOR patterns are seductive. It is easy to get carried away with them. Use them when they help, but maintain a healthy skepticism about their necessity. Often, something that can be solved with a VISITOR can also be solved by something simpler.

36. State

Finite state automata are among the most useful abstractions in the software arsenal and are almost universally applicable. They provide a simple and elegant way to explore and define the behavior of a complex system. They also provide a powerful implementation strategy that is easy to understand and easy to modify. I use them in all levels of a system, from controlling the high-level GUI to the lowest-level communication protocols.

  • Transition Table
  • Table Interpretation

One powerful benefit is that the code that builds the transition table reads like a canonical state transition table. The four AddTransition lines can be very easily understood. The logic of the state machine is all in one place and is not contaminated with the implementation of the actions.

Maintaining an FSM like this is very easy compared to the nested switch/case implementation. To add a new transition, one simply adds a new AddTransition line to the Turnstile constructor.
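A hypothetical Java rendering of the table-driven turnstile; the state, event, and action names follow the SMC listing shown later in this chapter:

import java.util.ArrayList;
import java.util.List;

enum TurnstileState { LOCKED, UNLOCKED }
enum TurnstileEvent { COIN, PASS }

class Turnstile {
    private TurnstileState state = TurnstileState.LOCKED;
    private final List<Transition> transitions = new ArrayList<>();

    Turnstile() {
        // The four addTransition lines read like the transition table.
        addTransition(TurnstileState.LOCKED,   TurnstileEvent.COIN, TurnstileState.UNLOCKED, this::unlock);
        addTransition(TurnstileState.LOCKED,   TurnstileEvent.PASS, TurnstileState.LOCKED,   this::alarm);
        addTransition(TurnstileState.UNLOCKED, TurnstileEvent.COIN, TurnstileState.UNLOCKED, this::thankYou);
        addTransition(TurnstileState.UNLOCKED, TurnstileEvent.PASS, TurnstileState.LOCKED,   this::lock);
    }

    void handleEvent(TurnstileEvent event) {
        for (Transition t : transitions) {
            if (t.current == state && t.event == event) {
                state = t.next;   // transition first...
                t.action.run();   // ...then perform the action
                return;
            }
        }
    }

    private void addTransition(TurnstileState current, TurnstileEvent event,
                               TurnstileState next, Runnable action) {
        transitions.add(new Transition(current, event, next, action));
    }

    private void unlock()   { System.out.println("unlock"); }
    private void alarm()    { System.out.println("alarm"); }
    private void thankYou() { System.out.println("thank you"); }
    private void lock()     { System.out.println("lock"); }

    private static class Transition {
        final TurnstileState current;
        final TurnstileEvent event;
        final TurnstileState next;
        final Runnable action;

        Transition(TurnstileState current, TurnstileEvent event,
                   TurnstileState next, Runnable action) {
            this.current = current;
            this.event = event;
            this.next = next;
            this.action = action;
        }
    }
}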

Another benefit of this approach is that the table can easily be changed at runtime. This allows for dynamic alteration of the logic of the state machine. I have used mechanisms like that to allow hot patching of FSMs.

Still another benefit is that multiple tables can be created, each representing a different FSM logic. These tables can be selected at runtime, based on starting conditions.

The cost of the approach is primarily speed. It takes time to search through the transition table. For large state machines, that time may become significant.

The State Pattern

Another technique for implementing FSMs is the STATE pattern. This pattern combines much of the efficiency of the nested switch/case statement with much of the flexibility of interpreting a transition table.

Figure 36-2 is strongly reminiscent of the STRATEGY pattern. Both have a context class, and both delegate to a polymorphic base class that has several derivatives. The difference (see Figure 36-3) is that in STATE, the derivatives hold a reference back to the context class. The primary function of the derivatives is to select and invoke methods of the context class through that reference. In the STRATEGY pattern, no such constraint or intent exists. The STRATEGY derivatives are not required to hold a reference to the context and are not required to call methods on the context. Thus, all instances of the STATE pattern are also instances of the STRATEGY pattern, but not all instances of STRATEGY are STATE.

Costs and Benefits

The STATE pattern provides a strong separation between the actions and the logic of the state machine. The actions are implemented in the Context class, and the logic is distributed through the derivatives of the State class. This makes it very simple to change one without affecting the other. For example, it would be very easy to reuse the actions of the Context class with a different state logic by simply using a different set of derivatives of the State class. Alternatively, we could create Context subclasses that modify or replace the actions without affecting the logic of the State derivatives.
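A minimal Java sketch of the STATE version of the same turnstile (the class names are assumptions):

interface TurnstileState {
    void coin(Turnstile turnstile);
    void pass(Turnstile turnstile);
}

// The logic of the FSM lives in the State derivatives...
class LockedTurnstileState implements TurnstileState {
    public void coin(Turnstile t) {
        t.setUnlocked();
        t.unlock();
    }

    public void pass(Turnstile t) {
        t.alarm();
    }
}

class UnlockedTurnstileState implements TurnstileState {
    public void coin(Turnstile t) {
        t.thankYou();
    }

    public void pass(Turnstile t) {
        t.setLocked();
        t.lock();
    }
}

// ...while the actions live in the context class.
class Turnstile {
    private static final TurnstileState LOCKED = new LockedTurnstileState();
    private static final TurnstileState UNLOCKED = new UnlockedTurnstileState();
    private TurnstileState state = LOCKED;

    void coin() { state.coin(this); }  // events delegate to the state object
    void pass() { state.pass(this); }

    void setLocked()   { state = LOCKED; }
    void setUnlocked() { state = UNLOCKED; }

    void unlock()   { System.out.println("unlock"); }
    void lock()     { System.out.println("lock"); }
    void alarm()    { System.out.println("alarm"); }
    void thankYou() { System.out.println("thank you"); }
}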

Another benefit of this technique is that it is very efficient. It is probably just as efficient as the nested switch/case implementation. Thus, we have the flexibility of the table-driven approach with the efficiency of the nested switch/case approach.

The cost of this technique is twofold. First, the writing of the State derivatives is tedious at best. Writing a state machine with 20 states can be mind numbing. Second, the logic is distributed. There is no single place to go to see it all. This makes the code difficult to maintain. This is reminiscent of the obscurity of the nested switch/case approach.

The State Machine Compiler (SMC)

The tedium of writing the derivatives of State and the need for a single place to express the logic of the state machine led me to write SMC, the state machine compiler that I described in Chapter 15. The turnstile FSM, expressed in SMC’s input notation:

FSMName Turnstile
Context TurnstileActions
Initial Locked
Exception FSMError
{
  Locked
  {
    Coin  Unlocked  Unlock
    Pass  Locked    Alarm
  }
  Unlocked
  {
    Coin  Unlocked  Thankyou
    Pass  Locked    Lock
  }
}

Classes of State Machine Application

High-Level Application Policies for GUIs

In modern GUIs, a great deal of work is put into keeping common features on the screen at all times and making sure that the user does not get confused by hidden states.

It is ironic, then, that the code that implements these “stateless” GUIs is strongly state driven. In such GUIs, the code must figure out which menu items and buttons to gray out, which subwindows should appear, which tab should be activated, where the focus ought to be put, and so on. All these decisions are decisions about the state of the interface.

GUI Interaction Controllers

Imagine that you want to allow your users to draw rectangles on the screen. The gestures they use are as follows. A user clicks the rectangle icon in the palette window, positions the mouse in the canvas window at one corner of the rectangle, presses the mouse button, and drags the mouse toward the desired second corner. As the user drags, an animated image of the potential rectangle appears on the screen. The user manipulates the rectangle to the desired shape by continuing to hold the mouse button down while dragging the mouse. When the rectangle is right, the user releases the mouse button. The program then stops the animation and draws a fixed rectangle on the screen.

Distributed Processing

Distributed processing is yet another situation in which the state of the system changes based on incoming events. For example, suppose that you had to transfer a large block of information from one node on a network to another. Suppose also that because network response time is precious, you need to chop up the block and send it as a group of small packets.

Conclusion

Finite state machines are underutilized. In many scenarios their use would help to create clearer, simpler, more flexible, and more accurate code. Making use of the STATE pattern and simple tools for generating the code from state transition tables can be of great assistance.

Appendix. What Is Software?

“Are software developers engineers?”

Engineers produce documents, not things; other people take those documents and produce things. So the question becomes: out of all the documentation that software projects normally generate, is there anything that could truly be considered an engineering document? The answer is yes, there is exactly one such document: the source code.

For almost 10 years I have felt that the software industry collectively misses a subtle point about the difference between developing a software design and what a software design really is.

Designing software is an exercise in managing complexity. The complexity exists within the software design itself, within the software organization of the company, and within the industry as a whole. Software design is very similar to systems design. It can span multiple technologies and often involves multiple sub-disciplines.

Most current software development processes try to segregate the different phases of software design into separate pigeonholes. The top-level design must be completed and frozen before any code is written. Testing and debugging are necessary just to weed out the construction mistakes. In between are the programmers, the construction workers of the software industry. Many believe that if we could just get programmers to quit “hacking” and “build” the designs as given to them (and in the process make fewer errors), then software development might mature into a true engineering discipline. That is not likely to happen as long as the process ignores the engineering and economic realities.

On any software project of typical size, problems like these are guaranteed to come up. Despite all attempts to prevent it, important details will be overlooked. This is the difference between craft and engineering. Experience can lead us in the right direction. This is craft. Experience will only take us so far into uncharted territory. Then we must take what we started with and make it better through a controlled process of refinement. This is engineering.

  • Real software runs on computers. It is a sequence of ones and zeros that is stored on some magnetic media. It is not a program listing in C++ (or any other programming language).
  • A program listing is a document that represents a software design. Compilers and linkers actually build software designs.
  • Real software is incredibly cheap to build, and getting cheaper all the time as computers get faster.
  • Real software is incredibly expensive to design. This is true because software is incredibly complex and because practically all the steps of a software project are part of the design process.
  • Programming is a design activity—a good software design process recognizes this and does not hesitate to code when coding makes sense.
  • Coding actually makes sense more often than believed. Often the process of rendering the design in code will reveal oversights and the need for additional design effort. The earlier this occurs, the better the design will be.
  • Since software is so cheap to build, formal engineering validation methods are not of much use in real-world software development. It is easier and cheaper to just build the design and test it than to try to prove it.
  • Testing and debugging are design activities—they are the software equivalent of the design validation and refinement processes of other engineering disciplines. A good software design process recognizes this and does not try to shortchange the steps.
  • There are other design activities—call them top level design, module design, structural design, architectural design, or whatever. A good software design process recognizes this and deliberately includes the steps.
  • All design activities interact. A good software design process recognizes this and allows the design to change, sometimes radically, as various design steps reveal the need.
  • Many different software design notations are potentially useful—as auxiliary documentation and as tools to help facilitate the design process. They are not a software design.
  • Software development is still more a craft than an engineering discipline. This is primarily because of a lack of rigor in the critical processes of validating and improving a design.
  • Ultimately, real advances in software development depend upon advances in programming techniques, which in turn mean advances in programming languages. C++ is such an advance. It has exploded in popularity because it is a mainstream programming language that directly supports better software design.
  • C++ is a step in the right direction, but still more advances are needed.