The Coad Letter: Modeling and Design Edition, Issue 117, Strategies for Maintaining and Improving Quality, by Stephen Palmer

By: Coad Letter Modeling Editor

Abstract: Delivering high quality software is a stated goal of many development teams, and a very admirable goal it is. To reach it, we need to decide what high quality really means, and we need some idea of how to achieve it.

Quality is often defined as the level of conformance to agreed requirements. However, as Gerald Weinberg [1] points out, there is a question of whose requirements these are. The requirements of the developers working on a piece of software are different from the requirements of those who use it. These two perspectives provide a useful approach to discussing quality: users are concerned with "external quality" and developers are concerned with "internal quality".

External Quality

Ask the users of a software product whether they think it is of high quality and they will answer based on externally observable attributes of the software. It is likely to be considered of low quality if it hangs or crashes too often, is too slow in producing results, sometimes produces the wrong results, does not work well with other pieces of software, or has an overcomplicated or tedious user interface. In contrast, if a piece of software does what it should reasonably quickly under normal circumstances, handles abnormal circumstances gracefully, works well with other software components, and is relatively easy to use, then it is likely to be considered of high quality.

The relative importance of these various factors obviously differs for different types of software but it can also vary for different kinds of user. For example, occasional users may value ease of use over performance but everyday users may value performance over ease of use. A particular functional defect might be of no importance to users doing one kind of work but critical to users doing another.

We can define external quality as the level a software product achieves for the following factors:

  • functional correctness, completeness and reliability - how often does it produce the right results under normal circumstances?
  • speed - does it produce results fast enough under normal circumstances?
  • robustness - does it handle abnormal circumstances sensibly?
  • compatibility - does it interoperate with other software components well?
  • user friendliness - is it easy and intuitive to use?

...and more importantly, how appropriate the balance between these factors is for the kind of software product in question.

Internal Quality

For developers tasked with extending, enhancing, or otherwise maintaining the software, the external quality is only one part of the picture. Developers are as interested in the quality of the design and the quality of the source code as they are in the external quality of the software. For example, are the design and the source code easily understood? Are they efficient? Do they comply with accepted standards, patterns, and best practices?

Just as with external quality, the relative importance of the different internal quality factors of a piece of software depends on the type of software and kind of developers involved.

Therefore, we can define the internal quality of a piece of software as its level of, and relative balance between, the following factors:

  • elegance - are the design and code as simple as possible, but not so simple that they do not do the job well?
  • efficiency - does it avoid unnecessary overuse of resources?
  • comprehensibility - are the design and code easy to understand?
  • flexibility - can it be easily adapted to do things differently?
  • compliance - does it conform to documented, agreed standards, patterns and best practices?

Initially, a piece of software with low internal quality may exhibit high external quality. However, unless the internal quality is improved, the external quality is bound to suffer eventually as developers try to fix bugs and add new features over time. This is where refactoring becomes truly useful. Refactoring is about making improvements to the internal structure of a piece of code without changing its external behavior [2]. Of course, if we can start with high internal quality then the amount of refactoring needed is reduced.
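
For example, the following sketch (hypothetical Java, loosely in the spirit of Fowler's Extract Method example) shows how a refactoring improves internal structure while leaving external behavior untouched:

    // Before: one method mixes the charge calculation with the formatting.
    class Account {
        private double balance;
        private double overdraftFee;

        String printStatement() {
            double charge = (balance < 0) ? overdraftFee : 0.0;
            return "Balance: " + balance + ", charges: " + charge;
        }
    }

    // After: the calculation is extracted into a well-named private method.
    // Callers of printStatement() see exactly the same output as before.
    class Account {
        private double balance;
        private double overdraftFee;

        String printStatement() {
            return "Balance: " + balance + ", charges: " + overdraftCharge();
        }

        private double overdraftCharge() {
            return (balance < 0) ? overdraftFee : 0.0;
        }
    }

The external behavior, and therefore the external quality, is unchanged; only the internal quality improves, which is exactly the kind of change no external test will ever detect.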

In incremental or iterative development, maintaining internal quality becomes even more important. In fact, I would assert that maintaining a high level of functional correctness and performance in a software product throughout an aggressive release schedule demands an equal emphasis on maintaining the internal quality and conceptual integrity of that software. To quote a very cringey old television advert I saw in Singapore, "Wellness comes from within".

Strategies for improving internal and external quality

Study after study over the last thirty years has shown that it is far easier and much more cost-effective to fix a problem close to the point of its introduction than at some point significantly further into the future.

Some of the reasons for this are obvious:

  • The later a problem is identified, the higher the likelihood that further work has been built on top of it. That work will need to be rechecked and possibly redone.

  • The later a problem is identified, the more groups of people are involved and the more administrative process is required to have it fixed. For example, a developer spotting a defect while running their own unit tests on a new piece of work needs very little administrative process to fix it (see the sketch after this list). In contrast, a defect in a shipped product discovered and reported by a customer usually requires the involvement of the technical support, development, testing, release, and documentation teams.

  • The longer the distance between introducing a defect and identifying it, the greater the frustration, irritation and resistance of the developers, who have to go back to designs and source code they thought were finished long ago and rework them.
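
To make the first point concrete, here is a minimal sketch (hypothetical Java using JUnit; the class and test names are invented for illustration) of the kind of developer-owned unit test that catches a defect minutes after it is introduced, with no bug-tracking paperwork involved:

    import junit.framework.TestCase;

    // The class under test: a deliberately tiny tax calculation.
    class TaxCalculator {
        double taxOn(double amount) {
            // No tax below the 100.00 threshold, 5% at or above it.
            return amount < 100.0 ? 0.0 : amount * 0.05;
        }
    }

    // The developer's own unit test. A careless change to the threshold
    // logic fails here immediately, on the developer's own machine.
    public class TaxCalculatorTest extends TestCase {
        public void testZeroRateBelowThreshold() {
            assertEquals(0.0, new TaxCalculator().taxOn(99.0), 0.001);
        }

        public void testStandardRateAtThreshold() {
            assertEquals(5.0, new TaxCalculator().taxOn(100.0), 0.001);
        }
    }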

The following are some proven strategies for the early detection of problems in internal and external quality.

Strategy 1: Use a highly iterative process

Shorten the duration of each analysis, design, implementation, and test (ADIT) sequence by using a highly iterative process. A traditional waterfall process that does all the analysis first, followed by all the design, then all the coding, and then all the testing, obviously has the longest possible 'distance' between the end of an analysis, design or implementation activity and the start of the testing activity where many analysis, design and coding defects are identified. A highly iterative development process breaks the deliverables of a software project down into small pieces and applies the development process to each of those pieces. Therefore, using a highly iterative process means that the analysis, design and coding of each piece reaches testing much more quickly.

A few other points are worth noting in this respect:

  1. I know that analysis, design, implementation, and testing activities are never done in a pure sequence, but as my old economics lecturer used to say, 'let's assume the curve on the graph is really a straight line to make the math easier; it's the economic principle that is important at this point, not the details of the math'. So, for the purposes of this discussion, I am making an analogous assumption that analysis, design, implementation, and testing are essentially a sequence of activities.

  2. As we have said previously, the longer the possible 'distance' between introducing a defect and identifying it, the more formal the process needed for bug reporting, tracking, fixing and re-testing, because more people and longer intervals of time are involved. It follows, therefore, that highly incremental and iterative processes can be less formal than their waterfall cousins because they significantly reduce this 'distance'.

  3. Testing only examines external quality factors; it tells us nothing about internal quality.

Strategy 2: Use collaborative analysis and design sessions

Design is all about examining the trade-offs between various alternatives and picking the one that best solves the problem under consideration. Picking the wrong solution can lead to considerable amounts of rework (refactoring) later on. Good designs earlier mean high internal quality earlier, which in turn makes it easier to achieve high external quality earlier.

Human beings, even the best of us, are fallible and have off days. However, in many software development organizations individuals are expected to make significant design decisions every day. Sometimes these mistakes are picked up at design reviews, but surprisingly few organizations practice these, and even when a mistake is caught in a review it often means a significant amount of time has already been lost. An alternative approach is to use collaborative design sessions, where design is done in small teams around flipcharts or whiteboards. More minds applied means more ideas considered, more alternatives examined, more chance of a truly elegant solution, and less chance of significant design mistakes.

However, for collaborative design sessions to be more productive than individuals working separately requires discipline and management. Facilitating team design sessions, and knowing when to work together and when to work separately, is a highly valuable skill in a development team lead or chief programmer. The Coad Letter #40, Lessons Learnt from Fred, contains some very useful tips and techniques for working well in small groups.

Strategy 3: Use design and code inspections/reviews/walkthroughs

The use of peer reviews, walkthroughs or inspections can significantly shorten the distance between the introduction of defects and their detection. When done well, inspections find more defects than testing, and also find different defects from those that testing finds. In contrast to testing, inspections improve the internal quality of software by examining the analysis, design and source code themselves.

The qualifying phrase, "when done well", is important. Done badly, inspections and reviews rapidly become argumentative, demoralizing, intimidating and soul-destroying wastes of time. It is worth, at the very least, reading a good book on the subject before introducing inspections into a team's development culture.

Inspections can be time-consuming and tools that help speed up the process are very useful. Tools like Borland Together can produce UML class and sequence diagrams directly from source code. These can be useful in visualizing and understanding the high-level structure of code being reviewed before examining the details. Some of my friends now use Together's generated sequence diagrams for inspections instead of the actual source code with reportedly good results.

I have not had the opportunity to try this yet, but it sounds like an interesting idea. It avoids the usual, almost useless, petty arguments over source code layout (something that can also be fixed by invoking Together's code formatter with agreed settings prior to a review). Clued-up developers can also run Together's automated audits and metrics over their source code prior to a review and fix any flagged noncompliance with coding standards. Clued-up reviewers can do the same to highlight areas of the design or code that could be improved. The use of automated audits and metrics is the topic of the next strategy.

Strategy 4: Use targeted automated audits and metrics

Use targeted automated audits and metrics to highlight potential problem areas in design and source code. Borland Together comes with a large set of automated source code audits and metrics, some of them very sophisticated. These can be run as part of a regular build as well as on an ad hoc basis, for example prior to inspections or unit testing. Automated audits can check simple things like the correct formatting of the names of attributes, methods, and classes. They can also check more sophisticated things like private and local variables that are never used, unhelpful hiding of variable names, and inappropriate overriding of methods. Metrics measure simple things like lines of code, number of methods in a class, and number of classes. They can also measure more interesting things such as degrees of coupling and levels of complexity.
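
As a small illustration (hypothetical Java showing the kind of code such audits flag, not Together's actual output), each of the commented lines below would typically trigger an audit warning:

    class OrderProcessor {
        private int retryCount;       // audit: private field is never used
        private double total;

        void addItem(double total) {  // audit: parameter hides the field 'total'
            this.total = this.total + total;
        }

        // audit: equals() overridden without a matching hashCode()
        public boolean equals(Object other) {
            return (other instanceof OrderProcessor)
                    && ((OrderProcessor) other).total == this.total;
        }
    }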

It is important to target the audits and metrics. Agree as a team or organization on the appropriate set of automated audits and metrics and their parameters. Otherwise, the results of running the audits and metrics are as likely to confuse, frustrate, and waste time as they are to help pinpoint areas of poor internal quality. For example, the value of reporting hundreds of source code layout issues is highly dubious, since they can generally be fixed in one pass by a code formatter.

Note: It would also be really cool if Together could sew together the settings and parameters of the audits and metrics into some sort of programming standards document. So many organizations' QA departments require one. Java has the Sun coding conventions of course but they only really cover the basics. Maybe in a future release :-)

Strategy 5: Apply analysis, design and implementation patterns

Apply analysis, design and implementation patterns to reuse proven solutions in analysis, design and implementation. This strategy probably needs little explanation. Many developers are aware of and recognize the value of analysis, design and coding patterns. Using proven building blocks reduces the likelihood of poor designs being chosen.
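
As a small illustration (hypothetical Java; the names are invented for this sketch), here is the skeleton of the classic Strategy design pattern, the kind of proven building block this strategy is about:

    // Strategy pattern: the pricing algorithm can vary independently
    // of the client code that uses it.
    interface PricingStrategy {
        double priceFor(double listPrice);
    }

    class StandardPricing implements PricingStrategy {
        public double priceFor(double listPrice) { return listPrice; }
    }

    class DiscountPricing implements PricingStrategy {
        public double priceFor(double listPrice) { return listPrice * 0.9; }
    }

    class Quote {
        private final PricingStrategy pricing;

        Quote(PricingStrategy pricing) { this.pricing = pricing; }

        double total(double listPrice) {
            // The client simply delegates to whichever strategy it was given.
            return pricing.priceFor(listPrice);
        }
    }

Because the pattern is well known, a reviewer can see at a glance what the structure is for and concentrate on whether it has been applied appropriately.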

As with automated audits and metrics, it is important to agree as a team and organization on which patterns and variations of patterns you are going to use. Without this agreement the use of patterns loses much of its value because any pattern can be used whether it is appropriate or not.

Peter Coad and the designers of Borland Together recognized the value of patterns very early on and Together includes a number of configurable wizards that generate the skeleton code (and therefore the UML class diagrams in Together) for a number of popular analysis, design and coding patterns. This list can be extended using Together's pattern template editor for very simple patterns and the open Java API for more sophisticated patterns.

Strategy 6: Communicate design clearly at all levels of abstraction

Communicate design clearly at all levels of abstraction using the most appropriate means of communication: text, lists, tables and/or pictures.

Miscommunication and misunderstanding are behind many significant defects in the analysis and design of software components and systems. Reducing these problems can make a significant improvement to a system's internal and external quality.

They say a picture is worth a thousand words but sometimes a simple list or a few lines of source code communicate far better than any number of pictures. A good software development team minimizes communication disconnects and misunderstandings by using the most appropriate means available to communicate with the different roles and personalities within a development team and with the other stakeholders in a project.

Again, tools like Borland Together can help reduce the time and effort required to do this. Together's source code parsing can be used to quickly build UML class and interaction diagrams at various levels of detail, and the built-in document generator can be customized to produce useful, up-to-date documents and web pages from those diagrams and the source code.

Conclusion

As Mac Felsing points out in our book, A Practical Guide to Feature-Driven Development [3], splitting quality into just two extreme perspectives is useful, but reality is a little more complicated. Other groups of people also have slightly different sets of requirements and priorities: the project managers who are focused on the delivery of the software, the product managers who are interested in maintaining the conceptual integrity of the product, the marketing team who are interested in this year's fashionable features, and so on. We end up with internal and external quality being two extremes in a spectrum of viewpoints on quality. However, introducing strategies to improve the two extremes of internal and external quality will go a long way towards satisfying the many viewpoints in between.

About the Author

Stephen Palmer is the principal consultant at SteP 10 (www.step-10.com). An expert in pragmatic object-oriented analysis and design, Stephen works with teams all over the world helping them create better software faster.

Acknowledgment

Many thanks to Richard Pitt at Borland UK for reviewing this in his spare time and as a result improving both the content and style.

References

[1] Weinberg, G., Quality Software Management, Vol. 1: Systems Thinking, Dorset House, ISBN 0-932633-22-6

[2] Fowler, M., Refactoring: Improving the Design of Existing Code, Addison-Wesley, ISBN 0-201-48567-2

[3] Palmer, S., Felsing, J.M., A Practical Guide to Feature-Driven Development, Prentice Hall, ISBN 0-13-067615-2


To read more of Stephen Palmer's work, visit http://www.step-10.com/

For more information on Agile Management, visit http://www.agilemanagement.net/

Use case modeling information can be found at http://www.advancedusecases.com/

For the latest techniques in the Unified Modeling Language and agile software development processes, subscribe to The Coad Letter. Visit the Borland Developer Network for all of the latest information on how to deliver better software faster.


Published on: 1/30/2004 12:00:00 AM

