13th International Conference on Software Quality

Software Division of
The American Society for Quality



Tuesday Concurrent Sessions
Gerald Weinberg
Karl Wiegers
Robin Goldsmith
Neil Potter
Mark Paulk


Mark Paulk - Invited Speaker

Mark is a Senior Systems Scientist at the Institute for Software Research International (ISRI) in the School of Computer Science at Carnegie Mellon University.  His research interests center on high maturity practices and statistical process control for software processes.  He is currently working in the areas of outsourcing (the eSourcing Capability Model, or eSCM) and eCommerce for ISRI.

From 1987 to 2002, Mark was with the Software Engineering Institute (SEI) at Carnegie Mellon.  While at the SEI, he was the “book boss” for Version 1.0 of the Capability Maturity Model for Software and the project leader during the development of Software CMM Version 1.1.  He was also involved with software engineering standards, including ISO 15504, ISO 12207, and ISO 15288.

Prior to joining Carnegie Mellon, Mark was a Senior Systems Analyst for System Development Corporation at the Ballistic Missile Defense Advanced Research Center in Huntsville, Alabama. 

Mark received his master's degree in computer science from Vanderbilt University.  He received his bachelor's degree in mathematics and computer science from the University of Alabama in Huntsville.

Professional society memberships and certifications:

  • Senior Member of the Institute of Electrical and Electronics Engineers (IEEE)

  • Senior Member of the American Society for Quality (ASQ)

  • ASQ Certified Software Quality Engineer

  • SEI Lead Assessor

Presentation:  Some Explanatory Factors for Software Quality

Even in a world of agile methods and Internet-time processes, quality is a crucial attribute of software. This session describes some of the issues associated with building a useful operational definition of software quality. Defining quality is of limited value, however, if we do not understand the factors that influence it. A number of explanatory factors have been proposed, but the empirical evidence on their effects has been mixed. One reason is that the surrogates frequently used capture the desired factor poorly. For example, it is widely agreed that the competence of the people doing the work is fundamental to the quality of the work done, but how does one measure competence? Even if we agree that surrogates such as years of experience are inadequate, better measures may not be readily available.

After reviewing the evidence supporting various proposed factors, data from the Personal Software Process (PSP) are analyzed to see what impact some of those factors have on software quality, as measured by defects found in testing. The factors considered include process factors, such as design time, and non-process factors, such as programming language. One of the greatest challenges in empirical software engineering is the variability associated with individual differences, and the PSP data show that, even though performance improves and variability decreases as disciplined processes are instilled, the dominant factor in superior performance remains the competence of the individual professional. The talk closes with a discussion of the issues associated with generalizing from PSP data to an industry environment.