Technical Presentations - Tuesday, October
B: Quality Processes
Maintaining Quality Beyond Development: Towards a Dynamic Perspective
The nature of knowledge and the evolution and usage of software present a
pervasive challenge to system developers. Discrete attempts to create
such systems often lead to a mismatch between the system, expectations and
a changing reality. The rationale for a Dynamic Feedback Model stems
from the need to focus on a continuous and long-term perspective of
development and growth in change-intensive environments. This paper
makes the case for a learning and knowledge-driven view of software
development and presents such a model in a way that accounts for the
long-term survival, growth and evolution of software-intensive systems
thereby enabling quality to be viewed as a dynamic property.
Dalcher leads the Software Forensics Centre, a specialized unit
focusing on systems failures, software pathology and project failures,
at Middlesex University. He gained his PhD in software engineering
from King’s College, University of London. He is a member of the
ACM, the British Computer Society, the PMI, the Association for
Project Management and the IEEE Computer Society, where he is a member
of the Engineering of Computer Based Systems Technical Committee and
chair of the Forensic ECBS Special Interest Group.
Lucy Sant'Ana and Ana Cervigni Guerra
presenting Quality
of Software Process or Quality of Software Product?
The field of Information Technology research is growing substantially around
the world. In this context, software plays a fundamental role, and its
commercialization extends beyond the territorial limits of the countries
that develop it. In view of its increasing complexity and
wide-ranging influence, besides its financial importance to its country
of origin, software quality is a concern that has mobilized
both governmental and private efforts. The
norms and models used by software organizations
aim at the quality of either the software process or the software product. The
objective of this work is to present a proposal for integrating these
two approaches: the CMM® process model and NBR 13596,
the Brazilian equivalent of ISO/IEC 9126, which is concerned with the
quality characteristics of the software product.
Activities, goals and work products of the CMM® process
areas were analyzed, searching for cases where the definitions,
guidelines and software product quality characteristics
embodied in ISO/IEC 9126 appear. The contributions of this work add to
the growing efforts being carried out in software
quality, motivating the use of Software Engineering by providing
an integrated view of process and product quality, besides
demonstrating a practical possibility for integration. The proposed
integration is the simultaneous use of ISO/IEC 9126 and the CMM® process
model. It could be applied in an enterprise that wants to improve its
process and product at the same time.
Lucy Sant'Ana has a Master's degree in Total Quality Management from the
Mechanical Engineering Faculty of UNICAMP. She has been working for 22
years in the quality area and, during the last 5 years, has been involved
with software process and product quality at CenPRA - Centro de Pesquisas
Renato Archer. Currently, she lives in Canada.
Ana Cervigni Guerra holds a doctorate in Mechanical Engineering from UNICAMP.
Working at CenPRA - Centro de Pesquisas Renato Archer, she has been
involved in quality-related subjects for 10 years. Currently, she works
on software quality in the Software Product Evaluation Department. At
UNICAMP, Ana Guerra has been advising master's students in
Total Quality Management at the Mechanical Engineering Faculty.
C: Project Management
Turbo-Charging the Formal Peer Review Process
Formal peer reviews are powerful tools for assessing and improving the
quality of software. There is no dearth of statistics on the effectiveness of this
process in finding and reducing errors throughout the system
development life cycle. Studies
have shown that a comprehensive system of reviews can lead to a ten
times reduction in the number of errors reaching each stage of
testing, resulting in a 50% to 80% reduction in testing costs.1
In addition, Level 3 of the Capability Maturity Model (CMM)
requires the peer review process to be an integral component of the
software development framework.2
The software peer review process, with its minor variations, has been well
documented in books and articles.
There are even standards that have been developed by the
Institute of Electrical and Electronics Engineers.3
To summarize, it is a structured and disciplined process for
bringing together a small group of subject matter experts for a
limited amount of time, to identify defects in a discrete body of
work. Each participant is
assigned a specific role such as moderator, author, presenter,
recorder or reviewer. Managers
are not allowed to attend the session as they will have an inhibiting
effect on the team. Defect
resolution, although critical, is done at a later date and is not
permitted during the session. This
powerful combination of group synergy and laser-like focus tends to
flush out serious problems quickly and effectively.
Given such stellar credentials, why is this type of review not widely used on
projects? As a process
consultant running these reviews in many corporate environments, I saw
first-hand why the process is difficult and painful.
Let’s face it - reviews are as much fun as a visit to the
dentist. Very few
people enjoy exposing their work to the harsh glare of peer
criticism. In addition, the final product is a list of defects, which can be
disheartening to a development team.
Out of sheer frustration, I began factoring human psychology into these
reviews to counteract their negative impact, while leaving their
structural integrity intact. The
final turbo-charged process was rolled out at different sites, and was
used to review telecommunications, pharmaceutical, document imaging
and third party vendor software projects.
The process proved successful regardless of what was being
reviewed - whether it was
a marketing proposal, a requirements document, design specifications
or code - based on feedback collected at the end of each review
through a detailed questionnaire.
Not only did the participants seem to enjoy the sessions, but
more importantly, they perceived the process as adding real value to the project.
Ranganathan is an Independent
Management Consultant. Her previous positions included Senior
Management Consultant - Process and Quality Assurance, Systems
Engineer, SQA Group Manager, SQA Project Manager, Team Leader, Senior
Systems Analyst, Systems Analyst, and Programmer Analyst.
She was the co-recipient
of the "Best Technical Paper/Presentation" award at
ASQ’s 11th International Conference on Software Quality in Pittsburgh
in October 2001.
Billions of dollars
are wasted every year on IT projects that are canceled before
completion. At the same
time, careers and personal lives are damaged by failed IT projects.
Experience and many published studies show that most IT
projects do not deliver promised functionality, on time, on budget,
with acceptable performance and reliability.
Failure is so common that it is not unexpected.
This presentation describes 13 early warning signs of IT
project failure drawn from the author’s 20+ years of IT project
management experience in a variety of industries for software projects
with cost ranging from $50,000 to $125,000,000, and work with the
University of North Texas Information Systems Research Center.
This presentation also describes escalation theory, and why
organizations continue to fund and even increase funding for
“runaway” IT projects. Once
a project begins, there is strong institutional and personal bias to
continue the project even if the project is clearly on a path to
failure. This presentation discusses the issue of escalation bias, and
how early identification of IT projects that are likely to fail can
help overcome escalation bias.
Robert McKeeman is a Research Fellow with the Information Systems Research
Center at the University of North Texas.
He is Certified Software Quality Engineer #1016 with the
American Society for Quality and Project Management Professional #2130
with the Project Management Institute.
He graduated from Georgia Tech with High Honors and obtained
his MBA from Harvard Business School.
He has 20+ years of IT project management experience as Vice
President of Information Systems for Equitable AgriBusiness, Andersen
Consulting, and as President of Change Management Consulting.
Clients include Atlanta Gas Light, Bank of America, Citizens
Telecom, Consumers Energy, Contel Telephone, Federal Home Loan Bank,
Georgia Department of Education, Southern Company, West Teleservices,
and the Department of Veterans Affairs.
Robert is a dual US and Irish citizen, and is based in Orlando, Florida.
Carol Dekkers & Patricia McQuaid
presenting Demystifying Networking -- Communication Skills for
Project Success (An Interactive Session)
The key to software quality lies in getting the right requirements and
getting the requirements right, and it all boils down to effective
communication with users and your project team.
While testing, programming, configuration management and
other technical topics are commonly taught through workshops, it
seems to be “assumed” that effective team management and
communication skills will be gained by climbing the project
management or QA Manager ladder.
NOT SO! .... In this interactive session, participants will
learn how to feel comfortable (and make others feel comfortable)
in a room full of strangers, how to make great project and team
connections (and keep them), and tips to make your team sizzle.
Get ready for a non-technical session (taught by two
experienced "networkers") where you'll make immediate
connections with your fellow ICSQ attendees, and learn new
techniques (by doing them!) that will inject new life (and fun)
into your project team.
Whether you are the social coordinator or a shy wallflower in your office,
you are sure to pick up new ideas, and have fun gaining confidence
with new communication skills.
The number of participants in this interactive session is
limited, so be sure to sign up soon.
Carol A. Dekkers is the President of Quality Plus Technologies, Inc. and
specializes in software quality, fact-based management (using
measurement), process improvement and IFPUG Certified Function
Point counting. She is known internationally as a presenter and authority
on quality and measurement, and she holds designations as a
Certified Management Consultant (CMC), a Certified Function Point
Specialist (CFPS), a P.Eng. (Canada) and an Information Systems
Professional (I.S.P.). Ms. Dekkers is an ISO project editor (software engineering
standards), and is active in organization leadership in PMI, ASQ,
IFPUG, and other industry organizations. In 2000, ASQ’s Quality
Progress Journal named Carol as one of their 21 New Voices of
Quality for the 21st Century.
Carol is one of the authors of the forthcoming ASQ Software
Quality Engineering Handbook (ASQ Quality Press).
Patricia McQuaid is an Associate Professor of Management
Information Systems at California Polytechnic State University.
She has taught in both the Colleges of Business and Engineering.
Her research interests include software quality management,
project management, software process improvement, software
testing, and complexity metrics.
She served as the program chair for the Americas for the
Second World Congress for Software Quality, held in Japan, 2000
and will do so again for the Third World Congress to be held in
France in 2005. She has a doctorate in Computer Science and
Engineering, an MBA, and an undergraduate degree in accounting.
She is a member of IEEE, and a senior member of the American
Society for Quality (ASQ).
Patricia is one of the authors of the forthcoming ASQ
Software Quality Engineering Handbook (ASQ Quality Press).
She was also a contributing author to the recently
published Fundamental Concepts for the Software Quality Engineer
(ASQ Quality Press).
Zhiwei Xu presenting A
Rule-Based Fuzzy Classification for Software Quality Models
Software quality models can predict, early in the development process, those
components of the software system that are likely to have faults or to
need high development effort. This paper introduces a rule-based fuzzy
classification (RBFC) technique as a method for identifying
fault-prone software modules in the early stages of the software
development life cycle. The objective of this technique is to
minimize the cost of misclassification. The model is based on
software metrics and a new kind of rule-generation technique applied
to extract fuzzy rules from numerical data. It also provides a
convenient way to modify rules according to the costs of different
misclassifications. One case study
of a full-scale industrial software system compared RBFC models with a
nonparametric discriminant analysis model. We found that RBFC models
give management more flexible reliability enhancement strategies than
nonparametric discriminant analysis models, and in this case study,
RBFC yielded more accurate results than the corresponding nonparametric models.
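The rule-firing mechanics that the abstract alludes to can be sketched in miniature. The membership functions, rule base, and cost weights below are invented for illustration; they are not the paper's actual generated rules, which are extracted from numerical metrics data.

```python
# Toy rule-based fuzzy classifier over two normalized software metrics
# (lines of code, cyclomatic complexity). Everything here is an
# illustrative assumption, not the paper's actual model.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets over a 0..1 metric scale.
low  = lambda x: tri(x, -0.5, 0.0, 0.5)
high = lambda x: tri(x, 0.5, 1.0, 1.5)

# Rule base: (membership for size, membership for complexity, class).
RULES = [
    (high, high, "fault-prone"),
    (high, low,  "not-fault-prone"),
    (low,  high, "not-fault-prone"),
    (low,  low,  "not-fault-prone"),
]

# Misclassification costs are asymmetric: missing a faulty module costs
# more than needlessly inspecting a clean one.
COST_WEIGHT = {"fault-prone": 1.5, "not-fault-prone": 1.0}

def classify(size, complexity):
    """Fire every rule (min = fuzzy AND), accumulate cost-weighted
    strengths per class, and pick the strongest class."""
    scores = {"fault-prone": 0.0, "not-fault-prone": 0.0}
    for m_size, m_cx, label in RULES:
        strength = min(m_size(size), m_cx(complexity))
        scores[label] += strength * COST_WEIGHT[label]
    return max(scores, key=scores.get)
```

A large, complex module (`classify(0.9, 0.8)`) fires the fault-prone rule strongly, while a small, simple one does not; the cost weights bias borderline modules toward inspection.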
Zhiwei Xu received
the B. S. degree in electrical engineering from Wuhan University of
Technology, China in 1994, the M.S. degree in electrical engineering
from Guangxi University, China, in 1997, and Ph. D. degree in computer
engineering from Florida Atlantic University, Boca Raton, in 2001. He
is currently a Lead Engineer in Software and Systems Engineering
Research Lab, Motorola Labs, Motorola Inc, Schaumburg, IL. His
research interests include software quality modeling, software
measurement, software reliability, neural network applications, fuzzy
logic applications, and machine learning. Dr. Xu is a member of the
IEEE Computer Society.
Taghi M. Khoshgoftaar
received the M.S. degree in applied
mathematics from Massachusetts Institute of Technology, Cambridge, the
M.S. degree in computer science from North Carolina State University,
Raleigh, and the Ph.D. degree in statistics from Virginia Polytechnic
Institute and State University, Blacksburg. He
is a professor of the Department of Computer Science and Engineering,
Florida Atlantic University, Boca Raton, FL USA. He is also the
Director of the Empirical Software Engineering Laboratory at Florida
Atlantic University. His research interests are in software
reliability and quality engineering, software complexity metrics and
measurements, software engineering, computational intelligence,
computer performance evaluation, multimedia systems, and statistical
modeling. He has published more than 150 refereed papers in these areas.
Quality in Web Application Transactions
ASP (Active Server Pages) expansions in creating web-enabled transaction
processing allow developers to create standard objects for class
libraries that connect to a back-end SQL database.
The SQLConnection classes, once set, act as ‘read only’
connectors to pass query-driven datasets from the SQL Server to generated
XML (Extensible Markup Language) pages on the web.
ODBC (Open Database Connectivity) connectors act as traffic
regulators to ensure that XML pages receive data from and send data to target
databases. Once established, the data elements and ODBC connectors are tested
by the developer, primarily through verification and validation of the data
streams and their appearance on the web pages. The
creation of test scripts from a website, XML/ASP code and a database
requires quality techniques for communicating with development and test
teams. A transaction web
page, sample XML and ASP code, a small database and customer
satisfaction all necessitate test and evaluation techniques.
Transaction procedures for design considerations, data
constraints, and transmission controls require examination using OO/UML
(Object Oriented/Unified Modeling Language) tools.
Evelyn V. Richardson has experience in
requirements engineering, information systems, database programming
and system maintenance. She is QAI certified in Software Test and
Evaluation Engineering and ASQ certified in Software Quality
Engineering. She holds a Masters Degree in Computer Systems Management
from the University of Maryland University College and is pursuing a
Ph.D. in Systems Engineering from the University of South Australia.
Ms. Richardson is a member of the Institute of Electrical and
Electronics Engineers (IEEE), the American Society for Quality (ASQ),
the International Council on Systems Engineering (INCOSE) and the
Quality Assurance Institute (QAI).
The Westfall Team presenting
12 Steps to Useful Software Metrics
This presentation defines a practical process for establishing and tailoring a software
metrics program that focuses on goals and information needs. The
process provides a practical, systematic, start-to-finish method of
selecting, designing and implementing software metrics. It outlines a
cookbook method that the attendee can use to simplify the journey from
software metrics in concept to delivered information. There
are multitudes of possible software metrics.
The first four steps defined in this presentation will teach
the attendee how to identify their metric customers and then utilize
the goal/question/metric paradigm to select the software metrics that
match the information needs of those customers. Steps 5-10 walk the attendee through the design and tailoring
of their selected metrics including definitions, models, counting
criteria, benchmarks and objectives, reporting mechanisms and
additional qualifiers. The
last two steps help the attendee solve implementation issues, including
collecting data and minimizing the impact of human factors on their metrics data.
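The goal/question/metric selection described in the first four steps can be sketched as a simple traceability structure; the goals, questions, and metric names below are invented examples, not the presentation's.

```python
# A minimal goal/question/metric (GQM) traceability sketch: each goal
# owns questions, each question owns the metrics that answer it.
# All entries here are illustrative assumptions.

gqm = {
    "Improve on-time delivery": {
        "How accurate are our schedule estimates?": [
            "estimated vs. actual duration per task",
        ],
        "Where do schedule slips originate?": [
            "slip days by lifecycle phase",
        ],
    },
    "Reduce field defects": {
        "How many defects escape testing?": [
            "post-release defects per KLOC",
        ],
    },
}

def metrics_for_goal(goal):
    """List every metric that traces back to a stated goal, so that no
    metric is collected without an information need behind it."""
    return [m for metrics in gqm.get(goal, {}).values() for m in metrics]
```

Keeping the mapping explicit makes it easy to drop any metric that no longer answers a question a customer is asking.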
The Westfall Team provides Software Quality Engineering consulting
services and training. We
partner with your software specialists to define and improve your
software quality and engineering systems, processes and metrics.
We also offer a broad range of training courses to help improve
the knowledge and skill levels of your software professionals.
Speaker Opportunity - Slot 2
Standards - SC7
Panel Presentation with
Jim Moore (US), Jean Berubé (CDA), Alec Dorling (UK), Perry DeWeese (US)
Alec Dorling is an internationally recognized expert in the field of
Software Quality Management, Process Assessment and Improvement. He has
held key posts at international research institutes that include the
Centre for Software Engineering in Sweden, the European Software Institute
in Spain and the National Computing Centre in the UK. He is a chartered
engineer with over 30 years experience in the IT industry gained both in
real-time and commercial systems environments. He has been involved with
most of the key UK government's initiatives in software engineering
and software quality.
R. DeWeese is a member of the Senior Staff at the Lockheed Martin
Engineering Process Improvement Center.
He is responsible for planning and coordinating the strategic
direction on deployment and implementation of integrated engineering
processes, methodologies, environments, assessment methods, reuse and
quality management at Lockheed Martin Companies.
He provides consulting services within the Lockheed Martin
Corporation on the integration of engineering processes, establishing a
common engineering process environment, and achieving corporate goals
related to CMMI and continuous process improvement.
Mr. DeWeese is the current Project Editor of ISO/IEC 12207,
Software Life Cycle Processes and he is the Lockheed Martin Primary
Representative of the U.S. TAG to ISO/IEC JTC1.
Session H: Project Management
Kent Lacy presenting Organizational Project Management Maturity: An
Assessment and Improvement Model
Most organizations have implemented some level of project management
capability. Over the past decade, project management has proven itself
as a value-added discipline. Now organizations are preparing to take
it to the next level. Often, this begins with the utilization of a
Project Management maturity model. Project Management maturity models
measure capabilities and progress toward reaching desired levels of
Project Management expertise. This model is based on years of
experience, lessons learned, industry best practices, and the work
conducted by the Project Management Institute (PMI). Additionally,
this unique model incorporates selected key process areas from the
Software Engineering Institute’s Capability Maturity Model and the
Baldrige National Quality Program (2002). The model utilizes five
levels of Project Management Maturity, similar to the steps in the
Capability Maturity Model (CMM). This model also utilizes general
capability categories. Each category represents an assessment point
used in determining fit within the maturity levels. This step-by-step
process begins with an assessment methodology, followed by capability
maturity mapping, capability gap analysis, and finally recommendations
and improvement planning.
Lacy is a Certified Quality Manager, Certified Software Quality
Engineer, and Project Management Professional (PMP) with 18 years
experience in the Software Industry. Currently, he works for DBI
Consulting where he serves as the Practice Lead for the Project
Services practice. He has a degree in Information Systems Management,
an MBA, and is currently working on a Doctorate in Organization
Development at the University of St. Thomas in Minneapolis. Kent is an
experienced public speaker who presented at the 10th
International Conference on Software Quality, in addition to several
Sterling Software and Project Management Institute conferences.
Earned Value to Track Software Projects
This paper describes an alternate
approach to tracking the status of software development projects. The
use of earned value in the tracking of the software project allows for
a more comprehensive understanding of what the project’s status is.
Accordingly, the status is also known at all stages of the development
process, and not just discovered a few days before delivery. The
project can only be controlled if management knows the actual status
and can make intelligent decisions based on this status.
Successful software development projects require good planning (effort
estimation and scheduling) and close tracking of functional
progress. We have seen that software project managers plan what they
think they need and not what can be tracked.
If the planning does not reflect the tracking, then the
software project managers have a problem (and, accordingly, so do the
customer, corporate management, etc).
Very few software development projects
meet the success criteria – finishing on time, within the budget,
with the functionality required and with the quality required
(Standish, 1995). The question that should be asked is – why is this
so? The functionality and quality are set by the customer (of course
it is negotiated by the developer, but the customer has his demands
which, as far as he is concerned, are mandatory). What is left to
chance is the schedule and the budget.
We feel that the two major factors that
should be dealt with in a software development project are estimating the
development effort and tracking the progress of the project. The
schedule plan is an outcome of the estimated budget (effort divided by
persons over time) and, accordingly, it is a secondary factor.
Assuming that the software team knows
how to estimate the software costs, the major problem is tracking the
status. As stated above, this paper presents an alternate approach to
tracking the status of the software project. This approach relates to
the type of work done in the software project and does not follow
standard project management techniques blindly.
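The earned-value bookkeeping that underlies this kind of tracking can be sketched as follows. The task figures are invented, and this shows only the standard earned-value indices, not the author's tailored technique.

```python
# Minimal earned-value status sketch. Each task is
# (planned_hours, fraction_complete, actual_hours); values are invented.

def earned_value_status(tasks):
    """Compute the standard earned-value indices for a task list."""
    bcws = sum(p for p, _, _ in tasks)      # budgeted cost of work scheduled
    bcwp = sum(p * c for p, c, _ in tasks)  # budgeted cost of work performed
    acwp = sum(a for _, _, a in tasks)      # actual cost of work performed
    return {
        "CPI": bcwp / acwp,  # cost performance index (<1 = over budget)
        "SPI": bcwp / bcws,  # schedule performance index (<1 = behind)
    }

status = earned_value_status([
    (40, 1.0, 50),  # done, but took 50 h instead of the planned 40
    (60, 0.5, 35),  # half done
])
```

Because the indices can be computed at any point, status is known throughout development rather than discovered just before delivery, which is the argument the paper makes.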
Mike, with 25 years’ experience, is a leading
consultant in Israel in the combined field of software project
management and software quality assurance.
He has led a number of software development groups in the
military industry and has proven that non-conventional project
management techniques work. He is very involved in software V&V
for medical devices as well as military and telecommunications
equipment. He is a member
of the ASQ, IEEE/CS, PMI and ACM.
I: Metrics & Measurement
Standardized Defect Statuses
Organizations struggle with defect statuses, the stages that a defect goes through
on its path to final resolution.
Ambiguous and overlapping categories cause arguments,
inefficiency, inaccuracy, and failures to handle defects properly.
Based on measurement principles and best practices observed
while consulting, the author has derived a solution to this problem.
This paper provides a complete set of easy to understand,
non-overlapping statuses and a full explanation of the reasoning
behind them. It covers
the valid reasons for withdrawing a defect and provides a state
transition diagram of normal status changes. Don’t keep reinventing this wheel. Adopt these standardized defect statuses and spend your time
more productively fixing defects and finding more of them.
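The state-transition idea behind such a scheme can be sketched as a small table of allowed moves; the status names here are common examples and are not necessarily the author's standardized set.

```python
# Illustrative defect-status state machine with non-overlapping
# statuses. The names and transitions are assumptions for illustration.

TRANSITIONS = {
    "Open":      {"Assigned", "Withdrawn"},
    "Assigned":  {"Fixed", "Withdrawn"},
    "Fixed":     {"Verified", "Assigned"},  # reassign if the fix fails
    "Verified":  {"Closed"},
    "Withdrawn": set(),                     # e.g. duplicate, not a defect
    "Closed":    set(),
}

def is_valid(history):
    """Check that a defect's status history follows the diagram,
    so disallowed jumps (e.g. Open straight to Closed) are caught."""
    return all(nxt in TRANSITIONS[cur]
               for cur, nxt in zip(history, history[1:]))
```

Encoding the diagram once, in one place, is what removes the arguments over ambiguous categories that the abstract describes.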
Dave Lile Brown, Ph.D., CSQA, CSQE
is an independent consultant who teaches courses and seminars on
software quality assurance and consults with management on
implementing quality assurance. Dave
has worked on both the business and information technology sides of
major corporations, leading business process improvement teams,
developing corporate quality assurance methodologies, and creating and
teaching numerous courses on quality. Dave
holds a B.S. from Tufts University, and an M.A. and Ph.D. from the
University of Connecticut. He
is a member of ASQ and its Software Division, the Quality Assurance
Association of Connecticut, and the Quality Assurance Institute.
El Emam & Dave Zubrow
presenting Getting More out of Your Inspection Data: Using
Capture-Recapture Models for the Reinspection Decision
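As a hedged illustration of the session topic: the simplest capture-recapture model, the Lincoln-Petersen estimator, treats two reviewers' independently found defects as two "captures" and uses the overlap between them to estimate how many defects remain, which informs the reinspection decision. The numbers below are invented.

```python
# Lincoln-Petersen capture-recapture estimate applied to an inspection.
# Assumes the two reviewers find defects independently.

def lincoln_petersen(n1, n2, overlap):
    """Estimate total and remaining defects from two reviewers'
    findings: n1 and n2 defects found, `overlap` found by both."""
    if overlap == 0:
        raise ValueError("no overlap: estimator undefined")
    estimated_total = n1 * n2 / overlap
    found = n1 + n2 - overlap
    return estimated_total, estimated_total - found

# Reviewer A found 12 defects, reviewer B found 10, with 6 in common:
total, remaining = lincoln_petersen(12, 10, 6)
```

A small overlap relative to each reviewer's count suggests many undiscovered defects and argues for reinspection; a large overlap suggests the inspection was thorough.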
J: Advanced Topics
Claire Horgan presenting Measuring Software Size
for Continuous Improvement
There has been
much research carried out to investigate how to control software
development projects. This has been due to a history of poorly
controlled projects that have been late, over budget and of poor
quality. Software has now become a critical component of business
in the major economies of the world. Efficient and effective
management of software projects is of tremendous economic importance.
Project managers from all disciplines agree that in order to control a
project, one has to be able to measure it. Just because software is an
intangible product doesn’t mean it should be treated any differently.
It may be harder to measure but not impossible. The one issue that
keeps reappearing in relation to software projects is that in order to
be able to control it, it is necessary to be able to measure the size
of the project.
This paper looks at how the size of a software product can be measured. It
discusses the current sizing methods, Lines of Code and Function Point
Analysis, and highlights the problems that make these methods
insufficient in today’s software development environment.
The COSMIC-FFP method is then discussed. An overview of the method is given
which covers the components of the model, the mapping phase and
the measurement phase of the process. The discussion highlights how
this method works in today’s complex software development
environment to measure the size of any software product.
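As a minimal sketch of the measurement phase: COSMIC-FFP sizes a functional process by counting its data movements (Entry, Exit, Read, Write), each contributing one COSMIC functional size unit (Cfsu). The process and movement names below are invented examples, not taken from the paper.

```python
# Toy COSMIC-FFP measurement: one functional process is a list of
# (description, data-movement type) pairs; size is the movement count.

COSMIC_TYPES = {"Entry", "Exit", "Read", "Write"}

def cosmic_size(movements):
    """Size one functional process in Cfsu (1 Cfsu per data movement)."""
    for description, kind in movements:
        if kind not in COSMIC_TYPES:
            raise ValueError("unknown data movement type: " + kind)
    return len(movements)

size = cosmic_size([
    ("receive customer id", "Entry"),
    ("fetch customer record", "Read"),
    ("display customer details", "Exit"),
])
```

Because size depends only on identified data movements, the same count can in principle be derived from a Data Flow Diagram, a UML model, or a lightweight agile specification, which is the mapping the paper investigates.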
The paper investigates the use of the COSMIC-FFP method with past, current and
future software design methods. If COSMIC-FFP is to be of real use
then it must work with all of these methods to allow an
organisation’s process to mature. The main traditional software
design method used is Data Flow Diagrams. The current design method
used in industry is the Unified Modeling Language. Future design
methods look likely to take an Agile Methods approach. Each of
these methods is investigated to see if it will map to the COSMIC-FFP
model. The findings of this paper are that there is a clear and logical
mapping between these methods and the COSMIC-FFP model. This is of major
importance to the software community as it allows for greater control of projects.
It means that software projects can be better estimated and managed.
The value of benchmarking is greatly increased if the projects are all
measured or normalised using the same size measure.
This paper shows why COSMIC-FFP is a
powerful tool and will become the sizing method of the future for
software products. It appears to work in all given development
scenarios. There are still some restrictions on using it, as there is
currently no published “backfire” information that allows
conversion of LOC and FPA figures to COSMIC-FFP.
However, I believe that this situation will be resolved shortly
and COSMIC-FFP will become the accepted measurement standard for
software products in the near future.
Claire Horgan is a Lecturer in Computing at ITT. Previously, she was a
Project Manager at Motorola Inc. in Cork, Ireland. Claire has
also worked as a Software Engineer for Motorola and Kindle.
An Integrated Graphical Assessment for Managing Software Product Quality
Software product quality has become more and more relevant and
important to managers, even though it is still challenging to define
and measure the detailed quality criteria and to integrate them into
quality models. Software engineering standards can help establish a
common language for these detailed criteria and, in parallel,
implement a model of quality from its high-level concepts down to
its lowest level of measurable details; in particular, the revised
ISO/IEC 9126 suite of standards represents a useful taxonomy and
framework for specifying software product quality. Several
frameworks and techniques are being built on the basis of these
standards. In particular, the GDQA (Graphical Dynamic Quality
Assessment) framework and the QF2D (Quality Factor
through QFD) technique have been proposed to tackle software product
quality analysis and measurement. This paper examines the structure
of both and integrates them into an Integrated Graphical Assessment
of Quality (IGQ) technique supporting quality assessments and
related improvements through the full software lifecycle.
Speaker Opportunity - Slot 3
Speaker Opportunity - Slot 4
L: Quality Processes
Donald C. Starr presenting
SQA Consistently at Level 3
Software Quality Assurance – what is it, and how do we do it at CMM®
Level 3 (SEI, 1994) in United Defense’s Armament Systems
Division (ASD)? Software Quality Assurance (SQA) is a planned and
systematic approach to the evaluation of the quality of the software
products and adherence to software product standards, processes, and
procedures. SQA includes
the process of assuring that standards and procedures are established
and followed throughout the software acquisition/development life cycle.
This paper first introduces the reader to ASD, then discusses the differences
observed in the Armament Systems Division
of United Defense as both an SEI CMM Level 2 and Level 3
organization, the evolving role of Software Quality Assurance during
the maturation of ASD, and the methods ASD is using to implement
consistent Software Quality Assurance oversight across the software
projects in the division.
Each program with software projects has its own SQA Group; each group
varies in size relative to the size of the program it supports.
Each group is organizationally independent of the program’s
Software Development Manager, and each group is managed independently
from the others. Yet,
they operate in a consistent, coordinated manner.
The majority of this paper describes the organizational
elements, methods, and tools ASD has used to develop its Level 3
implementation of SQA.
Donald Starr is
currently a member of the Organization’s Software Engineering
Process Group and is process owner for the SQA process area for the
Armament Systems Division of United Defense, Limited Partnership (UDLP).
He functions as the Architect for the Organization’s Defined
Process (ODP) for Software Development.
His current duties also include mentoring project teams in the use
of the ODP, conducting ODP training classes, and serving as SEPG
liaison to the Software Quality Review Board.
He has a Certificate in Systems Management from the University of
Southern California and a BS in Computer Science.
M: Project Management
Accurate Estimates and Realistic Schedules Using the Yellow Sticky Method
The ability of an organization to accurately estimate tasks, build
realistic schedules, and then meet those schedules is critical. Yet
few organizations have demonstrated the ability to do this
consistently. As a result, many software development and QA groups
have little or no credibility when it comes to estimating and
scheduling. There are many reasons why organizations are deficient in
this area. Some of the most common reasons include lack of training,
inability to manage commitments made to customers, poorly written or
non-existent requirements, and lack of management support. Some
organizations believe project management tools can improve their
estimating and scheduling abilities. Unfortunately, tools can't solve
this problem. Identifying and implementing best practices can. This
paper introduces an estimating and scheduling technique called the
Yellow Sticky Method that has proven to be a very accurate method for
estimating and scheduling the work required to develop and test
software products. This paper was published in the ASQ Software
Quality Journal Vol. 4, Issue 2, March 2002.
R. Rakitin has over 25 years of experience as a software engineer and
software quality manager. He has written several papers on software
quality and a book titled Software
Verification & Validation for Practitioners and Managers. He holds a BSEE
from Northeastern University and an MSCS from RPI. His certifications from
ASQ include CSQE and CQA. He is a member of the IEEE Computer Society,
the ASQ Software Division, and is on the Editorial Review Board for
the ASQ Journal Software Quality
Professional. As President of Software Quality Consulting Inc., he
helps companies establish a more predictable software development process.
Certification: a Win-Win Investment for Employees and Employers
Have you ever considered enhancing your career through professional certification?
Most of us say that we’re too busy to do our day job, let
alone anything in addition to it, but what if by doing some
“extra” things you could make your job easier, faster, and
increase your job satisfaction (and maybe your paycheck)?
Certification is one value-added activity.
It’s commonly defined as formal recognition by an institution
that an individual has demonstrated proficiency within and
comprehension of a specified body of knowledge at a point in time.
Certification is a tool that, when utilized to its full
potential, can define career paths, contribute to a company’s bottom
line, and drive product quality and customer satisfaction upwards.
Eric Patel is Chief Quality Officer at RapidSQA, a first-of-breed Quality
Service Provider (QSP) specializing in software training and
consulting. He is
co-founder of the Nokia Quality Forum (NQF) Boston and the QAI Boston
Federation Chapter. Eric
is a frequent speaker at testing conferences and holds three
certifications. As Deputy Regional Councilor for the ASQ Software Division
Region 1, Eric maintains active memberships in ASQ as well as IEEE,
IIST, and NESQAF. Published in Software Quality Professional (SQP) and
STQE, he also serves as a reviewer for SQP and The Journal of Software
Darin Kalashian is an ASQ Certified Software Quality Engineer and Certified
Quality Manager who enjoys teaching and mentoring other software
professionals to be proactive in their implementation of tools and
techniques that focus on problem prevention as a means for driving
product quality. Darin is currently working through Northeastern
University’s High Tech MBA program.
Making Sense of ISO 15504 (and SPICE)
ISO 15504 was initiated in 1993 as the SPICE (Software Process
Improvement and Capability dEtermination) Project, then formally
moved into ISO/IEC as JTC1/SC7’s Working Group 10.
The first draft appeared around June of 1995 and the second,
around October of 1996. Several
ballot and comment periods followed, and ISO 15504 was issued as a
Technical Report (TR) in 1998.
Immediately thereafter, work was begun to plan the
implementation of changes deemed needed to move the TR to full
International Standard (IS) status.
This work continues today, and during this time activities
under the name “SPICE” have continued as well, such as a series
of trials that have used various versions of ISO 15504, including
the TR. Though SPICE activities are not under ISO/IEC auspices, many
of the people involved in the ISO 15504 standards effort are also
associated with SPICE activities.
This presentation describes the ongoing work to move ISO
15504 from a TR to full IS status, including reducing the document
set from 9 to 5 documents and removing the Process Dimension from
the standard in favor of Process Reference Models.
Scott Duncan has over 28 years of experience in internal and external
software product development with commercial and government
organizations, including over 14 years in the Telecom Industry.
For the last 6 years, as an internal/external consultant, he has
helped organizations achieve various quality/process registration
and capability assessment goals.
Scott is Standards Chair for the ASQ’s Software Division
and is a member of the U.S. Technical Advisory Group for ISO/IEC
JTC1/SC7 software engineering standards as well as the IEEE CS
Software Engineering Standards Committee’s Executive Committee.
Since 1985, Scott has provided corporate training sessions on
auditing/assessment standards, conducted public seminars in managing
software development through metrics, and been a speaker at national
and international conferences and user groups.
Speaker Opportunity - Slot 5