Friday 10 May 2013

CS2055 SOFTWARE QUALITY ASSURANCE QUESTION BANK

CS1020 SOFTWARE QUALITY MANAGEMENT QUESTION PAPERS


UNIT - I
Software quality
In the context of software engineering, software quality measures how well software is designed (quality of design), and how well the software conforms to that design (quality of conformance), although there are several different definitions.
Software product quality encompasses:
·         Product quality: conformance to requirements or program specification (related to reliability)
·         Scalability
·         Correctness
·         Completeness
·         Absence of bugs
·         Fault-tolerance
·         Extensibility
·         Maintainability
·         Documentation
Software Quality Factors
A software quality factor is a non-functional requirement for a software program which is not called up by the customer's contract, but nevertheless is a desirable requirement which enhances the quality of the software program.
·         Understandability
·         Completeness
·         Maintainability
·         Conciseness
·         Portability
·         Consistency
·         Testability
·         Usability
·         Reliability
·         Structuredness
·         Efficiency
·         Security
Software Quality Measurement
Measurement of software quality factors
Understandability
Are variable names descriptive of the physical or functional property represented? Do uniquely recognizable functions contain adequate comments so that their purpose is clear? Are deviations from forward logical flow adequately commented? Are all elements of an array functionally related?
Completeness
Conciseness
Is all code reachable? Is any code redundant? How many statements within loops could be placed outside the loop, thus reducing computation time? Are branch decisions too complex?
Portability
           Does the program depend upon system or library routines unique to a particular installation? Have machine-dependent statements been flagged and commented? Has dependency on internal bit representation of alphanumeric or special characters been avoided?
           The effort required to transfer the program from one hardware/software system environment to another.
Consistency
Is one variable name used to represent different physical entities in the program? Does the program contain only one representation for physical or mathematical constants? Are functionally similar arithmetic expressions similarly constructed? Is a consistent scheme for indentation used?
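These review questions can be turned directly into a scored checklist. Below is a minimal sketch in Python; the factor names, questions, and reviewer answers are illustrative assumptions, not part of any standard.

# Minimal sketch: scoring a code review against quality-factor checklists.
# Questions mirror the text above; the example answers are invented.

CHECKLISTS = {
    "understandability": [
        "Are variable names descriptive of the property represented?",
        "Do functions contain adequate comments clarifying their purpose?",
        "Are deviations from forward logical flow commented?",
    ],
    "conciseness": [
        "Is all code reachable?",
        "Is the code free of redundancy?",
        "Are loop-invariant statements hoisted out of loops?",
    ],
    "portability": [
        "Is the program free of installation-specific library routines?",
        "Are machine-dependent statements flagged and commented?",
    ],
}

def score_factor(answers):
    """Fraction of checklist questions answered 'yes' (True)."""
    return sum(answers) / len(answers)

# Example reviewer answers for one module (True = yes).
answers = {
    "understandability": [True, True, False],
    "conciseness": [True, False, True],
    "portability": [True, True],
}

for factor, ans in answers.items():
    print(f"{factor}: {score_factor(ans):.0%}")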
Developing a Set of Metrics
 
Quality Models
Why a quality model? A quality model enables quality comparison both qualitatively and quantitatively.
Hierarchical models consider quality under a series of quality characteristics or criteria, each having a set of associated measures or metrics, which are combined hierarchically into an overall assessment of quality.
Key questions for any such model:
·         What criteria should be employed?
·         How do they inter-relate?
·         How can they be combined to provide an overall assessment of quality?
                   
McCall’s Model
 Boehm’s Model
Barry W. Boehm is known for his many contributions to software engineering. He was the first to identify software as the primary expense of future computer systems; he developed COCOMO, the spiral model, wideband Delphi, and many more contributions through his involvement in industry and academia.


Metrics Measurement and Analysis

THE GOAL QUESTION METRIC APPROACH
The Goal Question Metric (GQM) approach is based upon the assumption that for an organization to measure in a purposeful way, it must first specify the goals for itself and its projects, then trace those goals to the data that are intended to define those goals operationally, and finally provide a framework for interpreting the data with respect to the stated goals. Thus it is important to make clear, at least in general terms, what informational needs the organization has, so that these needs can be quantified whenever possible, and the quantified information can be analyzed as to whether or not the goals are achieved. The approach was originally defined for evaluating defects for a set of projects in the NASA Goddard Space Flight Center environment.
The result of applying the Goal Question Metric approach is the specification of a measurement system targeting a particular set of issues and a set of rules for the interpretation of the measurement data. The resulting measurement model has three levels:
1. Conceptual level (GOAL): A goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view, relative to a particular environment. Objects of measurement are:
·         Products: artifacts, deliverables and documents that are produced during the system life cycle, e.g., specifications, designs, programs, test suites.
·         Processes: software-related activities normally associated with time, e.g., specifying, designing, testing, interviewing.
·         Resources: items used by processes in order to produce their outputs, e.g., personnel, hardware, software, office space.
2. Operational level (QUESTION): A set of questions is used to characterize the way the assessment/achievement of a specific goal is going to be performed based on some characterizing model. Questions try to characterize the object of measurement (product, process, resource) with respect to a selected quality issue and to determine its quality from the selected viewpoint.
3. Quantitative level (METRIC): A set of data is associated with every question in order to answer it in a quantitative way. The data can be:
·         Objective: if they depend only on the object that is being measured and not on the viewpoint from which they are taken, e.g., number of versions of a document, staff hours spent on a task, size of a program.
·         Subjective: if they depend on both the object that is being measured and the viewpoint from which they are taken, e.g., readability of a text, level of user satisfaction.
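A GQM model can be written down directly as a goal, its questions, and the metrics that answer them. The sketch below encodes one such model in Python; the specific goal, questions, and metrics are a hypothetical example in the style of the original GQM literature, not prescribed by the approach.

# Minimal sketch of one GQM model: a goal, its characterizing
# questions, and the metrics that answer each question.
# The goal, questions, and metrics are hypothetical examples.

gqm_model = {
    "goal": ("Improve the timeliness of change-request processing "
             "from the project manager's viewpoint"),
    "questions": {
        "What is the current change-request processing speed?": [
            "average cycle time (objective)",
            "standard deviation of cycle time (objective)",
        ],
        "Is the performance of the process improving?": [
            "current cycle time / baseline cycle time (objective)",
            "manager's rating of improvement (subjective)",
        ],
    },
}

for question, metrics in gqm_model["questions"].items():
    print(question)
    for metric in metrics:
        print("  metric:", metric)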





The complete Goal Question Metric model combines these three levels into a hierarchy: one goal is refined into several questions, and each question is answered by one or more metrics.
THE GOAL QUESTION METRIC PROCESS
A GQM model is developed by identifying a set of quality and/or productivity goals at corporate, division or project level, e.g., customer satisfaction, on-time delivery, improved performance. From those goals, and based upon models of the object of measurement, we derive questions that define those goals as completely as possible. For example, if the goal is to characterize a software system (e.g., an electronic mail package, a word processor) with respect to a certain set of quality issues (e.g., portability across architectures), then a quality model of the product must be chosen that deals with those issues (e.g., a list of functional features that can be implemented in different architectures). The next step consists of specifying the measures that need to be collected in order to answer those questions and to track the conformance of products and processes to the goals. After the measures have been specified, we need to develop the data collection mechanisms, including validation and analysis mechanisms.
CONCLUSION
In summary, the Goal Question Metric approach is a mechanism for defining and interpreting operational and measurable software goals. It can be used in isolation or, better, within the context of a more general approach to software quality improvement.
Figure below outlines the basic roles and flows of information for this model.


UNIT - II
Software Quality Assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. It does this by means of audits of the quality management system under which the software system is created. These audits are backed by one or more standards, usually ISO 9000.
It is distinct from software quality control which includes reviewing requirements documents, and software testing. SQA encompasses the entire software development process, which includes processes such as software design, coding, source code control, code reviews, change management, configuration management, and release management. Whereas software quality control is a control of products, software quality assurance is a control of processes.
Software quality assurance is related to the practice of quality assurance in product manufacturing. There are, however, some notable differences between software and a manufactured product. These differences stem from the fact that the manufactured product is physical and can be seen whereas the software product is not visible. Therefore its function, benefit and costs are not as easily measured. What's more, when a manufactured product rolls off the assembly line, it is essentially a complete, finished product, whereas software is never finished. Software lives, grows, evolves, and metamorphoses, unlike its tangible counterparts. Therefore, the processes and methods to manage, monitor, and measure its ongoing quality are as fluid and sometimes elusive as are the defects that they are meant to keep in check.
Quality Engineering
Quality engineering is the activity consisting of the cohesive collection of all tasks that are primarily performed to ensure and help continually improve the quality of an endeavor's process and work products.

Goals
The typical goals of quality engineering are to:
  • Ensure that the necessary levels of quality are achieved.
  • Make the achievement of quality predictable and repeatable.
  • Minimize endeavor, organizational, and personal risks due to poor quality.
Objectives
The typical objectives of quality engineering are to:
  • Define what quality means for the endeavor in terms of a quality model defining quality factors and quality subfactors.
  • Plan the quality tasks including helping the requirements team determine and specify the quality requirements and associated quality factors (attributes) and quality metrics.
  • Assure the quality of the process used by the endeavor.
    Thus, quality assurance is concerned with fulfilling the quality requirements and achieving the quality factors of the endeavor’s process.
    “Are we building the products right?”
  • Control the quality of the work products delivered during the endeavor.
    Thus, quality control is concerned with fulfilling the quality requirements and achieving the quality factors of the endeavor’s work products.
    “Are we building the right products?”
Examples
Examples of quality engineering based on scope include:
  • Application Quality Engineering
  • Business Quality Engineering
  • Contact Center Quality Engineering
  • Data Center Quality Engineering
Preconditions
Quality engineering typically begins when the following preconditions hold:
  • The endeavor is started.
  • The quality team is initially staffed and trained in quality engineering.
Completion Criteria
Quality engineering is typically complete when the following postconditions hold:
  • The endeavor is complete.

Tasks
The following diagram illustrates the relationships between the quality tasks:





Plan

The purpose of this Software Quality Assurance Plan (SQAP) is to define the techniques, procedures, and methodologies that will be used at the Center for Space Research (CSR) to assure timely delivery of the software that meets specified requirements within project resources.

The use of this plan will help assure the following: (1) That software development, evaluation and acceptance standards are developed, documented and followed. (2) That the results of software quality reviews and audits will be given to appropriate management within CSR. This provides feedback as to how well the development effort is conforming to various CSR development standards. (3) That test results adhere to acceptance standards.


Teams

The SQA team shall check that quality is maintained during the project and that the proper quality procedures are being followed; discovered problems are reported to Project Management. The members of the project team must work according to the part(s) of the SQAP that apply to their specific tasks.
The tasks of the SQA team
For the first phase of the project (UR, the user requirements phase), the SQA team must see to it that the following documents are properly reviewed internally before they are submitted for an external review.
  • The URD
The SQA team must check whether the URD:
    • contains a general description of the software that has to be developed;
    • contains requirements on the software to be developed as stated by the client;
    • contains constraints on the software to be developed;
    • contains a priority list of the requirements.
  • The SPMP
The SQA team must check whether the goals of the project are clearly described. A life cycle approach for the project must be defined. The SQA team must ensure that the SPMP is realistic by checking:
    • the assumptions made during the planning of the project;
    • restrictions with respect to the plan (e.g. availability of members);
    • external problems (e.g. delivery of PCs, interface card and drivers).
  • The SCMP
With respect to the SCMP, the SQA team has to check whether the document provides procedures concerning:
    • CI identification
    • CI storage
    • CI change control
    • CI status indication
All documents must have a unique identifier and backups must be made at least once every three days.
  • The SQAP
With respect to the SQAP, the SQA team must check whether the SQAP contains:
    • Project standards
    • Review procedures
    • Problem reporting procedures
    • Responsibilities of the project members with respect to quality assurance

 Tasks during SR phase

For the second phase of the project (SR), the SQA team must see to it that the following documents are properly reviewed internally before they are submitted for an external review.
  • The SRD
The SQA team must check whether the SRD:
    • contains requirements on the software to be developed; these requirements must be based on the user requirements stated in the URD;
    • contains constraints on the software to be developed; these constraints must be based on the constraints in the URD;
    • contains a priority list of the requirements;
    • contains a traceability matrix.
  • The SPMP-SR
The SQA team must ensure that the SPMP is realistic by checking:
    • the assumptions made during the planning;
    • restrictions with respect to the planning (e.g. availability of members);
    • external problems (e.g. external software/code).
  • The SCMP-SR
With respect to the SCMP, the SQA team must check whether the SCMP contains:
    • the additional baselines.
  • The SQAP-SR
With respect to the SQAP, the SQA team must check whether the SQAP contains:
    • the tasks of the SQA team during the SR phase.

Tasks during AD phase

For the third phase of the project (AD), the SQA team must see to it that the following documents are properly reviewed internally before they are submitted for an external review.
  • The ADD
The SQA team must check whether the ADD:
    • contains an architectural design of the software to be developed; this design must describe a logical model and the interfaces between the different classes;
    • contains pre- and post-conditions of the methods in the logical model;
    • contains a traceability matrix that checks the design against the software requirements in the SRD.
  • The SPMP-AD
The SQA team must ensure that the SPMP is realistic by checking:
    • the assumptions made during the planning;
    • restrictions with respect to the planning (e.g. availability of members);
    • external problems.
  • The SCMP-AD
With respect to the SCMP, the SQA team must check whether the SCMP contains:
    • the additional baselines.
  • The SQAP-AD
With respect to the SQAP, the SQA team must check whether the SQAP contains:
    • the tasks of the SQA team during the AD phase.

Characteristics of a good QA (Quality Assurance) engineer:

• Understanding of business approach and goals of the organization
• Understanding of entire software development process
• Strong desire for quality
• Establish and enforce SQA methodologies, processes and Testing Strategies
• Judgment skills to assess high-risk areas of application
• Communication with Analysis and Development team
• Report defects with full evidence
• Take preventive actions
• Take actions for Continuous improvement
• Reports to higher management
• Say No when Quality is insufficient
• Work Management
• Meet deadlines

Documentation:
Project documentation may include many kinds of documents (e.g., plans, task reports, development products, problem reports, phase summary reports). Project size, criticality (i.e., the severity of the consequences of failure of the system), and complexity are some features that may affect the amount of documentation a project should need. For example, the design documentation may consist of a single document describing both the system architecture and the detailed modules, or it may consist of separate documents for the architecture and subsystems. The purpose of this section is not to specify how many documents should be required. Rather, this section identifies the information content needed for any project, and when it is needed, so that the information can be used by the vendor, the utility, and the NRC reviewers. Because the NRC reviewers cannot determine the characteristics of the software product without substantial technical specifications, project plans, and reports, NRC should specify the technical products of the vendor that the utility must provide NRC.

Review:

The reviewers will also need to evaluate the installation package, which consists of installation procedures, installation medium (e.g., magnetic tape), test case data used to verify installation, and expected output from the test cases. In some instances, the product may already be installed in the utility. NRC should request documentation on the results of installation and acceptance testing.

 
UNIT - III

Reliability and Quality Control:


Although the terms reliability and quality are often used interchangeably, there is a difference between these two disciplines. While reliability is concerned with the performance of a product over its entire lifetime, quality control is concerned with the performance of a product at one point in time, usually during the manufacturing process. As stated in the definition, reliability assures that components, equipment and systems function without failure for desired periods during their whole design life, from conception (birth) to junking (death).
Quality control is a single, albeit vital, link in the total reliability process. Quality control assures conformance to specifications. This reduces manufacturing variance, which can degrade reliability. Quality control also checks that the incoming parts and components meet specifications, that products are inspected and tested correctly, and that the shipped products have a quality level equal to or greater than that specified. The specified quality level should be one that is acceptable to the users, the consumer and the public. No product can perform reliably without the inputs of quality control, because quality parts and components are needed to go into the product so that its reliability is assured.
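The "performance over time" side of this distinction can be made concrete with the simplest textbook reliability model, the constant-failure-rate (exponential) model R(t) = e^(-λt). This is a standard model, not something the passage above prescribes, and the failure rate used below is an illustrative assumption.

import math

# Minimal sketch: reliability over a product's lifetime under the
# standard constant-failure-rate assumption, R(t) = exp(-lam * t).
# The failure rate is an illustrative assumption.

lam = 0.002          # failures per hour (assumed)
mtbf = 1 / lam       # mean time between failures = 500 hours

for hours in (100, 500, 1000, 5000):
    r = math.exp(-lam * hours)
    print(f"R({hours} h) = {r:.3f}")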


Quality Tools:
Cause Analysis Tools
Tips and tools for the first step to improvement: identifying the cause of a problem or situation.
Evaluation and Decision-Making Tools
Making informed decisions and choosing the best options with a simple, objective rating system, and determining the success of a project.
Process Analysis Tools
How to identify and eliminate unnecessary process steps to increase efficiency, reduce timelines and cut costs.
Seven Basic Quality Tools
These seven tools get to the heart of implementing quality principles.

Data Collection and Analysis Tools
How can you collect the data you need, and what should you do with them once they’re collected?
Idea Creation Tools
Ways to stimulate group creativity and organize the ideas that come from it.
Project Planning and Implementing Tools
How to track a project’s status and look for improvement opportunities.
Seven New Management and Planning Tools
Ways to promote innovation, communicate information and successfully plan major projects.
Seven Basic Quality Tools (Ishikawa's Basic Tools)
"The Old Seven."
"The First Seven."
"The Basic Seven."
Quality pros have many names for these seven basic tools of quality, first emphasized by Kaoru Ishikawa, a professor of engineering at Tokyo University and the father of “quality circles.”
Start your quality journey by mastering these tools, and you'll have a name for them too: "indispensable."

   1. Cause-and-effect diagram (also called Ishikawa or fishbone chart): Identifies many possible causes for an effect or problem and sorts ideas into useful categories.
   2. Check sheet: A structured, prepared form for collecting and analyzing data; a generic tool that can be adapted for a wide variety of purposes.
   3. Control charts: Graphs used to study how a process changes over time.
   4. Histogram: The most commonly used graph for showing frequency distributions, or how often each different value in a set of data occurs.
   5. Pareto chart: Shows on a bar graph which factors are more significant (a chart-building sketch follows this list).
   6. Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look for a relationship.
   7. Stratification: A technique that separates data gathered from a variety of sources so that patterns can be seen (some lists replace "stratification" with "flowchart" or "run chart").
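As a concrete illustration of tool 5, the following minimal sketch builds a Pareto chart from defect-cause counts with matplotlib; the categories and counts are invented.

import matplotlib.pyplot as plt

# Minimal sketch: Pareto chart of defect causes (illustrative data).
causes = {"logic error": 42, "interface": 27, "data handling": 15,
          "documentation": 9, "other": 7}
items = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
labels = [k for k, _ in items]
counts = [v for _, v in items]
total = sum(counts)
cumulative = [sum(counts[:i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)                 # bars: counts, largest first
ax1.set_ylabel("defect count")
ax2 = ax1.twinx()                       # second axis: cumulative %
ax2.plot(labels, cumulative, marker="o", color="tab:red")
ax2.set_ylabel("cumulative %")
plt.title("Pareto chart of defect causes")
plt.show()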

CASE Tools:
Computer-aided software engineering (CASE) is the use of software tools to assist in the development and maintenance of software. Tools used to assist in this way are known as CASE tools.
Some typical CASE tools are:
    * Code generation tools
    * Data modeling tools
    * UML
    * Refactoring tools
    * QVT or Model transformation Tools
    * Configuration management tools including revision control
Preventing, Discovering and Removing Defects
To reduce the number of defects delivered with a software project, an organization can engage in a variety of activities. While defect prevention is much more effective and efficient in reducing the number of defects, most organizations rely on defect discovery and removal. Discovering and removing defects is an expensive and inefficient process. It is much more efficient for an organization to conduct activities that prevent defects.
Defect Removal Efficiency
If an organization has no defect prevention methods in place, then it is totally reliant on defect removal efficiency.
1. Requirements reviews: up to 15% removal of potential defects
2. Design reviews: up to 30% removal of potential defects
3. Code reviews: up to 20% removal of potential defects
4. Formal testing: up to 25% removal of potential defects
In other words, even if your organization is great at defect removal, the maximum percentage of defects it can expect to remove is 90%. At a rate of 1.2 potential defects per function point, a 100-function-point project could contain up to 120 defects. Even with perfect defect removal, the project would still have up to 12 defects after all defect discovery and removal efforts. The vast majority of organizations would receive a B (medium) or even a D (poor) at defect removal efficiency.
Activity                      Perfect   Medium   Poor
Requirements Reviews            15%       5%      0%
Design Reviews                  30%      15%      0%
Code Reviews                    20%      10%      0%
Formal Testing                  25%      15%     15%
Total Percentage Removed        90%      45%     15%
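The arithmetic behind this table is simple enough to script. A minimal sketch, using the removal percentages above and the text's assumption of 1.2 potential defects per function point:

# Minimal sketch: remaining defects after removal activities,
# using the efficiency profiles from the table above and the
# text's rate of 1.2 potential defects per function point.

PROFILES = {
    "perfect": 0.90,   # total percentage of defects removed
    "medium":  0.45,
    "poor":    0.15,
}

def remaining_defects(function_points, profile):
    potential = function_points * 1.2
    return potential * (1 - PROFILES[profile])

for fp in (100, 500, 2500):
    row = {p: round(remaining_defects(fp, p)) for p in PROFILES}
    print(fp, row)
# e.g. 100 FP -> {'perfect': 12, 'medium': 66, 'poor': 102}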

Defect Discovery and Removal

Defects remaining after discovery and removal, by size in function points:

Function Points   Max Defects   Perfect   Medium     Poor
100                   120           12        66      102
200                   240           24       132      204
500                   600           60       330      510
1,000               1,200          120       660    1,020
2,500               3,000          300     1,650    2,550
5,000               6,000          600     3,300    5,100
10,000             12,000        1,200     6,600   10,200
20,000             24,000        2,400    13,200   20,400
An organization with a project of 2,500 function points that was about medium at defect discovery and removal would have 1,650 defects remaining after all defect removal and discovery activities. The calculation is 2,500 x 1.2 = 3,000 potential defects. The organization would be able to remove about 45% of the defects, or 1,350 defects. The total potential defects (3,000) less the removed defects (1,350) equals the remaining defects of 1,650.
Defect Prevention
If an organization concentrates on defect prevention (instead of defect detection), then the number of defects inserted or created is much lower, and the amount of time and effort required to discover and remove the remaining defects is much lower as well.
1. Roles and responsibilities clearly defined: up to 15% reduction in number of defects created
2. Formalized procedures: up to 25% reduction in number of defects created
3. Repeatable processes: up to 35% reduction in number of defects created
4. Controls and measures in place: up to 30% reduction in number of defects created
Imagine an organization with items 1 and 2 in place. A project with 100 function points would have a potential of 120 defects, but since preventative measures are in place, the number of potential defects is reduced by 40% (25% + 15%), i.e., by 48. That makes the potential number of defects 72, compared to 120 with no preventative efforts. Assuming the organization is medium at defect discovery and removal, it could remove 45% of the remaining defects, leaving about 40 when the project rolls to production.
Function Points   Max Defects   After Prevention   Remaining (Medium Removal)
100                   120               72                  40
200                   240              144                  79
500                   600              360                 198
1,000               1,200              720                 396
2,500               3,000            1,800                 990
5,000               6,000            3,600               1,980
10,000             12,000            7,200               3,960
20,000             24,000           14,400               7,920

The above table represents the number of defects remaining for an organization that has items 1 and 2 above in place and is medium at discovery and removal.
The problem of estimating defects is multidimensional. First, the total number of defects must be estimated. Second, the impact of defect prevention needs to be understood and the estimated number of defects adjusted. Third, an assessment needs to be done to understand how many defects can be discovered and removed by the organization. These three steps are sketched in code below.
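A minimal sketch of those three steps, using the rates quoted in this section (1.2 potential defects per function point; 15% + 25% = 40% reduction from prevention items 1 and 2; 45% removal for a medium organization):

# Minimal sketch of the three-step defect estimate described above:
# 1. estimate total potential defects, 2. apply prevention,
# 3. apply discovery-and-removal efficiency.

def estimate_remaining(function_points,
                       prevention_reduction=0.40,
                       removal_efficiency=0.45):
    potential = function_points * 1.2                          # step 1
    after_prevention = potential * (1 - prevention_reduction)  # step 2
    remaining = after_prevention * (1 - removal_efficiency)    # step 3
    return round(remaining)

print(estimate_remaining(100))    # ~40, matching the worked example
print(estimate_remaining(2500))   # ~990, matching the table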
Clearly, the fewer defects an organization must discover and remove, the better. This is accomplished through better processes, a more stable organization, and repeatable processes. The focus of software organizations needs to be on defect prevention instead of defect detection.

Software Reliability Models:
A proliferation of software reliability models has emerged as people try to understand the characteristics of how and why software fails, and try to quantify software reliability. Over 200 models have been developed since the early 1970s, but how to quantify software reliability still remains largely unsolved. Interested readers may refer to [RAC96] and [Lyu95]. As many models as there are, and with many more emerging, none can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made for the quantifying process. Therefore, there is no single model that can be used in all situations. No model is complete or even representative. One model may work well for a certain set of software, but may be completely off track for other kinds of problems.

Most software reliability models contain the following parts: assumptions, factors, and a mathematical function that relates reliability to the factors. The mathematical function is usually a higher-order exponential or logarithmic function.

Software modeling techniques can be divided into two subcategories: prediction modeling and estimation modeling [RAC96]. Both kinds of techniques are based on observing and accumulating failure data and analyzing it with statistical inference.

Rayleigh Model
            The Rayleigh model has been found to be the most suitable for predicting the reliability of a software product. It predicts the expected defect density at different stages of the life cycle of the project, once parameters such as the total number of defects (or total cumulative defect rate) and the location of the peak of the curve, in units of time, are decided.
            The shape of the curve indicates the pattern of the defect removal rate over the life cycle of the project. The area bounded by the x-axis and the curve is a measure of the total defects likely to be unearthed from the software being developed.
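One common parameterization (following Kan's treatment) writes the defect-arrival density as f(t) = K * (t / tm^2) * exp(-t^2 / (2 * tm^2)), which peaks at t = tm and encloses a total of K defects. A minimal plotting sketch, with K and tm chosen arbitrarily for illustration:

import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch of a Rayleigh defect-arrival curve:
# f(t) = K * (t / tm**2) * exp(-t**2 / (2 * tm**2))
# Total area under the curve = K defects; peak at t = tm.
# K and tm below are illustrative assumptions.

K = 120.0    # total defects expected over the life cycle (assumed)
tm = 3.0     # phase index at which defect arrivals peak (assumed)

t = np.linspace(0, 10, 200)
f = K * (t / tm**2) * np.exp(-t**2 / (2 * tm**2))

plt.plot(t, f)
plt.xlabel("life-cycle phase (arbitrary units)")
plt.ylabel("defect arrival rate")
plt.title("Rayleigh model of defect arrivals")
plt.show()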
Below is the plot of the Rayleigh Curve plotted for one of our projects:


                                    Fig 5: Rayleigh plot

The red line indicates the actual defect density observed as against the predicted values (brown smooth curve) obtained through a theoretical model. Observed defect density closely matches with the defect density predicted by the model.

The curve indicates the defect density at the time of system testing as 21 defects.  


Reliability growth models for quality assessment:

Software reliability assessment is very important in developing a quality software product efficiently. This section discusses the quantitative measurement and assessment of software reliability. The techniques are based on software reliability growth models (SRGMs) developed in Japan, which are characterized by nonhomogeneous Poisson processes. By making the assumptions on which they are based more realistic, the models discussed here were designed to describe a software error-detection process or a software failure-occurrence process during the testing phase of software development. A summary of existing SRGMs is given, and maximum-likelihood estimation based on the SRGMs is discussed for software reliability data analysis and assessment. Through the use of a software reliability assessment tool that incorporates several leading SRGMs, examples of software reliability assessment are given for some sets of observed test data taken from actual software projects.
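A widely used NHPP model of this family is the Goel-Okumoto model, whose mean value function is m(t) = a * (1 - exp(-b*t)). The sketch below fits it to cumulative failure counts; for simplicity it uses least squares rather than the maximum-likelihood estimation discussed above, and the weekly failure data are invented.

import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fitting the Goel-Okumoto NHPP growth model
# m(t) = a * (1 - exp(-b * t)) to cumulative failure data.
# Least-squares fit for simplicity; the literature uses maximum
# likelihood. The test data below are invented.

def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 11)
cumulative_failures = np.array([12, 22, 30, 37, 42, 46, 49, 51, 53, 54])

(a, b), _ = curve_fit(mean_value, weeks, cumulative_failures, p0=(60, 0.2))
print(f"estimated total failures a = {a:.1f}, detection rate b = {b:.3f}")
print(f"failures expected by week 15: {mean_value(15, a, b):.1f}")
print(f"residual failures after week 10: {a - mean_value(10, a, b):.1f}")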


UNIT - IV
Quality Management System
Topics covered
1. Concept of quality
2. QMS
3. Elements of QMS
4. Rayleigh model
5. Reliability growth models
6. Complexity metrics and models
7. Customer satisfaction analysis

Concept of quality - historical background
The concept of quality as we think of it now first emerged out of the Industrial Revolution. Previously goods had been made from start to finish by the same person or team of people, with handcrafting and tweaking the product to meet 'quality criteria'. Mass production brought huge teams of people together to work on specific stages of production, where one person would not necessarily complete a product from start to finish. In the late 1800s, pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the subsequent varying quality of output. Taylor established Quality Departments to oversee the quality of production and the rectifying of errors, and Ford emphasized standardization of design and component standards to ensure a standard product was produced. Management of quality was the responsibility of the Quality Department and was implemented by inspection of product output to 'catch' defects.
Application of statistical control came later as a result of World War production methods. Quality management systems are the outgrowth of work done by W. Edwards Deming, a statistician, after whom the Deming Prize for quality is named.
Quality, as a profession and the managerial process associated with the quality function, was introduced during the second half of the 20th century and has evolved since then. Over this period, few other disciplines have seen as many changes as the quality profession.
The quality profession grew from simple control, to engineering, to systems engineering. Quality control activities were predominant in the 1940s, 1950s, and 1960s. The 1970s were an era of quality engineering, and the 1990s saw quality systems as an emerging field. Like medicine, accounting, and engineering, quality has achieved status as a recognized profession.
Quality Management System (QMS)
A Quality Management System (QMS) can be defined as a set of policies, processes and procedures required for planning and execution (production / development / service) in the core business area of an organization. A QMS integrates the various internal processes within the organization and intends to provide a process approach for project execution. A QMS enables organizations to identify, measure, control and improve the various core business processes that will ultimately lead to improved business performance.
ELEMENTS OF QUALITY MANAGEMENT SYSTEMS
The standards of ISO 9000 detail 20 requirements for an organization's quality management system in the following areas:
·         Management Responsibility
·         Quality System
·         Order Entry
·         Design Control
·         Document and Data Control
·         Purchasing
·         Control of Customer Supplied Products
·         Product Identification and Traceability
·         Process Control
·         Inspection and Testing
·         Control of Inspection, Measuring, and Test Equipment
·         Inspection and Test Status
·         Control of Nonconforming Products
·         Corrective and Preventive Action
·         Handling, Storage, Packaging, and Delivery
·         Control of Quality Records
·         Internal Quality Audits
·         Training
·         Servicing
·         Statistical Techniques
The Rayleigh Model Framework
Perhaps the most important principle in software engineering is "do it right the first time." This principle speaks to the importance of managing quality throughout the development process. The interpretation of the principle, in the context of software quality management, is threefold:
·         The best scenario is to prevent errors from being injected into the development process.
·         When errors are introduced, improve the front end of the development process to remove as many of them as early as possible. Specifically, in the context of the waterfall development process, rigorous design reviews and code inspections are needed. In the Cleanroom methodology, function verification by the team is used.
·         If the project is beyond the design and code phases, unit tests and any additional tests by the developers serve as gatekeepers to stop defects from escaping the front-end process before the code is integrated into the configuration management system (the system library). In other words, the phase of unit test or pre-integration test (the development phase prior to system integration) is the last chance to do it right the "first time."
The Rayleigh model is a good overall model for quality management. It articulates the points on defect prevention and early defect removal related to the preceding items. Based on the model, if the error injection rate is reduced, the entire area under the Rayleigh curve becomes smaller, leading to a smaller projected field defect rate. Also, more defect removal at the front end of the development process will lead to a lower defect rate at later testing phases and during maintenance. Both scenarios aim to lower the defects in the latter testing phases, which in turn leads to fewer defects in the field. The relationship between formal machine-testing defects and field defects, as described by the model, is congruent with the famous counterintuitive principle in software testing by Myers (1979), which basically states that the more defects found during formal testing, the more remain to be found later. The reason is that at the late stage of formal testing, the error injection of the development process (mainly during design and code implementation) is basically determined (except for bad fixes during testing). High testing defect rates indicate that error injection is high; if no extra effort is exerted, more defects will escape to the field.
If we use the iceberg analogy to describe the relationship between testing and field defect rates, the tip of the iceberg is the testing defect rate and the submerged part is the field defect rate. The size of the iceberg is equivalent to the amount of error injection. By the time formal testing starts, the iceberg is already formed and its size determined. The larger its tip, the larger the entire iceberg. To reduce the submerged part, extra effort must be applied to expose more of the iceberg above the water. Figure 9.1 shows a schematic representation of the iceberg analogy.

Figure 9.1. Iceberg Analogy—Error Injection, Testing Defects, and Latent Defects

Reliability Growth Models

Although reliability growth models are meant for reliability assessment, they are also useful for quality management at the back end of the development process. Models developed from a previous product or a previous release of the same product can be used to track the testing defects of the current product. To have significant improvement, the defect arrival rate (or failure density) of the current project must fall below the model curve. Figure 9.15 shows an example from a systems software product developed at IBM Rochester. Each data point represents a weekly defect arrival rate during the system test phase. The defect arrival patterns represented by the triangles and circles indicate two later releases of the same product. Compared to the baseline model curve, both new releases witnessed a significant reduction in defect rate during the system test phase.
Figure 9.15. Reliability Growth Model for Quality Management


As a second example, when another product was just about at the start of system testing, the PTR arrival rates were unusually high compared to the model. It was clear that proceeding in a business-as-usual manner would not result in meeting the product's quality goal. A special quality improvement program (QIP) was then proposed, evaluated, approved, and swiftly implemented. The QIP involved five extra activities:
1.    Blitz testing— "artistic" testing in stressful environments
2.    Customer evaluation— customers conducting testing in the development laboratory
3.    Code inspections— additional inspections of error-prone modules, especially routines that are difficult to test such as the error recovery/exception handling routines
4.    Design reviews— rereview of designs of suspect components and modules
5.    Extension of system test— improvement of test suites and extension of testing schedules to allow thorough final test execution
Because of the special QIP activities, the product ship date was delayed one month. As a result, more than 250 would-be field defects were found and removed. The field quality of the product, evidenced by field defect arrivals reported in later years, improved.

Complexity Metrics and Models

Thus far the reliability and quality management models we have discussed are either at the project or the product level. Both types of model tend to treat the software more or less as a black box. In other words, they are based on either the external behavior (e.g., failure data) of the product or the intermediate process data (e.g., type and magnitude of inspection defects), without looking into the internal dynamics of design and code of the software. In this chapter we describe the relationships between metrics about design and code implementation and software quality. The unit of analysis is more granular, usually at the program-module level. Such metrics and models tend to take an internal view and can provide clues for software engineers to improve the quality of their work.
Reliability models are developed and studied by researchers and software reliability practitioners with sophisticated skills in mathematics and statistics; quality management models are developed by software quality professionals and product managers for practical project and quality management. Software complexity research, on the other hand, is usually conducted by computer scientists or experienced software engineers. Like the reliability models, many complexity metrics and models have emerged in the recent past. In this chapter we discuss several key metrics and models, and describe a real-life example of metric analysis and quality improvement.
Lines of Code
The lines of code (LOC) count is usually for executable statements. It is actually a count of instruction statements. The interchangeable use of the two terms apparently originated with Assembler programs, in which a line of code and an instruction statement are the same thing. Because the LOC count represents the program size and complexity, it is not a surprise that the more lines of code there are in a program, the more defects are expected. More intriguingly, researchers found that defect density (defects per KLOC) is also significantly related to LOC count. Early studies pointed to a negative relationship: the larger the module size, the smaller the defect rate. For instance, Basili and Perricone (1984) examined FORTRAN modules with fewer than 200 lines of code for the most part and found higher defect density in the smaller modules. Shen and colleagues (1985) studied software written in Pascal, PL/S, and Assembly language and found that an inverse relationship existed up to about 500 lines. Since larger modules are generally more complex, a lower defect rate is somewhat counterintuitive. Interpretation of this finding rests on the explanation of interface errors: interface errors are more or less constant regardless of module size, so smaller modules are subject to higher error density because of smaller denominators.
More recent studies point to a curvilinear relationship between lines of code and defect rate: Defect density decreases with size and then curves up again at the tail when the modules become very large. For instance, Withrow (1990) studied modules written in Ada for a large project at Unisys and confirmed the concave relationship between defect density (during formal test and integration phases) and module size (Table 11.1). Specifically, of 362 modules with a wide range in size (from fewer than 63 lines to more than 1,000), Withrow found the lowest defect density in the category of about 250 lines. Explanation of the rising tail is readily available. When module size becomes very large, the complexity increases to a level beyond a programmer's immediate span of control and total comprehension. This new finding is also consistent with previous studies that did not address the defect density of very large modules.
Experience from the AS/400 development also lends support to the curvilinear model. In the example in Figure 11.1, although the concave pattern is not as significant as that in Withrow's study, the rising tail is still evident.
Figure 11.1. Curvilinear Relationship Between Defect Rate and Module Size—AS/400 data
The curvilinear model between size and defect density sheds new light on software quality engineering. It implies that there may be an optimal program size that can lead to the lowest defect rate. Such an optimum may depend on language, project, product, and environment; apparently many more empirical investigations are needed. Nonetheless, when an empirical optimum is derived by reasonable methods (e.g., based on the previous release of the same product, or based on a similar product by the same development group), it can be used as a guideline for new module development.
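The curvilinear pattern is easy to look for in module data: compute defects per KLOC by module-size class and compare the classes. A minimal sketch; the module sizes and defect counts are invented.

# Minimal sketch: examining defect density (defects per KLOC) by
# module-size class, as in the curvilinear-model studies cited above.
# The module data below are invented for illustration.

modules = [  # (lines_of_code, defects_found)
    (60, 3), (90, 4), (150, 4), (240, 5), (260, 6),
    (480, 11), (900, 25), (1200, 40), (1500, 58),
]

bins = [(0, 100), (100, 300), (300, 600), (600, 10_000)]

for lo, hi in bins:
    group = [(loc, d) for loc, d in modules if lo <= loc < hi]
    if not group:
        continue
    kloc = sum(loc for loc, _ in group) / 1000.0
    defects = sum(d for _, d in group)
    print(f"{lo:>5}-{hi:<6} LOC: {defects / kloc:6.1f} defects/KLOC")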

Customer Satisfaction Analysis
What clients really want is the interpretation and analysis of the data to provide actionable information: What can we learn from it? What actions does it suggest we take to improve customer satisfaction with what we offer?
The analysis process starts by performing statistical tests to reveal relationships or differences in customer ratings of performance on different product and service attributes, and how they affect overall satisfaction. We compare the performance to that of peers, utilizing a benchmark database to describe how the performance rates on a relative basis. We identify the customers' product and service priorities and compare these to their perceptions of performance through Quadrant Analysis. We look for gaps in performance versus expectations in the search for major opportunities for improvement.
We augment the analysis of the quantitative survey data with careful study of the qualitative information – the comments and observations made by customers. These are an invaluable source of insight into the reasons behind their ratings. In most cases, the analysis is aimed at identifying the key drivers of satisfaction – those product or service elements that are most closely related to customer satisfaction.
Clients benefit from the perspective of years of experience with a wide variety of clients. Applying this knowledge, the goal is to deliver actionable results – information you can use to create change that will improve the competitive position and the bottom line.
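Quadrant Analysis itself is a simple cross of attribute importance against rated performance. A minimal sketch; the attribute names, scores, and mean-based cut points are illustrative assumptions.

# Minimal sketch of Quadrant Analysis: cross attribute importance
# (derived priority) against rated performance. Attributes and
# scores below are invented for illustration.

attributes = {  # name: (importance 1-10, performance 1-10)
    "reliability":     (9.1, 6.2),
    "ease of use":     (7.8, 8.4),
    "documentation":   (4.2, 5.1),
    "install process": (3.9, 8.8),
}

# Cut points: means of importance and performance (one common choice).
imp_cut = sum(i for i, _ in attributes.values()) / len(attributes)
perf_cut = sum(p for _, p in attributes.values()) / len(attributes)

for name, (imp, perf) in attributes.items():
    if imp >= imp_cut and perf < perf_cut:
        quadrant = "fix first (high importance, low performance)"
    elif imp >= imp_cut:
        quadrant = "maintain strength"
    elif perf < perf_cut:
        quadrant = "low priority"
    else:
        quadrant = "possible over-investment"
    print(f"{name}: {quadrant}")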