What makes a good test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the
point of view of the customer, a strong desire for quality, and an attention to
detail. Tact and diplomacy are useful in maintaining a cooperative relationship
with developers, and an ability to communicate with both technical (developers)
and non-technical (customers, management) people is useful. Previous software
development experience can be helpful as it provides a deeper understanding of
the software development process, gives the tester an appreciation for the
developers' point of view, and reduces the learning curve in automated test tool
programming. Judgement skills are needed to assess high-risk areas of an
application on which to focus testing efforts when time is limited.
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally,
they must be able to understand the entire software development process and how
it can fit into the business approach and goals of the organization.
Communication skills and the ability to understand various sides of issues are
important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as
well as to see 'what's missing' is important for inspections and reviews.
What makes a good QA or Test manager?
A good QA, test, or QA/Test(combined) manager should:
• be familiar with the software development process
• be able to maintain the enthusiasm of their team and promote a positive
atmosphere, despite what is a somewhat 'negative' process (e.g., looking for
or preventing problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say 'no' to other managers when
quality is insufficient or QA processes are not being adhered to
• have people judgement skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers,
managers, and customers.
• be able to run meetings and keep them focused
What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA
practices should be documented such that they are repeatable. Specifications,
designs, business rules, inspection reports, configurations, code changes, test
plans, test cases, bug reports, user manuals, etc. should all be documented.
There should ideally be a system for easily finding and obtaining documents
and determining which documents contain a particular piece of information.
Change management for documentation should be used if possible.
What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems, or failure, in a complex
software project is to have poorly documented requirements specifications.
Requirements are the details describing an application's externally-perceived
functionality and properties. Requirements should be clear, complete, reasonably
detailed, cohesive, attainable, and testable. A non-testable requirement would
be, for example, 'user-friendly' (too subjective). A testable requirement would
be something like 'the user must enter their previously-assigned password to
access the application'. Determining and organizing requirements details in a
useful and efficient way can be a difficult effort; different methods are
available depending on the particular project. Many books are available that
describe various approaches to this task. (See the Bookstore section's 'Software
Requirements Engineering' category for books on Software Requirements.)
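The password requirement above is testable precisely because it can be turned
into an automated check. Below is a minimal sketch in Python; the Application
class and its login method are invented for illustration and are not part of
any real product:

    # Hypothetical sketch: an automated check for the testable requirement
    # 'the user must enter their previously-assigned password to access
    # the application'. The Application class is an assumed stand-in.
    import unittest

    class Application:
        def __init__(self, assigned_password):
            self._password = assigned_password

        def login(self, password):
            # Access is granted only with the previously-assigned password.
            return password == self._password

    class TestPasswordRequirement(unittest.TestCase):
        def test_correct_password_grants_access(self):
            app = Application(assigned_password="s3cret")
            self.assertTrue(app.login("s3cret"))

        def test_wrong_password_denies_access(self):
            app = Application(assigned_password="s3cret")
            self.assertFalse(app.login("wrong"))

    if __name__ == "__main__":
        unittest.main()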
Care should be taken to involve ALL of a project's significant 'customers' in
the requirements process. 'Customers' could be in-house personnel or out, and
could include end-users, customer acceptance testers, customer contract
officers, customer management, future software maintenance engineers,
salespeople, etc. Anyone who could later derail the project if their
expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements
specifications. Ideally, the requirements are spelled out in a document with
statements such as 'The product shall.....'. 'Design' specifications should not
be confused with 'requirements'; design specifications should be traceable back
to the requirements.
In some organizations requirements may end up in high level project plans,
functional specification documents, in design documents, or in other documents
at various levels of detail. No matter what they are called, some type of
documentation with detailed requirements will be needed by testers in order to
properly plan and execute tests. Without such documentation, there will be no
clear-cut way to determine if a software application is performing correctly.
'Agile' methods such as XP rely on close interaction and
cooperation between programmers and customers/end-users to iteratively develop
requirements. The programmer uses 'Test first' development to first create
automated unit testing code, which essentially embodies the requirements.
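For example, in 'test first' development the unit test below would be written
before the production code it exercises; the discount rule is an assumed
example requirement, not from any particular project:

    # Minimal 'test first' sketch: the test embodies the (assumed) requirement
    # that a 10% discount applies to orders of 100 or more.
    import unittest

    def apply_discount(total):
        # Production code written after the test, just enough to pass it.
        return total * 0.9 if total >= 100 else total

    class TestDiscountRequirement(unittest.TestCase):
        def test_discount_applies_at_threshold(self):
            self.assertEqual(apply_discount(100), 90)

        def test_no_discount_below_threshold(self):
            self.assertEqual(apply_discount(99), 99)

    if __name__ == "__main__":
        unittest.main()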
What steps are needed to develop and run software tests?
The following are some of the steps to consider:
• Obtain requirements, functional design, and internal design specifications and
other necessary documents
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting
requirements, required standards and processes (such as release processes,
change processes, etc.)
• Identify application's higher-risk aspects, set priorities, and determine
scope and limitations of tests
• Determine test approaches and methods - unit, integration, functional, system,
load, usability tests, etc.
• Determine test environment requirements (hardware, software, communications,
etc.)
• Determine testware requirements (record/playback tools, coverage analyzers,
test tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements
• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, error classes
(see the sketch after this list)
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking
processes, set up logging and archiving processes, set up or obtain test input
data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware
through life cycle
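As a sketch of the equivalence class and boundary value step above, consider a
field that accepts ages 18 through 65 (an assumed requirement). The valid
class is 18-65; the invalid classes are below 18, above 65, and non-numeric
input; boundary value analysis tests the edges of each range:

    # Hypothetical sketch of equivalence classes and boundary values for an
    # age field with an assumed valid range of 18-65.
    import unittest

    def is_valid_age(value):
        # Assumed validation rule under test.
        return isinstance(value, int) and 18 <= value <= 65

    class TestAgeBoundaries(unittest.TestCase):
        def test_boundary_values(self):
            # Just below, on, and just above each boundary.
            for age, expected in [(17, False), (18, True), (19, True),
                                  (64, True), (65, True), (66, False)]:
                self.assertEqual(is_valid_age(age), expected)

        def test_error_class(self):
            # Error class: input that is not a number at all.
            self.assertFalse(is_valid_age("forty"))

    if __name__ == "__main__":
        unittest.main()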
What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a
test plan is a useful way to think through the efforts needed to validate the
acceptability of a software product. The completed document will help people
outside the test group understand the 'why' and 'how' of product validation. It
should be thorough enough to be useful but not so thorough that no one outside
the test group will read it. The following are some of the items that might be
included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other
test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-
info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error
classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and
production systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
(See the Bookstore section's 'Software Testing' and 'Software QA' categories for
useful books with more information.)
What's a 'test case'?
• A test case is a document that describes an input, action, or event and an
expected response, to determine if a feature of an application is working
correctly. A test case should contain particulars such as test case identifier,
test case name, objective, test conditions/setup, input data requirements,
steps, and expected results.
• Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking
through the operation of the application. For this reason, it's useful to
prepare test cases early in the development cycle if possible.
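One simple way to keep those particulars together is a small record structure.
The sketch below uses a Python dataclass; the field names are illustrative and
not taken from any specific standard or tool:

    # Minimal sketch of a test case record holding the particulars named above.
    from dataclasses import dataclass, field

    @dataclass
    class TestCaseRecord:
        identifier: str        # e.g. 'TC-001'
        name: str              # short descriptive name
        objective: str         # what the test is meant to establish
        setup: str             # test conditions / preconditions
        input_data: dict       # input data requirements
        steps: list = field(default_factory=list)             # ordered actions
        expected_results: list = field(default_factory=list)

    login_case = TestCaseRecord(
        identifier="TC-001",
        name="Login with assigned password",
        objective="Verify access requires the previously-assigned password",
        setup="User account exists with a known password",
        input_data={"username": "alice", "password": "s3cret"},
        steps=["Open login page", "Enter credentials", "Submit"],
        expected_results=["User is granted access to the application"],
    )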
What should be done after a bug is found?
The bug needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested, and determinations
made regarding requirements for regression testing to check that fixes didn't
create problems elsewhere. If a problem-tracking system is in place, it should
encapsulate these processes. A variety of commercial problem-tracking/management
software tools are available (see the 'Tools' section for web resources with
listings of such tools). The following are items to consider in the tracking
process:
• Complete information such that developers can understand the bug, get an idea
of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case
or if the developer doesn't have easy access to the test case/test script/test
tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs
that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is
common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate
personnel at various stages. For instance, testers need to know when retesting
is needed, developers need to know when bugs are found and how to get the needed
information, and reporting/summary capabilities are needed for managers.
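As a rough illustration, the core tracking fields above might be captured in a
record like the following; the field set and status values are assumptions,
since real problem-tracking tools define their own schemas:

    # Illustrative sketch of a bug record; all field names and values are
    # invented examples.
    bug_report = {
        "id": "BUG-1017",
        "status": "New",                  # e.g. 'New', 'Released for Retest'
        "application": "OrderEntry 2.3.1",
        "module": "Checkout screen",
        "environment": "Windows 11, Chrome 126, test server TS-2",
        "test_case": "TC-001",
        "summary": "Login succeeds with blank password",
        "description": "Submitting the login form with an empty password "
                       "field grants access to the application.",
        "steps_to_reproduce": ["Open login page", "Leave password blank",
                               "Click Submit"],
        "severity": 1,                    # 1 (critical) to 5 (low)
        "reproducible": True,
        "reported_by": "tester-a",
        "assigned_to": "dev-team-checkout",
    }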
What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and
track: code, requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the
changes. (See the 'Tools' section for web resources with listings of
configuration management tools. Also see the Bookstore section's 'Configuration
Management' category for useful books with more information.)
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of
reporting whatever bugs or blocking-type problems initially show up, with the
focus being on critical bugs. Since this type of problem can severely affect
schedules, and indicates deeper problems in the software development process
(such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.) managers should be notified,
and provided with some documentation as evidence of the problem.
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so
complex, and run in such an interdependent environment, that complete testing
can never be done. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
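Several of these factors can be combined into an explicit exit check, which at
least makes the stopping decision visible. A hedged sketch, with thresholds
that are arbitrary examples rather than recommendations:

    # Sketch: combine stopping factors into a single, explicit check.
    def can_stop_testing(pass_rate, coverage, open_critical_bugs,
                         budget_remaining, deadline_reached):
        if deadline_reached or budget_remaining <= 0:
            return True  # external limits force a stop
        return (pass_rate >= 0.95 and        # % of test cases passed
                coverage >= 0.80 and         # code/functionality coverage
                open_critical_bugs == 0)     # bug rate below tolerance

    print(can_stop_testing(pass_rate=0.97, coverage=0.85,
                           open_critical_bugs=0,
                           budget_remaining=5, deadline_reached=False))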
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application,
every possible combination of events, every dependency, or everything that could
go wrong, risk analysis is appropriate to most software development projects.
This requires judgement skills, common sense, and experience. (If warranted,
formal methods are also available.) Considerations can include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
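Answers to such questions can be reduced to a simple ranking. The sketch below
scores each application area by likelihood of failure times impact of failure;
the areas and scores are invented examples:

    # Minimal risk-analysis sketch: test the highest-scoring areas first.
    areas = [
        # (area, likelihood of failure 1-5, impact of failure 1-5)
        ("payment processing", 3, 5),
        ("report formatting",  4, 2),
        ("user login",         2, 5),
        ("help pages",         2, 1),
    ]

    for area, likelihood, impact in sorted(
            areas, key=lambda a: a[1] * a[2], reverse=True):
        print(f"{area}: risk score {likelihood * impact}")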
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the
same considerations as described previously in 'What if there isn't enough time
for thorough testing?' apply. The tester might then do ad hoc testing, or write
up a limited test plan based on the risk analysis.
What can be done if requirements are changing continuously?
A common problem and a major headache.
• Work with the project's stakeholders early on to understand how requirements
might change so that alternate test plans and strategies can be worked out in
advance, if possible.
• It's helpful if the application's initial design allows for some adaptability
so that later changes do not require redoing the application from scratch.
• If the code is well-commented and well-documented this makes changes easier
for the developers.
• Use rapid prototyping whenever possible to help customers feel sure of their
requirements and minimize changes.
• The project's initial schedule should allow for some extra time commensurate
with the possibility of changes.
• Try to move new requirements to a 'Phase 2' version of an application, while
using the original requirements for the 'Phase 1' version.
• Negotiate to allow only easily-implemented new requirements into the project,
while moving more difficult new requirements into future versions of the
application.
• Be sure that customers and management understand the scheduling impacts,
inherent risks, and costs of significant requirements changes. Then let
management or the customers (not the developers or testers) decide if the
changes are warranted - after all, that's their job.
• Balance the effort put into setting up automated testing with the expected
effort required to re-do them to deal with changes.
• Try to design some flexibility into automated test scripts.
• Focus initial automated testing on application aspects that are most likely to
remain unchanged.
• Devote appropriate effort to risk analysis of changes to minimize regression
testing needs.
• Design some flexibility into test cases (this is not easily done; the best
bet might be to minimize the detail in the test cases, or set up only
higher-level generic-type test plans; a data-driven sketch follows this list)
• Focus less on detailed test plans and test cases and more on ad hoc testing
(with an understanding of the added risk that this entails).
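One way to build that flexibility into test cases is to make them data-driven,
so a requirements change touches only a table of expectations rather than the
test logic. A sketch, with an assumed shipping-cost rule as the example:

    # Data-driven test sketch: the expectations table absorbs requirement
    # changes; the test logic stays the same.
    import unittest

    def shipping_cost(weight_kg):
        # Assumed current rule: flat 5 up to 2 kg, then 3 per extra kg.
        return 5 if weight_kg <= 2 else 5 + 3 * (weight_kg - 2)

    EXPECTATIONS = [  # update this table when the requirement changes
        (1, 5),
        (2, 5),
        (4, 11),
    ]

    class TestShippingCost(unittest.TestCase):
        def test_table(self):
            for weight, expected in EXPECTATIONS:
                with self.subTest(weight=weight):
                    self.assertEqual(shipping_cost(weight), expected)

    if __name__ == "__main__":
        unittest.main()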
What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant
unexpected or hidden functionality, and it would indicate deeper problems in the
software development process. If the functionality isn't necessary to the
purpose of the application, it should be removed, as it may have unknown impacts
or dependencies that were not taken into account by the designer or the
customer. If not removed, design information will be needed to determine added
testing needs or regression testing needs. Management should be made aware of
any significant added risks as a result of the unexpected functionality. If the
functionality only affects areas such as minor improvements in the user
interface, for example, it may not be a significant risk.
How can Software QA processes be implemented without stifling productivity?
By implementing QA processes slowly over time, using consensus to reach
agreement on processes, and adjusting and experimenting as an organization grows
and matures, productivity will be improved instead of stifled. Problem
prevention will lessen the need for problem detection, panics and burn-out will
decrease, and there will be improved focus and less wasted effort. At the same
time, attempts should be made to keep processes simple and efficient, minimize
paperwork, promote computer-based processes and automated tracking and
reporting, minimize time required in meetings, and promote training as part of
the QA process. However, no one - especially talented technical types - likes
rules or bureaucracy, and in the short run things may slow down a bit. A typical
scenario would be that more days of planning and development will be needed, but
less time will be required for late-night bug-fixing and calming of irate
customers.
What if an organization is growing so fast that fixed QA processes are
impossible?
This is a common problem in the software industry, especially in new technology
areas. There is no easy solution in this situation, other than:
• Hire good people
• Management should 'ruthlessly prioritize' quality issues and maintain focus on
the customer
• Everyone in the organization should be clear on what 'quality' means to the
customer
How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies
among clients, data communications, hardware, and servers. Thus testing
requirements can be extensive. When time is limited (as it usually is) the focus
should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server
application limitations and capabilities. There are commercial tools to assist
with such testing. (See the 'Tools' section for web resources with listings that
include these kinds of test tools.)
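While commercial tools do far more, even a very small script can give a first
look at load behavior. A sketch using only the Python standard library; the
URL and request count are placeholders, not a real endpoint:

    # Tiny load-test sketch: fire concurrent requests and report timings.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"   # assumed test server
    REQUESTS = 50

    def timed_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=10) as pool:
        timings = list(pool.map(timed_request, range(REQUESTS)))

    print(f"requests: {len(timings)}")
    print(f"average response time: {sum(timings) / len(timings):.3f}s")
    print(f"worst response time:   {max(timings):.3f}s")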
How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and
'browser' clients. Consideration should be given to the interactions between
html pages, TCP/IP communications, Internet connections, firewalls, applications
that run in web pages (such as applets, javascript, plug-in applications), and
applications that run on the server side (such as cgi scripts, database
interfaces, logging applications, dynamic page generators, asp, etc.).
Additionally, there are a wide variety of servers and browsers, various versions
of each, small but sometimes significant differences between them, variations in
connection speeds, rapidly changing technologies, and multiple standards and
protocols. The end result is that testing for web sites can become a major
ongoing effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit
time), and what kind of performance is required under such loads (such as web
server response time, database query response times)? What kinds of tools will
be needed for performance testing (such as web load testing tools, other tools
already in house that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What
kind of connection speeds will they be using? Are they intra-organization (thus
with likely high connection speeds and similar browsers) or Internet-wide (thus
with a wide variety of connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should
pages appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? How
much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be
required and what is it expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does
that affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content,
and what are the requirements for maintaining, tracking, and controlling page
content, graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations
will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or
graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often?
(a small link-checking sketch follows this list)
• Can testing be done on the production system, or will a separate test system
be required? How are browser caching, variations in browser option settings,
dial-up connection variabilities, and real-world internet 'traffic congestion'
problems to be accounted for in testing?
• How extensive or customized are the server logging and reporting requirements;
are they considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be
maintained, tracked, controlled, and tested?
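On the link-validation question above, a small automated check is often the
starting point. The sketch below uses only the Python standard library to
collect the links on one page and report any that fail to load; the starting
URL is a placeholder, and a real link checker would also handle redirects,
crawl depth, robots rules, and scheduling:

    # Small link-checking sketch: fetch a page, extract hrefs, try each one.
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    BASE = "http://localhost:8000/"   # assumed site under test

    with urllib.request.urlopen(BASE, timeout=10) as page:
        collector = LinkCollector()
        collector.feed(page.read().decode("utf-8", errors="replace"))

    for link in collector.links:
        target = urljoin(BASE, link)
        try:
            urllib.request.urlopen(target, timeout=10).close()
            print(f"OK      {target}")
        except Exception as error:
            print(f"BROKEN  {target}  ({error})")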
Some sources of site security information include the Usenet newsgroup
'comp.security.announce' and links concerning web site security in the 'Other
Resources' section.
Some usability guidelines to consider - these are subjective and may or may not
apply to a given situation (Note: more information on usability testing issues
can be found in articles about web site usability in the 'Other Resources'
section):
• Pages should be 3-5 screens max unless content is tightly focused on a single
topic. If larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site,
so that it's clear to the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be
provided or generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-end
pages.
• The page owner, revision date, and a link to a contact person or organization
should be included on each page.
Many new web site test tools have appeared in recent years, and more than 280
of them are listed in the 'Web Test Tools' section.