[Part 03] Fundamentals of Software Testing

In Part 1 we covered the first three topics of the Fundamentals of Software Testing, and in Part 2 we covered the next four. This article continues the series. If you haven’t read the previous parts yet, you’ll find links to both at the bottom of this article.

We aim to spread awareness of the software testing field. If you are interested, feel free to reach out through our CONTACT US page. We’ll be happy to assist you.

Test Execution and Reporting

Test execution is a crucial part of software testing; together with reporting, it confirms that the software meets its quality standards.

1. Running Test Cases: Test cases are like a checklist of what needs to be tested in the software. Running them involves executing each test step by step, just like following a recipe. This process ensures that every aspect of the software is thoroughly checked, helping identify any potential issues or bugs.

Importance: Running test cases ensures that the software functions as intended and meets the specified requirements. It helps in uncovering defects early in the development cycle, saving time and resources by addressing issues before they escalate.
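To make this concrete, below is a minimal sketch of a test case automated in Python with pytest. The ShoppingCart class and its behaviour are invented for illustration; the point is the step-by-step structure of a test: set up, act, then check.

```python
# A hypothetical test case executed step by step (pytest style).
# ShoppingCart is a made-up example class, defined here so the test runs.

class ShoppingCart:
    """Tiny stand-in implementation used only for this example."""
    def __init__(self):
        self.lines = []                      # (name, unit_price, quantity)

    def add(self, name, price, qty=1):
        self.lines.append((name, price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self.lines)


def test_cart_total_updates_when_items_are_added():
    # Step 1: start from a known state (an empty cart).
    cart = ShoppingCart()

    # Step 2: perform the actions described in the test case, in order.
    cart.add("notebook", price=5.00, qty=2)
    cart.add("pen", price=1.50)

    # Step 3: compare the actual outcome with the expected result.
    assert cart.total() == 11.50
```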

2. Verifying Results: After running a test case, testers compare the actual outcome with the expected result. This verification step ensures that the software behaves as expected and performs its intended functions correctly.

Importance: Verifying results helps in validating the accuracy and reliability of the software. It ensures that the features work as intended, providing confidence to stakeholders that the software meets quality standards and is ready for deployment.
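As a small illustration of this step, the sketch below pairs each input with its expected result and fails whenever the actual output differs. The calculate_discount function and its 10% rule are assumptions invented for this example.

```python
# Verifying results: expected values are written down next to the inputs,
# and the test fails if the actual result does not match.
import pytest


def calculate_discount(order_total):
    """Hypothetical rule: 10% off orders of 100 or more."""
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0


@pytest.mark.parametrize("order_total, expected", [
    (50.00, 0.00),     # below the threshold: no discount expected
    (100.00, 10.00),   # at the threshold: 10% discount expected
    (250.00, 25.00),   # above the threshold: 10% discount expected
])
def test_discount_matches_expected_result(order_total, expected):
    actual = calculate_discount(order_total)
    assert actual == expected, f"expected {expected}, got {actual}"
```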

3. Reporting Defects: If any discrepancies or issues are found during testing, testers report them as defects or bugs. Reporting includes documenting detailed information about the problem, such as steps to reproduce it and its impact on the software.

Importance: Reporting defects is vital for improving the quality of the software. It provides valuable feedback to developers, enabling them to identify and fix issues promptly. Addressing defects early in the development process helps prevent potential issues in the future and ensures a smoother user experience.
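The sketch below shows the kind of information a defect report usually captures. The field names and the sample bug are illustrative only; they do not follow the schema of any particular bug tracker.

```python
# A minimal, illustrative structure for a defect report.
from dataclasses import dataclass
from typing import List


@dataclass
class DefectReport:
    title: str                      # short, searchable summary
    steps_to_reproduce: List[str]   # exact steps that trigger the problem
    expected_result: str            # what should have happened
    actual_result: str              # what actually happened
    severity: str                   # e.g. "critical", "major", "minor"
    environment: str                # build, OS, browser, device, etc.


bug = DefectReport(
    title="Checkout button does nothing on the cart page",
    steps_to_reproduce=[
        "Add any item to the cart",
        "Open the cart page",
        "Click the 'Checkout' button",
    ],
    expected_result="The payment page opens",
    actual_result="Nothing happens and no error message is shown",
    severity="major",
    environment="Build 1.4.2, Chrome 124, Windows 11",
)
```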

Types of Defects and Defect Tracking

What is defect tracking and management?

Defect tracking and management is the process of finding, documenting, prioritizing, fixing, and tracking issues or problems (defects) in software. It involves the following steps (a minimal sketch of the lifecycle follows the list):

  1. Finding Defects: Testers identify issues by testing the software against predefined criteria or user expectations.
  2. Documenting Defects: Testers record detailed information about each defect, including how to reproduce it, its severity, and its impact on the software.
  3. Prioritizing Defects: Defects are prioritized based on their severity and impact on the software’s functionality or usability.
  4. Fixing Defects: Developers address the identified defects by modifying the software’s code or configuration to correct the issues.
  5. Tracking Defects: The status of each defect is tracked throughout the resolution process, from discovery to closure. This helps ensure that defects are properly addressed and resolved in a timely manner.
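To make the lifecycle concrete, here is a minimal sketch in Python. The states and transitions are simplified assumptions; real trackers such as Jira or Bugzilla use richer, configurable workflows.

```python
# A simplified defect lifecycle: found -> triaged -> fixed -> closed.
from enum import Enum


class DefectStatus(Enum):
    NEW = "new"                      # 1-2. found and documented
    TRIAGED = "triaged"              # 3. reviewed and prioritized
    IN_PROGRESS = "in_progress"      # 4. a developer is working on a fix
    FIXED = "fixed"                  # 4. fix delivered, awaiting retest
    CLOSED = "closed"                # 5. retested and confirmed resolved


class Defect:
    def __init__(self, title, priority):
        self.title = title
        self.priority = priority                 # e.g. "high", "medium", "low"
        self.status = DefectStatus.NEW
        self.history = [DefectStatus.NEW]        # every state is recorded

    def move_to(self, new_status):
        # Tracking: each status change is logged until the defect is closed.
        self.status = new_status
        self.history.append(new_status)


defect = Defect("Login fails with valid credentials", priority="high")
for status in (DefectStatus.TRIAGED, DefectStatus.IN_PROGRESS,
               DefectStatus.FIXED, DefectStatus.CLOSED):
    defect.move_to(status)

print([s.value for s in defect.history])
# ['new', 'triaged', 'in_progress', 'fixed', 'closed']
```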

Types of Defects

Defects in software can take various forms and affect different aspects of the software’s functionality, usability, or performance. Here are some common types of defects:

  1. Functional Defects: These defects impact the core functionality of the software. Examples include buttons not working, calculations producing incorrect results, or features not performing as expected.
  2. User Interface (UI) Defects: UI defects affect the visual appearance or usability of the software. This includes issues like misaligned elements, inconsistent styling, or poor navigation flow.
  3. Performance Defects: Performance defects degrade the software’s speed, responsiveness, or efficiency. Examples include slow loading times, excessive resource consumption, or bottlenecks in processing.
  4. Compatibility Defects: Compatibility defects arise when the software behaves differently across various platforms, devices, or environments. This includes issues like browser compatibility problems, operating system-specific bugs, or hardware dependencies.
  5. Security Defects: Security defects expose vulnerabilities that could compromise the confidentiality, integrity, or availability of the software and its data. Examples include insufficient access controls, input validation vulnerabilities, or insecure data storage.
  6. Concurrency Defects: Concurrency defects occur in multi-threaded or concurrent software systems when multiple processes or threads access shared resources improperly, leading to race conditions, deadlocks, or data corruption (a minimal sketch appears after this list).
  7. Localization/Internationalization Defects: These defects impact the software’s support for different languages, cultures, or regions. This includes issues like mistranslated text, date/time formatting errors, or cultural insensitivity.
  8. Documentation Defects: Documentation defects involve inaccuracies, inconsistencies, or inadequacies in the software’s documentation, such as user manuals, help guides, or API references.
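Concurrency defects (item 6 above) are easiest to understand with an example. The sketch below is a deliberately broken read-modify-write on a shared counter; the yield in the middle is contrived so that the lost update is easy to reproduce.

```python
# A lost-update race condition: two threads update a shared counter
# without a lock, so one thread's write can overwrite the other's.
import threading
import time

counter = 0                        # shared state with no lock protecting it


def unsafe_increment(times):
    global counter
    for _ in range(times):
        value = counter            # 1. read the shared value
        time.sleep(0)              # yield, as a real workload might; another
                                   #    thread can update counter right here
        counter = value + 1        # 2. write back, possibly losing that update


threads = [threading.Thread(target=unsafe_increment, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 20000 if the updates were synchronized; the actual total is
# usually much lower. Wrapping the read-modify-write in a threading.Lock
# would fix the defect.
print(counter)
```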

Verification vs Validation

Verification: It’s like checking whether the software is built correctly according to the design and requirements. Verification is typically done by developers and testers, and it can rely on static techniques such as static analysis (examining the code without executing it).

Validation: It’s like checking whether the software meets the customer’s needs and expectations. Validation typically involves the users themselves and can be done through usability testing, user acceptance testing (UAT), and similar methods. In short, verification asks "did we build the product right?", while validation asks "did we build the right product?" and confirms that the software does what its users actually need.
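One way to see the difference is in how the checks are phrased. The sketch below uses a made-up shipping_fee rule: the verification test checks the implementation against the written specification, while validation asks whether the behaviour actually satisfies the user, which ultimately requires the users themselves.

```python
# Verification: was the product built right?
# Assumed specification: "shipping is free for orders of 50 or more".
def shipping_fee(order_total):
    return 0.0 if order_total >= 50 else 4.99


def test_verification_matches_spec():
    # Checks the implementation against the documented rule.
    assert shipping_fee(50.00) == 0.0
    assert shipping_fee(49.99) == 4.99


# Validation: was the right product built?
# In user acceptance testing, a real shopper walks through the scenario:
#   "When I spend 50 or more, checkout should clearly show free shipping."
# Automated checks can support this, but validation ultimately depends on
# users confirming that the software solves their actual problem.
```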

Testing Metrics and Measurement

Testing metrics play a crucial role in assessing the effectiveness, efficiency, and overall quality of the testing process. Here are some key testing metrics:

  • Test Coverage: Shows how much of the code (or how many requirements) the tests actually exercise. The higher the coverage, the fewer untested areas there are for defects to hide in.
  • Defect Density: The number of defects found per unit of code, commonly per thousand lines of code (KLOC), or per module. A high defect density points to parts of the software that need extra attention.
  • Test Effectiveness: Shows how good our tests are at detecting defects, for example the share of all defects that testing caught before release. Finding more defects early is better.
  • Defect Arrival Rate: The number of newly discovered defects reported during a specific period. A spike in new defects can signal instability in a recent build or feature.
  • Mean Time to Detect (MTTD) and Mean Time to Repair (MTTR): MTTD measures how long, on average, a defect goes unnoticed before it is found; MTTR measures how long it takes to fix a defect once it is reported. Lower values are better because they mean issues are found and resolved quickly.
  • Test Execution Time: How long the test suite takes to run. Shorter execution times let us run the tests more often and get feedback sooner.

These measurements help us evaluate how well our testing is working and find ways to improve.
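For instance, several of these metrics reduce to simple arithmetic. The figures below are made-up sample data, used only to show the calculations.

```python
# Sample metric calculations with made-up numbers.
executed_lines, total_lines = 4_200, 5_000
defects_found, kloc = 18, 12.5               # KLOC = thousands of lines of code
detection_hours = [4, 30, 2, 12]             # how long each defect went unnoticed
repair_hours = [1, 8, 3, 5]                  # how long each defect took to fix

test_coverage = executed_lines / total_lines * 100      # % of code exercised
defect_density = defects_found / kloc                   # defects per KLOC
mttd = sum(detection_hours) / len(detection_hours)      # mean time to detect
mttr = sum(repair_hours) / len(repair_hours)            # mean time to repair

print(f"Test coverage:  {test_coverage:.1f}%")                  # 84.0%
print(f"Defect density: {defect_density:.2f} defects/KLOC")     # 1.44
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")                # 12.0 h, 4.2 h
```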

Previous Parts:
