Software-based System Testing Task
A Software-based System Testing Task is a software engineering task and a verification task that tests a software system against software requirements.
- AKA: Computing System Evaluation.
- Context:
- It can (typically) involve Software Test Items, such as unit test items.
- It can (typically) be a part of Software Development.
- ...
- It can range from being a Software Functional Quality Test to being a Software Structural Quality Test (associated with a software quality measure).
- It can range from being a Formal Software Testing Task to being an Informal Software Testing Task.
- It can range from being a Manual Software Testing Task to being an Automated Software Testing Task.
- ...
- It can be solved by a Software Testing System (possibly implemented using a software testing framework).
- It can be triggered by a Software Review.
- …
- Example(s):
- an Experience-Specific System Testing Task, such as:
- a Mobile App Testing Task that validates UI responsiveness and compatibility across devices.
- a Software Unit Testing Task (of unit tests) that verifies individual software functions.
- a Software Component Testing Task focusing on testing independent software components.
- a Software Integration Testing Task (of software integration tests) that assesses interactions between integrated components.
- a Software Performance Testing Task that measures system response times and throughput under varying load conditions.
- a Software Security Testing Task that evaluates a system’s resistance to security threats.
- a Software Acceptance Testing Task that validates the system against customer-defined requirements.
- a Software Regression Testing Task that checks for new defects after software updates.
- a Software Compatibility Testing Task that ensures consistent functionality across different platforms.
- ...
- Counter-Example(s):
- a Software Design Task, as it focuses on conceptualizing the architecture rather than verifying implemented functionality.
- a Software Programming Task, which deals with coding and development, not testing.
- a Software Debugging Task, which is aimed at identifying and resolving software issues but not verifying functionality against requirements.
- a Software Documentation Task, which involves creating user manuals and developer guides but is not involved in verification.
- See: Software Bug, Distributed System Testing, Software Development Process, Software Verification, Software Quality, Test-Driven Development.
References
2023
- chat
- Q: What is the difference between unit testing, regression testing, integration testing, performance testing, security testing, acceptance testing?
- A:
- Unit testing: testing individual components or functions to verify they work as intended.
- Regression testing: retesting a system or component to verify that changes or updates did not negatively affect existing functionality.
- Integration testing: testing how components or systems work together as a single unit.
- Performance testing: evaluating a system or component's performance and behavior under anticipated workloads.
- Security testing: evaluating a system or component's resistance to unauthorized access and security threats.
- Acceptance testing: evaluating a system or component to determine if it meets customer and business requirements, typically conducted by end-users or stakeholders.
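The distinctions above can be sketched in code. The following is a minimal, illustrative Python example (the `add` function and test names are hypothetical, not drawn from any cited source) showing how a unit test, a regression test, and an integration-style test differ in intent even when they share the same assertion mechanics:

```python
# Hypothetical function under test; stands in for any individual software unit.
def add(a, b):
    return a + b

# Unit test: verifies one function in isolation against its requirement.
def test_add_basic():
    assert add(2, 3) == 5

# Regression test: pins behavior after a (hypothetical) past defect fix,
# so a future change that reintroduces the defect fails this test.
def test_add_negative_operands():
    assert add(-2, -3) == -5

# Integration-style test: verifies two units composed together.
def test_add_composed():
    assert add(add(1, 2), 3) == 6

if __name__ == "__main__":
    test_add_basic()
    test_add_negative_operands()
    test_add_composed()
    print("all tests passed")
```

In practice such functions would be collected by a software testing framework (e.g., a test runner that discovers `test_*` functions); the point here is only that the test categories differ in what they target, not in how they assert.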
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/software_testing Retrieved:2020-5-18.
- Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use.
Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test:
- meets the requirements that guided its design and development,
- responds correctly to all kinds of inputs,
- performs its functions within an acceptable time,
- is sufficiently usable,
- can be installed and run in its intended environments, and
- achieves the general result its stakeholders desire.
- As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources. As a result, software testing typically (but not exclusively) attempts to execute a program or application with the intent of finding software bugs (errors or other defects). The job of testing is an iterative process as when one bug is fixed, it can illuminate other, deeper bugs, or can even create new ones.
Software testing can provide objective, independent information about the quality of software and risk of its failure to users or sponsors.
Software testing can be conducted as soon as executable software (even if partially complete) exists. The overall approach to software development often determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and then implemented in testable programs. In contrast, under an agile approach, requirements, programming, and testing are often done concurrently.
2015
- (Maddox, 2015) ⇒ Philip Maddox. (2015). “Testing a Distributed System.” In: Communications of the ACM Journal, 58(9). doi:10.1145/2776756
- QUOTE: Distributed systems can be especially difficult to program for a variety of reasons. They can be difficult to design, difficult to manage, and, above all, difficult to test. Testing a normal system can be trying even under the best of circumstances, and no matter how diligent the tester is, bugs can still get through. ... A common pitfall in testing is to check input and output from only a single system. A good chunk of the time this will work to check basic system functionality, but if that data is going to be propagated to multiple parts of a distributed system, it is easy to overlook issues that could cause a lot of problems later.
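The pitfall Maddox describes, checking input and output on only a single system, can be illustrated with a toy sketch. All names below (`Node`, `Cluster`, `assert_propagated`) are hypothetical and model a trivially replicated key-value store, not any real distributed system:

```python
# Toy in-memory "cluster": a write is expected to propagate to every node.
class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}

class Cluster:
    def __init__(self, n):
        self.nodes = [Node(f"node{i}") for i in range(n)]

    def write(self, key, value):
        # A correct implementation replicates the write to all nodes.
        for node in self.nodes:
            node.store[key] = value

def assert_propagated(cluster, key, expected):
    # Check EVERY node, not just the one that accepted the write;
    # checking a single node is the pitfall the quote warns about.
    for node in cluster.nodes:
        assert node.store.get(key) == expected, f"{node.name} missing {key!r}"

cluster = Cluster(3)
cluster.write("x", 42)
assert_propagated(cluster, "x", 42)
```

A test that only asserted `cluster.nodes[0].store["x"] == 42` would pass even if replication to the other nodes were broken, which is exactly the class of overlooked issue the quote describes.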
2003
- (Chen & Roşu, 2003) ⇒ Feng Chen, and Grigore Roşu. (2003). “Towards Monitoring-oriented Programming: A Paradigm Combining Specification and Implementation.” In: Electronic Notes in Theoretical Computer Science, 89(2).