The Software Testing Life Cycle (STLC) is a structured sequence of activities that ensures software quality through a systematic testing process. It defines clear phases, from requirement analysis to test closure, so that each step contributes to delivering a reliable, low-defect product.
Unlike ad-hoc testing, STLC gives testing teams a well-defined roadmap, promoting thorough coverage and efficient use of effort.
This post explores each phase of the Software Testing Life Cycle in detail — from requirement analysis to test closure — including objectives, activities, deliverables, and real-world examples.
1. Introduction to STLC
STLC represents the process of conducting testing activities in a structured and logical order. Each phase has its own entry and exit criteria, ensuring quality is maintained throughout the testing process.
Key Objectives of STLC
- To define a systematic testing approach.
- To ensure testing covers functional and non-functional requirements.
- To detect defects early and minimize risks.
- To improve testing efficiency through planning and automation.
- To produce measurable results and documentation for continuous improvement.
Importance of STLC
STLC helps testers and QA teams stay organized, plan effectively, and evaluate software quality against the stated requirements. By following STLC, teams ensure:
- Timely identification of issues
- Reduced project costs due to early bug detection
- Increased accuracy and traceability of test results
- Efficient resource management
2. Phases of the Software Testing Life Cycle
STLC includes the following major phases:
- Requirement Analysis
- Test Planning
- Test Case Design (Test Case Development)
- Test Environment Setup
- Test Execution
- Test Closure
Each phase has entry and exit criteria, activities, and deliverables. Let’s explore these one by one.
3. Requirement Analysis Phase
Objective
The objective of this phase is to understand the testing requirements from both functional and non-functional perspectives. The QA team analyzes the Software Requirement Specification (SRS) document to identify what needs to be tested.
Key Activities
- Review SRS and Business Requirement Documents (BRD).
- Identify testable requirements.
- Define types of testing needed (functional, integration, performance, etc.).
- Identify automation feasibility.
- Analyze risks and dependencies.
- Create requirement traceability matrix (RTM).
Deliverables
- Requirement Traceability Matrix (RTM)
- Automation feasibility report
- Clarification logs or requirement queries
Example Code: Mapping Requirements to Tests
```python
# Example: creating a simple Requirement Traceability Matrix (RTM) in Python
requirements = {
    "REQ-001": "User login functionality",
    "REQ-002": "Password reset feature",
    "REQ-003": "Profile update section",
}

test_cases = {
    "TC-01": "Verify login with valid credentials",
    "TC-02": "Verify login with invalid credentials",
    "TC-03": "Verify password reset email delivery",
}

# Explicit requirement-to-test links: this mapping is the traceability data itself
links = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test cases designed yet, a visible coverage gap
}

traceability_matrix = {req_id: links.get(req_id, []) for req_id in requirements}

for req, tests in traceability_matrix.items():
    print(f"{req}: {tests}")
```
This simple example shows how each requirement can be mapped to its related test cases; an empty entry immediately exposes a coverage gap.
4. Test Planning Phase
Objective
The Test Planning phase defines the overall testing strategy. The Test Lead or Manager creates a Test Plan document that outlines scope, objectives, resources, tools, schedule, and deliverables.
Key Activities
- Define test objectives and scope.
- Identify test types (manual, automated, regression, etc.).
- Select tools and frameworks.
- Determine entry and exit criteria for testing.
- Estimate effort, cost, and time.
- Assign roles and responsibilities.
- Identify risks and mitigation strategies.
Deliverables
- Test Plan Document
- Effort Estimation Document
- Test Schedule
- Risk Management Plan
Example Code: Test Schedule Automation
```python
import datetime

# Estimated effort (in days) for each STLC phase
tasks = {
    "Requirement Analysis": 3,
    "Test Planning": 5,
    "Test Case Design": 7,
    "Environment Setup": 2,
    "Test Execution": 10,
    "Test Closure": 3,
}

start_date = datetime.date(2025, 10, 1)
schedule = {}

for task, days in tasks.items():
    end_date = start_date + datetime.timedelta(days=days)
    schedule[task] = (start_date, end_date)
    start_date = end_date  # the next phase begins where this one ends

for phase, duration in schedule.items():
    print(f"{phase}: {duration[0]} to {duration[1]}")
```
This example automatically calculates testing phase durations based on estimated effort.
5. Test Case Design Phase
Objective
In this phase, test cases and test scripts are designed based on the identified requirements. The goal is to ensure all functionalities are covered through detailed and executable test cases.
Key Activities
- Write detailed test cases with clear steps, expected results, and conditions.
- Prepare test data.
- Identify reusable test components.
- Review test cases for completeness.
- Prioritize test cases (smoke, regression, etc.).
Deliverables
- Test Case Documents
- Test Data Sets
- Test Scripts (if automation is included)
Example Code: Automated Test Case Template
```python
class TestCase:
    def __init__(self, id, description, steps, expected):
        self.id = id
        self.description = description
        self.steps = steps
        self.expected = expected

    def run(self):
        print(f"Running Test: {self.id} - {self.description}")
        for step in self.steps:
            print(f"Step: {step}")
        print(f"Expected Result: {self.expected}\n")


test1 = TestCase(
    "TC001",
    "Verify login with valid credentials",
    ["Open login page", "Enter valid username/password", "Click login button"],
    "User is successfully logged in",
)
test1.run()
```
This shows how a test case class can be structured programmatically.
6. Test Environment Setup Phase
Objective
To prepare the hardware and software conditions under which testing will be performed. The test environment should replicate the production setup as closely as possible.
Key Activities
- Identify software, hardware, and network configurations.
- Install necessary tools and test frameworks.
- Configure databases and servers.
- Verify environment stability.
- Conduct smoke tests to validate setup.
Deliverables
- Environment Configuration Document
- Access credentials
- Test Environment Validation Report
Example Code: Environment Validation
```python
import shutil

# Tools the test environment is expected to provide on the PATH
tools = ["python", "pytest", "selenium"]

for tool in tools:
    if shutil.which(tool):
        print(f"{tool} is installed and ready.")
    else:
        print(f"Warning: {tool} is missing from the environment.")
```
This script checks if essential tools are available in the test environment.
7. Test Execution Phase
Objective
In this phase, testers execute the designed test cases in the prepared environment. Actual results are compared with expected results, and any discrepancies are reported as defects.
Key Activities
- Execute test cases (manual or automated).
- Log test results.
- Report defects in the defect tracking tool.
- Retest fixed defects.
- Perform regression testing as necessary.
Deliverables
- Test Execution Report
- Defect Logs
- Updated Traceability Matrix
Example Code: Automated Test Execution and Result Logging
```python
results = {}

def execute_test(test_id, actual, expected):
    results[test_id] = "Pass" if actual == expected else "Fail"

execute_test("TC001", "Login Successful", "Login Successful")
execute_test("TC002", "Error Message", "Error Message")
execute_test("TC003", "Email Sent", "Email Not Sent")

for test, result in results.items():
    print(f"{test}: {result}")
```
This code simulates running tests and logging their pass/fail status.
8. Defect Management During Test Execution
Defect management is a crucial sub-process within test execution. It ensures that all discovered defects are recorded, tracked, and resolved systematically.
Common Defect States
- New
- Assigned
- Open
- Fixed
- Retested
- Closed
- Reopened
Example Code: Simple Defect Tracking System
```python
class Defect:
    def __init__(self, id, desc, status="New"):
        self.id = id
        self.desc = desc
        self.status = status

    def update_status(self, new_status):
        self.status = new_status


defects = [
    Defect("D001", "Login button not responsive"),
    Defect("D002", "Profile picture upload fails"),
]

defects[0].update_status("Fixed")

for d in defects:
    print(f"{d.id}: {d.desc} - {d.status}")
```
9. Test Closure Phase
Objective
The Test Closure phase ensures all testing activities are completed, deliverables are handed over, and a closure report is prepared for stakeholders.
Key Activities
- Evaluate test completion criteria.
- Prepare test summary and closure report.
- Analyze metrics and lessons learned.
- Archive test artifacts for future use.
- Conduct team retrospective meetings.
Deliverables
- Test Summary Report
- Lessons Learned Document
- Test Metrics Report
- Closure Checklist
Example Code: Generating Test Summary Report
```python
summary = {
    "Total Test Cases": 50,
    "Executed": 50,
    "Passed": 47,
    "Failed": 3,
    "Defects Found": 5,
    "Defects Closed": 5,
}

print("=== TEST SUMMARY REPORT ===")
for key, value in summary.items():
    print(f"{key}: {value}")

print("\nPass Rate: {:.2f}%".format((summary["Passed"] / summary["Executed"]) * 100))
```
10. Entry and Exit Criteria for Each STLC Phase
| Phase | Entry Criteria | Exit Criteria |
|---|---|---|
| Requirement Analysis | Approved SRS/BRD | RTM Created, Feasibility Done |
| Test Planning | Requirement Analysis Completed | Approved Test Plan |
| Test Case Design | Test Plan Approved | Test Cases Reviewed and Approved |
| Environment Setup | Test Data Ready | Environment Validated |
| Test Execution | Environment Stable | All Tests Executed and Defects Logged |
| Test Closure | All Tests Completed | Test Summary and Closure Reports Prepared |
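The criteria in the table above can also be expressed as data and checked programmatically before a phase is allowed to start. The following is a minimal sketch under that assumption; the criteria strings and the `phase_gate` helper are hypothetical illustrations, not part of any standard tool.
```python
# Hypothetical phase-gate check: a phase may start only when all entry criteria are met
STLC_PHASES = {
    "Requirement Analysis": {
        "entry": ["Approved SRS/BRD"],
        "exit": ["RTM created", "Automation feasibility done"],
    },
    "Test Planning": {
        "entry": ["Requirement analysis completed"],
        "exit": ["Approved test plan"],
    },
    "Test Execution": {
        "entry": ["Environment stable"],
        "exit": ["All tests executed", "Defects logged"],
    },
}

def phase_gate(phase, completed_items):
    """Return True if every entry criterion for the phase is satisfied."""
    return all(item in completed_items for item in STLC_PHASES[phase]["entry"])

done = ["Approved SRS/BRD", "Requirement analysis completed"]
print(phase_gate("Test Planning", done))   # True
print(phase_gate("Test Execution", done))  # False: environment not yet stable
```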
11. Benefits of Following STLC
- Improved Test Coverage: Every requirement is validated.
- Early Defect Detection: Reduces rework and cost.
- Efficient Resource Utilization: Defined responsibilities prevent duplication.
- Consistent Quality: Structured process improves product reliability.
- Better Communication: Clear documentation and reports aid collaboration.
- Automation Feasibility: Defined structure supports CI/CD integration.
12. Challenges in STLC Implementation
- Changing requirements mid-cycle.
- Limited test environment availability.
- Incomplete documentation or unclear specifications.
- Tight deadlines affecting coverage.
- Lack of skilled resources.
These challenges can be mitigated through effective planning, communication, and automation.
13. STLC vs SDLC
| Aspect | STLC | SDLC |
|---|---|---|
| Purpose | Focuses on software testing activities | Focuses on overall software development |
| Scope | Covers the testing phases only, from requirement analysis for testing through test closure | Covers the entire software creation lifecycle, from requirements through maintenance |
| Responsibility | QA/Test Team | Development + Management Team |
| Output | Test Reports, Defect Logs | Functional Software Product |
14. Integration of STLC with Agile and DevOps
In modern development models, STLC integrates seamlessly with Agile and DevOps workflows.
STLC in Agile
- Testing happens in short sprints.
- Continuous feedback and improvement.
- Testers collaborate with developers from the start.
STLC in DevOps
- Continuous Testing integrated with CI/CD pipelines.
- Automated test scripts triggered with every code change.
- Real-time monitoring and reporting.
Example Code: CI/CD Integration
```yaml
name: Run Tests

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install pytest
      - name: Run Unit Tests
        run: pytest tests/
```
This YAML workflow runs automated tests whenever code changes are pushed to the main branch.
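For the workflow above to do anything useful, the `tests/` directory needs test files that pytest can discover. A minimal sketch of such a file is shown below; the `login` function and its return messages are hypothetical stand-ins for your application's real code.
```python
# tests/test_login.py -- minimal pytest-style tests (application logic is a placeholder)

def login(username, password):
    """Stand-in for the real login function under test."""
    if username == "admin" and password == "secret":
        return "Login Successful"
    return "Error Message"

def test_login_with_valid_credentials():
    assert login("admin", "secret") == "Login Successful"

def test_login_with_invalid_credentials():
    assert login("admin", "wrong") == "Error Message"
```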
15. Metrics Used in STLC
- Test Case Execution Rate = (Executed / Total Test Cases) × 100
- Defect Density = Defects / Size of Module
- Test Coverage = (Requirements Tested / Total Requirements) × 100
- Defect Leakage = (Defects Found After Release / Total Defects) × 100
Example Code: Calculate Test Metrics
```python
defect_density = 10 / 500     # 10 defects found in a 500-LOC module
coverage = (48 / 50) * 100    # 48 of 50 requirements covered by tests

print(f"Defect Density: {defect_density}")
print(f"Test Coverage: {coverage}%")
```