Test Procedures
Verifying that work meets requirements
Tests are specific procedures that verify whether acceptance criteria have been met. They're the practical steps you take to confirm that what you built actually works as intended.
What are Tests?
Tests help you:
- Verify Quality: Confirm that work meets the requirements
 - Catch Problems Early: Find issues before delivering to clients
 - Document Behavior: Record how the system should work
 - Enable Confidence: Know that changes don't break existing functionality
 - Track Progress: See what's working and what still needs attention
 
Think of tests as the actual checks you perform to prove that an acceptance criterion is satisfied.
Test Properties
Each test has:
Basic Information
- Title: A clear description of what you're testing (e.g., "Test user login with valid credentials")
 - Description: Detailed steps for executing the test
 - Criterion: Which acceptance criterion this test verifies
 - Sort Order: Where it appears in the list
 - Created/Updated: Timestamps
 
Relationship to Criteria
Each criterion can have multiple tests, and each test can be run multiple times, producing different results.
Types of Tests
Manual Tests
Tests that a person performs by hand:
Example:
- Title: "Manual: Verify mobile responsiveness"
 - Steps:
- Open website on iPhone 12
 - Check that all buttons are tappable
 - Verify text is readable without zooming
 - Confirm images scale properly
 - Test navigation menu
 
 
Use manual tests for:
- UI/UX verification
 - Visual design checks
 - User experience flows
 - Exploratory testing
 
Automated Tests
Tests that run automatically via code:
Example:
- Title: "Automated: API returns 401 for invalid token"
 - Details: Test file auth.test.ts, runs on every deployment
 - Steps:
- Send request with invalid token
 - Verify status code is 401
 - Verify error message is correct
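
In code, a test like this might look like the sketch below. It assumes a Jest-style runner and a built-in fetch; the endpoint path and the expected error message are placeholders, not part of the actual requirement.

```ts
// auth.test.ts — a minimal sketch, assuming a Jest-style runner and a
// built-in fetch. The endpoint and expected message are placeholders.
describe("authentication", () => {
  it("returns 401 for an invalid token", async () => {
    // Send a request with an invalid token
    const response = await fetch("https://staging.example.com/api/profile", {
      headers: { Authorization: "Bearer invalid-token" },
    });

    // Verify the status code is 401
    expect(response.status).toBe(401);

    // Verify the error message is correct
    const body = await response.json();
    expect(body.error).toBe("Invalid or expired token");
  });
});
```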
 
 
Use automated tests for:
- API endpoint testing
 - Unit tests
 - Integration tests
 - Regression testing
 
End-to-End (E2E) Tests
Tests that walk through complete user workflows:
Example:
- Title: "E2E: Complete checkout process"
 - Steps:
- User adds items to cart
 - User proceeds to checkout
 - User enters shipping information
 - User enters payment details
 - User completes purchase
 - Verify order confirmation
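
If you automate this flow, a browser-automation tool such as Playwright is one option. The sketch below is illustrative only: the routes, field labels, and button names are assumptions and would need to match the real application.

```ts
// checkout.e2e.ts — a minimal Playwright sketch; routes and selectors are
// placeholders for the real application under test.
import { test, expect } from "@playwright/test";

test("complete checkout process", async ({ page }) => {
  // User adds an item to the cart
  await page.goto("/products/sample-item");
  await page.getByRole("button", { name: "Add to Cart" }).click();

  // User proceeds to checkout
  await page.goto("/checkout");

  // User enters shipping information
  await page.getByLabel("Shipping address").fill("123 Test Street");

  // User enters payment details (a test card number)
  await page.getByLabel("Card number").fill("4242 4242 4242 4242");

  // User completes the purchase
  await page.getByRole("button", { name: "Place Order" }).click();

  // Verify order confirmation
  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```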
 
 
Use E2E tests for:
- Critical user journeys
 - Multi-step processes
 - Integration between systems
 - Real-world scenarios
 
Performance Tests
Tests that measure speed and efficiency:
Example:
- Title: "Performance: Dashboard loads under 2 seconds"
 - Details:
- Clear browser cache
 - Navigate to dashboard
 - Measure time to interactive
 - Verify < 2000ms on 4G connection
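
A check like this can also be automated. The sketch below uses Playwright and approximates "time to interactive" as the time until network activity settles; the URL, that approximation, and the lack of 4G throttling are all simplifications.

```ts
// dashboard-performance.e2e.ts — a rough sketch. It approximates "time to
// interactive" with Playwright's "networkidle" state rather than a true TTI
// measurement, and does not simulate a 4G connection.
import { test, expect } from "@playwright/test";

test("dashboard loads in under 2 seconds", async ({ page }) => {
  const start = Date.now();

  // Navigate to the dashboard and wait until network activity settles
  await page.goto("/dashboard", { waitUntil: "networkidle" });

  const elapsedMs = Date.now() - start;

  // Verify the load stayed within the 2000 ms budget
  expect(elapsedMs).toBeLessThan(2000);
});
```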
 
 
Use performance tests for:
- Page load times
 - API response times
 - Database query speed
 - Scalability checks
 
Writing Effective Tests
The AAA Pattern
Structure your tests as Arrange-Act-Assert:
Arrange: Set up the test conditions
- Create a test user account
- Log in as that user
- Navigate to the settings page

Act: Perform the action being tested
- Change the email address
- Click "Save Changes"

Assert: Verify the expected result
- Email address is updated
- Success message is displayed
- User remains logged in
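
The same structure carries over directly into automated tests. Here is a minimal sketch assuming a Jest-style runner; the UserSettings class is a stand-in for real application code.

```ts
// A minimal Arrange-Act-Assert sketch. UserSettings is a stand-in for the
// real application code being tested.
class UserSettings {
  constructor(public email: string) {}

  updateEmail(next: string): boolean {
    if (!next.includes("@")) return false; // reject obviously invalid input
    this.email = next;
    return true;
  }
}

it("updates the email address", () => {
  // Arrange: set up the test conditions
  const settings = new UserSettings("old-address@example.com");

  // Act: perform the action being tested
  const saved = settings.updateEmail("new-address@example.com");

  // Assert: verify the expected result
  expect(saved).toBe(true);
  expect(settings.email).toBe("new-address@example.com");
});
```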
Be Specific
✅ Good - Clear steps:
**Title**: Test password reset email delivery
**Steps**:
1. Click "Forgot Password" link on login page
2. Enter email: test@example.com
3. Click "Send Reset Link"
4. Check email inbox within 60 seconds
5. Verify email contains valid reset link
6. Verify link expires after 24 hours

❌ Bad - Too vague:
**Title**: Test password reset
**Steps**:
1. Try resetting password
2. Check that it works

Make Tests Independent
Each test should work on its own:
✅ Good - Self-contained:
**Title**: Test user profile update
**Steps**:
1. Create test user (setup)
2. Log in as test user
3. Update profile name
4. Verify name changed
5. Delete test user (cleanup)

❌ Bad - Depends on previous test:
**Title**: Test user profile update
**Steps**:
(Assumes user from previous test exists)
1. Update profile...

Test Results
Recording Results
When you run a test, you record the result:
Result Properties:
- Status: Passed, Failed, Blocked, or Skipped
 - Executed By: Who ran the test
 - Executed At: When it was run
 - Duration: How long it took (for automated tests)
 - Notes: Any observations or issues
 - Error Message: Details if the test failed
 
Result Statuses
- Passed: ✅ Test ran successfully, all checks passed
 - Failed: ❌ Test ran but didn't meet expectations
 - Blocked: 🚫 Test couldn't run due to a dependency or environment issue
 - Skipped: ⏭️ Test was intentionally not run
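
If results are stored programmatically, these properties map naturally onto a small record type. A hypothetical TypeScript shape (field names are illustrative, not a prescribed schema):

```ts
// A hypothetical shape for a recorded test result; field names are
// illustrative, not a prescribed schema.
type TestResultStatus = "passed" | "failed" | "blocked" | "skipped";

interface TestResult {
  status: TestResultStatus; // Passed, Failed, Blocked, or Skipped
  executedBy: string;       // who ran the test
  executedAt: Date;         // when it was run
  durationMs?: number;      // how long it took (for automated tests)
  notes?: string;           // any observations or issues
  errorMessage?: string;    // details if the test failed
}
```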
 
Tracking Test History
You can see:
- All past results for a test
 - When it last passed
 - How often it fails
 - Who typically runs it
 - Average duration
 
Organizing Tests
One Test per Criterion
Each criterion should have at least one test:
Criterion: "User can save work as draft"
Test: "Verify draft is saved and can be resumed"
Multiple Tests for Complex Criteria
Complex criteria need several tests:
Criterion: "Form validation prevents invalid input"
Tests:
- Test email validation
 - Test password strength validation
 - Test required field validation
 - Test special character handling
 
Test Sections
For very complex tests, you can break them into sections:
Test: "Complete order processing workflow"
Sections:
- Setup: Create test products and user
 - Shopping: Add items to cart, apply discount
 - Checkout: Enter shipping and payment
 - Verification: Confirm order created and email sent
 - Cleanup: Remove test data
 
Running Tests
When to Run Tests
- During Development: Run related tests as you build
 - Before Committing: Run all tests before saving changes
 - Before Deployment: Run full test suite before going live
 - After Deployment: Run smoke tests to verify production
 - Scheduled: Run automated tests nightly or on every commit
 
Test Coverage
Good test coverage includes:
Happy Path (60%):
- Tests for expected, successful scenarios
 - "User does everything right"
 
Error Cases (30%):
- Tests for validation and error handling
 - "User enters invalid data"
 
Edge Cases (10%):
- Tests for unusual but possible scenarios
 - "User does something unexpected"
 
Tracking Progress
Tests contribute to your SOW's overall completion:
- Each test can pass or fail
 - The SOW tracks what percentage of tests are passing
 - This counts for 30% of the overall SOW completion
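
As a worked example of that weighting: if 8 of 10 tests pass, the tests contribute 0.8 × 30% = 24 percentage points toward overall completion. A small sketch of the calculation (the 30% weight comes from the description above; the function name is illustrative):

```ts
// Sketch of the tests' contribution to overall SOW completion.
// The 0.3 weight reflects the 30% share described above.
function testContribution(passing: number, total: number): number {
  if (total === 0) return 0;          // no tests yet, no contribution
  const passRate = passing / total;   // fraction of tests passing
  return passRate * 0.3;              // weighted share of overall completion
}

testContribution(8, 10); // 0.24 → 24 percentage points
```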
 
You can see:
- Total number of tests
 - How many are passing
 - How many are failing
 - Which tests haven't been run yet
 
Best Practices
Writing Tests
Do:
- Write tests at the same time as criteria
 - Make tests repeatable (same result every time)
 - Keep test steps simple and clear
 - Include expected results explicitly
 - Test one thing per test
 - Update tests when requirements change
 
Don't:
- Write tests after the work is done
 - Make tests dependent on each other
 - Assume readers know what to do
 - Skip error scenarios
 - Let failing tests accumulate
 - Delete tests because they're failing
 
Test Organization
- Group by feature: Keep related tests together
 - Use prefixes: "Manual:", "Auto:", "E2E:", "Perf:"
 - Number logically: Test 1, 2, 3 in execution order
 - Keep current: Archive obsolete tests
 
Test Execution
- Run regularly: Don't let tests get stale
 - Fix failures quickly: Don't ignore failing tests
 - Document blockers: Note why a test can't run
 - Share results: Let the team know what's working
 
Common Test Patterns
Login Flow
**Criterion**: User can log in with valid credentials
**Test 1**: Successful login
1. Navigate to login page
2. Enter valid email and password
3. Click "Login"
4. Verify redirect to dashboard
5. Verify welcome message appears
**Test 2**: Failed login
1. Navigate to login page
2. Enter valid email, wrong password
3. Click "Login"
4. Verify error message appears
5. Verify user stays on login page

Form Validation
**Criterion**: Registration form validates all fields
**Test 1**: Empty email shows error
1. Open registration form
2. Leave email field empty
3. Enter other required fields
4. Click "Submit"
5. Verify "Email is required" error appears
**Test 2**: Invalid email format shows error
1. Open registration form
2. Enter "notanemail" in email field
3. Enter other required fields
4. Click "Submit"
5. Verify "Invalid email format" error appearsData Persistence
Data Persistence
**Criterion**: User preferences persist across sessions
**Test**: Preferences survive logout/login
1. Log in as test user
2. Change theme to "Dark Mode"
3. Change language to "Spanish"
4. Log out
5. Close browser
6. Open browser and log back in
7. Verify theme is still "Dark Mode"
8. Verify language is still "Spanish"

Troubleshooting Tests
Test is Failing
- Re-run it: Was it a fluke?
 - Check prerequisites: Are dependencies in place?
 - Review changes: Did something change that affects this?
 - Update test: Does the test need to be updated?
 - Fix the code: Is there actually a bug?
 
Test is Blocked
- Identify blocker: What's preventing the test from running?
 - Document it: Add notes explaining the blocker
 - Create task: Make a task to resolve the blocker
 - Check regularly: Test when blocker might be resolved
 
Test is Flaky
If a test sometimes passes and sometimes fails:
- Identify the cause: What's inconsistent?
 - Fix timing issues: Add appropriate waits (see the sketch after this list)
 - Fix data issues: Ensure clean test data
 - Fix environment issues: Stabilize test environment
 - Consider splitting: Break into smaller, more reliable tests
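
For the timing issues mentioned above, the usual fix is to wait for an explicit condition rather than a fixed delay. A Playwright-style sketch (the route and expected text are placeholders):

```ts
// Sketch of replacing a fixed sleep with a condition-based wait in Playwright.
// The route and expected text are placeholders for the flaky test's real checks.
import { test, expect } from "@playwright/test";

test("order confirmation appears", async ({ page }) => {
  await page.goto("/orders/latest");

  // Flaky: waits a fixed amount of time and hopes the page is ready.
  // await page.waitForTimeout(2000);

  // More reliable: wait for the actual condition, with an explicit timeout.
  await expect(page.getByText("Order confirmed")).toBeVisible({ timeout: 10_000 });
});
```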