
Test Procedures

Verifying that work meets requirements

Tests are specific procedures that verify whether acceptance criteria have been met. They're the practical steps you take to confirm that what you built actually works as intended.

What are Tests?

Tests help you:

  • Verify Quality: Confirm that work meets the requirements
  • Catch Problems Early: Find issues before delivering to clients
  • Document Behavior: Record how the system should work
  • Enable Confidence: Know that changes don't break existing functionality
  • Track Progress: See what's working and what still needs attention

Think of tests as the actual checks you perform to prove that an acceptance criterion is satisfied.

Test Properties

Each test has:

Basic Information

  • Title: A clear description of what you're testing (e.g., "Test user login with valid credentials")
  • Description: Detailed steps for executing the test
  • Criterion: Which acceptance criterion this test verifies
  • Sort Order: Where it appears in the list
  • Created/Updated: Timestamps

Relationship to Criteria

Each criterion can have multiple tests, and each test can be run multiple times, with each run recording its own result.
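
A rough sketch of how these properties and relationships might be modeled (the field names are illustrative, not an actual schema):

```typescript
// Illustrative sketch only -- field names are assumptions, not an actual schema.
interface TestProcedure {
  id: string;
  title: string;        // e.g. "Test user login with valid credentials"
  description: string;  // detailed steps for executing the test
  criterionId: string;  // the acceptance criterion this test verifies
  sortOrder: number;    // where it appears in the list
  createdAt: Date;      // timestamps
  updatedAt: Date;
}

// A criterion can have many tests, and each test can be run many times.
interface AcceptanceCriterion {
  id: string;
  description: string;
  tests: TestProcedure[];
}
```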

Types of Tests

Manual Tests

Tests that a person performs by hand:

Example:

  • Title: "Manual: Verify mobile responsiveness"
  • Steps:
    1. Open website on iPhone 12
    2. Check that all buttons are tappable
    3. Verify text is readable without zooming
    4. Confirm images scale properly
    5. Test navigation menu

Use manual tests for:

  • UI/UX verification
  • Visual design checks
  • User experience flows
  • Exploratory testing

Automated Tests

Tests that run automatically via code:

Example:

  • Title: "Automated: API returns 401 for invalid token"
  • Details: Test file auth.test.ts, runs on every deployment
  • Steps:
    1. Send request with invalid token
    2. Verify status code is 401
    3. Verify error message is correct
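
A minimal sketch of what auth.test.ts might contain, assuming a Vitest-style runner and a placeholder API URL:

```typescript
// auth.test.ts -- minimal sketch; the runner (Vitest) and API_URL are assumptions.
import { describe, it, expect } from "vitest";

const API_URL = process.env.API_URL ?? "http://localhost:3000";

describe("authentication", () => {
  it("returns 401 for an invalid token", async () => {
    // 1. Send request with invalid token
    const response = await fetch(`${API_URL}/api/protected`, {
      headers: { Authorization: "Bearer invalid-token" },
    });

    // 2. Verify status code is 401
    expect(response.status).toBe(401);

    // 3. Verify error message is correct
    const body = await response.json();
    expect(body.error).toMatch(/invalid token/i);
  });
});
```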

Use automated tests for:

  • API endpoint testing
  • Unit tests
  • Integration tests
  • Regression testing

End-to-End (E2E) Tests

Tests that walk through complete user workflows:

Example:

  • Title: "E2E: Complete checkout process"
  • Steps:
    1. User adds items to cart
    2. User proceeds to checkout
    3. User enters shipping information
    4. User enters payment details
    5. User completes purchase
    6. Verify order confirmation
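
Automated, this flow might look roughly like the following Playwright-style sketch; the URLs, selectors, and confirmation text are placeholders:

```typescript
// e2e/checkout.spec.ts -- rough sketch; URLs, selectors, and text are placeholders.
import { test, expect } from "@playwright/test";

test("complete checkout process", async ({ page }) => {
  // 1. User adds items to cart
  await page.goto("/products/example-item");
  await page.click("text=Add to Cart");

  // 2. User proceeds to checkout
  await page.goto("/checkout");

  // 3-4. User enters shipping information and payment details
  await page.fill("#shipping-address", "123 Test Street");
  await page.fill("#card-number", "4242 4242 4242 4242");

  // 5. User completes purchase
  await page.click("text=Place Order");

  // 6. Verify order confirmation
  await expect(page.getByText("Thank you for your order")).toBeVisible();
});
```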

Use E2E tests for:

  • Critical user journeys
  • Multi-step processes
  • Integration between systems
  • Real-world scenarios

Performance Tests

Tests that measure speed and efficiency:

Example:

  • Title: "Performance: Dashboard loads under 2 seconds"
  • Steps:
    1. Clear browser cache
    2. Navigate to dashboard
    3. Measure time to interactive
    4. Verify < 2000 ms on a 4G connection
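
A crude scripted version of this check, assuming a Vitest-style runner and a placeholder dashboard URL; it measures server response time only, since true time-to-interactive needs a browser-based tool such as Lighthouse:

```typescript
// perf/dashboard-load.test.ts -- crude sketch; measures response time only,
// not true time-to-interactive, and the URL is a placeholder.
import { it, expect } from "vitest";

it("dashboard responds in under 2 seconds", async () => {
  // (A fresh request stands in for "clear browser cache" in the manual steps.)
  const start = performance.now();
  const response = await fetch("http://localhost:3000/dashboard");
  const elapsedMs = performance.now() - start;

  expect(response.ok).toBe(true);
  expect(elapsedMs).toBeLessThan(2000); // verify < 2000 ms
});
```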

Use performance tests for:

  • Page load times
  • API response times
  • Database query speed
  • Scalability checks

Writing Effective Tests

The AAA Pattern

Structure your tests as Arrange-Act-Assert:

Arrange: Set up the test conditions

- Create a test user account
- Log in as that user
- Navigate to settings page

Act: Perform the action being tested

- Change email address
- Click "Save Changes"

Assert: Verify the expected result

- Email address is updated
- Success message is displayed
- User remains logged in
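
In code, the same structure might look like this toy example (the Profile type and updateEmail function are made up purely for illustration):

```typescript
// Illustration of Arrange-Act-Assert using a toy in-memory profile;
// in a real suite the arrange and cleanup steps would hit your actual app.
import { it, expect } from "vitest";

interface Profile {
  email: string;
  loggedIn: boolean;
}

function updateEmail(profile: Profile, email: string): Profile {
  return { ...profile, email };
}

it("user can change their email address", () => {
  // Arrange: set up the test conditions
  const profile: Profile = { email: "old@example.com", loggedIn: true };

  // Act: perform the action being tested
  const updated = updateEmail(profile, "new@example.com");

  // Assert: verify the expected result
  expect(updated.email).toBe("new@example.com"); // email address is updated
  expect(updated.loggedIn).toBe(true);           // user remains logged in
});
```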

Be Specific

✅ Good - Clear steps:

**Title**: Test password reset email delivery

**Steps**:
1. Click "Forgot Password" link on login page
2. Enter email: test@example.com
3. Click "Send Reset Link"
4. Check email inbox within 60 seconds
5. Verify email contains valid reset link
6. Verify link expires after 24 hours

❌ Bad - Too vague:

**Title**: Test password reset

**Steps**:
1. Try resetting password
2. Check that it works

Make Tests Independent

Each test should work on its own:

✅ Good - Self-contained:

**Title**: Test user profile update

**Steps**:
1. Create test user (setup)
2. Log in as test user
3. Update profile name
4. Verify name changed
5. Delete test user (cleanup)

❌ Bad - Depends on previous test:

**Title**: Test user profile update

**Steps**:
(Assumes user from previous test exists)
1. Update profile...

Test Results

Recording Results

When you run a test, you record the result:

Result Properties:

  • Status: Passed, Failed, Blocked, or Skipped
  • Executed By: Who ran the test
  • Executed At: When it was run
  • Duration: How long it took (for automated tests)
  • Notes: Any observations or issues
  • Error Message: Details if the test failed

Result Statuses

  • Passed: ✅ Test ran successfully, all checks passed
  • Failed: ❌ Test ran but didn't meet expectations
  • Blocked: 🚫 Test couldn't run due to a dependency or environment issue
  • Skipped: ⏭️ Test was intentionally not run
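
Sketched as a data structure, a result record might combine the properties and statuses above (field names are illustrative, not an actual schema):

```typescript
// Illustrative sketch only -- field names are assumptions, not an actual schema.
type ResultStatus = "passed" | "failed" | "blocked" | "skipped";

interface TestResult {
  testId: string;
  status: ResultStatus;
  executedBy: string;     // who ran the test
  executedAt: Date;       // when it was run
  durationMs?: number;    // how long it took (automated tests)
  notes?: string;         // observations or issues
  errorMessage?: string;  // details if the test failed
}
```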

Tracking Test History

You can see:

  • All past results for a test
  • When it last passed
  • How often it fails
  • Who typically runs it
  • Average duration

Organizing Tests

One Test per Criterion

Each criterion should have at least one test:

Criterion: "User can save work as draft"

Test: "Verify draft is saved and can be resumed"

Multiple Tests for Complex Criteria

Complex criteria need several tests:

Criterion: "Form validation prevents invalid input"

Tests:

  1. Test email validation
  2. Test password strength validation
  3. Test required field validation
  4. Test special character handling

Test Sections

For very complex tests, you can break them into sections:

Test: "Complete order processing workflow"

Sections:

  1. Setup: Create test products and user
  2. Shopping: Add items to cart, apply discount
  3. Checkout: Enter shipping and payment
  4. Verification: Confirm order created and email sent
  5. Cleanup: Remove test data

Running Tests

When to Run Tests

  • During Development: Run related tests as you build
  • Before Committing: Run all tests before saving changes
  • Before Deployment: Run full test suite before going live
  • After Deployment: Run smoke tests to verify production
  • Scheduled: Run automated tests nightly or on every commit

Test Coverage

Good test coverage includes:

Happy Path (60%):

  • Tests for expected, successful scenarios
  • "User does everything right"

Error Cases (30%):

  • Tests for validation and error handling
  • "User enters invalid data"

Edge Cases (10%):

  • Tests for unusual but possible scenarios
  • "User does something unexpected"

Tracking Progress

Tests contribute to your SOW's overall completion:

  • Each test can pass or fail
  • The SOW tracks what percentage of tests are passing
  • This counts for 30% of the overall SOW completion
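
As a rough illustration of that weighting (the exact formula your SOW uses may differ):

```typescript
// Rough illustration of the weighting described above; the real calculation
// used for SOW completion may differ.
function testContributionPercent(passing: number, total: number): number {
  if (total === 0) return 0;
  const passRate = passing / total; // e.g. 12 of 20 tests passing = 0.6
  return passRate * 30;             // tests are worth 30 of the 100 completion points
}

console.log(testContributionPercent(12, 20)); // 18 (percentage points toward completion)
```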

You can see:

  • Total number of tests
  • How many are passing
  • How many are failing
  • Which tests haven't been run yet

Best Practices

Writing Tests

Do:

  • Write tests at the same time as criteria
  • Make tests repeatable (same result every time)
  • Keep test steps simple and clear
  • Include expected results explicitly
  • Test one thing per test
  • Update tests when requirements change

Don't:

  • Write tests after the work is done
  • Make tests dependent on each other
  • Assume readers know what to do
  • Skip error scenarios
  • Let failing tests accumulate
  • Delete tests because they're failing

Test Organization

  • Group by feature: Keep related tests together
  • Use prefixes: "Manual:", "Auto:", "E2E:", "Perf:"
  • Number logically: Test 1, 2, 3 in execution order
  • Keep current: Archive obsolete tests

Test Execution

  • Run regularly: Don't let tests get stale
  • Fix failures quickly: Don't ignore failing tests
  • Document blockers: Note why a test can't run
  • Share results: Let the team know what's working

Common Test Patterns

Login Flow

**Criterion**: User can log in with valid credentials

**Test 1**: Successful login
1. Navigate to login page
2. Enter valid email and password
3. Click "Login"
4. Verify redirect to dashboard
5. Verify welcome message appears

**Test 2**: Failed login
1. Navigate to login page
2. Enter valid email, wrong password
3. Click "Login"
4. Verify error message appears
5. Verify user stays on login page
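
Automated, the successful-login case might look something like this Playwright-style sketch, with placeholder selectors and URLs:

```typescript
// Sketch of the successful-login case above; selectors and URLs are placeholders.
import { test, expect } from "@playwright/test";

test("successful login redirects to dashboard", async ({ page }) => {
  await page.goto("/login");
  await page.fill("#email", "test@example.com");
  await page.fill("#password", "correct-horse-battery-staple");
  await page.click("text=Login");

  await expect(page).toHaveURL(/\/dashboard/);           // redirect to dashboard
  await expect(page.getByText("Welcome")).toBeVisible(); // welcome message appears
});
```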

Form Validation

**Criterion**: Registration form validates all fields

**Test 1**: Empty email shows error
1. Open registration form
2. Leave email field empty
3. Enter other required fields
4. Click "Submit"
5. Verify "Email is required" error appears

**Test 2**: Invalid email format shows error
1. Open registration form
2. Enter "notanemail" in email field
3. Enter other required fields
4. Click "Submit"
5. Verify "Invalid email format" error appears

Data Persistence

**Criterion**: User preferences persist across sessions

**Test**: Preferences survive logout/login
1. Log in as test user
2. Change theme to "Dark Mode"
3. Change language to "Spanish"
4. Log out
5. Close browser
6. Open browser and log back in
7. Verify theme is still "Dark Mode"
8. Verify language is still "Spanish"

Troubleshooting Tests

Test is Failing

  1. Re-run it: Was it a fluke?
  2. Check prerequisites: Are dependencies in place?
  3. Review changes: Did something change that affects this?
  4. Update test: Does the test need to be updated?
  5. Fix the code: Is there actually a bug?

Test is Blocked

  1. Identify blocker: What's preventing the test from running?
  2. Document it: Add notes explaining the blocker
  3. Create task: Make a task to resolve the blocker
  4. Check regularly: Re-run the test once the blocker may have been resolved

Test is Flaky

If a test sometimes passes and sometimes fails:

  1. Identify the cause: What's inconsistent?
  2. Fix timing issues: Add appropriate waits (see the sketch after this list)
  3. Fix data issues: Ensure clean test data
  4. Fix environment issues: Stabilize test environment
  5. Consider splitting: Break into smaller, more reliable tests
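
For the timing point above, the usual fix is to wait on an observable condition instead of a fixed sleep. A Playwright-style sketch, with a placeholder selector and URL:

```typescript
// Sketch of replacing a fixed sleep with an explicit wait (Playwright-style;
// the URL and expected text are placeholders).
import { test, expect } from "@playwright/test";

test("order list eventually shows the new order", async ({ page }) => {
  await page.goto("/orders");

  // Flaky: a fixed sleep passes or fails depending on how slow the page is.
  // await page.waitForTimeout(2000);

  // Stable: wait for the condition you actually care about.
  await expect(page.getByText("Order #1234")).toBeVisible({ timeout: 10_000 });
});
```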