Manual Testing Techniques Every QA Must Master

Manual testing is where many QA careers begin — and where the foundational instincts of a great tester are formed. In this post I'm going to walk you through the techniques I teach new testers: not as abstract theory, but as concrete actions you can apply the next time you test a feature. Expect real examples, checklists, and mini-exercises you can finish in under 30 minutes.

Why manual testing still matters

Automation is powerful — and we’ll cover it later in the series — but manual testing remains critical for three reasons:

  • Human judgment: Only people notice confusing UX, awkward copy, or unexpected workflows.
  • Early discovery: Exploratory manual tests often reveal architectural assumptions that break automation later.
  • Speed and adaptiveness: For rapidly changing UIs, manual tests adapt faster than brittle scripts.

Mentor note: Treat manual testing as a research activity, not rote checkbox work. Approach it like you’re investigating a tiny product — and you're the user, the adversary, and the designer all at once.

Core techniques — what you must know

Below are battle-tested techniques that every QA should master. For each I give a short explanation, a clear example, and a short exercise you can try immediately.

1. Equivalence Partitioning

What it is: Group inputs that the system should handle the same way to reduce redundant tests.

Example: A coupon code field accepts 5–10 alphanumeric characters. Instead of testing every length, you test representative values: a valid 6-char code, a minimal 5-char code, a maximal 10-char code, and invalid characters like emojis.

5-minute exercise: Pick any input field on your app (e.g., phone number). Identify 3 equivalence partitions and write one test case for each.
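
To make the idea concrete, here is a minimal sketch in Python. The `is_valid_coupon` rule is a hypothetical implementation of the 5–10 alphanumeric constraint from the example above, standing in for your real product code; the point is that one representative value per partition is enough:

```python
import re

def is_valid_coupon(code: str) -> bool:
    # Hypothetical rule from the example: 5-10 ASCII alphanumeric characters.
    return bool(re.fullmatch(r"[A-Za-z0-9]{5,10}", code))

# One representative value per equivalence partition.
partitions = {
    "valid mid-length":     ("SAVE20",      True),
    "valid minimum length": ("ABC12",       True),
    "valid maximum length": ("ABCDE12345",  True),
    "too short":            ("AB12",        False),
    "too long":             ("ABCDE123456", False),
    "invalid characters":   ("SAVE✨",       False),
}

for name, (value, expected) in partitions.items():
    assert is_valid_coupon(value) is expected, name
```

Six checks cover the whole input space; testing fifty more 6-character codes would add no new information.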

2. Boundary Value Analysis (BVA)

What it is: Test the edges of ranges where bugs commonly hide.

Example: If age must be 18–60, test 17, 18, 60, and 61. Often developers use <= and < inconsistently — BVA finds that.

5-minute exercise: For a numeric input you know, list the boundary values and write assertions for each.
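
A sketch of the age example, assuming a hypothetical `is_eligible` check for the 18–60 rule. Note how each edge is tested from both sides, which is exactly where an accidental `<` vs `<=` slips through:

```python
def is_eligible(age: int) -> bool:
    # Hypothetical rule: age must be between 18 and 60 inclusive.
    return 18 <= age <= 60

# Boundary value analysis: just below, on, and just above each edge.
cases = [(17, False), (18, True), (19, True),
         (59, True), (60, True), (61, False)]

for age, expected in cases:
    assert is_eligible(age) is expected, f"age={age}"
```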

3. Decision Table Testing

What it is: A structured way to test complex conditional logic by listing rules and outcomes.

Example: Loan approval might depend on income, credit score and existing debts. Create a small table with combinations (Income high/low × Credit good/bad × Debt yes/no) and define the expected result for each.

10-minute exercise: Create a 2×2 decision table for a simple feature in your app and derive test cases.
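
The loan example can be expressed directly as data. The rules below are invented for illustration (your real approval logic will differ); the useful pattern is that the table itself proves every combination has a defined expected outcome:

```python
from itertools import product

# Hypothetical loan-approval decision table.
# Key: (income_high, credit_good, has_debt) -> approved?
decision_table = {
    (True,  True,  False): True,
    (True,  True,  True):  True,
    (True,  False, False): True,
    (True,  False, True):  False,
    (False, True,  False): True,
    (False, True,  True):  False,
    (False, False, False): False,
    (False, False, True):  False,
}

def approve(income_high: bool, credit_good: bool, has_debt: bool) -> bool:
    return decision_table[(income_high, credit_good, has_debt)]

# Completeness check: all 2^3 combinations are covered.
assert len(decision_table) == 2 ** 3
for combo in product([True, False], repeat=3):
    assert combo in decision_table
```

Each row of the table becomes one test case, and a missing row is itself a finding to raise with the team.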

4. State Transition / Workflow Testing

What it is: When a system's behavior depends on previous states (e.g., order → shipped → returned), test transitions and invalid transitions.

Example: Ensure you cannot refund an order that hasn't been shipped. Also test state recovery: what happens if a server crashes mid-transition?

10-minute exercise: Pick a workflow (e.g., user signup) and map its states; then write tests for valid and invalid transitions.
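
A minimal state-machine sketch, using an invented order workflow (adapt the states and allowed transitions to your own system). The key test is the invalid transition, such as the "refund before shipping" case from the example:

```python
class InvalidTransition(Exception):
    pass

# Hypothetical order workflow: each state maps to its allowed next states.
VALID = {
    "created":   {"paid", "cancelled"},
    "paid":      {"shipped", "cancelled"},
    "shipped":   {"delivered"},
    "delivered": {"returned"},
    "returned":  {"refunded"},
}

def transition(state: str, new_state: str) -> str:
    if new_state not in VALID.get(state, set()):
        raise InvalidTransition(f"{state} -> {new_state}")
    return new_state

# Valid path: walk the full happy-path chain.
state = "created"
for nxt in ["paid", "shipped", "delivered", "returned", "refunded"]:
    state = transition(state, nxt)

# Invalid path: refunding an order that was never shipped must be rejected.
try:
    transition("paid", "refunded")
    assert False, "refund before shipping should be rejected"
except InvalidTransition:
    pass
```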

5. Exploratory Testing

What it is: Unscripted testing guided by experience, intuition, and curiosity. It’s where you find surprises.

Practical tip: Time-box exploratory sessions (e.g., 60 minutes) and take notes. Use charters: a short mission like “Explore payment retries when network is intermittent.”

20-minute exercise: Run a focused exploratory session based on a one-sentence charter and capture 3 new test ideas or defects.

6. Error-Guessing

What it is: Use experience to guess where errors might be — no formal technique required, but extremely effective.

Example: If a form accepts files, guess potential errors: zero-byte files, very large files, files with special characters in the name, or files with incorrect mime-type.
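
Error guesses are easy to capture as a reusable list of nasty inputs. The `validate_upload` rules below are hypothetical (a 10 MB limit, image-only uploads); what matters is that every guessed input should trigger at least one rejection:

```python
def validate_upload(filename: str, size_bytes: int, mime: str) -> list[str]:
    # Hypothetical validation rules for an image-upload form.
    errors = []
    if size_bytes == 0:
        errors.append("empty file")
    if size_bytes > 10 * 1024 * 1024:
        errors.append("file too large")
    if not filename.lower().endswith((".png", ".jpg")):
        errors.append("unsupported extension")
    if mime not in {"image/png", "image/jpeg"}:
        errors.append("unexpected mime type")
    return errors

# Error-guessing inputs: each should be rejected for at least one reason.
guesses = [
    ("photo.png", 0, "image/png"),                 # zero-byte file
    ("photo.png", 50 * 1024 * 1024, "image/png"),  # very large file
    ("ré sumé.png", 1024, "text/plain"),           # odd name, wrong mime
    ("archive.zip", 1024, "application/zip"),      # wrong type entirely
]
for name, size, mime in guesses:
    assert validate_upload(name, size, mime), (name, mime)
```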

7. Usability Testing (Quick Checks)

Manual testing and UX go hand-in-hand. On every build ask: Is it discoverable? Are labels clear? Does the primary action feel obvious?

10-minute exercise: Ask a non-technical colleague to perform one task (e.g., find transaction history). Observe and note pain points.

Test design patterns and practical recipes

Now we’ll stitch together these techniques into practical test design patterns you can reuse.

Recipe: “Happy Path + Edge Paths”

Every feature should have: the happy path, common negative paths, and edge/boundary tests.


Feature: Checkout (single item)

- Happy path: add item → proceed to checkout → pay → success

- Negative: invalid card details → show error

- Edge: item quantity = 0, item price = 0, network drop during payment
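
The recipe above maps naturally onto a table of named scenarios. The `checkout` function below is a hypothetical stand-in for your real application; replace its body with calls into your system and keep the scenario table:

```python
# Hypothetical checkout logic used to illustrate the recipe;
# in practice this would call your real application or API.
def checkout(quantity: int, price: float,
             card_valid: bool, network_up: bool) -> str:
    if quantity <= 0:
        return "error: invalid quantity"
    if price <= 0:
        return "error: invalid price"
    if not network_up:
        return "error: payment timed out"
    if not card_valid:
        return "error: card declined"
    return "success"

# Happy path + negative paths + edge paths, each with an explicit expectation.
scenarios = {
    "happy path":   ((1, 9.99, True,  True),  "success"),
    "invalid card": ((1, 9.99, False, True),  "error: card declined"),
    "quantity = 0": ((0, 9.99, True,  True),  "error: invalid quantity"),
    "price = 0":    ((1, 0.0,  True,  True),  "error: invalid price"),
    "network drop": ((1, 9.99, True,  False), "error: payment timed out"),
}
for name, (args, expected) in scenarios.items():
    assert checkout(*args) == expected, name
```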


Recipe: “Role-Based Scenarios”

Test each user role. Permissions issues are frequent bugs. Test what a Manager sees vs Employee vs Admin.

Recipe: “Session and Concurrency”

Test sessions: multiple tabs, concurrent updates, and race conditions. Example — two users edit the same record; who wins? How does the system prevent data loss?
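
One common answer to "who wins?" is optimistic locking: a write must carry the version it read, and a stale write is rejected instead of silently overwriting. This is a minimal in-memory sketch, assuming a hypothetical `Record` type rather than any particular database:

```python
class ConflictError(Exception):
    pass

class Record:
    """Hypothetical optimistic-locking sketch for the two-editors scenario."""
    def __init__(self, value: str):
        self.value = value
        self.version = 1

    def update(self, new_value: str, expected_version: int) -> None:
        if expected_version != self.version:
            raise ConflictError("record changed since you read it")
        self.value = new_value
        self.version += 1

record = Record("draft")

# Two users both read version 1, then both try to save.
v_alice = record.version
v_bob = record.version

record.update("alice's edit", v_alice)  # first writer wins
try:
    record.update("bob's edit", v_bob)  # stale write is rejected, not lost
    assert False, "lost update should have been detected"
except ConflictError:
    pass

assert record.value == "alice's edit"
```

As a tester, the question to ask is whether your system detects the second write at all, and if so, what the losing user sees.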

Writing clear and useful test cases

A test case is useful when another person can run it and get the same result. Keep them simple, atomic, and focused on the outcome.

Test case checklist

  • Unique ID
  • Short title
  • Preconditions (test data, account)
  • Steps (numbered)
  • Expected result (specific)
  • Post-conditions (cleanup)

Test Case ID: TC-345

Title: Login with valid credentials

Preconditions: User exists with email test@example.com

Steps:

  1. Navigate to /login

  2. Enter email test@example.com

  3. Enter password P@ssw0rd

  4. Click Sign In

Expected:

  User is taken to dashboard; welcome message contains user's first name.

Post-condition:

  User session created and visible in sessions table.

Mentor note: If your expected result is “works”, rewrite it. Replace “works” with “displays error X within 2s” or “creates record in DB table Y”.

Defect reporting that gets fixed fast

Writing a clear bug report speeds up triage and fixes. Include context, steps to reproduce, expected vs actual, environment, logs/screenshots, and a priority suggestion.

Example bug report


Summary: Checkout fails with 500 when using Maestro cards

Steps to reproduce:

  1. Add item to cart

  2. Proceed to checkout

  3. Enter Maestro card details (1234 5678 9012 3456)

  4. Click Pay

Expected: Payment accepted and order confirmation shown

Actual: 500 internal server error; no order created

Environment: Staging v2.1, Chrome 117, Windows 10

Attachments: HAR file, screenshot of error, server logs

Suggested priority: High (blocks checkout)

15-minute exercise: Open your bug tracker and review the last 5 bugs. For each, check if the report contains the 6 items above. If not, improve the description.

Test data — the often-forgotten hero

Bad test data causes false failures. Use realistic, reusable data and keep it isolated. Maintain seed scripts or a small data factory to create test accounts with known states (e.g., user with no orders, user with high balance, admin user).

Data tips

  • Label test data clearly (e.g., test_user_payments)
  • Reset or snapshot DB after heavy tests
  • Use mock services for third-party APIs where possible
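
A data factory can be very small. This sketch assumes a hypothetical `make_user` helper with known-good defaults that individual tests override; the names and fields are invented for illustration:

```python
import itertools

_counter = itertools.count(1)

def make_user(**overrides) -> dict:
    """Hypothetical data factory: known defaults, overridable per test."""
    n = next(_counter)
    user = {
        "email": f"test_user_{n}@example.com",  # clearly labelled test data
        "balance": 0,
        "orders": [],
        "role": "customer",
    }
    user.update(overrides)
    return user

# Named, reusable states instead of ad-hoc data scattered across tests:
no_orders = make_user()
high_balance = make_user(balance=10_000)
admin = make_user(role="admin")

assert no_orders["orders"] == []
assert high_balance["balance"] == 10_000
assert admin["role"] == "admin"
```

Unique emails from the counter keep parallel test runs from colliding, and the overrides make each test's intent readable at a glance.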

Exploratory testing in pairs

Pair testing — one person explores while the other takes notes and asks questions — increases the discovery rate and transfers domain knowledge quickly.

30-minute exercise: Pair with a teammate: one tester explores, the other logs findings and asks “what if” questions. Swap roles after 15 minutes.

Accessibility & localization — small steps, big impact

Quick checks you can do manually:

  • Tab through the page — can you reach all actions?
  • Use browser zoom (200%) — does layout break?
  • Change language setting — are key messages translated?

Regression testing — what to choose for automation

Not all manual tests should become automated. Choose stable, high-value flows (login, critical payment paths, core APIs) for automation. Manual testing should focus on exploration, new features, and UX validation.

Common pitfalls and how to avoid them

  • Copy-paste test cases: Avoid duplication — write modular steps.
  • Environment mismatch: Always validate staging mirrors production for critical components.
  • Over-documentation: Keep cases concise; too-long steps are ignored.
  • Under-testing integrations: External APIs often cause production incidents — include them in your scope.

Real-life example — how a simple manual check found a critical bug

We once had a recurring issue: refund emails were not sent intermittently. Automated checks passed because they focused on API responses, which were correct. A manual exploratory session revealed the SMTP connection was being rate-limited by a third-party provider when spikes occurred. The fix involved adding queueing and retry logic. The bug would have stayed invisible without manual testing that followed real user behavior.

Checklist — your daily manual testing ritual

  1. Read the story and acceptance criteria. Ask clarifying questions.
  2. Prepare test data and environment. Seed accounts if needed.
  3. Run quick smoke tests (5–10 minutes) after deployment.
  4. Run focused exploratory sessions with a charter.
  5. Log defects with clear steps, evidence and priority.
  6. Share a short summary with developers & PMs.

Mini project — apply everything in 90 minutes

Goal: Test the “Change Password” flow on any site (or create a small sample app). Steps:
  1. List the happy path and 6 negative/edge cases (e.g., expired token, mismatch, weak password, unicode chars).
  2. Create 5 test cases (ID, steps, expected).
  3. Run one exploratory session for 30 minutes and note surprises.
  4. Log any defects and propose mitigation.

How to grow from manual tester to QA strategist

Manual testing is not the end of the road — it’s the foundation. Learn to:

  • Document patterns and share checklists
  • Measure what you do (defects found, time spent, coverage)
  • Automate stable flows and monitor results
  • Teach others — mentoring is the fastest way to learn

Resources & tools (quick list)

  • Test case management: TestRail, Zephyr (or plain spreadsheets)
  • Exploratory testing: session-based testing notebooks
  • Bug reporting: JIRA templates with steps and environment
  • Accessibility quick checks: Lighthouse, axe browser extension

Closing thoughts — a mentor’s parting note

Manual testing trains your senses. It teaches you how users behave, where assumptions hide, and how systems fail in the wild. If you take nothing else from this post, take this: be curious, be deliberate, and be precise. The combination of curiosity and discipline is what separates a tester from a QA engineer.
