How I Usually Start Automating User Stories in Playwright


When a new user story comes in for automation, I don't touch the code immediately. I follow a simple, steady approach that helps me avoid rework and confusion.

1. Functional understanding comes first

My first step is always on the functional side. I read the user story fully and check whether all acceptance criteria make sense. If anything is unclear or has gaps, I talk to the developer or product owner to clarify it.
It's better to clear doubts early than to assume something and rewrite the automation later.

2. Write manual test cases

Once the functionality is clear, I write manual test cases covering all possible scenarios:

  • happy paths

  • edge cases

  • negative cases

  • functional and UI validations

This gives me confidence that I have covered the story from all angles.

3. Manual execution before automation

Next, I execute these manual test cases on the application.

  • If I find bugs, I log them and wait for fixes.

  • If everything works fine, then I move towards automation.

This avoids writing automation for a broken feature.

4. If there is a blocker, still make partial progress

Sometimes the story is half-working or blocked. In those cases, I don’t sit idle. I do whatever is possible:

  • open all reachable screens

  • capture locators

  • design the page objects

  • plan which class should hold which method

  • create placeholder/dummy methods without implementations

This saves time later because the structure is already in place.
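
As a sketch, a placeholder page object for a blocked story might look like this. The `OrdersPage` name, the locators, and the blocked steps are all hypothetical; the small `Page` interface is a stand-in for Playwright's `Page` so the sketch stays self-contained:

```typescript
// Minimal stand-in for Playwright's Page so this sketch runs on its own.
interface Page {
  click(selector: string): Promise<void>;
  fill(selector: string, value: string): Promise<void>;
}

// Placeholder page object captured while the story is still blocked.
// Locators are recorded up front; method bodies stay as dummies.
class OrdersPage {
  // hypothetical locators, captured from the screens that are reachable
  readonly searchInput = '[data-testid="order-search"]';
  readonly searchButton = '[data-testid="order-search-btn"]';

  constructor(private readonly page: Page) {}

  async searchOrder(_orderId: string): Promise<void> {
    // TODO: implement once the blocker is fixed
    throw new Error('searchOrder: not implemented (story blocked)');
  }

  async openOrderDetails(_orderId: string): Promise<void> {
    // TODO: this screen is not reachable yet
    throw new Error('openOrderDetails: not implemented (story blocked)');
  }
}
```

When the blocker clears, only the method bodies need filling in; the class layout and locators are already decided.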

5. When the story is stable, start automation

Once the feature is stable, automation becomes straightforward. I start implementing the test using the existing framework conventions and project structure.

6. Use codegen for a quick head start

I run the same scenarios through Playwright codegen. Codegen produces raw steps that are not production-ready, but it quickly captures all the actions and locators.
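
For example (the URL and output filename here are placeholders for your app under test):

```shell
# Record a flow in the browser; codegen emits the raw steps as you click
npx playwright codegen https://your-app.example.com/orders

# Or target the Playwright Test format and save the raw script for refactoring
npx playwright codegen --target playwright-test -o raw-orders.spec.ts https://your-app.example.com/orders
```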

I then pass this generated script to AI tools and ask them to rewrite it following:

  • Page Object Model

  • DRY (Don’t Repeat Yourself)

  • proper naming

  • reusable functions

This gives a good starting structure. I only use AI as an assistant, not as a final answer.
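
The kind of transformation I'm after looks roughly like this (a sketch: `LoginPage`, the selectors, and the flow are hypothetical, and a minimal `Page` interface again stands in for Playwright's):

```typescript
// Raw codegen output (illustrative):
//   await page.fill('#user', 'admin');
//   await page.fill('#pass', 'secret');
//   await page.click('#login-btn');

// Minimal stand-in for Playwright's Page so the sketch is self-contained.
interface Page {
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}

// After refactoring: locators live in one place and the flow becomes
// one reusable, well-named page-object method (POM + DRY).
class LoginPage {
  private readonly userInput = '#user';
  private readonly passInput = '#pass';
  private readonly loginButton = '#login-btn';

  constructor(private readonly page: Page) {}

  async loginAs(username: string, password: string): Promise<void> {
    await this.page.fill(this.userInput, username);
    await this.page.fill(this.passInput, password);
    await this.page.click(this.loginButton);
  }
}
```

Every test that needs a login now calls `loginAs(...)` instead of repeating the three raw steps.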

7. Keep an .md guide in the repository

Maintaining a .md file with good examples of:

  • page structure

  • locator strategy

  • fixtures usage

  • naming conventions

is very helpful. Anyone new on the team can learn faster, and even I can reuse patterns easily.

8. Replace weak locators manually

AI and codegen cannot always produce good locators; they tend to pick long CSS chains or nth-child selectors, which are flaky.
So I manually rewrite the locators using:

  • getByRole

  • getByTestId

  • stable attributes

This improves long‑term stability.
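
To illustrate what I look for during this rewrite, here is a toy review helper (my own hypothetical function, not part of Playwright) that flags the selector patterns I replace; in the actual tests the replacements use Playwright's `page.getByRole(...)` and `page.getByTestId(...)` instead of raw CSS:

```typescript
// Toy helper: flags the codegen-style selectors I rewrite by hand.
// A selector is treated as flaky if it relies on nth-child positioning
// or on a deep CSS chain coupled to the DOM structure.
function isFlakySelector(selector: string): boolean {
  const usesNthChild = /:nth-child\(/.test(selector);
  const deepCssChain = selector.split('>').length > 2; // structural coupling
  return usesNthChild || deepCssChain;
}
```

For instance, `isFlakySelector('#root > div:nth-child(3) > button')` is true, while a stable attribute selector like `[data-testid="submit-order"]` passes.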

9. Write meaningful assertions

Codegen cannot create strong assertions. It only produces basic checks like:

  • element visible

  • text present

But real automation needs:

  • data validation

  • API + UI comparison

  • sorting logic

  • complex workflows

I write these manually, based on the business rules.
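
A sketch of the kind of manual assertion I mean, comparing UI rows against API data and checking the sort order (the `OrderRow` shape, field names, and sort rule are hypothetical):

```typescript
// Hypothetical row shape shared by the API response and the parsed UI table.
type OrderRow = { id: string; amount: number };

// Manual business-rule assertion: the UI must show exactly the API's rows,
// sorted by amount in descending order.
function assertUiMatchesApi(uiRows: OrderRow[], apiRows: OrderRow[]): void {
  const expected = [...apiRows].sort((a, b) => b.amount - a.amount);
  if (uiRows.length !== expected.length) {
    throw new Error(`row count mismatch: UI ${uiRows.length} vs API ${expected.length}`);
  }
  expected.forEach((row, i) => {
    if (uiRows[i].id !== row.id || uiRows[i].amount !== row.amount) {
      throw new Error(`row ${i} mismatch: expected ${row.id}, got ${uiRows[i].id}`);
    }
  });
}
```

In a real test, `apiRows` would come from an API request and `uiRows` from reading the rendered table; the comparison logic itself stays plain and browser-free.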

Final thoughts

AI is helpful, but it cannot understand the product the way a human tester does. It can speed up the repetitive parts, but functional thinking and assertion design still need human effort.
This is the simple, practical process I follow to automate a story in a stable and clean way.