Introduction to Using AI for Test Case Design (Step-by-Step Login Example)

16 May 2026
Learn how to use AI tools like ChatGPT or Gemini to quickly generate positive, negative, and edge test cases for login functionality, and how to refine prompts so the output actually matches your real app and business rules.

AI is changing how QA engineers and testers work every day. Instead of manually brainstorming every possible test case, you can now use tools like ChatGPT or Gemini to generate detailed scenarios in seconds. The real skill is knowing how to ask for what you need—and how to review what AI gives you.

In this guide, you’ll see a simple, step-by-step example of using AI to design test cases for a login feature, and how to turn a generic prompt into a powerful, app-specific test suite.

Starting with a Simple Login Testing Problem

Imagine you have a basic requirement: “User should log in with valid credentials.” Your task is to create test cases that cover all important scenarios around this login flow.

Traditionally, you would sit down and think through:

• What happens with valid credentials?
• What about invalid credentials?
• What edge cases might break the system?

This is time-consuming, and it's easy to miss cases. Instead, you can start by asking an AI tool to help.

Using AI to Generate Initial Test Cases

Open your preferred AI chat tool (ChatGPT, Gemini, or similar) and start with a simple prompt like:

“Generate positive, negative, and edge test cases for login functionality.”

In a few seconds, you’ll typically get a list of scenarios such as:

Positive scenarios:

• Login with valid username and valid password
• Login with valid email and valid password
• Login successfully after logout
• Login with “Remember me” enabled

Negative scenarios:

• Invalid username with valid password
• Valid username with invalid password
• Both username and password invalid
• Empty username field
• Empty password field
• Both fields empty
• Exceeded maximum login attempts
• Login with expired password

Edge cases:

• Username and password at maximum allowed length
• Username and password exceeding maximum length
• Use of special characters in credentials
• Use of Unicode characters
• Login with slow internet connection
• Server down or unavailable

In one pass, you get a broad set of ideas that could have taken much longer to brainstorm manually.
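A list like this is most useful once it's captured in an executable, data-driven form. Here is a minimal sketch in Python, where `login()` and `VALID_USERS` are hypothetical stand-ins for the real system under test; in a real suite, each row would feed something like `pytest.mark.parametrize`:

```python
# Hypothetical stub standing in for the real login endpoint.
VALID_USERS = {"alice": "S3cretPass"}

def login(username, password):
    """Return True on success, False otherwise (stand-in for the real app)."""
    return VALID_USERS.get(username) == password

# AI-generated scenarios captured as (description, username, password, expected).
SCENARIOS = [
    ("valid username and valid password", "alice", "S3cretPass", True),
    ("valid username with invalid password", "alice", "wrong", False),
    ("invalid username with valid password", "bob", "S3cretPass", False),
    ("both fields empty", "", "", False),
]

def run_scenarios():
    """Run every scenario and report whether the outcome matched expectations."""
    results = []
    for desc, user, pwd, expected in SCENARIOS:
        results.append((desc, login(user, pwd) == expected))
    return results
```

Each tuple maps directly to one bullet above, which makes it easy to diff the AI's scenario list against what your suite actually covers.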

Why You Shouldn’t Trust AI Output Blindly

Even though AI can generate a lot of scenarios quickly, you should never copy everything directly into your test suite without review.

Always:

• Review the output for correctness and relevance
• Remove or adjust scenarios that don’t apply to your app
• Add missing cases based on your specific business rules

AI doesn’t know your exact product, domain, or constraints unless you tell it. The first prompt above is very generic, so the output is also generic. It’s a good starting point, but not a finished test plan.

Making Prompts More Specific to Your App

To get more useful test cases, you need to give AI more context about your application. This is where prompt engineering comes in.

Instead of a generic request, you might say:

“Generate five test cases for login with email validation and password with minimum 8 characters.”

Now the AI knows:

• Login is via email
• Passwords must be at least 8 characters

But you can go further and describe:

• What type of application it is (e.g., fintech, e-commerce, social app)
• Which platforms you support (web, mobile, API)
• What login methods are allowed (email/password, OTP, social login, etc.)
• Business rules (e.g., lockout after 5 failed attempts, 2FA, device change behavior)
• Validation rules (password complexity, email format, etc.)

For example, a more powerful prompt could be:

“Generate detailed positive, negative, and edge test cases for the login functionality of a fintech mobile application where users log in using email and password. Passwords must be at least 8 characters and contain at least one number. Include scenarios for account lockout after 5 failed attempts and handling of expired passwords.”

This kind of prompt will produce test cases that are much closer to your real-world needs.
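The lockout and expired-password rules in that prompt can also be modeled directly, which helps you sanity-check the AI's scenarios against the intended behavior. A toy sketch (the class, state names, and 90-day default are assumptions for illustration, not a real implementation):

```python
from datetime import date, timedelta

MAX_ATTEMPTS = 5  # lockout threshold from the business rule above

class LoginService:
    """Toy model of account lockout and password expiry (not a real app)."""

    def __init__(self, password, password_set_on, max_age_days=90):
        self.password = password
        self.expires_on = password_set_on + timedelta(days=max_age_days)
        self.failed_attempts = 0
        self.locked = False

    def login(self, password, today=None):
        today = today or date.today()
        if self.locked:
            return "LOCKED"
        if password != self.password:
            self.failed_attempts += 1
            if self.failed_attempts >= MAX_ATTEMPTS:
                self.locked = True
            return "INVALID"
        if today > self.expires_on:
            return "PASSWORD_EXPIRED"
        self.failed_attempts = 0  # successful login resets the counter
        return "OK"
```

With a model like this you can verify, for example, that the fifth failure locks the account and that a correct password on a locked account still fails.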

Let AI Help You Improve Your Own Prompts

You don’t have to guess the perfect prompt on your own. You can actually ask the AI how to improve your prompt.

For example, you can say:

“Tell me how I can improve this prompt to get better output for my app’s scenarios and business logic.”

The AI might suggest adding:

• Application context: “a fintech mobile application” instead of just “an app”
• Domain and platform: web, mobile, API
• User types: admin, regular user, guest, etc.
• Authentication methods: email/password, OTP, social login
• Security rules: 2FA, lockout rules, device switching behavior
• Output format: table with columns like Test Case ID, Description, Steps, Expected Result, Priority

It may even generate a full example prompt for you that you can copy, tweak, and reuse for other features like signup or forgot password. This is similar in spirit to how some QA-focused tools work; for a more advanced approach to natural language test automation, you might also explore AI-powered testing with KaneAI.
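Once you know which pieces of context matter, you can stop retyping them by assembling prompts from a small helper. A hypothetical sketch (function name, parameters, and wording are all illustrative):

```python
def build_prompt(feature, app_type, platform, auth_methods, rules, output_format):
    """Assemble a reusable, context-rich test-design prompt (illustrative only)."""
    lines = [
        f"Generate detailed positive, negative, and edge test cases "
        f"for the {feature} functionality of a {app_type} on {platform}.",
        "Supported authentication methods: " + ", ".join(auth_methods) + ".",
        "Business and validation rules:",
    ]
    lines += [f"- {rule}" for rule in rules]
    lines.append(f"Return the output as {output_format}.")
    return "\n".join(lines)

prompt = build_prompt(
    feature="login",
    app_type="fintech mobile application",
    platform="Android and iOS",
    auth_methods=["email/password"],
    rules=[
        "Passwords must be at least 8 characters and contain at least one number",
        "Account locks after 5 failed attempts",
        "Expired passwords must force a reset",
    ],
    output_format="a table with columns Test Case ID, Description, Steps, "
                  "Expected Result, Priority",
)
```

Swapping `feature="login"` for `"signup"` or `"forgot password"` reuses the same context for other flows.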

Getting Structured Test Cases in Table Format

Once you have a richer prompt, you can also tell the AI exactly how you want the output structured. For example, you can request:

“Provide the test cases in a table with columns: Test Case ID, Scenario Type (Positive/Negative/Edge), Description, Steps, Expected Result, Priority.”

The AI can then return a neatly formatted table that might include:

• Positive login scenarios tailored to a fintech mobile app
• Negative scenarios such as invalid credentials, locked accounts, expired passwords
• Edge cases like network issues, device changes, or unusual character inputs

You can then review this table, adjust the wording, change priorities, and import the relevant cases into your test management tool.
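Before importing, it can help to load the AI's table into a structured form you can filter and re-prioritize programmatically. A minimal sketch, assuming the AI returned a simple pipe-delimited markdown table (the parser, field names, and sample rows are all illustrative):

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    scenario_type: str
    description: str
    priority: str

def parse_markdown_table(text):
    """Parse a simple pipe-delimited table: header, separator, then data rows."""
    rows = [line.strip().strip("|").split("|") for line in text.strip().splitlines()]
    cells = [[c.strip() for c in row] for row in rows]
    return [TestCase(*row) for row in cells[2:]]  # skip header and --- separator

def to_csv(cases):
    """Serialize cases to CSV for import into a test management tool."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Test Case ID", "Scenario Type", "Description", "Priority"])
    for tc in cases:
        writer.writerow([tc.case_id, tc.scenario_type, tc.description, tc.priority])
    return buf.getvalue()

TABLE = """
| ID | Type | Description | Priority |
| --- | --- | --- | --- |
| TC-01 | Positive | Login with valid email and password | High |
| TC-02 | Negative | Login with locked account | High |
"""
```

Most test management tools accept CSV imports, so this gives you a clean bridge from AI output to your existing tooling.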

Best Practices for Using AI in Testing

To use AI effectively and safely in QA, keep these principles in mind:

1. Give clear, detailed prompts

• Include app type, domain, platform, user roles, and business rules
• Specify validation rules and security requirements
• Define the desired output format (e.g., table, bullet list)

2. Always review and refine the output

• Don’t trust AI blindly
• Check for missing edge cases and incorrect assumptions
• Adapt scenarios to match your actual flows and constraints

3. Use AI as a partner, not a replacement

• Let AI handle the heavy lifting of idea generation
• Use your domain knowledge to filter, correct, and extend the test cases
• Reuse and improve your prompts over time as your product evolves

4. Apply the same approach to other flows

Once you’re comfortable with login, you can repeat this process for:

• Signup and registration
• Forgot password and reset flows
• Profile updates, payments, and more

For more complex AI-assisted workflows, especially on the development side, you might also be interested in guides like setting up Claude Code for AI-assisted coding, which can complement your testing efforts.

Conclusion

AI can dramatically speed up test design, but the real power comes from combining AI-generated ideas with your own QA expertise. Start with a simple login scenario, experiment with increasingly detailed prompts, and let the AI help you both generate and refine your test cases.

Over time, you’ll build a library of strong prompts and reusable patterns that make your testing faster, more thorough, and more aligned with your actual business logic—without ever giving up human judgment.
