
L.E.A.P (LLM-driven Execution & Automation Platform)

This framework leverages Agentic AI to autonomously execute frontend test cases written in natural English and dynamically generate test automation code based on a library of pre-defined tools and agents.


MANDATORY PRE-READ: Link


Project Structure

leap-agentic/
├── src/
│   ├── core_agentic/
│   │   ├── agentic.py                     # Core AI modules and LLM logic
│   │   ├── agentic_base.py                # Hooks, setup/teardown, and workflow support
│   │   ├── test_trial.py                  # Playground for English-based test execution
│   │   └── run_configs.py                 # Configs to control execution (e.g., channel, LLM model)
│   ├── main/
│   │   ├── agent_group/                   # Folder for each agent group (e.g., redBus Home Page)
│   │   │   └── agent/                     # Folder for each agent (e.g., Search Form)
│   │   │       ├── agent_details.py       # Agent metadata (Name, Description)
│   │   │       └── tools/                 # Tools available for the agent
│   │   │           ├── definitions/       # Abstract tool signatures for AI
│   │   │           │   ├── agent_locator_tools.py   # Web element abstracts
│   │   │           │   └── agent_function_tools.py  # Business flow abstracts
│   │   │           └── implementation/    # Platform-specific implementations
│   │   │               ├── android.py
│   │   │               ├── ios.py
│   │   │               ├── mweb.py
│   │   │               └── dweb.py
│   │   ├── por/                           # Page Object Repository (Singleton pattern)
│   │   │   ├── por_agent_group.py         # File for each agent group, comprising its agent objects
│   │   │   └── por_master.py              # Master repository with all agent group PORs
│   │   └── utilities/
│   │       └── helper/                    # Frontend automation "Super Helper"
│   │           ├── helper_definition.py             # Abstract methods/capabilities shared with AI
│   │           ├── helper_common_implementation.py  # Cross-platform logic & internal framework tools
│   │           ├── helper_apps_implementation.py    # App-specific logic (gestures, contexts)
│   │           └── helper_browser_implementation.py # Web-specific logic (cookies, window handles)
│   └── resources/
│       └── configs/
│           ├── common.yaml                # Cross-platform common configs
│           ├── mweb.yaml                  # MWeb-specific config
│           ├── dweb.yaml                  # DWeb-specific config
│           ├── android.yaml               # Android-specific config
│           └── ios.yaml                   # iOS-specific config
├── credentials.json                       # Your API keys (not tracked in git)
├── credentials.json.example               # Template for credentials (safe to commit)
├── agent_onboarding.md                    # Context for AI-based agent onboarding
└── learner.csv                            # File where learnings are recorded

Config Usage in Test Cases

When writing test cases in natural language, configuration values can be referenced using angle brackets (< >).

Example:

Without config:

Search for ferries from Location-1 to Location-2

With config:

Search for ferries from <source> to <destination>

Here, <source> and <destination> are automatically picked from the corresponding configuration file.
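Conceptually, this substitution is a simple key lookup against the parsed config. The sketch below illustrates the idea only; the function name and config structure are assumptions for this example, not LEAP's actual implementation.

```python
import re

# Example values, as if parsed from a config such as mweb.yaml (assumed structure).
config = {"source": "Phuket", "destination": "Phi Phi"}

def resolve_placeholders(step: str, config: dict) -> str:
    """Replace each <key> placeholder in a test step with its config value.

    Unknown keys are left untouched so they surface visibly in the step text.
    """
    return re.sub(r"<(\w+)>", lambda m: str(config.get(m.group(1), m.group(0))), step)

print(resolve_placeholders("Search for ferries from <source> to <destination>", config))
# -> Search for ferries from Phuket to Phi Phi
```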


Onboarding Checklist for a New Agent

  1. Create an agent group folder:

    src/main/agent_group/<group_name>/
    
  2. Create a POR for the group:

    src/main/por/por_<group_name>.py
    

    Register it in por_master.py.

  3. Set up the agent structure:

    agent_group/<group_name>/
    ├── agent_details.py
    └── tools/
        ├── definitions/
        │   ├── agent_locator_tools.py
        │   └── agent_function_tools.py
        └── implementation/
            ├── android.py
            ├── ios.py
            ├── mweb.py
            └── dweb.py
    

OR

Use an AI-powered IDE (or any IDE with GitHub Copilot enabled):

  • Add agent_onboarding.md as context
  • Tell the AI you want to onboard a new agent
  • Answer the questions prompted by the AI
  • Sit back and let the AI handle the onboarding process 🙂

Tool Onboarding Process

When onboarding a new tool, declare all tool definitions as abstract methods in the appropriate base class:

  • agent_locator_tools → Locator-related tools
  • agent_function_tools → Action or function-based tools

1. Static Tool Definition

Use this pattern for elements or actions that do not require parameters.

@abstractmethod
def search_button(self):
    """
    Search button to search all ferries available.
    """
    pass

Guidelines:

  • Method name should clearly represent the UI element or action
  • Add a concise docstring explaining the purpose
  • No parameters are required
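For context, the static pattern above sits inside an abstract base class and is overridden per platform. The sketch below shows the surrounding structure; the class names and the locator string are assumptions for illustration, not code from the repository.

```python
from abc import ABC, abstractmethod

# Hypothetical definitions file (e.g. agent_locator_tools.py) for one agent.
class SearchFormLocatorTools(ABC):
    @abstractmethod
    def search_button(self):
        """
        Search button to search all ferries available.
        """
        ...

# Hypothetical platform implementation (e.g. dweb.py) overriding each tool.
class SearchFormDweb(SearchFormLocatorTools):
    def search_button(self):
        # Locator value below is illustrative only.
        return "//button[@data-autoid='searchButton']"

print(SearchFormDweb().search_button())
```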

2. Dynamic Tool Definition (With Parameters)

Use this pattern when the tool requires runtime inputs.

@abstractmethod
def ferryTupleByFerryName(
    self,
    ferryName: Annotated[str, "Name of the ferry"],
    ferryOccurence: Annotated[int, "Position among multiple ferries (Default: 1)"]
):
    """
    Returns the ferry tuple based on ferry name or ferry operator name.
    """
    pass

Guidelines:

  • Use Annotated[type, "description"] for parameters
  • Clearly describe each parameter
  • Mention default values explicitly in descriptions
  • Add a meaningful docstring
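The Annotated metadata is machine-readable, which is what makes these descriptions available to the AI. The snippet below shows how the type and description can be recovered with the standard typing helpers; how LEAP actually builds its tool schemas is not shown here, so treat this purely as an illustration of the mechanism.

```python
from typing import Annotated, get_type_hints, get_args

# The dynamic tool definition from above, reproduced standalone for inspection.
def ferryTupleByFerryName(
    self,
    ferryName: Annotated[str, "Name of the ferry"],
    ferryOccurence: Annotated[int, "Position among multiple ferries (Default: 1)"]
):
    ...

# Recover each parameter's base type and description, as a tool-schema
# builder might when exposing this method to an LLM.
hints = get_type_hints(ferryTupleByFerryName, include_extras=True)
for name, hint in hints.items():
    base_type, description = get_args(hint)
    print(f"{name}: {base_type.__name__} - {description}")
```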

Locator Best Practices

  1. Maximize Coverage with Minimal Locators
     • Prefer flexible, descriptive selectors
     • Reduce locator count and maintenance effort
  2. Tag Reference-Changing Locators to the Appropriate Agent Group
     If a locator interaction changes the page or context, tag it with the correct agent group reference:

     run_configs.setRef("search_result_page")

  3. Use Dynamic Parameters
     • Substitute dynamic values as placeholders in locator strings
     • Pass actual values at runtime
  4. Enable Self-Healing
     • Always wrap locators with selfHeal()
     • Enables recovery from minor UI text changes

Example

def ferryTupleByBusName(
    self,
    ferryName: Annotated[str, "Name of the ferry operator"],
    ferryOccurence: Annotated[int, "Position among multiple ferries (Default: 1)"]
):
    run_configs.setRef("time_selection_page")
    return agentic_base.helper.selfHeal(
        "(//*[@data-autoid='inventoryList']"
        "//*[contains(@class,'travelsName_') and text()='{ferryName}']"
        "//ancestor::*[contains(@class,'tupleWrapper_')])[{ferryOccurence}]",
        ferryName,
        ferryOccurence
    )

Additional Requirements

1. Define Section Locators in Tool Implementation Constructors

Each agent represents a specific section or component of a page. Root locator(s) must be defined in the constructor.

Single Section Locator:

class search_widget:
    def __init__(self):
        run_configs.section_locator = ["//div[@data-section='abcd']"]

Multiple Section Locators:

class search_widget:
    def __init__(self):
        run_configs.section_locator = [
            "//div[@id='abcd']",
            "//div[@data-component='xyz']"
        ]

2. Explain Agent Groups in refChangeCheck()

Location: core_agentic/agentic_base -> refChangeCheck()

Each agent_group must have a brief textual explanation describing what it represents on the UI.

Purpose:

  • Accurate page/section change validation
  • Helps the AI understand context transitions
  • Ensures correct agents are supplied after navigation

Quick Start

Prerequisites

  • Git
  • An IDE with Python support (AI-powered IDEs are optional but recommended)
  • Python 3.13+
  • An LLM API key (recommended: Gemini)

Try It Out

  1. Clone the repository

    git clone https://github.com/redbus-labs/LEAP
    cd LEAP
  2. Install dependencies

    pip install -r requirements.txt
  3. Install Playwright Browsers

    playwright install
  4. Set up credentials (choose one method):

    Method 1: Environment Variables (Recommended). Guide: Link

    Method 2: JSON File (Alternative)

    cp credentials.json.example credentials.json
    # Edit credentials.json and add your API keys

    Security Note:

    • The credentials.json file is already in .gitignore and will not be committed to the repository
    • credentials.json.example is a template file (safe to commit) that shows the expected structure
    • Environment variables are strongly recommended as they prevent accidental credential exposure in code, logs, or version control
    • Never commit actual credentials to the repository
  5. Execute test_demo() under core_agentic/test_trial.py

    pytest core_agentic/test_trial.py::test_demo -s

Key Points to Consider

  • Description Quality Matters
    The accuracy of execution is highly dependent on the quality and clarity of element and action descriptions.

  • Supported LLMs
    LEAP currently supports Gemini and AWS Bedrock models.
    To add support for additional LLMs:

    • Onboard the model in src/main/utilities/helper/helper_common_implementation.py
    • Implement it in the setupLLM() method, following the existing Gemini or Bedrock integrations
  • Recommended Models
    LEAP works with any LLM, but for better accuracy in complex scenarios, we recommend:

    • Gemini 2.5 Pro
    • Gemini 3 Pro
      (Recommendations as of Jan 14, 2026)
  • Enable LLM Thinking
    Keep the LLM’s thinking enabled for improved reasoning and more reliable execution.

  • Automation Support

    • Browser automation is supported via Playwright
    • Mobile app automation with Appium is a work in progress
      You can experiment with app automation by implementing support in helper_apps_implementation.py, using the existing Playwright browser implementation as a reference
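As an illustration of the extension point mentioned above, onboarding a new LLM amounts to adding a provider branch to the setup dispatch. Everything in this sketch, including the stub initialisers and the exact shape of setupLLM(), is an assumption for demonstration purposes, not LEAP's actual code.

```python
# Stubs standing in for real provider client initialisers (assumptions for this sketch).
def init_gemini(name: str) -> str:
    return f"gemini-client:{name}"

def init_bedrock(name: str) -> str:
    return f"bedrock-client:{name}"

def setupLLM(model_name: str):
    """Dispatch to a provider-specific initialiser based on the model name."""
    if model_name.startswith("gemini"):
        return init_gemini(model_name)      # existing Gemini integration
    if model_name.startswith("bedrock"):
        return init_bedrock(model_name)     # existing Bedrock integration
    # To onboard another provider, add a branch here following the same pattern.
    raise ValueError(f"Unsupported LLM: {model_name}")

print(setupLLM("gemini-2.5-pro"))
```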

Contributing

We welcome contributions from the community! This project can only evolve with open source contributions.

How to Contribute

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Please read our Contributing Guidelines before submitting contributions.

Development Setup

  1. Clone the repository
  2. Import into your IDE
  3. Run tests to ensure everything works
  4. Start coding!

Acknowledgements

Architected and Developed by: Krishna Hegde, Senior SDET, redBus

Guided and Mentored by: Chandrashekar Patil, Senior Director – QA, redBus

Special Thanks:

  • Smruti Sourav Sahoo, SDET, redBus – for supporting the development efforts and being an early adopter of the project
  • Vishnuvardhan Reddy Bhumannagari, Senior SDET Manager, redBus – for introducing AI-based visual assertions at redBus
  • Rithish Saralaya, SVP Engineering, redBus – for organizing Project Nirman, an AI-first initiative that helped uncover key gaps in the framework and fast-track its growth

Issues and Support

Found a bug or need help? Please open an issue on GitHub.


About

LLM-driven Execution & Automation Platform

Resources

License

Contributing

Stars

Watchers

Forks

Packages

 
 
 

Languages