A comprehensive automation and algorithms reference project demonstrating modern testing patterns and best practices.
Sloth Python is an educational and professional-grade project combining:
- 🧪 Advanced test automation frameworks (Robot Framework, pytest, Playwright)
- 🤖 AI-powered self-healing test locators that automatically repair broken selectors
- 🏗️ Comprehensive algorithm library (data structures, ML, divide & conquer, and more)
- ⚙️ Production-ready CI/CD workflows using GitHub Actions
- 🔧 AI-driven test script generation from natural-language goals using MCP
Perfect for learning modern test automation, exploring algorithms, or as a reference for professional test frameworks.
- Quick Start
- Installation
- Getting Started as a Contributor
- Configuration
- Running Tests
- Self-Healing Framework
- AI-Generated Test Scripts
- CI/CD Pipeline
- Project Structure
- Best Practices
- Troubleshooting
- Documentation
- Contributing
- License
- Support & Feedback
- Advanced Test Automation: Robot Framework and pytest examples for unit, API, and Playwright-based UI testing
- Self-Healing Locators: AI-assisted Playwright framework that automatically detects and repairs broken element selectors
- AI Test Script Generation: MCP-aware Playwright workflow that generates runnable pytest UI tests from natural-language goals
- Algorithm Library: Curated implementations of algorithms, data structures, and machine learning concepts
- Production-Ready CI/CD: GitHub Actions workflows for automated smoke tests and nightly regression suites
- Comprehensive Examples: Real-world test scenarios and automation patterns
- Python 3.12+ (Tested with Python 3.14)
- Git
New to open source? Check out GETTING_STARTED.md for a beginner-friendly guide.
Want to contribute? Start with the Contributing Guide.
1. Clone the repository:

   ```
   git clone https://github.com/466725/sloth-python.git
   cd sloth-python
   ```

2. Create and activate a virtual environment:

   Windows (PowerShell):

   ```
   py -3.14 -m venv .venv
   .\.venv\Scripts\activate
   ```

   Linux/macOS (bash/zsh):

   ```
   python3 -m venv .venv
   source .venv/bin/activate
   ```

3. Install dependencies:

   ```
   pip install -r requirements.txt
   ```

   Note: This installs the packages used by Robot Framework, pytest, Playwright, and the supporting demo utilities.

4. Install Playwright browsers:

   ```
   playwright install
   ```
Runtime settings are centralized in utils/config.py and read from environment variables with safe defaults.
| Variable | Default | Description |
|---|---|---|
| `TANGERINE_URL` | `https://www.tangerine.ca/en/personal` | Base URL for Tangerine UI tests |
| `DEEP_SEEK_URL` | `https://api.deepseek.com` | Base URL for DeepSeek-compatible API calls |
| `OPENAI_URL` | `https://api.openai.com` | Base URL for OpenAI API calls |
| `UI_LOCALE` | `en-US` | Browser locale used by Playwright-based UI tests |
| `SLEEP_TIME` | `1` | Generic sleep duration used in selected fixtures |
| `COOKIE_BANNER_TIMEOUT_SECONDS` | `5` | Wait time for Tangerine cookie banner handling |
| `PW_HEADLESS` | `true` | Playwright headless mode (`1`/`0`, `true`/`false`, `yes`/`no`, `on`/`off`) |
| `AI_GEN_MODEL` | `gpt-4.1` | Model used by the UI test generator |
| `AI_GEN_BASE_URL` | `OPENAI_URL` value | OpenAI-compatible base URL used by the generator |
| `AI_GEN_MAX_DOM_CHARS` | `12000` | Max DOM/element-tree size sent to the model |
| `AI_GEN_OUTPUT_DIR` | `pytest_demo/tests/ai/generated_playwright` | Default output folder for generated tests |
Quick local check:

```
python -m utils.config
```

Use the commands below for the most common local test workflows.
pytest is the main runner for unit, API, and Playwright UI tests.
```
# Full pytest run
python -m pytest

# Fast smoke run
python -m pytest -m "unit or api"

# Only unit tests
python -m pytest -m unit

# Only UI tests
python -m pytest -m ui

# One file / one test
python -m pytest pytest_demo/tests/unit/test_csv_reader.py -q
python -m pytest pytest_demo/tests/unit/test_csv_reader.py::test_read_csv_to_list_converts_numeric_cells_to_int -q

# Tangerine Playwright suite
python -m pytest pytest_demo/tests/ui/tangerine_playwright

# Generate and view Allure results
python -m pytest --alluredir=temps/allure-results --clean-alluredir
allure serve temps/allure-results
```

For `pytest_demo/tests/ui/tangerine_playwright`, Playwright records per-test video and keeps/attaches it only for failed tests. Videos are written under `temps/playwright-videos/tangerine_playwright/`.
The repo includes three API-testing styles:
| Approach | Best for | Run command |
|---|---|---|
| Pytest + Python | Flexible validation and reusable helpers | `python -m pytest -q pytest_demo/tests/api/test_deep_seek_api.py` |
| Robot + Python keywords | Readable Robot flow with Python power | `python -m robot --outputdir temps/robot_api robot_demo/api/deep_seek_api_hybrid_test.robot` |
| Robot-only RequestsLibrary | Simple keyword-driven API checks | `python -m robot --outputdir temps/robot_api robot_demo/api/deep_seek_api_test.robot` |
DeepSeek demos use OPENAI_API_KEY; DEEP_SEEK_URL is optional.
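The hybrid style works because Robot Framework treats any importable Python class as a keyword library: each public method becomes a keyword. A minimal, hypothetical sketch of what such a library could look like (these names are invented for illustration, not the repo's actual keywords):

```python
class DeepSeekKeywords:
    """Python keyword library for Robot Framework API demos (illustrative)."""

    ROBOT_LIBRARY_SCOPE = "SUITE"

    def build_chat_payload(self, model: str, prompt: str) -> dict:
        """Assemble a chat-completions style request body in plain Python."""
        return {"model": model, "messages": [{"role": "user", "content": prompt}]}

    def status_should_be_ok(self, status_code: int) -> None:
        """Fail the calling Robot test on any non-2xx status code."""
        if not 200 <= int(status_code) < 300:
            raise AssertionError(f"Expected 2xx status, got {status_code}")
```

A `.robot` suite would then load this file with `Library` and call `Build Chat Payload` and `Status Should Be Ok` as ordinary keywords.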
Use Playwright Codegen to record actions and bootstrap UI tests:

```
python -m playwright codegen https://www.tangerine.ca/en/personal
```

Run a Playwright test visibly for debugging:

```
python -m pytest pytest_demo/tests/ui/tangerine_playwright/test_codegen_demo.py --headed --slowmo 200
```

- `--headed`: opens a visible browser
- `--slowmo 200`: slows actions for easier observation

This project uses Python pytest + Playwright, so run tests with `python -m pytest ...`, not `npx playwright test`.
For AI-based test generation, see AI-Generated UI Test Scripts.
Robot demos live under robot_demo/.
```
# All Robot demos
python -m robot --outputdir temps/robot_all robot_demo/

# Calculator demo
python -m robot --outputdir temps/robot_calculator robot_demo/calculator/

# Tangerine Playwright suite
python -m robot --outputdir temps/robot_tangerine_playwright robot_demo/tangerine_playwright/

# Dry run (syntax and keyword wiring only)
python -m robot --dryrun --outputdir temps/robot_tangerine_playwright_dryrun robot_demo/tangerine_playwright/
```

Robot writes `output.xml`, `log.html`, and `report.html` to the selected directory under `temps/`.
For robot_demo/tangerine_playwright/:
- Failure screenshots are saved under `artifacts/playwright/screenshots/`
- Failure videos are saved under `artifacts/playwright/videos/`
- Screenshot/video links appear in Robot `log.html` and `report.html`
- Passed-test videos are deleted to keep artifacts small
The Tangerine Robot keyword libraries also bootstrap the project root import path automatically, so `-P` is typically not needed.
This project includes an advanced self-healing mechanism for Playwright-based UI tests that automatically detects and repairs broken locators.
Location: pytest_demo/self_healing/
Locator Store:

- `pytest_demo/locators/signinpage.json`
- `pytest_demo/locators/signuppage.json`
- Primary Locator → the framework first attempts the primary locator
- Backup Locators → on failure, tries backup selectors from the locator store
- DOM Scanning → scans the page DOM for similar elements using fuzzy matching
- Auto-Update → if a match is found, the test passes and the page-specific locator file is updated automatically
- Resilience → subsequent test runs use the updated selector
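The recovery flow above can be sketched in a few lines. This is a simplified illustration of the idea only; the real implementation lives in `pytest_demo/self_healing/`, and the `page` helper, locator-store schema, and `all_selectors` enumeration below are assumptions:

```python
import difflib
import json
from pathlib import Path


def heal_locator(page, key: str, store_path: Path, auto_update: bool = True) -> str:
    """Try the primary locator, then backups, then fuzzy-match the live DOM.

    `page` is assumed to expose a Playwright-like locator(...) plus a selector
    enumeration helper; the store maps a key to a primary selector and backups.
    """
    store = json.loads(store_path.read_text())
    entry = store[key]

    # Steps 1-2: primary locator first, then stored backups.
    for selector in [entry["primary"], *entry.get("backups", [])]:
        if page.locator(selector).count() > 0:
            return selector

    # Step 3: fuzzy-match against selectors scraped from the live DOM.
    dom_selectors = page.all_selectors()  # hypothetical helper
    match = difflib.get_close_matches(entry["primary"], dom_selectors, n=1, cutoff=0.6)
    if not match:
        raise LookupError(f"No healing candidate found for {key!r}")

    # Step 4: persist the repaired selector so later runs use it directly.
    if auto_update:
        entry["primary"] = match[0]
        store_path.write_text(json.dumps(store, indent=2))
    return match[0]
```

Passing `auto_update=False` gives the read-only behavior the Robot suite uses: stored strategies can still recover the element, but the locator files are never rewritten.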
- Reduced Maintenance: Eliminates manual locator fixes after UI changes
- Improved Stability: Tests are more resilient to minor DOM alterations
- Smart Learning: System learns from failures and improves over time
The Robot suite in robot_demo/tangerine_playwright/ uses the same self-healing locator store, but limits healing to these keys in the Playwright keywords:
- `tangerine.login`
- `tangerine.signup`
Locator definitions are shared from:
- `pytest_demo/locators/signinpage.json`
- `pytest_demo/locators/signuppage.json`
Robot mode currently runs with read-only healing (`auto_update=False`) so it can recover using stored locator strategies without silently rewriting the locator files.
An AI-powered pipeline that generates runnable pytest + Playwright test scripts from natural-language goals and live page context.
How it works: Playwright captures the page (DOM, screenshot, network events) → context is packaged into a structured prompt → an OpenAI-compatible model generates Python test code → the file is written to pytest_demo/tests/ai/generated_playwright/.
Location: pytest_demo/ai_generation/ — modules: mcp_context.py, prompt_builder.py, ai_client.py, generator.py, cli.py
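The prompt-assembly step in the middle of that pipeline is the easiest to illustrate. A self-contained sketch of the idea behind `prompt_builder.py` (the function name and prompt wording are invented; only the DOM-truncation behavior mirrors the documented `AI_GEN_MAX_DOM_CHARS` setting):

```python
def build_prompt(goal: str, url: str, dom: str, max_dom_chars: int = 12000) -> str:
    """Fold the natural-language goal and captured page context into one prompt.

    The DOM snapshot is capped at max_dom_chars (AI_GEN_MAX_DOM_CHARS) so large
    pages do not overflow the model's context window.
    """
    truncated = dom[:max_dom_chars]
    return (
        "You are a test generator. Produce a runnable pytest + Playwright test.\n"
        f"Target URL: {url}\n"
        f"Test goal: {goal}\n"
        "Page DOM (possibly truncated):\n"
        f"{truncated}\n"
        "Return only Python code."
    )
```

The model's reply is then written verbatim to the output path, which is why generated files should be reviewed before committing.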
- `OPENAI_API_KEY` set (supports OpenAI, DeepSeek, Azure, OpenRouter, or any OpenAI-compatible endpoint)
- Dependencies installed: `pip install -r requirements.txt`
- Playwright browsers installed: `playwright install`
```
# 1. Set your API key
$env:OPENAI_API_KEY = "<your-api-key>"

# 2. Generate a test from a live page
python -m pytest_demo.ai_generation.cli `
  --url "https://www.tangerine.ca/en/personal" `
  --goal "Verify homepage loads and Sign In button is visible" `
  --test-name "test_tangerine_homepage" `
  --output "pytest_demo/tests/ai/generated_playwright/test_tangerine_homepage.py"

# 3. Run the generated test
python -m pytest -q pytest_demo/tests/ai/generated_playwright/test_tangerine_homepage.py
```

| Option | Default | Description |
|---|---|---|
| `--url` | (required) | Target page URL |
| `--goal` | (required) | Natural-language test goal |
| `--test-name` | `test_generated_ui_flow` | Generated function name |
| `--output` | `AI_GEN_OUTPUT_DIR` | Output file path |
| `--model` | `gpt-4.1` | LLM model name |
| `--base-url` | `OPENAI_URL` | OpenAI-compatible endpoint |
| `--headless` | `true` | Run Playwright headless (`true`/`false`) |
```
# Sign-in page
python -m pytest_demo.ai_generation.cli `
  --url "https://www.tangerine.ca/app/#/login" `
  --goal "Verify username/password fields and submit button are present" `
  --test-name "test_tangerine_signin"

# Sign-up page
python -m pytest_demo.ai_generation.cli `
  --url "https://www.tangerine.ca/app/#/signup" `
  --goal "Verify sign-up form is visible and required fields are present" `
  --test-name "test_tangerine_signup"

# Run all generated tests
python -m pytest -q pytest_demo/tests/ai/generated_playwright
```

Key environment variables (see Configuration for the full list):
| Variable | Default | Description |
|---|---|---|
| `AI_GEN_MODEL` | `gpt-4.1` | LLM model identifier |
| `AI_GEN_BASE_URL` | `OPENAI_URL` value | API base URL |
| `AI_GEN_MAX_DOM_CHARS` | `12000` | Max DOM size sent to the model |
| `AI_GEN_OUTPUT_DIR` | `pytest_demo/tests/ai/generated_playwright` | Output folder |
- Review before committing — AI output is a strong starting point, not production-ready by default
- Model choice — `gpt-4.1` for best quality; `gpt-4.1-mini` for cost savings
- Large pages — the DOM is truncated at `AI_GEN_MAX_DOM_CHARS`; increase it if needed
- Self-healing — generated tests are plain pytest files; wrap them with the self-healing helpers manually if needed
- Validate with: `python -m pytest -q pytest_demo/tests/ai/test_ai_generation.py`
Automated testing is orchestrated through GitHub Actions workflows to ensure code quality and early defect detection.
Smoke Tests (Push + Pull Request)
- Run on pushes to `main`/`master`
- Run on pull requests targeting `main`/`master` while the PR is open
- Include pytest `unit` + `api` coverage and the Robot calculator suite
- Provide fast feedback on core regressions
Nightly Regression Suite (2 AM UTC)
- Runs from the scheduled workflow at `0 2 * * *`
- Installs Playwright browsers with dependencies
- Executes the full pytest suite and all Robot suites
- Attempts Allure report generation after the test run
The workflow uploads generated reports to the run's Artifacts section when available, including:

- `allure-report/`
- `temps/log.html`
- `temps/report.html`
- `temps/output.xml`
- Navigate to the workflow run on GitHub
- Download the artifacts zip file
- Extract and open `report.html` in your browser
Use these commands to mimic the core CI flow locally (see Running Tests for more command variants):
```
# Run smoke tests
python -m pytest -m "unit or api"
python -m robot robot_demo/calculator/

# Run a nightly-like full pass
playwright install
python -m pytest --tb=short --maxfail=5
python -m robot --outputdir temps robot_demo/
```

```
sloth-python/
├── algorithms/                  # Algorithms & Data Structures
│   ├── backtracking/            # Backtracking algorithms
│   ├── divide_and_conquer/      # Divide & conquer patterns
│   ├── machine_learning/        # ML implementations (KNN, SVM, Decision Trees, etc.)
│   ├── maths/                   # Mathematical algorithms
│   ├── searches/                # Search algorithms (binary, linear, etc.)
│   ├── sorts/                   # Sorting algorithms
│   ├── strings/                 # String manipulation algorithms
│   ├── conversions/             # Number system conversions
│   └── data_structures/         # Trees, heaps, queues, stacks, tries, etc.
│
├── pytest_demo/                 # Pytest Test Suite
│   ├── ai_generation/           # AI + MCP context driven script generator
│   ├── tests/                   # Test cases
│   │   ├── ai/                  # AI-generation tests and generated Playwright scripts
│   │   │   └── generated_playwright/
│   │   ├── unit/                # Unit tests
│   │   ├── api/                 # API tests (Requests)
│   │   └── ui/                  # UI tests
│   │       └── tangerine_playwright/
│   ├── self_healing/            # Self-healing Playwright framework
│   ├── locators/                # Locator repository (signinpage.json, signuppage.json)
│   ├── conftest.py              # Pytest fixtures & configuration
│   └── ...
│
├── robot_demo/                  # Robot Framework demo suites (API/UI/keyword patterns)
│   ├── api/                     # API demos (Robot-only RequestsLibrary and Robot + Python keywords)
│   ├── calculator/              # Calculator test suite
│   └── tangerine_playwright/    # Tangerine UI suite (custom Playwright library)
│
├── fun_part/                    # Educational & Fun Examples
│   ├── go_game/                 # Game implementations
│   ├── bilibili/                # API demo projects
│   └── web_programming/         # Web examples
│
├── utils/                       # Shared Utilities
│   ├── config.py                # Configuration management
│   ├── constants.py             # Application constants
│   └── csv_reader.py            # CSV utilities
│
├── .github/workflows/           # GitHub Actions CI/CD definitions
├── requirements.txt             # Python dependencies
├── pytest.ini                   # Pytest configuration
├── pyproject.toml               # Project metadata
└── README.md                    # This file
```
- algorithms/ - Production-ready implementations for learning and reference
- pytest_demo/ - Complete test automation examples with best practices
- robot_demo/ - Robot demo suites for API and UI automation patterns
- utils/ - Reusable components (config, constants, helpers)
This project demonstrates industry best practices:
- Page Object Model (POM) - Maintainable UI test structure
- Fixtures & Dependency Injection - Pytest fixtures for test setup/teardown
- Marker-Based Organization - Categorize tests with markers such as `unit`, `api`, `ui`, and `playwright`
- Parameterization - Run the same test with multiple data sets
- Self-Healing - AI-powered locator recovery mechanism
- Type Hints - Type annotations for better IDE support and documentation
- Docstrings - Comprehensive module and function documentation
- Error Handling - Proper exception handling and logging
- Configuration Management - Externalized config for different environments
- DRY Principle - Reusable utilities and helper functions
- Automated Testing - Smoke tests on PRs, full regression nightly
- Report Generation - HTML and Allure reports for test visibility
- Artifact Management - Uploaded for debugging and report review
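Several of these patterns compose naturally in one small pytest file. A hypothetical illustration (the marker name matches the repo's `unit`/`api`/`ui` scheme, but the fixture and system under test are invented):

```python
import pytest


class Calculator:
    """A toy system under test."""

    def add(self, a: int, b: int) -> int:
        return a + b


@pytest.fixture
def calculator():
    """Fixture-based setup/teardown: each test gets a fresh instance."""
    calc = Calculator()
    yield calc
    # Teardown would go here (close connections, delete temp files, ...)


@pytest.mark.unit
@pytest.mark.parametrize(("a", "b", "expected"), [(1, 2, 3), (-1, 1, 0), (0, 0, 0)])
def test_add(calculator, a, b, expected):
    """Marker + parameterization: one test body, three data sets."""
    assert calculator.add(a, b) == expected
```

Running `python -m pytest -m unit` would then pick this test up via its marker, with each parameter tuple reported as a separate test case.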
Issue: "ModuleNotFoundError" when running tests

Windows (PowerShell):

```
.\.venv\Scripts\activate
pip install -r requirements.txt
```

Linux/macOS (bash/zsh):

```
source .venv/bin/activate
pip install -r requirements.txt
```

Issue: Playwright tests timeout

```
# Solution: Install browsers and retry a focused UI suite first
playwright install
python -m pytest pytest_demo/tests/ui/tangerine_playwright -q
```

Issue: Locator selector not found in Playwright
- If the test uses the self-healing helpers, the framework may recover automatically
- Check `pytest_demo/locators/signinpage.json` and `pytest_demo/locators/signuppage.json` for updated selectors
- Manual fix: update the JSON, or run with the `-v` flag for detailed logs
If you find Sloth Python useful — whether for learning, professional automation, or as a reference — please consider sponsoring!
Your support helps fund:
- 🛠️ Ongoing maintenance and new features
- 🤖 AI/Playwright tooling improvements
- 📚 More algorithm and test examples
- ⏱️ Faster responses to issues and PRs
Even a small monthly contribution makes a big difference. Thank you! 🙏
Contributions are welcome and appreciated! Whether you're fixing bugs, adding features, improving documentation, or sharing new algorithm implementations, we'd love your help.
- Fork the repository on GitHub
- Create a feature branch with a descriptive name:

  ```
  git checkout -b feature/add-new-algorithm
  git checkout -b fix/self-healing-bug
  git checkout -b docs/improve-readme
  ```
- Make your changes and ensure code quality:
- Follow PEP 8 style guidelines
- Add type hints for new functions
- Include docstrings and comments
- Write unit tests for new functionality
- Test your changes locally:

  ```
  python -m pytest -m "unit or api"  # Quick smoke test
  python -m pytest --tb=short       # Full test suite
  ```
- Commit with clear messages:

  ```
  git commit -m "feat: add new sorting algorithm"
  git commit -m "fix: correct self-healing locator logic"
  ```
- Push your branch and create a Pull Request on GitHub with:
- Clear title and description
- Reference to any related issues (e.g., `Fixes #42`)
- Explanation of changes and why they're needed
- Algorithms - New algorithm implementations in `algorithms/` (with tests)
- Test Automation - Enhanced Robot Framework keywords, new UI test examples
- Self-Healing - Improvements to the locator recovery mechanism
- AI Generation - Enhancements to the MCP-driven test generator
- Documentation - README updates, code examples, tutorials
- CI/CD - Workflow improvements, additional test coverage
```
# Complete initial setup first (see Installation), then:

# Create and switch to feature branch
git checkout -b feature/your-feature-name

# Install dependencies (if adding new packages)
pip install -r requirements.txt

# Make your changes and test
python -m pytest
python -m robot robot_demo/calculator/

# Commit and push
git add .
git commit -m "feat: describe your changes"
git push origin feature/your-feature-name

# Create Pull Request on GitHub
```

- Python - PEP 8, type hints, docstrings
- Tests - Pytest or Robot Framework with clear naming
- Documentation - Updated README.md or inline comments for complex logic
- Commit Messages - Clear, concise, use conventional commits (feat:, fix:, docs:, etc.)
- GitHub Discussions - Ask questions and share ideas
- GitHub Issues - Report bugs or request features
- Check existing issues - Your question might already be answered
Thank you for contributing! 🙌
This project is licensed under the MIT License.
The MIT License permits:
- ✅ Commercial use
- ✅ Modification
- ✅ Distribution
- ✅ Private use
With the conditions:
⚠️ License and copyright notice must be included
- GitHub Issues - Report bugs and request features
- GitHub Discussions - Ask questions, share ideas, and discuss best practices
- Documentation - Check `README.md` and inline code comments for implementation details
- Example Tests - Review `pytest_demo/` and `robot_demo/` for working examples
Found a bug? Please open an issue with:
- Python version and OS (e.g., Python 3.14 on Windows 11)
- Steps to reproduce the issue
- Expected vs actual behavior
- Error message and stack trace (if applicable)
- Environment details (e.g., Playwright version, headless/headed mode)
Have an idea for improvement? Open an issue with:
- Clear description of the feature or problem
- Proposed solution or use case
- Alternative approaches you've considered (if any)
- Examples or code snippets showing the idea
Have questions or want to discuss testing strategies? Use GitHub Discussions to:
- Share test automation patterns and best practices
- Get advice on test framework choices
- Discuss algorithm implementations
- Connect with other contributors
We actively monitor both Issues and Discussions—your feedback helps improve this project!
- CONTRIBUTING.md - Detailed guidelines for contributing code, algorithms, or documentation
- CODE_OF_CONDUCT.md - Community standards and expectations for respectful interaction
- SECURITY.md - How to responsibly report security vulnerabilities
We provide issue templates to streamline reporting:
- Bug Reports - For issues and problems
- Feature Requests - For new functionality ideas
- Documentation - For improvements to docs
- Questions - For general inquiries (consider using Discussions instead)
- Python - Programming language
- Pytest - Testing framework
- Playwright - Modern browser automation
- Robot Framework - Keyword-driven testing
- OpenAI API - AI-powered test generation
This project draws on industry best practices from:
- Test automation communities
- Software engineering principles
- Algorithm research and implementations
We welcome feedback, contributions, and ideas from the community. If you find this project useful, please consider:
- ⭐ Starring the repository
- 🔗 Sharing it with others
- 🤝 Contributing improvements
- 💬 Providing feedback via Issues or Discussions