Automated testing for Brave Search using Selenium WebDriver with Python (pytest) and Java (TestNG) implementations.
```
.
├── basic-scripts/                     # Original basic scripts
│   ├── brave.py
│   ├── chrome.py
│   ├── choose.py
│   ├── search.py
│   └── logs.txt
│
├── python-tests/                      # Python test implementations
│   ├── test_search_pytest.py          # Using pytest framework
│   ├── test_search_manual.py          # Manual approach (no framework)
│   ├── test-report.html               # HTML test report
│   └── requirements.txt               # Python dependencies
│
├── java-tests/                        # Java test implementations
│   ├── src/test/java/
│   │   └── BraveSearchTest.java
│   ├── pom.xml                        # Maven configuration
│   └── testng.xml                     # TestNG suite config
│
├── functional-testing/                # Demo login/signup app with tests
│   ├── demo_app.py                    # Flask web application
│   ├── test_demo_app.py               # Automated tests (57+ tests)
│   ├── run_all_tests.py               # Master test runner
│   ├── conftest.py                    # Pytest configuration
│   ├── pytest.ini                     # Pytest settings
│   ├── demo_app_test_report.html      # HTML test report
│   ├── templates/                     # HTML templates
│   │   ├── login.html
│   │   ├── signup.html
│   │   └── dashboard.html
│   └── assets/                        # CSS and static files
│       └── style.css
│
├── API testing/                       # API test automation
│   ├── mock_server.py                 # Flask mock API server (loads data from db.json)
│   ├── db.json                        # Initial test data for mock server
│   ├── simple_post_test.py            # Simple POST request test
│   ├── test_get_post.py               # GET request with detailed output validation
│   ├── test_status_code.py            # Status code validation test
│   ├── test_parse_json.py             # JSON parsing and extraction test
│   ├── test_field_validations.py      # Field type and value validations
│   ├── test_response_time_logging.py  # Response time and logging test
│   └── setup_test_data.py             # Test data setup utility
│
├── test_search_playwright.py          # Playwright browser automation tests
├── pytest.ini                         # General pytest configuration
├── playwright.ini                     # Playwright-specific configuration
└── README.md                          # This file
```
All implementations test the following scenarios:
- Navigate to Brave Search homepage
- Verify search box exists
- Verify search box accepts input
- Verify search button exists
- Complete end-to-end search flow
```bash
# Install dependencies
pip install -r python-tests/requirements.txt

# Run tests with pytest (recommended)
pytest python-tests/test_search_pytest.py -v

# Or run manually without pytest
python python-tests/test_search_manual.py
```

```bash
# Install Maven (if not already installed)
brew install maven

# Run tests
mvn test -f java-tests/pom.xml
```

| Feature | Python + pytest | Python Manual | Java + TestNG |
|---|---|---|---|
| Code Lines | ~70 | ~140 | ~120 |
| Setup/Teardown | Automatic | Manual | Automatic |
| Parallel Execution | ✅ Yes | ❌ No | ✅ Yes |
| Test Reports | ✅ Rich | ✅ Rich | |
| Learning Curve | Easy | Easy | Medium |
- Brave Browser installed at: `/Applications/Brave Browser.app/Contents/MacOS/Brave Browser`
- ChromeDriver (auto-managed by Selenium)
- Chromium browser (auto-installed by Playwright)
- Internet connection
- Python 3.7+
- pip package manager
- Dependencies:
- selenium
- pytest
- pytest-html
- pytest-xdist (for parallel execution)
- pytest-playwright (for Playwright tests)
- playwright (browser automation)
- requests (for API testing)
- flask (for demo apps and mock servers)
- openpyxl (for Excel file handling)
- Java 11+
- Maven 3.6+
```bash
# Install Python dependencies for all projects
pip3 install selenium pytest pytest-html pytest-xdist pytest-playwright playwright requests flask openpyxl --break-system-packages

# Install Playwright browsers
python3 -m playwright install

# Or install from requirements files
pip install -r python-tests/requirements.txt
```

```bash
# Install Maven (macOS)
brew install maven

# Dependencies are managed by Maven (pom.xml)
```

Use Python + pytest when:
- You want clean, maintainable code
- You need quick setup and execution
- You prefer Python ecosystem
Use Java + TestNG when:
- You're working in Java ecosystem
- You need enterprise-level testing
- Your team is familiar with Java
Use Python Manual when:
- You're learning automation basics
- You want to understand fundamentals
- ✅ Python pytest tests: All 5 tests passing
- ⏳ Java TestNG tests: Requires Maven installation
- Browser path is configured for macOS (update for Windows/Linux)
- Explicit waits are used (10 seconds timeout)
- Tests run sequentially by default
- For parallel execution: `pytest -n 4` (Python) or configure TestNG (Java)
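Since the browser path is macOS-specific, one way to make the tests portable is a small lookup helper. This is a hypothetical sketch, not code from this repo: only the macOS path appears in this README, and the Linux/Windows paths below are common install defaults, not guaranteed locations.

```python
import platform
from typing import Optional

# Hypothetical helper: map an OS name to a likely Brave binary location.
# The macOS path is the one used in this project; the others are
# assumed common defaults and may need adjusting per machine.
BRAVE_PATHS = {
    "Darwin": "/Applications/Brave Browser.app/Contents/MacOS/Brave Browser",
    "Linux": "/usr/bin/brave-browser",
    "Windows": r"C:\Program Files\BraveSoftware\Brave-Browser\Application\brave.exe",
}

def brave_binary_path(system: Optional[str] = None) -> str:
    """Return the expected Brave binary path for the given (or current) OS."""
    system = system or platform.system()
    if system not in BRAVE_PATHS:
        raise ValueError(f"Unsupported platform: {system}")
    return BRAVE_PATHS[system]
```

The returned path would then be assigned to Selenium's `options.binary_location` before creating the driver.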
A complete login/signup application with comprehensive test suite including smoke, regression, and data-driven testing:
```bash
# Terminal 1: Start the demo app
cd functional-testing
python3 demo_app.py

# Terminal 2: Run all tests (smoke + regression + data-driven in parallel)
cd functional-testing
python3 test_demo_app.py

# Or use the master runner:
python3 run_all_tests.py        # Runs smoke first, then regression
python3 run_all_tests.py --all  # Runs all tests at once

# Run specific test types:
pytest test_demo_app.py -m smoke       # Only smoke tests (5 tests)
pytest test_demo_app.py -m regression  # Only regression tests (52+ tests)
pytest test_demo_app.py -m datadriven  # Only data-driven tests (37 tests)
pytest test_demo_app.py -m login       # Only login tests
pytest test_demo_app.py -m signup      # Only signup tests
pytest test_demo_app.py -m validation  # Only validation tests
```

Features:
- Flask-based web application
- 57+ comprehensive tests in one file (test_demo_app.py)
- Smoke Testing: 5 critical path tests
- Regression Testing: 15 comprehensive tests
- Data-Driven Testing: 37+ parametrized tests with large datasets
- Single unified HTML report (demo_app_test_report.html)
- Professional test structure with explicit waits and pytest markers
- Parallel execution with pytest-xdist (4 workers)
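Custom marks like `smoke` and `datadriven` must be registered, or recent pytest versions emit `PytestUnknownMarkWarning`. A minimal sketch of what this project's registration might look like (the actual pytest.ini contents are not shown in this README, so the `addopts` line is an assumption based on the features listed above):

```ini
[pytest]
markers =
    smoke: quick sanity checks on critical paths
    regression: comprehensive functional coverage
    datadriven: parametrized tests over large datasets
    login: login-page tests
    signup: signup-page tests
    validation: input-validation tests
addopts = -n 4 --html=demo_app_test_report.html --self-contained-html
```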
Test Organization:
- Smoke Tests (`@pytest.mark.smoke`): Quick sanity checks (5 tests)
  - App is running
  - Login/Signup pages load
  - Valid login works
  - Navigation works
- Regression Tests (`@pytest.mark.regression`): Comprehensive coverage (15 tests)
  - 6 Login tests (`@pytest.mark.login`)
  - 6 Signup tests (`@pytest.mark.signup`)
  - 3 Navigation tests (`@pytest.mark.navigation`)
  - Validation tests (`@pytest.mark.validation`)
- Data-Driven Tests (`@pytest.mark.datadriven`): Large dataset testing (37+ tests)
  - 10 invalid login scenarios (empty fields, wrong passwords, invalid emails)
  - 11 invalid signup scenarios (empty fields, mismatched passwords, existing emails)
  - 5 valid signup scenarios (multiple user registrations)
  - 5 password length validation tests (1-5 character passwords)
  - 6 email format validation tests (various invalid email formats)
  - Uses `@pytest.mark.parametrize` for the data-driven approach
Test Data Sets:
- INVALID_LOGIN_DATA: 10 test cases
- INVALID_SIGNUP_DATA: 11 test cases
- VALID_SIGNUP_DATA: 5 test cases
- PASSWORD_VALIDATION_DATA: 5 test cases
- EMAIL_FORMAT_DATA: 6 test cases
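The data-driven pattern above can be sketched with `@pytest.mark.parametrize`. The `validate_password` helper and its 6-character rule here are illustrative stand-ins, not the project's actual code; the data list mirrors the idea behind `PASSWORD_VALIDATION_DATA`:

```python
import pytest

# Illustrative stand-in for the app's password rule (assumed minimum length).
def validate_password(password: str) -> bool:
    return len(password) >= 6

# Mirrors PASSWORD_VALIDATION_DATA: 1-5 character passwords must be rejected.
PASSWORD_CASES = [
    ("a", False),
    ("ab", False),
    ("abc", False),
    ("abcd", False),
    ("abcde", False),
    ("secret", True),
]

@pytest.mark.datadriven
@pytest.mark.parametrize("password,expected", PASSWORD_CASES)
def test_password_length(password, expected):
    # One test function expands into six test cases at collection time.
    assert validate_password(password) is expected
```

Each tuple becomes its own test case in the report, so a single function covers the whole dataset.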
Execution:
- All tests run in headless mode for speed
- Parallel execution with 4 workers (pytest-xdist)
- Single unified HTML report with all results
Why this approach? Testing systems you control provides reliable, reproducible results. Data-driven testing lets you cover many scenarios with minimal code duplication. This is how professional QA engineers work: parametrized tests cover edge cases efficiently.
Comprehensive API testing framework using Python and the requests library:
```bash
# Terminal 1: Start the mock API server
cd "API testing"
python3 mock_server.py

# Terminal 2: Run tests
python3 simple_post_test.py           # Simple POST request
python3 test_get_post.py              # GET with detailed output validation
python3 test_status_code.py           # Status code validation
python3 test_parse_json.py            # JSON parsing
python3 test_field_validations.py     # Field validations
python3 test_response_time_logging.py # Response time & logging
python3 setup_test_data.py            # Setup test data

# Or run with pytest
pytest "API testing/test_get_post.py" -v -s
```

Features:
- Mock Flask API server for testing (loads initial data from db.json)
- POST request automation (create student data)
- GET request with detailed output validation and terminal printing
- Response validation (status codes, JSON data)
- Expected vs actual output comparison
- Python equivalent of Java REST Assured tests
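The validation style these scripts share can be shown without a live server. This sketch applies the same assertions to a hardcoded sample record modeled on db.json; in the real tests the dict would come from `response.json()` after a `requests.get(...)` call:

```python
# Sample record modeled on db.json (field names match what the scripts check).
sample_response = {"id": 1, "name": "Abdullah", "Courses": ["Java", "Selenium"]}

def validate_student(student: dict) -> None:
    """The assertion style used in test_field_validations.py."""
    assert isinstance(student["name"], str), "name must be a string"
    assert student["Courses"] is not None, "Courses must not be null"
    assert student["id"] > 0, "id must be greater than 0"

validate_student(sample_response)
```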
Test Scripts:
- `mock_server.py`: Flask-based mock API server
  - Loads initial data from db.json on startup
  - `POST /studentdata` - Create student
  - `GET /studentdata` - Get all students
  - `GET /studentdata/:id` - Get single student
- `db.json`: Initial test data
  - Contains 3 students (Abdullah, Ali, Sara)
  - Loaded automatically when server starts
- `simple_post_test.py`: Basic POST request test
  - Creates student with name and courses
  - Validates status code 201
  - Prints formatted JSON response
- `test_get_post.py`: GET request with detailed validation
  - Retrieves student data
  - Prints raw response text and formatted JSON
  - Validates status code, ID, and name against expected values
  - Shows response headers
  - Compares actual vs expected output
  - Python equivalent of Java REST Assured test
- `test_status_code.py`: Status code validation
  - Extracts and validates HTTP status code
  - Asserts status equals 200
  - Python equivalent of Java REST Assured StatusCodeTest
- `test_parse_json.py`: JSON parsing and extraction
  - Extracts specific fields from JSON response
  - Validates "Courses" field is not null
  - Python equivalent of Java REST Assured TestParseJson
- `test_field_validations.py`: Field type and value validations
  - Validates status code is 200
  - Checks "name" field is a string (isinstance)
  - Validates "Courses" field is not null
  - Asserts "id" field is greater than 0
  - Python equivalent of Java REST Assured testFieldValidations
- `test_response_time_logging.py`: Response time and logging
  - Measures API response time in milliseconds
  - Asserts response time is less than 2000 ms
  - Logs response body as string
  - Python equivalent of Java REST Assured testResponseTimeandLogging
- `setup_test_data.py`: Test data utility
  - Creates test students
  - Returns student IDs for testing
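The response-time check can be done with a monotonic clock around the request. This sketch times a stand-in function rather than a live HTTP call (in the real script the timed call would be `requests.get("http://localhost:5000/studentdata")`, whose `response.elapsed` attribute is an alternative):

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_ms) measured with a monotonic clock."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.monotonic() - start) * 1000
    return result, elapsed_ms

# Stand-in for the GET request; the assertion mirrors the 2000 ms budget above.
result, elapsed_ms = timed_call(lambda: {"id": 1, "name": "Abdullah"})
assert elapsed_ms < 2000, f"Response too slow: {elapsed_ms:.1f} ms"
```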
Why API Testing? API testing validates backend functionality independently of UI, enabling faster test execution and easier continuous integration. This approach mirrors real-world API testing workflows used in professional QA environments.
Modern browser automation using Playwright (faster and more reliable than Selenium):
```bash
# Run Playwright tests (visible browser with slow motion)
python3 -m pytest test_search_playwright.py -v

# Run specific test
python3 -m pytest test_search_playwright.py::test_complete_search_flow -v

# Run in headless mode (fast)
python3 -m pytest test_search_playwright.py -v --headed=false
```

Features:
- Modern automation: Playwright is faster and more reliable than Selenium
- Visual feedback: Dynamic element highlighting with different colors
- Slow motion: Configurable speed (1000ms default) to watch tests execute
- Headed mode: Browser window visible by default (configured in pytest.ini)
- Smart waits: Built-in auto-waiting for elements
- Real search: Searches for "mac repair shop" and opens first result
Test Coverage:
- test_navigate_to_brave_search: Navigate to Brave Search homepage
- test_search_box_exists: Verify search box exists (highlighted in blue)
- test_search_box_is_interactable: Type text and verify (highlighted in green)
- test_search_button_exists: Verify search button exists (highlighted in orange)
- test_complete_search_flow: Complete end-to-end flow
- Highlights search box in purple
- Types "mac repair shop"
- Presses Enter to search
- Highlights first result in cyan
- Clicks and opens the website
- Displays opened URL
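The colored highlighting described above is typically done by injecting a style change through JavaScript. A hedged sketch of such a helper (the project's actual implementation may differ); with Playwright's sync API the returned snippet could be passed to `page.eval_on_selector(selector, expression)`:

```python
# Build a JS arrow function that outlines an element in a given color.
def highlight_js(color: str, width_px: int = 3) -> str:
    return f"el => el.style.outline = '{width_px}px solid {color}'"
```

For example, `page.eval_on_selector("#searchbox", highlight_js("blue"))` would draw a blue outline around the search box before the assertion runs.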
Configuration:
- Use `playwright.ini` for Playwright-specific settings:
  - `--headed`: Shows browser window (set to false for headless)
  - `--slowmo=1000`: Slows down actions by 1 second for visibility
  - `--browser=chromium`: Uses Chromium (can also use firefox)
Run with custom config:
```bash
# Use playwright.ini configuration
pytest -c playwright.ini test_search_playwright.py -v

# Or pass options directly
pytest test_search_playwright.py -v --headed --slowmo=1000 --browser=chromium
```

Why Playwright?
- Faster execution than Selenium
- Better auto-waiting mechanisms
- More reliable element detection
- Modern API with cleaner syntax
- Built-in support for multiple browsers
- No need for browser drivers (auto-managed)
Playwright vs Selenium:
| Feature | Playwright | Selenium |
|---|---|---|
| Speed | β‘ Faster | Slower |
| Auto-waiting | β Built-in | |
| Browser drivers | β Auto-managed | |
| API | π― Modern & clean | π Traditional |
| Multi-browser | β Native | |
| Element highlighting | β Easy |
This project includes GitHub Actions workflow for continuous testing:
Workflow Features:
- Triggers on push/pull request to main branch
- Runs on Ubuntu latest
- Automated test execution for:
- Basic scripts (chrome.py, brave.py, choose.py, search.py)
- Python manual tests
- Python pytest tests (uses Chrome in headless mode)
- Functional testing (smoke + regression)
- Generates HTML test reports
- Uploads test artifacts for review
- Auto-detects CI environment and uses appropriate browser
Environment Detection:
- Local: Uses Brave Browser if available, otherwise Chrome
- CI/CD: Automatically uses headless Chrome with optimized flags
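Environment detection like this usually keys off variables GitHub Actions sets in every job (`CI=true` and `GITHUB_ACTIONS=true`). A hedged sketch of the idea; the function names and the selection policy are illustrative, not this repo's actual code:

```python
import os

def is_ci(env=os.environ) -> bool:
    """GitHub Actions sets CI=true and GITHUB_ACTIONS=true in every job."""
    return env.get("GITHUB_ACTIONS") == "true" or env.get("CI") == "true"

def pick_browser(env=os.environ) -> str:
    # Assumed policy from the notes above: headless Chrome on CI, Brave locally.
    return "headless-chrome" if is_ci(env) else "brave"
```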
Workflow File: .github/workflows/test.yml
View Results:
- Check the "Actions" tab in GitHub repository
- Download test reports from workflow artifacts
- Review test execution logs
Feel free to fork this repository and submit pull requests. All tests will run automatically via GitHub Actions.
This project is for educational purposes, demonstrating various test automation approaches.