Sandal Iqbal

Available to hire

I’m Sandal Iqbal, a developer and leader with 15+ years shaping scalable test automation and leading cross-functional teams in Agile environments. I design and implement automation frameworks (Playwright, Selenium) across TypeScript, Java, and Python to cover web UI, APIs, and monitoring-enabled deployments. I thrive on translating complex requirements into robust test strategies and mentoring teams to deliver high-quality software faster.

I also leverage Gen AI tools to accelerate code generation and automate repetitive tasks, with deep experience in API testing (REST/GraphQL), CI/CD, performance testing (Gatling), and security testing (OWASP ZAP, MobSF). My focus spans mobile, web, and backend services, driving shift-left QA, reliability, and measurable quality across organizations.



Language

Javanese: Fluent
English: Fluent

Work Experience

Principal Software Development Engineer in Test at Balbix
July 1, 2024 - Present
Led end-to-end automation with a Playwright-based framework in TypeScript for web UI testing. Implemented AI-driven browser automation to accelerate test creation and performed visual testing to detect dashboard anomalies. Measured UI test coverage on React apps to inform strategy. Established Grafana/Prometheus alerting for application performance and conducted negative testing against Kafka, AWS, and Airflow integrations to ensure full coverage. Led GraphQL API automation using Python/Pytest and defined a shift-left QA strategy while mentoring teams to improve cycle times and coverage.
Engineering Leader at Xindus India Pvt Ltd
February 1, 2024
Oversaw QA activities including test planning, documentation, execution, and production support. Established automation frameworks for backend and frontend testing and guided the team in adopting AI tools for automated test generation.
Staff Engineer in Test at SIXT India Pvt Ltd
October 1, 2023
Acted as test lead for multiple teams, standardizing QA practices and aligning sprint goals. Built Appium-based mobile automation integrated with CI/CD and parallel execution. Developed Spring Boot services for Kubernetes test tool integration with Jacoco. Implemented performance and security testing using Gatling, Lighthouse, and OWASP ZAP.
Technical Lead in Test at Myntra
September 1, 2018
Led QA initiatives as Technical Lead in Test, standardizing automation reporting and driving improvements in test coverage and reliability across teams.
Member of Technical Staff at eBay
February 1, 2017
Designed and implemented service automation framework and practices, enabling scalable test coverage for key backend services.
Senior Quality Specialist at SAP Ariba
April 1, 2016
Planned and automated secure data testing, strengthening data integrity and security testing across supplier networks.
SDET at Amazon
May 1, 2014
Built Selenium-based frameworks, reduced test times, and automated deployment checks to accelerate release cycles.
Senior Test Engineer at Clickable
September 1, 2011
Developed deployment pipeline and automation tools to streamline testing and delivery processes.
SDET at Microsoft
June 1, 2009
Owned Windows component test automation, delivering reliable one-click test automation and robust test suites.
Software Engineer at Verizon
March 1, 2006
Maintained legacy COBOL/mainframe systems, ensuring stability and reliability of critical legacy workflows.


Industry Experience

Software & Internet, Professional Services, Other, Media & Entertainment, Computers & Electronics
Shift-Left QA Transformation and AI-Augmented Testing

As Engineering Leader, I spearheaded the shift-left QA strategy, embedding automation early in the SDLC.
I established unified backend and frontend frameworks using Playwright and Pytest, aligning them with GitLab CI for automated execution on code merges.

I also pioneered the use of AI-assisted test design, where tools like Copilot and GPT-based models generated test scenarios directly from acceptance criteria.
This reduced planning overhead and improved coverage consistency across teams.

I mentored QA engineers in implementing these AI workflows, fostering a culture of autonomous testing and faster feedback loops.

Key Achievements:

Reduced manual test design effort by 40% via AI-assisted generation.
Standardized automation frameworks across product teams.
Accelerated deployment confidence through early validation pipelines.

Cloud-Native QA Automation and CI/CD Integration

At SIXT, I architected a cloud-native test automation ecosystem that unified mobile, API, and performance testing within CI/CD.
I developed an Appium-based mobile automation suite integrated with Jenkins pipelines for parallel execution on Android and iOS, significantly reducing feedback cycles.

I built a Spring Boot microservice that interfaced with Kubernetes to dynamically spin up and destroy test environments on demand, integrating Jacoco for real-time coverage analysis.

Performance and security validation were automated using Gatling, Lighthouse, and OWASP ZAP, embedding them directly into deployment pipelines to enforce quality gates.

Key Achievements:
Cut regression execution time by 50% through parallel and containerized testing.
Enforced DevSecOps practices by integrating security scans into build pipelines.
Created a coverage-driven release readiness dashboard for continuous quality visibility.
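A coverage quality gate of the kind described above can be sketched as a small Python check over a Jacoco XML report's report-level LINE counter. The 80% threshold and the trimmed report snippet are illustrative assumptions, not the production pipeline code.

```python
import xml.etree.ElementTree as ET

def line_coverage(jacoco_xml: str) -> float:
    """Return line coverage from a Jacoco XML report.

    Reads the report-level LINE counter (missed/covered attributes),
    which Jacoco emits as a direct child of <report>.
    """
    counter = ET.fromstring(jacoco_xml).find("counter[@type='LINE']")
    missed = int(counter.get("missed"))
    covered = int(counter.get("covered"))
    return covered / (missed + covered)

def quality_gate(jacoco_xml: str, threshold: float = 0.80) -> bool:
    """Gate sketch: fail the build when coverage drops below threshold."""
    return line_coverage(jacoco_xml) >= threshold

# A trimmed, hand-written report for illustration (not real project data).
report = '<report name="demo"><counter type="LINE" missed="15" covered="85"/></report>'
print(quality_gate(report))  # 85% >= 80% -> True
```

In a pipeline this check would run after the Jacoco report is fetched from the on-demand test environment, with a non-zero exit code failing the stage.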

AI-Driven Test Automation Framework

Led the design and development of an AI-powered test automation framework that transformed how front-end and API testing was executed. The framework integrated Playwright (TypeScript) for web UI testing and Pytest (Python) for GraphQL API validation, ensuring seamless coverage across layers.

Using Cline and GitHub Copilot, the system generated intelligent test scripts automatically from user stories and API specifications, significantly cutting down authoring time.
To enhance observability, I implemented Grafana-Prometheus alerting to monitor test stability and detect regression trends in CI pipelines.
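As a sketch of what such a pipeline might export for Prometheus to scrape (and Grafana to alert on), the snippet below renders suite results in the Prometheus text exposition format; the metric names and label set are assumptions for illustration, not the production ones.

```python
def render_test_metrics(suite: str, passed: int, failed: int, duration_s: float) -> str:
    """Render suite results in Prometheus text exposition format.

    In the setup described above these values would be scraped by
    Prometheus and alerted on via Grafana; here we only produce the
    text payload a pushgateway or textfile collector would ingest.
    """
    total = passed + failed
    pass_rate = passed / total if total else 0.0
    labels = f'{{suite="{suite}"}}'
    return "\n".join([
        "# TYPE ci_tests_total counter",
        f"ci_tests_total{labels} {total}",
        "# TYPE ci_test_pass_ratio gauge",
        f"ci_test_pass_ratio{labels} {pass_rate:.3f}",
        "# TYPE ci_suite_duration_seconds gauge",
        f"ci_suite_duration_seconds{labels} {duration_s:.1f}",
    ])

payload = render_test_metrics("dashboard-ui", passed=118, failed=2, duration_s=243.7)
print(payload)
```

Alert rules can then fire on a falling `ci_test_pass_ratio` or a rising suite duration, which is how regression trends surface in CI before they reach a release branch.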

Additionally, the framework incorporated visual anomaly detection using AI models to verify React-based dashboards, ensuring that graphical widgets and performance metrics remained consistent after each release.
This initiative resulted in a 35% reduction in manual QA effort and established a reusable model for other teams.

Key Achievements:
Built a fully autonomous Playwright framework with modular plugin support.
Introduced AI-based visual validation, improving UI accuracy.
Integrated coverage-based prioritization to target high-value test cases.
Enhanced system reliability via Kafka, AWS, and Airflow fault testing.
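The coverage-based prioritization mentioned in the achievements can be sketched as a greedy set-cover heuristic over per-test coverage maps; the `prioritize` function, test names, and file names below are invented for illustration (in practice the test-to-file map would come from per-test coverage data).

```python
def prioritize(tests: dict, changed: set) -> list:
    """Greedy coverage-based prioritization.

    Repeatedly pick the test covering the most not-yet-covered changed
    files; tests touching nothing relevant are left out. A simplified
    model of the prioritization step described above.
    """
    remaining = set(changed)
    candidates = dict(tests)
    order = []
    while remaining and candidates:
        # Pick the candidate with the largest overlap with uncovered changes.
        name, covered = max(candidates.items(),
                            key=lambda kv: len(kv[1] & remaining))
        if not covered & remaining:
            break  # nothing left touches the changed files
        order.append(name)
        remaining -= covered
        del candidates[name]
    return order

# Hypothetical per-test coverage map for illustration.
tests = {
    "test_login": {"auth.py", "session.py"},
    "test_dashboard": {"widgets.py", "charts.py"},
    "test_checkout": {"cart.py", "auth.py"},
}
print(prioritize(tests, changed={"auth.py", "charts.py"}))
```

Running the selected tests first gives the fastest signal on a change set, which is what makes prioritization pay off in long regression suites.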