Available to hire

Hi, I’m Nabeel Javed, a full stack engineer with 5 years of experience building cloud-native, high-throughput, mission-critical systems. I enjoy crafting scalable backend services and responsive front-ends using Node, React, and modern serverless architectures.

I have deep experience integrating LLMs and Retrieval-Augmented Generation into the software development lifecycle to boost debugging, code quality, development velocity, and system reliability. I’ve led scalable backend design, production troubleshooting, and developer enablement across teams.

Experience Level

Expert

Language

English: Fluent
Urdu: Fluent
Hindi: Fluent

Work Experience

Lead Full Stack Developer at Careem
May 1, 2023 - November 24, 2025
Led development of real-time ride-hailing and payment tracking with WebSocket, Redis, and event-driven pipelines, achieving a 25% improvement in ride-success rate. Optimized database and caching layers to reduce p95 latency by 35% on high-traffic endpoints. Designed serverless microservices on AWS Lambda with SNS/SQS, cutting infra costs by 15% and enabling zero-downtime deployments. Implemented CI/CD enhancements with Jenkins and Docker, reducing deployment time by 50% and increasing deployment frequency. Integrated real-time ML pricing models to improve rider-driver match success by 10%. Built an internal LLM-powered code quality tool that reduced pre-release defects by 35%, plus RAG-based error intelligence indexing logs, traces, outages, and commits to cut MTTR by 20%. Enabled AI-assisted development using Copilot, the OpenAI API, and custom dev agents to accelerate scaffolding, testing, docs, and refactors. Added LLM-based static analysis to detect SQL inefficiencies and unsafe API contracts.
Full Stack Developer at Arbisoft
April 1, 2023 - April 30, 2023
Developed a large-scale EdTech platform serving 5M monthly users with 98% uptime using React and TypeScript. Designed REST and GraphQL services achieving 99.9% availability; performed load testing with Cypress and Playwright, resolving 95% of bottlenecks within sprints. Migrated microservices to ECS with containerization, increasing deployment frequency by 30%. Used retrieval-based LLM debugging to modernize legacy Node services by mapping current failures to historical patterns. Implemented AI-assisted integration and contract tests to boost coverage and reduce regressions, and standardized AI-augmented development workflows for faster onboarding and higher code consistency.

Education

BSc Computer Science at FAST NUCES Islamabad
February 20, 2020 - June 28, 2024

Qualifications

AWS Solutions Architect Professional
Google Cloud Professional Developer
AWS Developer Associate
Azure Developer Associate
Certified ScrumMaster

Industry Experience

Software & Internet, Retail, Financial Services, Professional Services
    Dcube Ai

    Trading Agent

    Description

    The Trading Agent is an automated trading bot designed for autonomous trading on Solana-based tokens. It evaluates token data, filters opportunities based on predefined trading criteria, and uses AI for final trade decisions. The bot executes trades and posts real-time updates on Twitter, sharing details about purchases and sales, including profit margins.

    The project is built on the Eliza framework, integrates with APIs such as BirdEye and Helius to gather live token rates, and uses OpenAI for AI-driven trade decision-making.
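
    As a rough illustration of the AI decision step mentioned above, the sketch below shows how collected token metrics could be handed to OpenAI and a structured buy decision read back. The TokenMetrics shape, the decideTrade helper, and the model name are assumptions for illustration, not the project's actual code.

```typescript
import OpenAI from "openai";

// Illustrative shape only; the real agent gathers these figures from BirdEye/Helius.
interface TokenMetrics {
  symbol: string;
  marketCapUsd: number;
  bondingMarketCapUsd: number;
  buyVolume4hUsd: number;
  sellVolume4hUsd: number;
  walletBalanceSol: number;
}

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical helper: ask the LLM whether to buy and how much.
async function decideTrade(
  metrics: TokenMetrics
): Promise<{ buy: boolean; amountSol: number }> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed model; the project may use a different one
    messages: [
      {
        role: "system",
        content:
          'You are a trading assistant. Reply with JSON: {"buy": boolean, "amountSol": number}.',
      },
      { role: "user", content: JSON.stringify(metrics) },
    ],
    response_format: { type: "json_object" },
  });

  return JSON.parse(response.choices[0].message.content ?? "{}");
}
```

    Constraining the reply to JSON keeps the LLM's answer machine-readable, so the bot can act on the decision without brittle text parsing.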


    Trading Logic

    For Purchasing:

    1. Fetch token data such as market cap, volume, and bonding progress.
    2. Finalize token purchase if (see the filter sketch after this list):
      • Market cap reaches $10K on PumpFun within 24 hours.
      • Bonding market cap is 50% of the total market cap.
      • Buying volume is twice the selling volume in the last 4 hours.
    3. Retrieve wallet balance using BirdEye API.
    4. Use AI via LLM to determine the appropriate purchase amount based on collected data.
    5. Store purchase data in the pumpfundata table.
    6. Check for buying opportunities every 3 hours.
    7. After purchasing a token, tweet on Twitter to announce the purchase.
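
    The purchase criteria above can be expressed as a small pure function; a minimal sketch follows, assuming the token data has already been fetched. The TokenSnapshot fields and the shouldBuy helper are hypothetical names, not the bot's real code.

```typescript
// Hypothetical token snapshot; the real bot builds this from PumpFun/BirdEye data.
interface TokenSnapshot {
  marketCapUsd: number;        // current market cap
  listedHoursAgo: number;      // hours since the token appeared on PumpFun
  bondingMarketCapUsd: number; // bonding-curve market cap
  buyVolume4hUsd: number;      // buy volume over the last 4 hours
  sellVolume4hUsd: number;     // sell volume over the last 4 hours
}

// Encodes the three purchase criteria from the list above.
function shouldBuy(t: TokenSnapshot): boolean {
  const hitTenKWithin24h = t.marketCapUsd >= 10_000 && t.listedHoursAgo <= 24;
  const bondingIsHalfOfTotal = t.bondingMarketCapUsd >= 0.5 * t.marketCapUsd;
  const buysDoubleSells = t.buyVolume4hUsd >= 2 * t.sellVolume4hUsd;
  return hitTenKWithin24h && bondingIsHalfOfTotal && buysDoubleSells;
}
```

    In the live agent this check would run on the 3-hour cadence described in step 6, and only tokens that pass it would be handed to the LLM for the purchase-amount decision.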

    For Selling:

    1. Retrieve stored purchase positions from the database.
    2. Fetch the current token price.
    3. Compare prices for each token and execute sales (see the tiered-sale sketch after this list):
      • Sell 50% of the amount at 2x profit.
      • Sell 25% at 4x profit.
      • Sell 12.5% at 8x profit.
    4. Update the token position in the pumpfundata table after partial or full sales.
    5. Store sale details in the pumpfunsales table.
    6. Check for selling opportunities every hour.
    7. After selling a token, tweet on Twitter to announce the sale and profit gained.
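
    A minimal sketch of the tiered profit-taking rules above, assuming the sell percentages apply to the original purchase size; the Position shape and planSells helper are illustrative, not the project's actual implementation.

```typescript
// Hypothetical position record, as stored in the pumpfundata table.
interface Position {
  token: string;
  entryPrice: number;      // price paid per token
  originalAmount: number;  // size of the initial purchase
  soldMultiples: number[]; // profit tiers already taken (2, 4, 8)
}

// Tiers from the list above. Assumption: fractions apply to the original purchase size.
const SELL_TIERS = [
  { multiple: 2, fraction: 0.5 },
  { multiple: 4, fraction: 0.25 },
  { multiple: 8, fraction: 0.125 },
];

// Returns how much of the position to sell at the current price, per the tiered rules.
function planSells(
  p: Position,
  currentPrice: number
): { amount: number; multiple: number }[] {
  return SELL_TIERS
    .filter(t => !p.soldMultiples.includes(t.multiple) && currentPrice >= t.multiple * p.entryPrice)
    .map(t => ({ amount: p.originalAmount * t.fraction, multiple: t.multiple }));
}
```

    Each returned order would then be executed on-chain, the position updated in pumpfundata, the sale recorded in pumpfunsales, and the result tweeted, matching the steps above.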