Experienced in data analytics, full-stack development & AI/ML. Built data warehouses, optimized SQL workloads, and delivered BI dashboards & secure real-time applications. Also experienced in integrating banking/payment and accounting ERP APIs. Expert in Python, JavaScript, SQL, Django & machine learning. I transform complex data into business insights and build scalable solutions.

Available to hire

Experience Level

Expert (6 skills) · Intermediate (8 skills) · Beginner (2 skills)

Language

English: Fluent
Hindi: Fluent

Work Experience

Software Development Intern at Bluecrest Software
May 1, 2024 - September 6, 2024
• Developed a star-schema data warehouse for an inventory management system and documented its structure and data flows in depth: 8 dimension tables, 3 fact tables, and 6 weekly and monthly datamarts, built with Python scripting and SQL, while reducing SQL query execution time by 33% (a schematic sketch of this kind of star schema follows below).
• Derived 29 facts to analyze retail performance across three areas: sales, stock, and target-related business metrics.
• Collaborated cross-functionally with three teams (product, business analysis, and client) to deliver 5 BI dashboards on Metabase covering 15+ business metrics (KPIs).
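
As a rough illustration of the warehouse layout described above, here is a minimal, self-contained sketch of a star schema in Python with SQLite. The table and column names (`dim_product`, `fact_sales`, `dm_weekly_sales`, and so on) are hypothetical placeholders, not the actual Bluecrest schema, which had 8 dimension and 3 fact tables.

```python
# Minimal star-schema sketch with hypothetical names; SQLite keeps it portable.
import sqlite3

conn = sqlite3.connect("inventory_dw.db")
cur = conn.cursor()

# Dimension tables hold descriptive attributes; the fact table holds measures
# keyed by the surrounding dimensions; the datamart is a pre-aggregated view.
cur.executescript("""
CREATE TABLE IF NOT EXISTS dim_product (
    product_key INTEGER PRIMARY KEY,
    sku         TEXT NOT NULL,
    category    TEXT
);
CREATE TABLE IF NOT EXISTS dim_date (
    date_key  INTEGER PRIMARY KEY,   -- e.g. 20240531
    full_date TEXT NOT NULL,
    week      INTEGER,
    month     INTEGER
);
CREATE TABLE IF NOT EXISTS dim_store (
    store_key  INTEGER PRIMARY KEY,
    store_name TEXT,
    division   TEXT
);
CREATE TABLE IF NOT EXISTS fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    store_key   INTEGER REFERENCES dim_store(store_key),
    units_sold  INTEGER,
    revenue     REAL
);
-- A weekly datamart materialized as an aggregate over the fact table.
CREATE VIEW IF NOT EXISTS dm_weekly_sales AS
SELECT d.week, s.store_name,
       SUM(f.units_sold) AS units,
       SUM(f.revenue)    AS revenue
FROM fact_sales f
JOIN dim_date  d ON d.date_key  = f.date_key
JOIN dim_store s ON s.store_key = f.store_key
GROUP BY d.week, s.store_name;
""")
conn.commit()
```

The design choice a star schema embodies: dimensions stay small and descriptive, facts stay narrow and numeric, and every reporting query becomes a join from the fact table out to the dimensions it needs.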

Education

Bachelor of Technology at Indian Institute of Information Technology and Management, Gwalior
December 1, 2021 - May 9, 2025

Industry Experience

Software & Internet
Paper: Tech Specification Extractor - Fine-tuned DistilBERT

[Link to paper](https://www.twine.net/signin)

Extracting technical specifications from engineering documentation is challenging due to specialized terminology and complex textual structures. Traditional Natural Language Processing (NLP) techniques struggle with this specialized content, especially when limited annotated data is available for model training. This thesis explores the adaptation of DistilBERT, a lightweight pre-trained language model, to effectively extract technical specifications from engineering documents with minimal manual annotation requirements. Through implementation of both conventional fine-tuning and a novel pattern-enhanced training approach, the research evaluates strategies for reducing labeled data dependency while maintaining extraction accuracy. The experimental methodology employs masked language modeling to adapt DistilBERT to engineering text, followed by targeted fine-tuning with pattern-based data augmentation. Cross-domain evaluations between electrical and mechanical engineering documentation reveal the transferability of learned patterns across technical domains. Results demonstrate that domain adaptation improves model performance across multiple technical entity types, with a 2.69% reduction in perplexity observed through adaptation. Notable performance differences emerge between entity types, with standardized specifications showing stronger cross-domain transfer potential than domain-specific attributes.
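
The two-stage recipe the abstract outlines (masked-language-model domain adaptation, then targeted fine-tuning) can be sketched with the Hugging Face `transformers` library. This is a minimal illustration under assumed inputs, not the thesis code: `engineering_docs.txt`, the output paths, and the entity-tag count are hypothetical placeholders.

```python
# Stage 1 sketch: adapt DistilBERT to engineering text with the standard
# masked-language-modeling objective, then warm-start a token classifier.
import math

from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Unlabeled in-domain corpus (hypothetical file of engineering prose).
corpus = load_dataset("text", data_files={"train": "engineering_docs.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
    remove_columns=["text"],
)

# The collator masks 15% of tokens on the fly, producing MLM labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
mlm_model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="distilbert-engineering", num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
trainer.save_model("distilbert-engineering")

# Perplexity = exp(mean cross-entropy loss) on evaluation text.
eval_loss = trainer.evaluate(tokenized["train"])["eval_loss"]
print(f"perplexity: {math.exp(eval_loss):.2f}")

# Stage 2 sketch: load the adapted weights under a token-classification head
# for specification extraction (num_labels is a hypothetical tag count).
ner_model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-engineering", num_labels=5
)
```

Perplexity here is exp(mean cross-entropy loss), so the 2.69% reduction reported in the abstract corresponds to comparing this value on engineering text before and after the adaptation stage.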
