Openkyber - Senior AI Infrastructure Engineer
A Senior AI Infrastructure Engineer is needed in Georgia.
Client: Openkyber
Location: Georgia
Contract: Not specified
Job Description
Primary skills required for the role include experience with Gemini AI, multi-cloud AI deployments, and running AI services. Candidates should have experience coding with Cursor and extensive implementation experience in the data analytics space or in a senior developer role within modern technology stacks. Knowledge of data science techniques and data warehousing concepts, including technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments, is essential.
Applicants must demonstrate excellent programming skills and proficiency in at least one major programming or scripting language used in Gen AI orchestration, such as Python, PySpark, or Java. The role also requires the ability to build scalable API-based solutions and to debug and troubleshoot software and design issues. Hands-on exposure to integrating with Gemini Pro 1.x via its API endpoints and a thorough understanding of prompt engineering are necessary.
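For illustration only, a minimal sketch of calling a Gemini Pro model from Python might look like the following (assuming the google-generativeai SDK; the model name, environment variable, and prompt are placeholders, not requirements of the role):

    import os
    import google.generativeai as genai

    # Configure the client with an API key (assumed to be set in the environment).
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    # "gemini-1.0-pro" is an example model name for Gemini Pro 1.x.
    model = genai.GenerativeModel("gemini-1.0-pro")

    # A simple prompt-engineering pattern: give the model a role and a constrained task.
    prompt = (
        "You are a data analytics assistant. "
        "Summarize the following ETL failure log in two sentences:\n"
        "job_id=1234 status=FAILED step=load_to_bq error=schema mismatch"
    )

    response = model.generate_content(prompt)
    print(response.text)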
Implementation exposure to LLM agent frameworks such as LangChain and to vector databases such as Pinecone, Chroma, or FAISS is important. The ideal candidate should be able to quickly run experiments and analyze the features and capabilities of newer LLM models as they are introduced to the market. Basic data engineering skills to load structured and unstructured data from source systems into target data stores are also required.
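As a hedged illustration of the vector-database side, a minimal FAISS similarity-search sketch in Python could look like this (the dimension and random vectors are placeholders; in practice the vectors would come from an embedding model):

    import numpy as np
    import faiss

    dim = 768  # example embedding dimension
    rng = np.random.default_rng(0)

    # Placeholder document embeddings; in practice these come from an embedding model.
    doc_vectors = rng.random((1000, dim), dtype=np.float32)

    # Build a flat L2 index and add the document vectors.
    index = faiss.IndexFlatL2(dim)
    index.add(doc_vectors)

    # Search for the 5 nearest documents to a query embedding.
    query = rng.random((1, dim), dtype=np.float32)
    distances, ids = index.search(query, 5)
    print(ids[0], distances[0])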
Collaboration with Gen AI leads and other team members to address requirements from the product backlog is a key component of this role. Candidates will build and maintain data pipelines and infrastructure to support AI solutions, with hands-on exposure to Google Cloud Platform services for storage, serverless logic, search, transcription, and chat.
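As one hedged example of the GCP storage piece, uploading a pipeline artifact to Cloud Storage with the google-cloud-storage client might look like this (the bucket and object names are hypothetical placeholders):

    from google.cloud import storage

    # The client picks up credentials from the environment (e.g. a service account).
    client = storage.Client()

    # Hypothetical bucket and object names for illustration.
    bucket = client.bucket("example-ai-pipeline-bucket")
    blob = bucket.blob("raw/transcripts/2024-01-01.json")

    # Upload a local file produced by an upstream pipeline step.
    blob.upload_from_filename("transcripts/2024-01-01.json")
    print(f"Uploaded to gs://{bucket.name}/{blob.name}")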
Requirements
1. Gemini AI experience
2. AI multi-cloud experience
3. Experience in running AI services
4. Cursor coding experience
5. Extensive implementation experience in the data analytics space or a senior developer role in modern technology stacks
6. Knowledge of data science techniques
7. Understanding of data warehousing concepts, including technical architectures and infrastructure components
8. Proficiency in ETL/ELT and reporting/analytic tools and environments
9. Excellent programming skills in at least one major programming or scripting language (Python, PySpark, or Java)
10. Ability to build API-based scalable solutions and troubleshoot software/design issues
11. Hands-on integration experience with Gemini Pro 1.x using API endpoints
12. Understanding of prompt engineering and implementation exposure to LLM agent frameworks like LangChain
13. Familiarity with vector databases like Pinecone, Chroma, or FAISS
14. Basic data engineering skills for loading structured & unstructured data
15. Ability to conduct experiments on newer LLM models
16. Collaboration skills to work closely with Gen AI leads and team members
17. Hands-on experience with Google Cloud Platform services for storage, serverless logic, search, transcription, and chat