ML Engineer | Codersbrain
Full-time
Posted on April 24, 2025
Job Description
AI/ML Engineer
Job Summary
As an ML Engineer, you will play a pivotal role in designing, building, and deploying advanced AI systems that leverage Large Language Models (LLMs), Generative AI, and Agentic AI. You will focus on creating intelligent, scalable, autonomous systems that can generate, reason, and act, driving the next wave of human-machine interaction. The role involves hands-on development, system architecture, and close collaboration with cross-functional teams to align AI capabilities with evolving business needs.
Responsibilities
- Design, fine-tune, and deploy LLMs for various natural language processing (NLP) and content generation tasks.
- Build and optimize Generative AI applications for text, code, and multimodal content creation using state-of-the-art models (e.g., GPT, Claude, Mistral, LLaMA).
- Develop Agentic AI workflows using frameworks such as LangChain, CrewAI, or similar solutions for autonomous task execution and reasoning.
- Implement Retrieval-Augmented Generation (RAG) pipelines and memory systems to enhance contextual understanding in LLM deployments.
- Architect, automate, and maintain scalable production-grade ML systems with robust CI/CD, monitoring, and retraining pipelines.
- Collaborate cross-functionally with product, data, and platform teams to ensure AI solutions effectively address business objectives.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
- Proficiency in Python and familiarity with ML frameworks (e.g., PyTorch, TensorFlow).
- Demonstrated experience with LLMs and Generative AI (e.g., OpenAI, HuggingFace Transformers, LLaMA, Mistral, Claude, T5, Gemini).
- Hands-on experience with agentic frameworks (e.g., LangChain, AutoGPT, CrewAI, ReAct, Semantic Kernel).
- Strong understanding of MLOps tools and practices (e.g., MLflow, Airflow, Kubernetes, Docker, FastAPI, Weights & Biases).
- Experience with cloud platforms (AWS, GCP, Azure) and deployment tools (SageMaker, Vertex AI).
- Familiarity with vector databases (Pinecone, FAISS, Weaviate, Qdrant) and RAG methodologies.
- Excellent problem-solving skills, with the ability to work independently and collaboratively.
Preferred Skills
- Experience with DevOps tools such as GitHub Actions, Terraform, and monitoring solutions (Prometheus, Grafana).
- Background in building and scaling APIs for AI and ML services.
- Exposure to multimodal AI systems and advanced memory architectures.
- Participation in open-source AI projects or contributions to ML research.
- Strong written and verbal communication skills.
Experience
- Open to candidates with varying levels of experience; no minimum is specified.
- Relevant experience in building, deploying, and managing production-level AI/ML systems, particularly in areas involving LLMs, Generative AI, and MLOps.
- Prior experience in cross-functional and agile environments is a plus.
Environment
- Location: Bangalore (Onsite).
- Work Setting: Full-time, collaborative team environment.
- Fast-paced, innovative, and highly technical workplace focused on cutting-edge AI/ML advancements.
Deadline
- Application Deadline: April 26, 2025