AI Engineer | Scrabble
Job Description
AI Engineer – Multi-Agent Systems & Orchestration
Job Summary
We are forming a brand-new division focused on intelligent agents and LLM-based orchestration systems. As one of the first few engineers on this team, you will play a core role in shaping the architecture, agent lifecycle, and tooling stack from scratch. Your experience in building complex backend systems will be pivotal in designing a foundational AI layer for enterprise software, specifically geared toward multi-agent LLM automation.
Responsibilities
- LLM & Agent Frameworks:
  • Integrate and orchestrate multi-agent workflows using tools such as LangChain, LangGraph, the OpenAI SDK, or similar frameworks.
  • Implement custom agent controllers for long-running tasks and structured tool usage.
- Backend Development & Prototyping:
  • Build and maintain production-grade APIs and microservices using Python, TypeScript, Go, or Rust.
  • Rapidly prototype new agent capabilities while ensuring performance, modularity, and testability.
- Infrastructure & Orchestration:
  • Own the agent orchestration layer using Kubernetes, Nomad, or Firecracker.
  • Design systems for resource scheduling, stateful agent execution, and process isolation.
- Networking & Remote Access:
  • Develop stateful networking infrastructure with Traefik/Envoy for shell, browser, and desktop agents.
  • Manage VNC-based remote access, ensuring low latency and high concurrency.
- Cloud & DevOps:
  • Deploy and manage infrastructure across GCP, AWS, or Azure using Terraform/OpenTofu.
  • Build secure, zero-downtime CI/CD pipelines for infrastructure, agents, and models.
- Experimental Innovation (~30% of your time):
  • Explore LLM fine-tuning, agent memory architectures, context persistence, and new agent frameworks.
  • Contribute ideas and code to internal research, technical talks, and open-source initiatives.
Qualifications
- At least 3 years of backend or infrastructure engineering experience.
- Proficiency in Python and TypeScript; hands-on experience with Go or Rust.
- Experience with LLM frameworks such as LangChain, LangGraph, the OpenAI SDK, CrewAI, or similar tools.
- Deep knowledge of Kubernetes, containers, and microservice orchestration.
- Strong understanding of networking concepts, load balancers, and shell-based execution models.
- Familiarity with Terraform/Helm, cloud platforms (GCP, AWS, or Azure), and CI/CD workflows.
- A solution-first mindset with strong problem-solving skills and a drive to push technical boundaries.
Preferred Skills
- Previous experience in a tech lead, senior engineer, or architect role.
- Demonstrated ability to design, debug, and scale robust backend systems.
- Experience with experimental projects such as LLM fine-tuning and agent memory architectures.
- Ability to work hands-on in a fast-paced, collaborative environment.
Experience
- 3–8 years of relevant backend or infrastructure engineering experience, with a track record of building clean, modular, and production-grade systems.
Environment
- Work from our Bangalore office in a dynamic, fast-moving environment.
- Enjoy a collaborative setting where you are encouraged to lead innovations and take ownership of projects.
- Opportunity to work with state-of-the-art technologies and contribute to a foundational AI platform.