Reprogram how humans and intelligent systems grow together in CloudWalk's experimental playground where People, AI, and Data collide. Design and prototype AI copilots, LLM-powered flows, and smart systems that boost how we learn, hire, evolve, and collaborate. Dive into People data and make it speak through dashboards, bots, agents, and stories. Build experimental tools that rethink how we run people ops. Prototype fast and iterate.
Job listings
Build sophisticated agents from scratch using whatever technology gets the job done: design multi-agent systems that communicate, collaborate, and execute complex operations with minimal human intervention; develop and integrate the Model Context Protocol (MCP) to elevate the agents' capabilities beyond current limitations; and identify and obliterate workflow inefficiencies through intelligent automation that actually works.
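For context on what the MCP piece can look like, here is a minimal sketch of a tool server built with the official MCP Python SDK's FastMCP helper. The "workflow-tools" name and the summarize_ticket tool are hypothetical placeholders, and exact module paths may vary between SDK versions.

```python
# Hypothetical MCP tool server sketch (assumes the official `mcp` Python SDK;
# module paths and decorator names may differ between SDK versions).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("workflow-tools")

@mcp.tool()
def summarize_ticket(ticket_id: str) -> str:
    """Return a short summary of an internal ticket (stubbed for illustration)."""
    # A real implementation would call an internal API; this just echoes the ID.
    return f"Summary for ticket {ticket_id}: (stub)"

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable agent can discover and call it.
    mcp.run()
```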
Leverage AI to detect bugs and security vulnerabilities and to ship features rapidly. Empower our product, design, and engineering teams to bring AI to our customers. Iterate quickly to keep up with the pace of change in AI research, ensuring the best experience possible within the crypto industry.
Develop AI-powered digital twins and intelligent assistants. Utilize LangChain for conversational AI and Gradio for UI development. Design models for real-time interaction and personalization. Research and integrate memory and contextual learning mechanisms.
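As a rough illustration of the LangChain-plus-Gradio combination named above, here is a minimal chat-assistant sketch. It assumes the `langchain-openai` package, a Gradio version that supports messages-style chat history, and an OPENAI_API_KEY in the environment; the system prompt and model name are placeholders.

```python
# Minimal sketch: LangChain chat model wired to a Gradio chat UI.
import gradio as gr
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def respond(message, history):
    # Rebuild the conversation so the model keeps context across turns.
    msgs = [("system", "You are a helpful assistant for an internal digital-twin prototype.")]
    msgs += [(turn["role"], turn["content"]) for turn in history]
    msgs.append(("user", message))
    return llm.invoke(msgs).content

# ChatInterface handles the message box, history, and streaming UI plumbing.
gr.ChatInterface(respond, type="messages").launch()
```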
Support cargo.one in building intelligent systems that streamline workflows, enhance decision-making, and unlock automation across the business. You'll operate independently, often owning a problem from initial conversation to deployed solution. You'll engage directly with non-technical stakeholders to scope needs, explore opportunities, and rapidly deliver solutions using Python, AI/LLMs, and lightweight web tools or APIs.
This role will play a critical part in Tyndale's mission to harness data for advanced AI and Machine Learning applications. You will collaborate with data scientists, data engineers, and business stakeholders to own the full lifecycle of our ML models and Generative AI applications. This is an exciting opportunity to shape the foundation of our AI and Machine Learning efforts, driving impactful insights and innovations.
As an AI Engineer at Entera, you'll contribute to our AI and backend stack. You'll be responsible for deploying tools to our internal AI architecture, particularly leveraging Large Language Models (LLMs) via frameworks & tools such as OpenAI-compatible APIs, LangChain, and LangSmith. You'll develop and integrate AI solutions utilizing our internal software stack, primarily using Python, connecting data from internal APIs to hosted or internal LLMs.
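A minimal sketch of the pattern this listing describes: pull a record from an internal API and ask a hosted LLM about it through an OpenAI-compatible endpoint. The internal URL, environment variables, and model name are hypothetical placeholders, not Entera's actual stack.

```python
# Sketch: internal API -> OpenAI-compatible LLM endpoint (all names are placeholders).
import os
import requests
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1"),
    api_key=os.environ["LLM_API_KEY"],
)

# Fetch a record from a hypothetical internal service.
record = requests.get("https://internal.example.com/api/properties/123", timeout=10).json()

resp = client.chat.completions.create(
    model=os.environ.get("LLM_MODEL", "gpt-4o-mini"),
    messages=[
        {"role": "system", "content": "Summarize property records for analysts."},
        {"role": "user", "content": f"Summarize this record:\n{record}"},
    ],
)
print(resp.choices[0].message.content)
```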
In this role, you will design and implement cutting-edge Agentic AI solutions, including autonomous AI agents powered by large language models (LLMs). Your work will directly contribute to transforming business processes and enabling intelligent automation across the enterprise.
Develop and deploy AI/ML models and agentic workflows using frameworks like LangChain, LangGraph, or Semantic Kernel. Integrate LLMs (e.g., Claude, Gemini, Mistral) via APIs to enhance automation and decision-making. Implement Retrieval-Augmented Generation (RAG) for querying documentation (e.g., Jira, Confluence). Build multi-agent coordination patterns (e.g., planner-executor, supervisor-worker). Automate workflows by integrating with Jira, Bitbucket, Git, and MS Teams.
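As one concrete example of the RAG piece, here is a minimal sketch using LangChain's in-memory vector store and an OpenAI chat model. The documentation snippets, model names, and prompt are illustrative placeholders; a real pipeline would index exported Jira or Confluence pages instead.

```python
# Minimal RAG sketch: embed a few documentation snippets, retrieve the most
# relevant ones for a question, and answer grounded in that context.
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

docs = [
    "Deployments to staging run automatically on every merge to main.",
    "Production releases require a change ticket approved in Jira.",
]

# Index the snippets, then retrieve the top matches for the question.
store = InMemoryVectorStore.from_texts(docs, embedding=OpenAIEmbeddings())
question = "What do I need before releasing to production?"
context = "\n".join(d.page_content for d in store.similarity_search(question, k=2))

llm = ChatOpenAI(model="gpt-4o-mini")
answer = llm.invoke(
    f"Answer using only this context:\n{context}\n\nQuestion: {question}"
)
print(answer.content)
```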