Press release

Orkes Brings Generative AI and Human-in-the-Loop Capabilities to Microservices and Workflow Orchestration

Sponsored by Businesswire

Orkes, the enterprise microservices and workflow orchestration platform for Netflix Conductor used by companies like United Wholesale Mortgage, Foxtel, and Normalyze, has released new capabilities that allow organizations to integrate generative AI models and vector databases directly into their microservices infrastructure and business workflows.

AI has made huge strides in what trained models can do with the release of LLMs such as the GPT series (popularized by ChatGPT) and Llama 2. Companies large and small see the potential of these innovations to positively impact their business, and are exploring how generative AI can power greenfield applications and enhance their existing business flows.

“The biggest hurdle for our customers using generative AI technologies has been bridging the gap between having Gen-AI models and using them for actual enterprise-grade business workflows,” said Viren Baraiya, CTO at Orkes. “We built Orkes AI Orchestration to provide enterprises an easy way for model inferences and vector DB tasks to be woven into their business logic. Companies get all the scalability, observability and security aspects inherited from Orkes Conductor so their Gen-AI powered applications are production-ready from Day 1.”

With the new Orkes AI Orchestration product, enterprises can:

  • Easily add language models or ML inferences into their existing or new workflows by choosing a model provider from a collection of pre-built integrations, including Azure OpenAI, OpenAI, and Vertex AI (see the sketch after this list).

  • Integrate with vector databases from providers such as Pinecone and Weaviate.

  • Create role-based access controls for the LLM/ML and vector DB integrations so specific teams in the company can access information and build applications.
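
To make the first two capabilities concrete, the sketch below shows how a vector-search step could be expressed as a task inside a Conductor workflow definition, written in Python as plain data. The task type string, parameter keys, and integration names are assumptions inferred from the product description rather than documented Orkes APIs; the LLM completion step it feeds is sketched further below.

```python
# Hypothetical vector-DB search step in a Conductor workflow definition.
# Task type and inputParameters keys are assumptions based on the product
# description, not the documented Orkes API.
search_quotes_task = {
    "name": "search_supplier_quotes",
    "taskReferenceName": "search_quotes_ref",
    "type": "LLM_SEARCH_INDEX",                    # assumed vector-search system task
    "inputParameters": {
        "vectorDB": "pinecone_quotes",             # assumed name of a configured integration
        "index": "supplier-quotes",
        "embeddingModelProvider": "azure_openai",  # assumed integration name
        "query": "${workflow.input.question}",     # standard Conductor expression syntax
    },
}

# Downstream tasks can reference the matches, e.g.
# "${search_quotes_ref.output.results}", as context for an LLM completion step.
```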

Built with enterprise needs in mind, AI Orchestration allows users to securely log and audit every interaction with a model at any time to ensure quality and safety. Orkes provides the security of dedicated clusters that can be run as a fully cloud-hosted model or on the customer’s cloud or private data centers, ensuring all data and compute are retained by the enterprise.

Orkes is also launching Human Task, a companion feature for inserting human intervention into AI workflows to ensure people are involved at stages where direct input or oversight is needed. Human Task enables customers to deploy powerful scenarios where AI and human actions seamlessly complement each other, such as co-pilots that do the heavy lifting in business flows to assist knowledge workers.
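
As a minimal sketch of such a checkpoint, the fragment below uses Conductor's HUMAN task type, which pauses a workflow until the task is completed externally (for example, by a reviewer through a UI). The task names and the shape of the reviewer's output are illustrative assumptions.

```python
# Hypothetical human-in-the-loop checkpoint in a Conductor workflow.
# A HUMAN task pauses the workflow until it is completed from outside
# the workflow engine; names and payload shape here are illustrative.
review_step = {
    "name": "review_ai_summary",
    "taskReferenceName": "review_ref",
    "type": "HUMAN",                              # waits for external/human completion
    "inputParameters": {
        # Assumed convention: surface the AI output to the reviewer.
        "summary": "${summarize_ref.output.result}",
    },
}

# A downstream SWITCH task could branch on the reviewer's decision,
# e.g. on "${review_ref.output.approved}".
```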

“Human Task takes advantage of critical junctures in AI workflows that benefit immensely from human insight,” said Boney Sekh, co-founder at Orkes. “We envision a world where AI doesn’t replace humans, but empowers them. Imagine a scenario where AI takes care of the groundwork, and humans step in for nuanced decisions, like underwriters in mortgage applications. Our combined approach with generative AI paves the way for co-pilots in various sectors, allowing for deeper understanding and more informed decisions based on AI-generated insights.”

Using Orkes AI Orchestration and Human Task, prompt engineers (or developers who also build and tune prompts) can use an intuitive visual interface to create, test, tune, and securely share prompts that allow humans to be involved at various stages of a workflow. An example would be building a prompt that generates insights across multiple email quotes received from suppliers. The prompt engineer can select which models the prompt can be used with, insert placeholders that are wired up in a workflow and substituted dynamically at runtime, test the prompt during development to see how models respond, and finally share it with the right teams to use in the workflows they build. End users can then review the AI-generated summary and respond to quotes more efficiently, saving hours of time on just this one task.
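
As a hedged illustration of the placeholder idea, the snippet below defines a prompt template with ${...} placeholders and a small local helper to preview how it could render at test time. The template wording, placeholder syntax, and helper function are assumptions for illustration, not the Orkes prompt-library API.

```python
import re

# Illustrative prompt template with placeholders that a workflow would
# substitute at runtime; wording and ${...} syntax are assumptions.
QUOTE_SUMMARY_PROMPT = (
    "You are assisting a procurement team. Summarize the supplier quotes "
    "below and highlight the lowest total cost and the shortest lead time.\n\n"
    "Quotes:\n${quotes}\n\n"
    "Buyer's question: ${question}"
)

def render_prompt(template: str, variables: dict) -> str:
    """Replace ${name} placeholders with values, for local testing only."""
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

preview = render_prompt(
    QUOTE_SUMMARY_PROMPT,
    {
        "quotes": "Acme: $12,000, 3 weeks\nGlobex: $11,400, 5 weeks",
        "question": "Which supplier should we respond to first?",
    },
)
print(preview)
```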

When building new workflows or updating existing ones, developers can now simply pick from multiple new system tasks (i.e., tasks that run on the Orkes Conductor cluster itself and do not need external workers) that interact with models and vector DBs. One example is the LLM Text Complete system task: developers can associate it with an existing prompt from the prompt library, choose which of the integrated models to call, decide which workflow variables should be wired up to the prompt’s placeholders, and then add the task at the right place in the workflow definition as required by the business logic. At runtime, the model is called with the generated prompt and its results can be used further down in the workflow.
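
Putting these pieces together, here is a hedged sketch that registers a one-task workflow containing such an LLM Text Complete step through Conductor's standard workflow-metadata endpoint. The task type string, parameter keys, integration and prompt names, and cluster URL are placeholders and assumptions, not confirmed Orkes API details.

```python
import requests

CONDUCTOR_API = "https://your-cluster.example.com/api"  # placeholder cluster URL

workflow_def = {
    "name": "supplier_quote_insights",
    "version": 1,
    "schemaVersion": 2,
    "inputParameters": ["quotes", "question"],
    "tasks": [
        {
            "name": "summarize_quotes",
            "taskReferenceName": "summarize_ref",
            "type": "LLM_TEXT_COMPLETE",        # assumed system task type string
            "inputParameters": {
                "llmProvider": "azure_openai",  # assumed integration name
                "model": "gpt-4",
                "promptName": "quote_summary",  # prompt saved in the prompt library
                "promptVariables": {            # wire workflow inputs to placeholders
                    "quotes": "${workflow.input.quotes}",
                    "question": "${workflow.input.question}",
                },
            },
        }
    ],
    "outputParameters": {"summary": "${summarize_ref.output.result}"},
}

# Register the definition with the cluster via the standard Conductor
# metadata API (authentication headers omitted for brevity).
resp = requests.post(f"{CONDUCTOR_API}/metadata/workflow", json=workflow_def)
resp.raise_for_status()
```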

Orkes AI Orchestration and Human Task can be added to any of the Conductor clusters created and managed in Orkes Cloud. To learn more about AI Orchestration and sign up for the waitlist, please visit: https://orkes.io/ai.

About Orkes

Orkes is the premier microservices and workflow orchestration platform redefining the way companies build and scale complex software applications with high reliability. Founded in 2021 by seasoned product and engineering leaders from Google, Netflix, AWS, and Microsoft Azure, Orkes provides an enterprise-grade, cloud-hosted version of the popular Netflix Conductor open-source platform used by Fortune 100 companies and international corporations. Backed by Battery Ventures and Vertex Ventures US, the company is headquartered in Cupertino, CA. Learn more at www.orkes.io.