Agents and “agentic AI” are all the rage now, eclipsing last year’s focus on artificial intelligence (AI) and generative AI (GenAI). They promise to automate work so that repetitive and tedious tasks get done with minimal effort and, perhaps, more consistently. In business software, a broad range of software providers are claiming agents to be a panacea that can improve performance and lower costs. They are alluring, with an almost unlimited number of potential use cases. Agents are an important evolutionary step in the design of business software, similar to the transition from procedural programming to event-driven programming that accelerated in the late 1980s. That paradigm shift enabled business software to be more flexible and responsive in replicating how work is performed. Adding agents to the considerable body of well-developed business applications will take the capabilities of these applications to the next level.
My colleague, David Menninger, recently wrote about the direction of agentic AI, and another, Matt Aslett, commented on the critical importance of data management as an enabler of agentic AI. Data is the key foundational element defining the capabilities and soundness of all AI, GenAI and agentic systems, especially those used in business applications that provide actionable workplace productivity opportunities.
Like most advances in business technology, the age of agents is being proclaimed ahead of their actual arrival and practical accessibility.
ISG Research defines agentic AI as software designed to execute business processes through autonomous actions, potentially controlling multiple processes and systems through the orchestration of one or more AI or algorithmically determined rules-based models, based on an understanding of the environment and the goals that should be achieved. Agents differ from predictive AI and GenAI in that they are fully assembled components that perform the entire scope of the sense-analyze-decide-act system paradigm. Predictive and generative AI are used by agents, but agents alone take actions autonomously based on data and their decision-making constructs. Agents differ from bots in that the latter are rules-based systems designed to perform specific tasks; unlike agents, they do not learn, adapt or make decisions on their own based on their interactions with their environment. Agentic systems may use bots and programmatic devices such as extract, transform and load (ETL), application programming interfaces (APIs) and robotic process automation (RPA) in their operation, but only agents produce actions autonomously.
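The sense-analyze-decide-act paradigm described above can be illustrated with a minimal sketch. All names here (the `Agent` class, the toy inventory scenario) are hypothetical, not taken from any vendor's framework; in practice the analyze step would be a predictive or generative model and the act step an API, RPA or bot integration.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Minimal sense-analyze-decide-act loop (illustrative names only)."""
    sense: Callable[[], dict]        # gather data from the environment
    analyze: Callable[[dict], dict]  # e.g., a predictive or generative model
    decide: Callable[[dict], str]    # choose an action toward the goal
    act: Callable[[str], None]       # execute autonomously (API, RPA, bot)
    log: list = field(default_factory=list)

    def step(self) -> str:
        observation = self.sense()
        analysis = self.analyze(observation)
        action = self.decide(analysis)
        self.act(action)
        self.log.append(action)
        return action

# Toy usage: an agent that reorders stock when inventory runs low.
inventory = {"widgets": 3}
orders = []
agent = Agent(
    sense=lambda: dict(inventory),
    analyze=lambda obs: {"low": obs["widgets"] < 5},
    decide=lambda a: "reorder" if a["low"] else "wait",
    act=lambda action: orders.append(action) if action == "reorder" else None,
)
agent.step()  # senses low inventory, decides to reorder, and acts
```

The point of the sketch is the closed loop: unlike a bot, which would simply execute a fixed rule when invoked, the agent repeatedly observes its environment and chooses its own action each cycle.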
Autonomous decision-making capabilities require ongoing training regimes to ensure that the actions and their outcomes are reliably consistent with intentions. This is easy to describe but often difficult to put into practice, especially for more sophisticated agents. And autonomous doesn’t mean that the work is completely hands-off. There may be decision nodes in the agent’s domain that exhibit insufficient certainty or where the potential negative consequences of a bad decision mean that the node will always require human review and decision.
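One common way to implement the human-review decision nodes just described is a simple routing gate: actions below a confidence threshold, or in domains where a bad decision is too costly, are always escalated to a person. The function name, threshold and flags below are illustrative assumptions, not a standard API.

```python
def route_decision(action: str, confidence: float, high_stakes: bool,
                   threshold: float = 0.90) -> str:
    """Route an agent's proposed action to autonomous execution or to
    human review. Threshold and flags are illustrative assumptions."""
    if high_stakes or confidence < threshold:
        return "human_review"  # this decision node is always escalated
    return "execute"

# A routine, high-confidence action runs autonomously...
routine = route_decision("approve_invoice", 0.97, high_stakes=False)
# ...while a consequential one is escalated regardless of confidence.
risky = route_decision("write_off_debt", 0.99, high_stakes=True)
```

Note that the `high_stakes` flag overrides confidence entirely, reflecting the point above that some nodes will always require human review no matter how certain the model is.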
Agents are described in various ways, which adds to confusion. One approach simply classifies them as either task-based or role-based. The former are designed to execute processes while the latter replicate an individual’s behavior in the context of their function and responsibilities when performing specific tasks with definable outcomes. One type is not inherently superior to the other except in the context in which they are being used. Task-oriented agents can be simpler and less expensive to deploy and operate. Role-based systems can be more capable of a broader range of autonomous actions but with more extensive training and the higher costs that come along with this. Hybrids, where a role-based agent orchestrates a set of task agents to perform a process, also will evolve to be an important part of the landscape.
An agent taxonomy to consider distinguishes them by their sophistication in training and the resulting scope of their abilities.
Yet another aspect of an agentic system is the degree to which it is capable of handling static and dynamic complexity in the work it performs. Complexity, in turn, will be correlated with the scope of data required to train, operate and maintain the models that agents will employ, and therefore their cost.
Static complexity is a function of the fixed characteristics of the work the agent performs, while dynamic complexity is a function of how often and to what degree those conditions change.
Like all predictive AI models, agents will require training, periodic testing and maintenance to ensure that they are operating properly. Methods for training agents and agentic systems are still at a very early stage. As with the parallel efforts needed to make AI and GenAI functional, enterprises need reliable data with which to build and train agents, and this will require them to take sustained and concerted steps to improve data management and data governance. ISG Research asserts that through 2026, one-third of enterprises will realize that a lack of AI and ML governance has resulted in biased and ethically questionable decisions.
Heuristic and large action models (LAMs) are two basic approaches for training and testing agentic systems and their elements. A heuristic (rule of thumb) approach learns through observation, seeing how work is performed in what context and under what conditions. Process intelligence techniques, where system logs are used to apply process modeling and analysis to identify how tasks are typically performed in a specific context, will be useful for heuristic training. These sorts of systems are likely to require a break-in period during which they will need a solid dose of human intervention to ensure that tasks are being executed properly, but they will become more autonomous over time. More sophisticated role-based agents are likely to benefit from the development of LAMs. As my colleagues have pointed out, unlike heuristic approaches that use machine learning to deal with closely bounded processes, LAMs are designed to make decisions and execute a series of actions across a variety of environments. Just as LLMs leverage vast datasets of text, LAMs are developed based on a large body of action and outcome data. At their core, LAMs are optimized for function calling as the mechanism for taking action.
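Function calling, the action mechanism mentioned above, generally works by having the model emit a structured request naming a function and its arguments, which the surrounding system then executes. The sketch below assumes a JSON call format and an illustrative `create_purchase_order` action; real LAM and LLM providers each define their own output schemas.

```python
import json

# Registry of callable business actions (names are illustrative).
def create_purchase_order(supplier: str, amount: float) -> str:
    return f"PO created for {supplier}: ${amount:,.2f}"

REGISTRY = {"create_purchase_order": create_purchase_order}

def dispatch(model_output: str) -> str:
    """Parse a model's structured function call and execute it.
    Real LAM outputs vary by provider; JSON here is an assumption."""
    call = json.loads(model_output)
    fn = REGISTRY[call["name"]]
    return fn(**call["arguments"])

# Simulated model output proposing an action:
result = dispatch('{"name": "create_purchase_order", '
                  '"arguments": {"supplier": "Acme", "amount": 1200.5}}')
```

Keeping the executable functions in an explicit registry, rather than letting the model invoke arbitrary code, is also where governance controls such as the human-review gates discussed earlier would attach.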
Agents are active systems, ones that dynamically interact with their environment using mechanisms to respond to inputs and act accordingly, in contrast to passive systems, which simply respond to external conditions without initiating changes.
Agents or agent-like systems are beginning to appear in a wide range of business software and will continue to proliferate at an accelerating pace over the next three years. Agentic AI can help any enterprise, but the readiness of an organization or enterprise to adopt the technology from specific software providers is likely to vary significantly. One reason is that an enterprise’s version of a specific application might be heavily customized, it may have insufficient clean data to support training and maintenance, or both. I recommend taking an informed approach to assessing and adopting such software and addressing any barriers to adoption. Many software providers are jumping on the agent bandwagon, using an expansive definition of agentic that does not measure up to our definition. Non-agentic technology can still be useful, but it’s necessary to understand its limitations along with its capabilities to make informed decisions about where and how to deploy software. I also recommend setting expectations appropriately. In these early years, agents, like humans, will have learning curves that require hands-on coaching to achieve full productivity.
The growing availability of useful, safe and affordable agentic AI for business will present executives with a significant opportunity over the next five years to achieve measurable performance gains and alter their competitive landscape. Understanding the technology and acting now to address foundational requirements must be a core piece of their enterprise’s strategy, regardless of how aggressively they plan to employ agents in their operations. The danger of not taking action now far outweighs the relatively small cost of being prepared.
Regards,
Robert Kugel