LogicMonitor says trust key to safe AI observability

Thu, 14th May 2026
Mark Tarre, News Chief

LogicMonitor argues that AI-led observability should be judged by its ability to take safe operational action, not simply provide insight. At the centre of that argument is trust, which it sees as the main barrier to wider adoption of autonomous IT operations.

Many organisations, it said, remain stuck in reactive incident response despite heavy spending on observability tools. Its Observability & AI Trends 2026 report suggests businesses already hold large volumes of telemetry data but still struggle to turn that information into operational decisions.

Karthik SJ, General Manager of AI at LogicMonitor, said the issue is no longer basic visibility across systems. The next stage for observability, he argued, depends on whether AI can move from identifying problems to taking action in live environments without creating unacceptable risk.

"Organisations today have more visibility into their systems than ever before; however, visibility alone doesn't resolve incidents," said SJ. "The real value of AI in observability comes when it can help teams move from understanding what is happening to safely acting on those insights in real time.

"The barrier is not a lack of data or technological capability, it is trust. For AI to move from analysing systems to actively operating them, IT leaders need confidence that automated decisions will be safe, transparent, and accountable. Trust mechanisms are what transform AI from a helpful assistant into a trusted operational actor."

Trust and control

A central part of that trust is explainability. If an AI system recommends an action, operators need to understand the reasoning behind that recommendation and how the conclusion was reached.

Without that visibility, automated remediation can appear opaque and become harder to troubleshoot. That, in turn, can make IT teams more reluctant to let software intervene directly in production systems.
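LogicMonitor does not describe a concrete interface for this, but the underlying idea is straightforward to sketch: a remediation recommendation that carries its supporting evidence and reasoning alongside the proposed action, so an operator can inspect why it was made before approving it. Everything below, including the names, is a hypothetical illustration rather than any vendor's API.

```python
from dataclasses import dataclass, field

# Illustrative only: a recommendation object that carries its own evidence
# and reasoning, so operators can audit why an action was proposed.
@dataclass
class RemediationRecommendation:
    action: str            # e.g. "restart payments-api"
    target: str            # affected service or resource
    confidence: float      # model's confidence estimate, 0.0-1.0
    evidence: list[str] = field(default_factory=list)  # triggering telemetry
    reasoning: str = ""    # human-readable explanation of the conclusion

    def explain(self) -> str:
        """Render the recommendation in a form an operator can audit."""
        lines = [
            f"Proposed action: {self.action} on {self.target}",
            f"Confidence: {self.confidence:.0%}",
            f"Reasoning: {self.reasoning}",
            "Supporting evidence:",
        ]
        lines.extend(f"  - {item}" for item in self.evidence)
        return "\n".join(lines)
```

The design point is that approval or rejection can be informed: the operator sees the telemetry and the inference behind a proposed fix, not just a verdict.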

The same concern extends to the growing operational weight placed on AI systems. As observability platforms shift from detecting anomalies to recommending or executing remediation steps, mistakes can carry broader consequences for live services and infrastructure.

"The lack of trust some companies may feel with AI isn't entirely irrational," said SJ. "Most organisations are still early in adopting AI for operational decision-making, so fear of the unknown remains strong. At the same time, the growing responsibility AI observability already carries in operational decisions brings real risks. When AI moves from analysing data to intervening in live systems, the consequences of mistakes become much more significant."

Guardrails are therefore needed to limit what actions AI can take automatically, require approval for higher-risk changes, and test decisions against operating policies. In practice, that means setting clear boundaries around where autonomous action is permitted and where human sign-off remains mandatory.
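One minimal way to express such guardrails, assuming a hypothetical policy layer rather than anything LogicMonitor ships, is an explicit risk classification per action plus an approval gate for anything high-risk or unknown:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"    # safe to automate once reliability is proven
    HIGH = "high"  # always requires human sign-off

# Hypothetical operating policy: which actions may run unattended.
POLICY = {
    "clear_cache": Risk.LOW,
    "restart_worker": Risk.LOW,
    "failover_database": Risk.HIGH,
    "scale_down_cluster": Risk.HIGH,
}

def execute_with_guardrails(action: str, run, request_approval) -> bool:
    """Run `action` only if policy allows it, routing high-risk
    actions through a human approval step first."""
    risk = POLICY.get(action)
    if risk is None:
        # Actions missing from the policy are never executed automatically.
        return False
    if risk is Risk.HIGH and not request_approval(action):
        return False
    run(action)
    return True
```

The important default here is that an unrecognised action is denied: the system fails closed rather than open.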

Gradual adoption

LogicMonitor described a staged path to autonomous IT operations rather than an immediate handover to software. In the early phase, AI acts as a monitoring and detection tool, using telemetry data such as metrics, logs, traces and infrastructure signals to identify anomalies.

From there, AI broadens into a recommendation role. At that stage, systems suggest possible responses based on historical incidents and patterns, while human operators retain authority to approve or reject those steps.

"As systems mature, AI begins to act more like an advisor," said SJ. "It not only detects problems; it also recommends potential solutions based on historical incidents and patterns. Human operators remain firmly in control, approving or rejecting these recommendations while benefiting from faster diagnosis and improved context. Organisations may then let AI perform certain safe actions automatically once it has demonstrated reliability."

Only at the most advanced stage does the model move towards partial autonomy, where AI can detect incidents, identify likely root causes and trigger remediation workflows without waiting for human approval. Even then, people should continue to oversee policies, monitor outcomes and set the limits of what automated systems are allowed to do.
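That staged path can be summarised as an autonomy level that gates what the platform is permitted to do at each phase. The levels and names below are a hedged reading of the description above, not LogicMonitor's terminology:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    DETECT = 1          # surface anomalies from metrics, logs and traces
    RECOMMEND = 2       # propose fixes; humans approve or reject
    ACT_SAFE = 3        # execute pre-approved low-risk actions automatically
    ACT_AUTONOMOUS = 4  # trigger remediation workflows without waiting

def may_execute(level: AutonomyLevel, action_is_low_risk: bool,
                human_approved: bool) -> bool:
    """Decide whether the platform may carry out an action
    at the current autonomy level."""
    if level <= AutonomyLevel.DETECT:
        return False                  # observe only
    if level == AutonomyLevel.RECOMMEND:
        return human_approved         # every action needs sign-off
    if level == AutonomyLevel.ACT_SAFE:
        return action_is_low_risk or human_approved
    return True                       # full autonomy, bounded by policy elsewhere
```

Even at the highest level, people remain responsible for setting the autonomy level and the policy behind it, consistent with the oversight role the model retains for human operators.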

The position reflects a wider debate in enterprise technology over how far businesses are willing to let AI make or carry out operational decisions. While automation has long been part of IT management, AI introduces a different level of judgement and raises questions about transparency, accountability and the handling of failure.

For observability vendors, the challenge is no longer only collecting and correlating data from distributed systems. It also involves persuading customers that AI can operate within clear boundaries and produce decisions that staff can inspect, understand and, where necessary, override.

"Dashboards and alerts have long helped organisations understand what is happening across their systems; however, as digital environments grow more complex and distributed, relying solely on human operators to interpret data and respond to incidents is becoming increasingly unsustainable," said SJ. "The future of observability will not be defined by more sophisticated dashboards or faster alerts. It will be defined by an organisation's ability to operationalise AI safely, moving from insight to intervention, letting systems be observed and intelligently operated," noted SJ.