Agentic AI and Data Protection: An Overview of the ICO’s New Tech Futures Report

By Fernanda Carvalho Dias 

Investment in agentic AI is increasing rapidly, attracting interest and scrutiny from regulators, innovators, and technology adopters. On 8 January 2026, the Information Commissioner’s Office (ICO) released its Tech Futures: Agentic AI report. The report aims to build an evidence base on the technology’s current state and associated data protection risks, while encouraging privacy-led innovation (ICO Tech Futures: Agentic AI Report, 2026).

ICO publications have often signalled regulatory priorities and operational expectations. In emerging areas like distributed ledger technologies, the ICO’s guidance has set design-phase compliance expectations and prompted active industry engagement (ICO Guidance on Distributed Ledger Technologies, 2024).

What Is Agentic AI?

Definitions of agentic AI vary across industry and research, and organisations often use different terms for marketing purposes. For the purposes of the ICO Tech Futures Report, agentic AI refers to computing systems that integrate large language models (LLMs) or other foundation models with additional tools such as databases, memory, operating systems, and interfaces for interacting with the world. This integration, often described as “scaffolding”, enables systems to move beyond static responses and perform tasks with greater autonomy and adaptability.

From Traditional Agents to Agentic Systems

In computing, an agent is software designed to carry out processes or tasks with varying levels of automation. Historically, agents were specialised and operated within pre-set limits, such as automatic stock management. Recent advances in AI have expanded the autonomy and range of tasks that agents can perform. When combined with LLMs, these systems gain capabilities that allow them to interpret context, use natural language, and execute open-ended tasks. This evolution marks a fundamental shift from narrow, rule-based agents to systems capable of dynamic interaction and reasoning-like behaviour.

Core Capabilities of Agentic AI

Agentic AI systems typically exhibit four interrelated capabilities:

  • Perception: Ability to process diverse inputs, including natural language and unstructured data not originally designed for machine interpretation.

  • Planning: Capacity to generate plans, break tasks into sub-tasks, and check for errors, resembling reasoning processes.

  • Action: Execution of tasks through tool use, code generation, and interaction with humans or other agents.

  • Learning and Memory: Adaptive decision-making that incorporates feedback, learns user preferences, and adjusts future actions based on past outcomes.

These capabilities enable agentic systems to autonomously pursue goals, adapt to new contexts, and exhibit reasoning-like behaviour, even if their reasoning is not equivalent to human cognition.
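To make these four capabilities concrete, the sketch below shows how they typically compose into a single control loop around a foundation model. It is a minimal illustration only: the class and method names are hypothetical, and the ICO report does not prescribe any particular architecture.

```python
# Minimal, illustrative agentic loop: perception -> planning -> action -> memory.
# All names are hypothetical; in a real system each step would be backed by an
# LLM or other foundation model plus external tools ("scaffolding").

from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)  # learning and memory

    def perceive(self, raw_input: str) -> str:
        # Perception: normalise diverse, unstructured input for the model.
        return raw_input.strip().lower()

    def plan(self, observation: str) -> list[str]:
        # Planning: decompose the goal into sub-tasks (trivially, here).
        return [f"interpret: {observation}", f"work towards: {self.goal}"]

    def act(self, step: str) -> str:
        # Action: execute a sub-task, e.g. by calling a tool or another agent.
        return f"executed {step!r}"

    def run(self, raw_input: str) -> list[str]:
        observation = self.perceive(raw_input)
        results = [self.act(step) for step in self.plan(observation)]
        self.memory.extend(results)  # past outcomes shape future decisions
        return results


agent = Agent(goal="summarise the report")
print(agent.run("Fetch and condense the key findings."))
```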

Variables Shaping Agentic AI Development

The development of agentic AI depends on a range of technical, social, and governance variables. Key factors include the degree of autonomy, generality, and controllability of agents, their ability to handle underspecified tasks, and the integration of privacy-enhancing technologies. Adoption will also be shaped by accessibility, market concentration, and public trust, alongside regulatory alignment and liability mechanisms.

To improve performance, agentic systems must move beyond scripted responses and interpret context dynamically, a challenge that has prompted research into neuro-symbolic AI. This approach combines connectionist models with symbolic reasoning to enable more adaptive behaviour, but it raises concerns about transparency and the growing demand for contextual data. While agentic AI outputs may appear reasoned, their use introduces significant implications for privacy and accountability.

Risks Arising from Agentic AI

While the ICO identifies a wide range of risks in its report on agentic AI, many of these concerns are not entirely new. Issues such as explainability, transparency, and data privacy have already been extensively discussed in the context of large language models (LLMs). These risks largely stem from the underlying generative architecture and persist when LLMs are integrated into agentic systems. However, certain risks highlighted by the ICO warrant distinct treatment, either because they significantly scale the potential harm or because they introduce qualitatively new challenges compared with standalone LLMs. Examples include the complexity of assigning controller and processor responsibilities across multi-agent supply chains, the autonomy that enables agents to act beyond anticipated purposes, and the emergence of opaque agent-to-agent communication. These features create governance and accountability problems that cannot be addressed by simply extending existing frameworks for LLMs.

Controller and Processor Responsibilities

The ICO highlights the difficulty of assigning controller and processor roles across agentic AI supply chains. These systems involve multiple actors and integrations, creating uncertainty about accountability. Data protection law relies on clear role definitions to allocate obligations. In agentic ecosystems, liability may shift dynamically, requiring new governance structures rather than incremental adjustments.

Rapid Automation and Automated Decision-Making

Agentic AI accelerates automation of complex tasks, increasing reliance on automated decision-making (ADM). Existing legal provisions require transparency and human intervention for significant decisions. However, the autonomy and speed of agentic systems reduce opportunities for oversight. This is not only an amplification of ADM risks but a structural challenge, as decisions become iterative and adaptive across distributed agents.

Purpose Limitation and Data Minimisation

Agentic systems often operate with broad purposes to enable open-ended tasks. This conflicts with the principle of purpose limitation, which demands specificity. 

Data minimisation can also be harder to achieve because performance depends on extensive contextual data. Stakeholders expressed differing views on data access for agentic AI: some suggested gradually expanding access as systems prove reliable, while others supported broader initial access with strong controls, noting that context improves performance and personalisation. Broad access, however, sits uneasily with data minimisation, which requires processing only what is necessary for a defined purpose. The ICO states that organisations must avoid granting access “just in case” and must instead justify every use of personal data. While limited access can restrict functionality, compliance will depend on tools that enforce controlled, purpose-driven access, following principles such as least privilege.
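One way to picture “least privilege” for agent data access is a gate that denies every read by default and releases a field only when a declared purpose justifies it. The sketch below is an illustrative pattern only; the purpose registry and field names are hypothetical and do not come from the ICO report.

```python
# Illustrative least-privilege gate for agent data access: deny by default,
# and release each field only for an explicitly justified purpose.
# The purposes and field names below are hypothetical.

ALLOWED_FIELDS_BY_PURPOSE = {
    "book_travel": {"name", "passport_number", "travel_dates"},
    "schedule_meeting": {"name", "calendar_availability"},
}


def fetch_personal_data(record: dict, purpose: str, fields: set[str]) -> dict:
    allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose, set())
    refused = fields - allowed
    if refused:
        # "Just in case" requests are rejected rather than silently granted.
        raise PermissionError(f"{sorted(refused)} not justified for {purpose!r}")
    return {f: record[f] for f in fields if f in record}


record = {"name": "A. N. Other", "passport_number": "X123", "calendar_availability": "Mon-Wed"}
print(fetch_personal_data(record, "schedule_meeting", {"name", "calendar_availability"}))
```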

Transparency and Explainability

Transparency declines as agentic systems exchange information autonomously across multiple stages, agents, and organisations. Such systems could develop new ways of using personal information: processing data beyond what developers anticipated, pursuing objectives in unexpected ways, seeking additional information without consent, or repurposing data collected for other purposes. The rise of agent-to-agent communication, often invisible to humans, further complicates understanding how and where data is processed, making it harder to ensure human intervention and uphold data rights.

Accuracy and Memory

Agentic AI relies on memory for adaptation, which amplifies the impact of inaccurate data. Inaccurate data, whether acquired or hallucinated, can influence multiple decisions and cause cascading errors. The complexity of correcting these inaccuracies depends on whether they reside in short-term memory or are embedded in the model, ranging from simple resets to major architectural fixes. This risk is amplified by the high volume of actions and decisions agentic systems perform, which reduces opportunities for human oversight and correction, potentially leading to significant harm.

Oversight and Multi-Agent Complexity

Oversight becomes harder when agents interact across organisational and external systems. Interoperability gaps and inconsistent security standards create systemic vulnerabilities. Multi-agent systems can lead to compound privacy losses and accountability failures. Monitoring emergent behaviours requires dedicated governance mechanisms, such as standalone monitoring agents.
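A “standalone monitoring agent” can be pictured as an auditable chokepoint on the channel between agents, recording exchanges that would otherwise be invisible to humans. The sketch below is illustrative only; the function names are hypothetical and the ICO does not prescribe a specific design.

```python
# Illustrative monitoring layer for agent-to-agent traffic. Routing messages
# through one auditable chokepoint makes otherwise invisible exchanges
# reviewable by humans. All names here are hypothetical.

import json
import time
from typing import Callable

AuditLog = list[dict]


def monitored(send: Callable[[str, str], str], log: AuditLog) -> Callable[[str, str], str]:
    def wrapper(recipient: str, message: str) -> str:
        reply = send(recipient, message)
        log.append({"ts": time.time(), "to": recipient,
                    "message": message, "reply": reply})
        return reply
    return wrapper


def fake_send(recipient: str, message: str) -> str:
    # Stand-in for a real inter-agent transport.
    return f"{recipient} acknowledges: {message}"


log: AuditLog = []
send = monitored(fake_send, log)
send("billing-agent", "retrieve invoice totals for Q3")
print(json.dumps(log, indent=2))
```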

Security Threats

Agentic AI introduces new attack surfaces. Malicious actors could distort goals, manipulate reasoning, or poison data in memory. These threats target the logic of decision-making rather than infrastructure, requiring security strategies that focus on behavioural integrity. Existing frameworks will need significant adaptation to address these risks.

Business Models and Data Concentration

Personal assistant agents require extensive access to personal data, creating concentration risks and increasing exposure to breaches. Embedding personal information within models adds extraction risks if models are shared or compromised. These risks arise from design and business choices, not only technical architecture, and demand regulatory scrutiny of commercial practices.

Opportunities for Privacy-Positive Innovation

The ICO identifies opportunities for embedding privacy by design into agentic AI systems. Organisations can incorporate governance tools, benchmarking frameworks, and privacy management agents to ensure compliance and accountability. There are also potential opportunities for innovations that actively support data protection, privacy, and information rights. For example, organisations can design agentic systems that automate the exercising of information rights, such as enabling individuals to access, rectify, or erase their data through intelligent agents. These approaches align with the ICO’s emphasis on responsible innovation as a driver of economic growth and trust. By drawing attention to these opportunities, the ICO encourages developers to integrate privacy-enhancing features into the architecture of agentic AI rather than treating compliance as an afterthought.
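As a sketch of what automating information rights might look like, the dispatcher below routes access, rectification, and erasure requests against a toy data store. It is illustrative only: the store and request shape are hypothetical, and a production agent would also authenticate the requester and log every action for accountability.

```python
# Illustrative dispatcher for information-rights requests (access,
# rectification, erasure) against a toy in-memory store. Everything here
# is hypothetical; it only shows the shape of such an agent's tooling.

STORE = {"user-42": {"email": "old@example.com", "city": "Leeds"}}


def handle_rights_request(user_id: str, right: str, payload: dict | None = None) -> dict:
    record = STORE.get(user_id, {})
    if right == "access":
        return {"status": "ok", "data": record}
    if right == "rectify" and payload:
        record.update(payload)  # mutates the stored record in place
        return {"status": "ok", "data": record}
    if right == "erase":
        STORE.pop(user_id, None)
        return {"status": "ok", "data": {}}
    return {"status": "refused", "reason": f"unsupported right {right!r}"}


print(handle_rights_request("user-42", "rectify", {"email": "new@example.com"}))
print(handle_rights_request("user-42", "erase"))
```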

Conclusion

The ICO’s Tech Futures 2026 report clearly signals its view that agentic AI represents a turning point in the evolution of artificial intelligence and requires special attention. While many risks echo those associated with large language models, the integration of autonomy, memory, and multi-agent interaction introduces governance challenges that are both deeper and broader in scope. These systems do not simply scale existing issues; they create new dynamics of accountability, transparency, and security. At the same time, the report highlights opportunities for innovation that align with data protection principles, such as agents that automate the exercise of information rights. 

References

Information Commissioner’s Office (2026), ICO Tech Futures: Agentic AI Report.

Information Commissioner’s Office (2024), Guidance on Distributed Ledger Technologies.