The hidden costs of convenience: the potential risks of AI and how professionals should respond

We’re in the middle of a productivity revolution.

Large Language Models (LLMs) and AI assistants such as Copilot, ChatGPT and the embedded “Ask AI” features in business apps can draft letters, compare documents, summarise cases and automate repetitive tasks in seconds.

Busy professionals such as solicitors, accountants and consultants are increasingly discovering the transformative speed of AI.

However, that speed comes with potential trade-offs that need to be understood.

In ongoing conversations, our cybersecurity consultants have highlighted a number of practical and recurring risks that you need to be aware of.

Data exposure and uncertain data flows

Most mainstream AI services do not fully disclose how user inputs are stored or processed, and those inputs may be reused without a user’s knowledge or consent.

Anything that is pasted into an AI prompt might be logged, retained, indexed or used to re-train models.

This means that you should never input confidential or sensitive data into an AI system that is not specifically designed for that purpose, as it may lack the provisions to keep the data safe.

If you do consult an AI or LLM, rephrase the information and ensure that anything you input is stripped of identifying details to avoid breaching your GDPR obligations.
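As an illustration of this stripping step, the sketch below replaces a few common identifiers with placeholder tokens before text is sent to an AI service. The patterns and placeholder names are our own assumptions, and regex matching alone will miss names, addresses and case references, so treat this as a starting point rather than a complete anonymisation solution.

```python
import re

# Illustrative only: simple regex-based redaction before sending text to an AI service.
# Real anonymisation is harder (names, addresses, case numbers) and may need dedicated tooling.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?44|0)\d[\d\s-]{8,12}\d\b"),
    "[NI_NUMBER]": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace common identifiers with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Please summarise the email from jane.doe@example.com (call 020 7946 0958)."
print(redact(prompt))
```

A human review of the redacted text before submission remains essential, since no automated filter will catch every identifier.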

Lack of transparency and provenance (hallucinations)

Generative models are known to “hallucinate.”

This is when a model produces information that is factually incorrect but presents it in a compelling or convincing way.

As such, you need to independently verify any information that is provided by an AI or LLM to ensure that it has a basis in reality.

Failure to do so can lead to professional embarrassment and potential legal repercussions.

Operational security and third-party dependencies

Organisations often mix corporate-managed systems, such as Microsoft 365, with multiple third-party cloud products such as LEAP, HubDoc and payroll platforms.

AI integrations can increase the attack surface or inadvertently expose credentials and documents.

A breach or misconfiguration in one system can cascade if sufficient protocols are not in place.

If an external provider’s integration or AI tooling ingests firm data, control and auditability can be lost.

The best way to handle this is by limiting the scope of AI services to ensure that you retain control over the data being accessed.

Where available, you should use tenant-isolated versions of AI services.

Over-reliance, impersonation and social engineering

AI can write highly persuasive emails and imitate tone, which, while useful for streamlining communications, does increase the risk of fraudulent communications.

Compromised personal accounts combined with AI-generated content make impersonation scams more convincing.

Social engineering is a much easier way for cybercriminals to access sensitive data, as it means they do not have to work to penetrate firewalls or encryption.

Instead, they rely on users granting access, and this can lead to major reputational and financial damage.

It is vital that all teams commit to awareness training, learn cybersecurity best practices and remain sceptical of unusual requests.

Privacy, regulatory and compliance uncertainty

Many jurisdictions are still catching up with AI regulation, and this can affect how you are permitted to use AI systems.

Using AI without understanding legal obligations could breach client confidentiality rules or data protection laws and leave you at risk of legal repercussions.

To avoid this, you should conduct a detailed risk assessment before adopting any AI system.

It is wise to seek professional legal advice and support to ensure that you are not risking non-compliance.

Cost and false sense of protection

Enterprise-grade “isolated tenant” AI solutions can reduce data leakage risk but may be expensive and potentially prohibitive for small firms.

It is not uncommon for these costs to push firms towards consumer-facing AI services rather than more secure, bespoke systems.

Using consumer-facing tools increases the risk of exposure, while prohibitive costs can deter adoption entirely.

If you do elect to use consumer-facing AI, be sure that you have weighed up the cost versus the risk to determine whether it is truly the right move.

Auditability and incident response gaps

Organisations may lack monitoring that flags unusual data access or mass downloads.

They may also not be equipped to adequately complete audits of AI-assisted document generation and sharing.

If something goes wrong, a lack of logs and governance makes it more challenging to respond to the situation.

To avoid this becoming an issue, you should enable logging and DLP (Data Loss Prevention) on core platforms (SharePoint, OneDrive, email).

Alongside this, implement alerts for large downloads and unusual sharing patterns so that you can respond as soon as they happen.
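The alerting step above can be sketched in a few lines. The event format, threshold and function names below are illustrative assumptions; in practice you would query your platform’s audit logs (for example, the Microsoft 365 unified audit log) rather than an in-memory list.

```python
from collections import Counter

# Illustrative sketch: flag users whose download count within a time window
# exceeds a threshold. The threshold should be tuned against your own baseline.
DOWNLOAD_THRESHOLD = 100  # files per hour (assumed figure)

def flag_mass_downloads(events, threshold=DOWNLOAD_THRESHOLD):
    """events: iterable of (user, action) tuples covering the last hour."""
    counts = Counter(user for user, action in events if action == "FileDownloaded")
    return [user for user, count in counts.items() if count > threshold]

# Example: alice downloads 150 files in an hour, bob downloads 5.
events = [("alice", "FileDownloaded")] * 150 + [("bob", "FileDownloaded")] * 5
print(flag_mass_downloads(events))  # flags alice only
```

A real alerting pipeline would also consider sharing-link creation and off-hours activity, but the principle is the same: define a baseline, then flag departures from it quickly.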

Throughout your time working with AI systems, maintain an incident response plan that accounts for AI-related scenarios.

Practical next steps for firms and teams

Before implementing any AI system, you should draft an AI usage policy.

This needs to establish what is allowed, what is forbidden, how to anonymise data and what the escalation routes are.

Staff will also need to be trained to use AI safely and effectively in accordance with GDPR requirements.

Be sure that you are using technical controls where possible.

Tenant-isolated AI should be used where feasible. You should also embed DLP rules and device compliance policies, and restrict default sharing settings in cloud collaboration tools.

Always be aware of which third-party services can access client data and treat each integration as a potential risk.

Regularly review the integration by compiling quarterly reports that flag overly permissive sharing links, non-compliant devices, or unusual download activity so you catch problems early.

If enterprise isolation is unaffordable, document the decision, implement compensating controls and reassess periodically.

Do not let convenience outpace caution

AI promises a host of benefits thanks to the speed at which it can handle mundane tasks.

However, it can only be used effectively by teams that understand and compensate for all of the associated risks.

People and policies need to keep pace with any technological implementation to ensure the business is not at risk of GDPR violations.

If you are responsible for governance, IT or compliance in a professional services firm, you should start by drafting a short, workable AI policy before implementation.

To do this, convene a short cross-functional review to map data flows and decide where AI is allowed, where it is not, and what mitigation measures are required.

Even small actions can be effective at preventing larger issues over time.

Our specialist team can support you with compliance awareness so that you do not get caught out. Contact John Dyne today for help.