AI assistants are rapidly being deployed across financial services institutions, including banks, asset managers and the thousands of fintechs that handle compliance. Together, this is one of the most transformative changes to how people work that we’ve seen in decades. As we move from proofs of concept to enterprise-wide rollouts, it’s increasingly important that companies ensure these tools add value rather than create new problems.
The importance of embedded teams
This is something we understand at Synechron. I’m currently working with teams helping thousands of people across financial services to set up and work alongside AI assistants. And this is a huge adjustment – you can't expect people to adapt to this level of change overnight. We’ve found that organization-wide training – led by a team of AI experts embedded with business teams – is critical to ensuring that people understand exactly what these tools can and cannot do, so that they add value and remain safe. This is also why so many organizations are using trusted third-party providers, as this expertise often doesn’t exist in-house.
Companies need to establish what information is reliable
A comprehensive security framework must go beyond the basic disclaimers at the bottom of AI assistant responses. Companies need to establish what information is reliable, which means educating employees on the differences between secure internal datasets and open internet sources. Employees also need to know how to fact-check outputs to mitigate the risk of model hallucination, and to be aware of ethical and regulatory issues. For financial firms, it’s also vital to work inside controlled environments, especially when dealing with private or sensitive data.
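To make the distinction between vetted internal data and everything else concrete, here is a minimal sketch of how an internal assistant could answer only from approved datasets and attach the source to every response so employees can fact-check it. The dataset names and the `retrieve` and `generate` helpers are hypothetical placeholders, not a real product's API.

```python
# Hypothetical illustration: route queries only to approved internal datasets
# and attach provenance to every answer so it can be fact-checked.

APPROVED_SOURCES = {"policy_docs", "regulatory_filings", "internal_wiki"}

def answer_with_provenance(query: str, source: str, retrieve, generate) -> dict:
    """Answer a query only from an approved internal dataset.

    `retrieve` and `generate` stand in for whatever retrieval and
    model-call functions an organization actually uses.
    """
    if source not in APPROVED_SOURCES:
        raise ValueError(f"'{source}' is not an approved internal dataset")

    context = retrieve(query, source)      # pull vetted documents only
    response = generate(query, context)    # the model sees internal data only
    return {
        "answer": response,
        "source": source,                  # provenance for fact-checking
        "note": "Verify against the cited internal source before acting on this.",
    }
```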
From a security and privacy point of view, there are valid concerns about using generative AI tools at work. As with the adoption of cloud services, we must ensure data remains secure in transit and at rest. Companies must know precisely where their data is going – is it a secured cloud environment or an open public system like ChatGPT? The lack of transparency around how data gets ingested, processed and used by these AI ‘black box’ models is a big concern for some organizations.
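One simple control that follows from this is an allowlist of approved destinations, so prompts can only be sent to enterprise-controlled endpoints rather than open public services. The sketch below assumes hypothetical hostnames and a generic `post` function; it is an illustration of the idea, not a specific vendor's integration.

```python
# Hypothetical illustration: block prompts from leaving approved,
# enterprise-controlled environments. Hostnames are examples only.
from urllib.parse import urlparse

APPROVED_ENDPOINTS = {
    "my-company.openai.azure.com",   # example: private cloud deployment
    "llm.internal.my-company.com",   # example: on-premises model behind the firewall
}

def send_prompt(endpoint_url: str, prompt: str, post) -> str:
    """Send a prompt only if the destination host is on the allowlist.

    `post` stands in for whatever HTTPS client the organization uses,
    so data remains encrypted in transit.
    """
    host = urlparse(endpoint_url).hostname
    if host not in APPROVED_ENDPOINTS:
        raise PermissionError(f"Refusing to send data to unapproved host: {host}")
    return post(endpoint_url, json={"prompt": prompt})
```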
Ensure the right tools are deployed
Certain tools simply aren't suited to enterprise use cases that involve sensitive information. ChatGPT is designed for public consumption and may not prioritize the same security and privacy guardrails as an enterprise-grade system. Meanwhile, offerings like GitHub Copilot generate code directly in the IDE, based on user prompts, which could inadvertently introduce vulnerabilities if that code runs without review.
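As a purely hypothetical illustration (not a real assistant output), the kind of flaw a reviewer needs to catch can be as simple as a database query built by string formatting, which is open to SQL injection, versus the parameterized version a review should insist on:

```python
import sqlite3

# An assistant might plausibly suggest building the query with string
# formatting, which is vulnerable to SQL injection:
def find_account_unsafe(conn: sqlite3.Connection, customer_name: str):
    query = f"SELECT * FROM accounts WHERE owner = '{customer_name}'"  # injectable
    return conn.execute(query).fetchall()

# Human review should replace it with a parameterized query:
def find_account_safe(conn: sqlite3.Connection, customer_name: str):
    query = "SELECT * FROM accounts WHERE owner = ?"
    return conn.execute(query, (customer_name,)).fetchall()
```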
Looking ahead, the integration of AI into operating systems and productivity tools will likely exacerbate these challenges. Microsoft's new feature, Recall, captures screenshots of everything you do and creates a searchable timeline, raising concerns about surveillance overreach and data misuse by malicious actors. Compliance departments must compare – and then align – these technology features with regulatory requirements around reporting and data collection.
Secure, isolated environments
As AI capabilities expand and become more autonomous, we risk ceding critical decisions that impact user privacy and rights to these systems. The good news is that established cloud providers like Azure, AWS and GCP offer secure, isolated environments in which AI models can be safely deployed and integrated with enterprise authentication. Companies can also choose to run large language models (LLMs) on-premises, behind their own firewalls, and can use open source models to gain a clearer understanding of the data used to train them.
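As a rough sketch of the on-premises option, an openly licensed model can be loaded and run entirely on local hardware using the Hugging Face `transformers` library, so prompts and outputs never leave the firewall. The model name here is just an example of an open-weight model, and the snippet assumes `transformers` and `torch` are installed with suitable local hardware.

```python
# Minimal sketch: run an open source LLM entirely on local infrastructure
# so prompts and outputs stay behind the firewall.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model
    device_map="auto",                           # use local GPU(s) if available
)

prompt = "Summarize our internal data-retention policy in plain language."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```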
Transparency builds trust
Ultimately, AI model transparency is crucial for building trust and adoption. Users deserve clear information on how their data is handled and processed, as well as opt-in/opt-out choices. Privacy needs to be a core design principle from day one, not an afterthought. Robust AI governance with rigorous model validation is also critical for ensuring these systems remain secure and effective as the technology rapidly evolves.
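In practice, an opt-in check and an audit trail are a simple place to start. The sketch below is a hypothetical example of honoring a user's consent flag and logging what happened before any data reaches a model; the field names and `generate` helper are placeholders.

```python
# Hypothetical illustration: honor opt-in/opt-out choices and record how
# data was handled, so users can see what happened to their input.
import datetime
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def process_with_consent(user: dict, text: str, generate):
    """Send data to the model only if the user has opted in, and log the event."""
    if not user.get("ai_processing_opt_in", False):
        audit_log.info("Skipped AI processing for %s: no opt-in", user["id"])
        return None

    audit_log.info(
        "AI processing for %s at %s (purpose: assistant query)",
        user["id"],
        datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    return generate(text)  # placeholder for the actual model call
```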
Finally, organizations need to have performance check-ins – just as they would with any human employee. If your AI assistant is seen as another colleague, it needs to be clear that it’s adding value in line with (or exceeding) its training and ongoing operational costs. It’s easy to forget that simply “integrating AI” across a company is not, in itself, actually valuable.
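A back-of-the-envelope check is often enough to frame that conversation: compare the estimated time savings against licence and operating costs. The figures below are placeholders, not benchmarks.

```python
# Back-of-the-envelope value check for an AI assistant, with placeholder figures.
def assistant_net_value(hours_saved_per_user: float, users: int, hourly_cost: float,
                        licence_cost: float, ops_cost: float) -> float:
    """Return net monthly value: estimated savings minus total running costs."""
    savings = hours_saved_per_user * users * hourly_cost
    return savings - (licence_cost + ops_cost)

# Example: 2 hours saved per user per month across 500 users at $60/hour,
# against $20,000 in licences and $15,000 in operations and training.
net_value = assistant_net_value(2.0, 500, 60.0, 20_000.0, 15_000.0)
print(f"Net monthly value: ${net_value:,.0f}")  # positive means it pays its way
```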
These tools are already essential
We believe that these tools are vital, and they will be a part of almost everyone’s working lives in the near future. What’s important is that companies don’t think they can simply enable access to the tools and then walk away, or that this is something that can be announced to shareholders and be fully operational within a quarter. Education and training will be an ongoing process, and getting security, privacy and compliance measures right is key if we are to take full advantage of these capabilities in a way that instills confidence and guarantees safety.