Organisations are increasingly recognising the value of chatbots in providing a positive customer experience. But it is important that security remains top of mind when deploying such tools. Here, Morey Haber, CTO and CISO, BeyondTrust, explores some fundamental security considerations for organisations looking to deploy chatbots and conversation marketing.
According to Gartner’s recent ‘AI and ML Development Strategies’ study, 40% of organisations cite customer experience (CX) as the number one motivator for use of Artificial Intelligence (AI) technology.
Not surprisingly, across the Middle East, we are seeing enterprises of all sizes, and even several government entities, rapidly deploy chatbots on their websites in an effort to give customers faster responses to their queries.
These chat applications are designed to field plain text requests from humans that are fed into an AI engine, which can provide ‘smart’, scripted responses to inquiries.
As the Machine Learning technology that powers many of these chat applications gets smarter, it will become increasingly difficult for users to determine whether they are interacting with a real person or a machine.
As a case in point, some services classified as ‘conversation marketing’ may actually route you to the appropriate live person for a more in-depth conversation. But while we might never know the difference, with a little social engineering, a threat actor can easily determine what is behind the scenes and probe it for IT security vulnerabilities.
Understanding the security implications of chatbots
Irrespective of whether it’s a human or a machine on the other end, there are some inherent security risks in chat-based services. Ironically, while there is a plethora of information available on how to deploy chatbots and on their associated benefits, there isn’t the same level of attention and guidance around how to keep these services secure, both for your organisation and for the end user.
As a case in point, consider an automated service that is either hosted by the company itself or connected to a cloud-based AI engine as a service. To effectively respond to queries, this service needs to access backend resources. This often means having a database fronted by middleware that allows queries via a secure application programming interface (API). The contents of the database will vary from company to company and may include anything from hotel reservation information to customer data – and it may even accept credit card information.
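To make that middleware layer concrete, here is a minimal, purely illustrative sketch of a query handler that treats the chatbot’s free-text input strictly as data. The table, column and function names (`reservations`, `lookup_reservation`) are hypothetical, not from any particular product; the point is that input reaching the database is bound as a parameter rather than concatenated into the SQL string:

```python
import sqlite3

def lookup_reservation(conn: sqlite3.Connection, guest_email: str):
    """Middleware-style handler: the chatbot's free-text input is bound
    as a query parameter, never concatenated into the SQL string, so an
    injection attempt like "x' OR '1'='1" cannot alter the statement."""
    cur = conn.execute(
        "SELECT confirmation_code, check_in FROM reservations WHERE guest_email = ?",
        (guest_email,),  # parameter binding, not string formatting
    )
    return cur.fetchall()

# Minimal in-memory demo database (illustrative schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reservations (guest_email TEXT, confirmation_code TEXT, check_in TEXT)")
conn.execute("INSERT INTO reservations VALUES ('guest@example.com', 'ABC123', '2024-06-01')")

print(lookup_reservation(conn, "guest@example.com"))  # legitimate lookup returns the row
print(lookup_reservation(conn, "x' OR '1'='1"))       # injection attempt returns nothing
```

The same principle applies whatever database or API framework sits behind the chatbot: the conversational front end should never be able to change the shape of a backend query, only supply values to it.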
Here’s a checklist of basic security questions to cover before implementing a chatbot that is fully automated and AI-driven:
- Is the API connecting your organisation’s website and the chatbot engine secured using access control lists (ACLs)? You can accomplish this by using IP addresses, geofencing, etc.
- How do you approach the management of authentications between the systems (webservice, engine, middleware, cloud, etc.)?
- How do you apply vulnerability management best practices across the architecture supporting the chatbot? Routine penetration testing should also be part of this programme.
- Have you adequately secured privileges/privileged access and enforced least privilege?
- What data can the chatbot query – is any of it sensitive? Do any specific regulations apply to how this data is collected, stored or handled? For instance, do communications contain information that may warrant extending your scope of regulations, like PCI DSS? Also, will communications ‘self-destruct’ in accordance with certain regulations?
- Is there a process for logging and detecting potential suspicious queries that may be designed to exploit the AI engine or leak data?
- Can you mitigate or prevent malware or distributed denial of service (DDoS) attacks that target your service?
- Do you ensure end-to-end encryption for all chatbot communication and what protocols are you using?
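As a rough sketch of how two of the items above might be enforced in code – the IP-based access control list on the chatbot API and the logging of suspicious queries – consider the following. Everything here is an assumption for illustration: the address ranges, the keyword patterns and the function names are invented, and a real deployment would tune them to its own traffic:

```python
import ipaddress
import logging
import re

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("chatbot-acl")

# Illustrative ACL: only these networks may call the chatbot API
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),  # e.g. the public web tier
    ipaddress.ip_network("10.0.0.0/8"),      # e.g. internal middleware
]

# Illustrative patterns that may signal probing of the AI engine or data leakage
SUSPICIOUS = re.compile(r"(drop\s+table|union\s+select|password|ignore previous)", re.I)

def allowed(source_ip: str) -> bool:
    """ACL check: is the caller inside one of the allowed networks?"""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

def screen_query(source_ip: str, text: str) -> bool:
    """Return True if the query may proceed; log anything suspicious."""
    if not allowed(source_ip):
        log.warning("Blocked request from %s (outside ACL)", source_ip)
        return False
    if SUSPICIOUS.search(text):
        log.warning("Suspicious query from %s: %r", source_ip, text)
        return False
    return True

print(screen_query("203.0.113.7", "When is my check-in time?"))  # allowed
print(screen_query("198.51.100.9", "hello"))                     # blocked: outside ACL
print(screen_query("10.1.2.3", "'; DROP TABLE users;--"))        # blocked and logged
```

Keyword matching like this is only a first filter, not a substitute for the vulnerability management and penetration testing mentioned above, but the logged warnings give the detection process in the checklist something concrete to review.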
In addition to carefully considering these security implications, organisations should continuously inventory the assets and communication paths across the chatbot supply chain – chatbot, webservice and provider – and maintain a corresponding risk assessment plan. Any change in that chain can easily undermine the best practices listed above.
Protecting your employees during conversation marketing
In conversation marketing, a human actually responds to the queries via the chat window. Several organisations try to make the experience feel ‘authentic’ and, as a consequence, do not use fake names or pictures for the human chat box representative.
However, if a company displays the full name of their chat representative inside the chat box, with just a little social engineering, a bad actor can easily uncover data about the representative that can be used as part of an exploit. This is particularly easy if the representative has a social media profile. So to that end, if you do choose to use conversation marketing, it is critical that you follow a few key security best practices.
- For one, never reveal an employee’s full name; use an alias instead. While this might seem counterproductive (recall the goal of making the experience feel ‘authentic’), using the full name, or even just the first name and last initial, poses a high risk, as a little research could uncover personal information about the representative.
- If the chat service displays a picture, photo, or avatar of the representative, use a unique image that cannot be found anywhere else on the Internet. The reason – a simple search by the employee and company name will reveal their social media presence and, if the pictures easily match, you might as well use their full name anyway. You will have done very little to mask their identity and provide protection from a potential social engineering attack at home or at work.
- Have a detailed manual in place that clearly states what information the employee can share and what they absolutely cannot – under any circumstances, irrespective of the inquiry – during a chat conversation. These guidelines will vary and can cover everything from license keys to password resets. Your business will have to establish this list based on the services the chat box provides and any local and industry regulations governing data exposure, particularly across country lines.
- Create a formal support and escalation path for inquiries into potentially sensitive information.
- Provide regular security training for all chat box representatives so that they know how to recognise a potential attack, how to respond to suspicious requests and how to escalate a situation before it becomes a security incident for your organisation.
Let’s face it – when it comes to improving customer service, the benefits of chatbots and conversation marketing are undeniable, which means they are here to stay. But these tools do open up another attack vector: cybercriminals will always exploit the simplest way to compromise an organisation and, unfortunately, humans are often the weakest link.
By addressing these key questions and implementing these best practices, however, you can offer a chat service that supports your business initiatives without opening up unnecessary risk.