The journey to AI – fast and full of blind spots


With any new technology such as AI and ML, it is tempting to jump on the bandwagon and test it out before taking a step back to analyse potential risks. Rachel Roumeliotis, Vice President of Content Strategy at O’Reilly, discusses why that analysis matters and why organisations should adopt best practices to address potential blind spots.

By now we should all be accustomed to Artificial Intelligence (AI) being part of our everyday lives and to hearing how its advancements can change how we work, interact and learn in the long term. Newspapers and magazines are littered with articles about the latest advancements and new projects launched on the back of AI and Machine Learning (ML) technology.

In the last year it seems like all of the necessary ingredients – powerful, affordable computing, advanced algorithms and the huge amounts of data required – have come together. We have even reached the point of broad acceptance of the technology among consumers, businesses and regulators alike. It has been speculated that over the next few decades, AI could be the biggest commercial driver for companies and even entire nations.

In fact, AI is changing more than what computers can do and how we communicate and interact with technology. It is changing the very nature of work and hiring, and is serving as a catalyst for organisation-wide change.

However, with any new technology, the adoption must be thoughtful both in how it is designed and how it is used. Organisations also need to make sure that they have the people to manage it, which can often be an afterthought in the rush to achieve the promised benefits. Before jumping on the bandwagon, it is worth taking a step back, looking more closely at where AI blind spots might develop, and what can be done to counteract them.

Security, privacy and ethics

As the pace of AI and ML development intensifies alongside heightened awareness of cybercrime, organisations must take any potential liabilities into account. Despite this, survey data shows that security, privacy and ethics are low-priority issues for developers when building their Machine Learning solutions.

According to O’Reilly’s recent AI Adoption in the Enterprise survey, security is the most concerning blind spot within organisations. In fact, nearly 73% of senior business leaders admit that they don’t check for security vulnerabilities during model building. Additionally, more than half of organisations don’t consider fairness, bias or ethical issues during Machine Learning development. Privacy is similarly neglected, with only 35% keeping it top of mind during model building and deployment.

The buck stops with businesses on this issue. They need to honour the agreement set out when they start compiling and analysing data. This can be tricky, as security and privacy are not always ingrained in the original business strategy for adopting these technologies. However, businesses need to understand the importance of looking for security vulnerabilities during model building and of protecting critical information from the initial steps. It is essential that both businesses and the individuals working on AI and ML developments equip themselves with the right knowledge to take the appropriate data safety precautions; only then will they be able to confidently ensure security and privacy.

Despite this lack of attention to security and privacy, the majority of resources in Machine Learning development are focused on ensuring AI projects are accurate and successful. For example, 55% of developers mitigate against unexpected outcomes or predictions, but that still leaves a sizeable number who don’t. Furthermore, 16% of respondents don’t check for any risks at all during development.

This lack of due diligence is likely down to numerous internal challenges and factors, but a big part of the problem is a shortage of the skills and resources needed to complete these critical aspects of the development process. In fact, the most chronic skills shortages in technology are centred on ML modelling and data science. To make progress on security, privacy and ethics, organisations urgently need to address this.

What can be done?

AI maturity and usage have grown exponentially in the last year. However, considerable hurdles remain that keep the technology from reaching critical mass. To ensure that AI and ML represent the broader population and can be used safely, organisations need to adopt certain best practices.

One of these is making sure the technologists who build AI models reflect the broader population. This can be difficult from both a data set and a developer perspective, especially in the technology’s infancy. It is therefore vital that developers are aware of the issues relevant to the diverse set of users expected to interact with these systems. If we want to create AI technologies that work for everyone, they need to be representative of all races and genders.

As Machine Learning inevitably becomes more widespread, it will become even more important for companies to adopt and excel in this technology. The rise of Machine Learning, AI, and data-driven decision-making means that data risks extend much further beyond data breaches, and now include deletion and alteration. For certain applications, data integrity may end up eclipsing data confidentiality.

The next step in the deployment of AI and ML is to ensure that the right talent and data are being used. Those who build these systems need to be representative of everyone, and security, privacy, ethics and compliance issues will increasingly require companies to set up cross-functional teams when they build AI and Machine Learning systems and products.

AI and Machine Learning are appearing in many of the products and systems we interact with, and their ability to free us from mundane and repetitive tasks could already be called a success. However, organisations now need to understand the essentials and invest the time and resources to get security and ethics right. In the year ahead, they must close the skills gap and take another look at overall data quality.

Intelligent CISO