Privacy: The Use of Artificial Intelligence in Recruitment

The first, most basic forms of artificial intelligence (AI) emerged in the 1930s; however, the technology is now developing at ever-increasing speed and becoming progressively more complex.

In the recruitment sector, AI has begun to be used to support pre-employment assessments, conversational chatbots and video interviewing tools, and to match candidates with positions and employers.

When the technology is used properly, it can remove some of the main everyday challenges faced by recruitment agencies. Automated sourcing processes can analyse profiles at impressive speed, ensure that older profiles are not forgotten when they might be a good match, and assess and predict candidates’ personalities, skills and fit in a way that is not possible for humans alone.

However, the new technology creates potential problems of compliance with established data privacy and data protection rules.

This article will explore how AI is being used in recruitment, the data protection issues this can raise, and how to ensure compliance in order to employ this useful technology and promote innovation. 

Big Data, AI and Machine Learning

Big data is high-volume, high-velocity, high-variety information, which is increasingly analysed using AI: a data analysis tool that learns some aspect of the world and then generates hypotheses from it. AI differs from conventional methods of data analysis because it does not just analyse data: it learns from that data and produces its own output. One of the fastest-growing approaches to AI is machine learning, which allows computers, in effect, to learn for themselves. AI requires huge amounts of data to be processed so that its outputs will be reliable.

Machine learning, like data mining, does not attempt to emulate human abilities but rather contributes to human activities, performing tasks that would be impossible for humans alone. In the big data era, massive amounts of digitised personal data can be collected, duplicated and disseminated instantly, without data subjects even being aware. AI can be trained to collect and analyse all of the data available to it automatically, and it often repurposes that data, creating new types of data in the process. It has the power to reveal information that would not normally have been disclosed.

AI in Recruitment

A common example of AI in recruitment is software that matches CVs to job descriptions. This involves processing job descriptions, personal data about candidates, and often data provided by employers about previous hiring decisions for similar roles. The steps in this chain involve many people and vast amounts of data. Often the data needs to be transferred to a new location so that it can easily be accessed, cleaned, prepared and used in the AI system. Clearly, each step in this chain could increase the risk of a data breach, such as unauthorised processing, loss, destruction or damage of data.
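To make this concrete, below is a minimal, illustrative sketch of a matching step, using made-up candidate data and a simple TF-IDF similarity score rather than any particular vendor’s approach. Even this toy pipeline copies personal data into new structures that need to be mapped and secured.

```python
# Illustrative sketch only: a simple way a matching tool might score CVs
# against a job description. Real products use far more complex models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data analyst with SQL, Python and reporting experience"
cvs = {
    "candidate_001": "Experienced data analyst, strong SQL and Python skills",
    "candidate_002": "Marketing manager with social media background",
}

vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform([job_description] + list(cvs.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()

# Each CV is personal data: even this toy example duplicates it into new
# structures (the matrix, the scores), which is why data flows and storage
# locations need to be mapped and secured.
for candidate_id, score in zip(cvs, scores):
    print(candidate_id, round(float(score), 2))
```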

AI technology can make compliance with privacy laws more difficult. It introduces new mechanics not found in most organisations’ IT systems and tends to rely on third-party relationships for the supply or development of software. This can make the risks difficult to identify and, therefore, difficult to manage.

AI systems are rarely developed in-house, which means that implementing AI often involves third parties and requires changes to the software a company already uses, both of which can introduce security risks.

Practitioners of AI software are numerous and come from a wide range of backgrounds (they may be IT developers or software entrepreneurs), which can mean that their understanding of, and expectations around, security differ. Often, personal data law is not their key priority.

The security risks will vary depending on what technology is being used, how it was developed, and the complexity of the operations. The UK’s data protection watchdog, the Information Commissioner’s Office (ICO), has made its key piece of guidance for organisations clear: review your risk management practices to ensure personal data is secure in an AI context.

The Law vs AI

The General Data Protection Regulation (GDPR) has applied since May 2018 and introduced strict new obligations for those handling personal data, along with new rights for those whose data is being processed. Transparency and accountability are critical features of the GDPR, and processing generally requires a lawful basis, such as the data subject’s consent.

Data controllers must implement appropriate measures to ensure compliance with these privacy requirements, which can be done using methods such as encryption or pseudonymisation, and Article 5 requires that data is processed in a transparent manner. When it comes to AI, is this possible?

It might not be desirable for recruiters to reveal information that exposes the inner workings of their technology, and it might not even be feasible to explain a prediction that was generated by AI and machine learning.

Article 6 sets out the lawful bases for processing data, such as the data subject’s consent; however, given the sheer volume of data processed for many purposes in AI, can a data subject really give informed, clear consent? Article 22 of the GDPR establishes especially stringent conditions for automated decision-making involving personal data where there is no human input involved.

The lawful bases for such automated decision-making are where the law authorises it, where it is necessary for a contract, or where the data subject gives explicit consent.

Data subjects also have the right not to be subject to a decision based solely on automated processing, including profiling. 

If there is human input involved, these obligations are different. However, the requirements cannot be avoided by simply “rubber-stamping” automated decisions with a human touch: the human involvement must be meaningful. In short, to fall outside the strict conditions, meaningful human activity is required; where there is none, people have even stronger rights to object to their data being used in this way.

How to Use AI Safely in Recruitment 

  • Are you aware of the risks involved?

You may already have policies in place that you are proud of, but the increased risk presented by AI means that these policies will almost certainly need to be reviewed and updated. The GDPR requires you to consider the level of risk of your processing and have IT security measures in place that are adequate for that risk.

Basic measures which are probably already implemented include always keeping a record of where and how documents are stored and ensuring that your storage system is secure, with access limited to specified people. Clear audit trails also mean that you can prove you have compliance measures in place, should any problems arise. Personal data which is no longer needed should always be deleted immediately.
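As a minimal sketch of what this can look like in practice, the following snippet deletes candidate records that are past a hypothetical retention period and writes an audit-log entry. The table and field names are illustrative assumptions, not a prescribed schema.

```python
# Illustrative sketch only: a periodic retention sweep that deletes candidate
# records past a hypothetical retention period and logs what was removed.
import sqlite3
from datetime import datetime, timedelta

RETENTION_DAYS = 365  # hypothetical retention period set by your own policy

def purge_expired_candidates(db_path: str = "candidates.db") -> None:
    cutoff = (datetime.utcnow() - timedelta(days=RETENTION_DAYS)).isoformat()
    conn = sqlite3.connect(db_path)
    try:
        expired = conn.execute(
            "SELECT id FROM candidates WHERE last_activity < ?", (cutoff,)
        ).fetchall()
        conn.execute("DELETE FROM candidates WHERE last_activity < ?", (cutoff,))
        # Audit trail: record that the deletion happened, not the personal data itself.
        conn.execute(
            "INSERT INTO audit_log (event, detail, logged_at) VALUES (?, ?, ?)",
            ("retention_purge", f"{len(expired)} records deleted", datetime.utcnow().isoformat()),
        )
        conn.commit()
    finally:
        conn.close()
```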

“Virtual machines” or “containers” can also be used: these are essentially smaller computer systems which run, in isolation, inside a larger computer system. If there is an attack, the hope is that it can be contained to one isolated system.

A privacy impact assessment can be performed to determine and mitigate privacy risks before AI begins processing personal data. Privacy-by-design solutions can be embedded into big data analytics to protect privacy through a range of technical and organisational measures. The idea behind privacy by design is that every privacy risk identified is an opportunity to find technical and creative solutions. Solutions seen in the past include access controls and audit logs to prevent data misuse, and data minimisation measures to ensure that only essential data is processed.
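By way of illustration only, a data minimisation and pseudonymisation step might look something like the sketch below. The field names and key handling are assumptions made for the example; a real system would keep the secret key in a proper secrets store.

```python
# Illustrative sketch only: pseudonymising and minimising a candidate record
# before it is passed to an analytics or matching component.
import hashlib
import hmac

PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"  # assumption for the example

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def minimise_for_matching(candidate: dict) -> dict:
    """Keep only the fields the matching step actually needs."""
    return {
        "candidate_ref": pseudonymise(candidate["email"]),
        "skills": candidate["skills"],
        "years_experience": candidate["years_experience"],
        # Name, address, date of birth etc. are deliberately dropped here.
    }

record = {
    "email": "jane.doe@example.com",
    "name": "Jane Doe",
    "skills": ["sql", "python"],
    "years_experience": 4,
}
print(minimise_for_matching(record))
```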

  • Can you explain your use of AI?

Data subjects must understand the purpose for which their data is being processed in order to give free and fully informed consent to it. Article 13(2) of the GDPR requires data controllers to explain the logic involved in, and the envisaged consequences of, automated decision-making. This may seem problematic in the context of big data, since the processing can be difficult and lengthy to explain, and analytics can involve repurposing data in ways the data controller cannot always foresee.

The challenge here is to be innovative with your privacy notices and terms of use. They should be written in plain language which is easy to understand. The information does not have to be strictly textual: some companies now use videos and even cartoons to explain their data processing. The GDPR does not require controllers to explain in their privacy notices how data will be processed, but rather the purposes of that processing. It may be difficult to explain in simple terms how the technology behind the analytics works, but it should be possible to explain the purposes in a way that is comprehensible and unambiguous.

  • Is your system fully automated?

Understand the systems you have in place, whether they have been outsourced or developed in-house. Will AI be used to enhance human decision-making, or to make decisions on its own?

If you classify your system as involving human activity, the conditions under Article 22 of the GDPR mean that practitioners must genuinely be involved in checking the system’s recommendations and should not blindly apply an automated recommendation to an individual. Human review must be active, not just a token gesture. Practitioners should have a meaningful influence on the decision, which means having the authority and competence to go against the machine’s recommendation.

Be mindful of what has been termed “automation bias”: human practitioners become complacent with AI decision-making and can accept the output of the machine blindly, without consideration. The output of the machine should have a level of interpretability so that it can be objectively reviewed: a lack of interpretability means a lack of meaningful review.
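A minimal sketch of what a meaningful human-review step might look like in code is shown below; the function and field names are hypothetical. The point is simply that the model’s score is surfaced alongside interpretable reasons, and the final decision is recorded as the reviewer’s, not the machine’s.

```python
# Illustrative sketch only: the automated score is a recommendation, and a
# named reviewer must actively confirm or override it before any decision
# is recorded. Names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_ref: str
    score: float                 # model output
    top_reasons: list[str]       # interpretable factors shown to the reviewer

def human_review(rec: Recommendation, reviewer: str) -> dict:
    print(f"Candidate {rec.candidate_ref}: score {rec.score:.2f}")
    print("Main factors:", ", ".join(rec.top_reasons))
    verdict = input(f"{reviewer}, accept, reject or override? ").strip().lower()
    return {
        "candidate_ref": rec.candidate_ref,
        "model_score": rec.score,
        "reviewer": reviewer,
        "final_decision": verdict,   # the human decision, not the model's
        "overridden": verdict == "override",
    }
```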

  • Can the data be anonymised?

Some personal data can be anonymised so that it is no longer possible to identify an individual from the data itself or in combination with other data sets, in which case it is no longer personal data. Recital 26 of the GDPR sets out that, in this case, the data is no longer covered by the GDPR.

It can be a useful exercise to investigate whether the data you are using could be anonymised, so that you can avoid these privacy risks altogether. Take care, however: AI is smart and can draw inferences from the data you enter, so it may eventually reconstruct the very information you had anonymised.

Anonymisation is best thought of not as a way of reducing the regulatory burden by taking data outside the scope of the GDPR, but as a way of mitigating the risk of any disclosure or loss of personal data. It is also a useful tool for assuring individuals that their data is not being used for analytics they have not consented to.
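As a purely illustrative sketch, a crude anonymisation step might drop direct identifiers and generalise quasi-identifiers, as below. Whether the result is truly anonymous still depends on a proper re-identification risk assessment, and the record fields here are invented for the example.

```python
# Illustrative sketch only: a crude anonymisation step that removes direct
# identifiers and generalises quasi-identifiers (age banding, postcode area).
# Real anonymisation needs a proper re-identification risk assessment.
def anonymise(record: dict) -> dict:
    return {
        "age_band": f"{(record['age'] // 10) * 10}s",       # e.g. 34 -> "30s"
        "postcode_area": record["postcode"].split(" ")[0],   # e.g. "SW1A 1AA" -> "SW1A"
        "sector": record["sector"],
    }

applicant = {
    "name": "Jane Doe",
    "age": 34,
    "postcode": "SW1A 1AA",
    "sector": "finance",
}
print(anonymise(applicant))  # {'age_band': '30s', 'postcode_area': 'SW1A', 'sector': 'finance'}
```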

What’s Next for AI in Recruitment?

Whilst the use of big data analytics can have implications for data protection and privacy rights, these implications are not insurmountable hurdles to the legally compliant use of these effective technologies. The approaches described above can help with compliance and, hopefully, support innovation rather than restricting the use of big data analytics.

The law tries to keep up with the development of technology; however, given the pace at which AI is developing, it is easy to get left behind. By balancing compliance with innovation, tech entrepreneurs can spur on the development of the law, and the useful tools of AI can continue to drive innovation in the recruitment sector.

If you have any privacy questions about your use of AI in the recruitment process then don’t hesitate to get in touch!

Article by Lily Morrison @ Gerrish Legal, July 2020 / Cover photo by Alex Knight on Unsplash
