Could robots be guilty of harassment, and will their employers be legally responsible?
The House of Commons Science and Technology Committee has said that robotics and artificial intelligence (AI) could fundamentally reshape the way we live and work. AI is already being used in the workplace. IBM’s AI platform, Watson, is advising doctors on treatments in several US hospitals and will be reviewing complex medical histories in Germany to identify potential diagnoses. RBS and NatWest recently announced that they will use the virtual chatbot ‘Luvo’ to deal with simple customer service queries in the UK. Initially, the chatbot will be able to answer 10 questions, but the intention is that it will increasingly assist with more complex issues by learning from human interactions.
A 2014 Deloitte and Oxford University study stated that 35 per cent of UK jobs are at high risk from automation over the next two decades, with those paying under £30,000 nearly five times more likely to be replaced than jobs paying over £100,000. Office and administrative support, sales and services, transportation and manufacturing were identified as the sectors most at risk of redundancies.
Highly skilled jobs may be better positioned to survive the advent of AI, but the UK Treasury and the FCA are actively pushing the financial sector to deliver low-cost, accessible financial advice via innovative technologies that could displace established services in the long term. Altus Consulting suggests that as robo-advice (automated, process-driven financial advice) develops, avatars will replace human advisers.
Any business – from banking to retailing – is likely to be affected by AI, and employers will need to consider the impact on their human employees. As well as causing inevitable redundancies, AI is likely to depress wages for lower-skilled work, even as AI innovators become more highly paid. Increased productivity, innovation and the chance to harness technology in support of the existing workforce are all positives, but organisations also need to think about how they will handle those redundancies, and about what happens if AI goes wrong.
The Equality Act 2010 protects employees from being harassed by others. Harassment is unwanted conduct related to a relevant protected characteristic that has the purpose or effect of violating the victim’s dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment. A robot is technically incapable of harassing someone because it is not a legal ‘person’. However, a robot may be capable of creating a hostile environment: Microsoft’s chatbot, Tay, was removed from Twitter in March 2016 after it learned and tweeted racist and offensive remarks. As AI develops and becomes more ubiquitous, will the law change to allocate responsibility and provide redress for such conduct? At the very least, employers may need to deal with grievances and employee engagement issues in such circumstances.
AI is also being developed to read facial expressions and body language. In a recruitment context, if this kind of data about job applicants is captured, employers will need to consider data protection issues when storing it. For example, if records of job applicants’ body language and behaviour during interviews constitute personal data (in other words, they are linked to the applicant’s name or other identifying details), they must be relevant, not excessive, and not kept for longer than necessary; otherwise they will infringe the Data Protection Act 1998.
Prospective employers will also need to be aware of potential discrimination issues in the way such information is used. For example, ‘scanning’ applicants in this way may identify (and generate records about) personal tics or other physical characteristics linked to an impairment that may amount to a disability under the Equality Act 2010 but has not been disclosed by the applicant. The employer must be careful not to treat such candidates less favourably because of this information.
Employers can be liable in law for their employees’ actions. Is it feasible that companies could be liable for the acts or omissions of robots that are providing services linked to their business, in the same way they would be liable for their human employees’ negligence?
Currently, the short answer is ‘no’. Robots do not have legal personality and cannot create a liability for which their ‘employer’ is responsible, and that seems unlikely to change in the near future. The European Parliament did suggest in May that robot workers should be classed as ‘electronic persons’, but the idea is unlikely to receive widespread support and, even if the Parliament adopts it, it will not be binding on member states. Still, as industry flexes to accommodate AI, employment law will also need to adapt if AI becomes a widespread feature of the workplace.
Story via: http://www2.cipd.co.uk/pm/peoplemanagement/b/weblog/archive/2016/11/04/artificial-intelligence-managing-the-impact-on-a-workforce.aspx