The UK doesn’t yet need new regulations on artificial intelligence (AI), according to AI in the UK: ready, willing and able?, a report by the House of Lords Select Committee on AI, although some commentators have expressed concern that ‘regulatory gaps’ may need to be addressed.
A variety of stakeholders gave their views to the Committee, including those who considered that existing laws could do the job, those who thought immediate action was needed, and those who proposed a more cautious, staged approach to regulation.
For instance, tech industry body techUK argued that “the concerns regarding the use of AI technologies … are focused around how data is being used in these systems”. It added that it was “important to remember that the current data protection legal framework is already sufficient”, and that the General Data Protection Regulation (GDPR) would strengthen that framework further. The organisation advocated a cautious approach to other areas where regulation might be required, stating that “where there are other concerns about how AI is developing these need to be fully identified, understood and discussed before determining whether regulation or legislation has a role to play”.
The Foundation for Responsible Robotics, meanwhile, urged earlier intervention, saying “we need to act now to prevent the perpetuation of injustice” and that, at present, “there are no guarantees of unbiased performance” for algorithms.
However, the Law Society of England and Wales said that “AI is still relatively in its infancy and it would be advisable to wait for its growth and development to better understand its forms, the possible consequences of its use, and whether there are any genuine regulatory gaps”.
The Committee’s report agreed, concluding: “Blanket AI-specific regulation, at this stage, would be inappropriate. We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed.”