Leaders Late to Recognize Need for Regulation of AI as Job Killer
Bradford K. Newman, a Silicon Valley attorney and author of “Protecting Intellectual Property in The Age of Employee Mobility: Forms and Analysis,” wrote recently on TechCrunch that governments and companies need to regulate the new technology through legislation he first proposed back in 2015, the Artificial Intelligence Data Protection Act (AIDPA).
A 2017 McKinsey study shows the urgency of such legislation, Newman said, estimating that up to 800 million workers could lose their jobs to AI by 2030 and that half of current work functions could be automated by 2050. The lack of human involvement in AI-created intellectual property, as well as the need for companies to prepare for and take responsibility for minimizing the impact of replacing workers, all point to the same conclusion.
Among the remedies Newman proposes:
1. AI will need supervision and new laws to address its rapid forays into private information. For instance, current IP laws don’t cover AI working independently of human involvement or oversight (music, art, medical techniques, processes to communicate, processes to kill, etc.), nor do they protect the people whose data is used to develop these abilities.
2. Governments need to create a “chief AI officer” to monitor AI within the workplace, create company-wide plans for AI-impacted employment, implement the AIDPA regulations, enact company-wide safeguards that monitor for and respond to malicious AI activity, and account for AI-created IP.
3. A governing body made up of industry, technical, ethical and legal experts is needed to bring specialized expertise and consistency to regulating AI in industry.
4. A tax on AI-driven industries that eliminate jobs will be needed to fund support and retraining for displaced workers, among other changes.
Read more at techcrunch.com