Ethics and AI
By Matt Brennan
There are a few leading technology companies with ample resources who are at the forefront of developing tomorrow's most innovative AI. These companies are moving faster than governments can regulate the industry, and there are very few laws or regulations on the books to address AI, leaving the public with little assurance that what they are using is safe.
According to a recent survey from the company SnapLogic, 94 percent of IT decision makers across the US and UK believe that more attention needs to be paid to corporate responsibility and ethics in AI. With advancements being made in autonomous vehicles, healthcare and medicine, environmental technology, and more, it’s critical that AI companies keep the safety of the public in mind. Below are a few steps they can take to create a safer, more ethical AI environment.
Comply with Regulations That Are on the Books
It may sound like common sense, but compliance can fall by the wayside in the race to stay technologically competitive.
In the US healthcare industry, AI would be governed by the Health Insurance Portability and Accountability Act (HIPAA), the Children's Online Privacy Protection Act (COPPA), or possibly other federal and state laws. If companies have any European customers or employees, they are also subject to the General Data Protection Regulation (GDPR).
It’s critical that companies producing AI follow the regulations that are on the books.
Take Control of Data
Data is the gasoline that fuels AI. It's critical to make sure that the body of data collected is representative of the people who will actually use the model. This means data scientists need to follow best practices for collecting and validating data so that the model serves all potential customers, not just the groups that are easiest to sample. A rough version of that check is sketched below.
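As a minimal sketch of what such a check might look like, the snippet below compares the demographic mix of a collected dataset against the expected share of each group in the user population and flags groups that are underrepresented. The field names, reference shares, and tolerance are hypothetical; any real audit would use figures appropriate to the product and population in question.

```python
# Minimal sketch (hypothetical field names and reference shares): flag groups
# whose share of the collected data trails their expected share of the
# population the model will serve.
from collections import Counter

def representation_gaps(records, field, expected_shares, tolerance=0.05):
    """Return groups whose observed share trails the expected share by more
    than `tolerance` (both expressed as fractions of the whole dataset)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Hypothetical usage: age bands for a consumer-facing model.
training_records = [
    {"age_band": "18-34"}, {"age_band": "18-34"}, {"age_band": "18-34"},
    {"age_band": "35-54"}, {"age_band": "55+"},
]
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
print(representation_gaps(training_records, "age_band", population_shares))
# Older age bands are flagged here because the sample skews toward 18-34.
```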
As technology advances, guarding data is an imperative step in protecting the integrity of your technology. When data is compromised, whether through hackers or an inadvertent loss of personal data, public trust is undermined. It's critical to have secure firewalls in place, reliable data backups, and a data recovery plan for the event that operational data is lost. That may mean knowing when a professional data recovery company is your best bet for recovering from that loss. A simple backup check along these lines is sketched below.
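As one small illustration of "reliable backups," the sketch below copies a critical data file to a backup location and verifies a checksum, so a recovery plan rests on backups known to be intact. The paths are hypothetical and a real plan would also cover encryption, off-site storage, and restore drills.

```python
# Minimal sketch (hypothetical paths): copy a critical data file to a backup
# directory and confirm the copy matches byte-for-byte via SHA-256.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large files don't load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_and_verify(source: Path, backup_dir: Path) -> Path:
    """Copy `source` into `backup_dir` and raise if the copy fails verification."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    destination = backup_dir / source.name
    shutil.copy2(source, destination)
    if sha256_of(source) != sha256_of(destination):
        raise RuntimeError(f"Backup of {source} failed integrity check")
    return destination

# Hypothetical usage as part of a scheduled recovery drill:
# backup_and_verify(Path("data/operational.db"), Path("/mnt/backups/nightly"))
```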
Define and Live by Our Values
As more industries become increasingly automated and autonomous, there needs to be a higher standard than statistical accuracy. AI will be empowered to make an increasing number of decisions that may come into conflict with our ethics and values. For example, machines may be making decisions on a defendant’s innocence or guilt. They may decide who will be impacted most in a car accident. They may be making important medical decisions for us in healthcare.
It's critical that AI systems know when to defer to human ethics rather than rely on a purely statistical decision. This requires societies to define and live by their values. Getting those decisions right will ensure that our AI is a tool that works on our behalf, and not the other way around. One simple way such a rule can be expressed in software is sketched below.
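As a minimal sketch of deferring to human judgment, the snippet below routes a model's output to a human reviewer whenever the case falls in a domain defined as ethically sensitive or the model's confidence is below a policy threshold. The domain list and threshold are assumptions standing in for values a society or organization would have to define for itself.

```python
# Minimal sketch (hypothetical domain names and threshold): escalate to a human
# whenever a case is ethically sensitive or the model is not confident enough,
# rather than acting on statistics alone.
from dataclasses import dataclass

SENSITIVE_DOMAINS = {"criminal_justice", "medical_treatment"}  # assumed policy list
CONFIDENCE_THRESHOLD = 0.95  # assumed threshold set by policy, not by the model

@dataclass
class Decision:
    action: str   # "automate" or "escalate_to_human"
    reason: str

def decide(domain: str, model_confidence: float) -> Decision:
    """Defer to human judgment unless the case is low-stakes and high-confidence."""
    if domain in SENSITIVE_DOMAINS:
        return Decision("escalate_to_human", f"{domain} is an ethically sensitive domain")
    if model_confidence < CONFIDENCE_THRESHOLD:
        return Decision("escalate_to_human", f"confidence {model_confidence:.2f} is below threshold")
    return Decision("automate", "low-stakes case with high confidence")

print(decide("criminal_justice", 0.99))    # escalates regardless of confidence
print(decide("loan_pre_screening", 0.80))  # escalates on low confidence
print(decide("loan_pre_screening", 0.97))  # automated
```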
AI Is Here to Stay
AI will play an increasingly large role in a number of existing and emerging industries, automating roles that currently require human intervention. As that happens, it's critical to keep humans' best interests in mind. By following the steps above, we can create AI systems that are ethical and safe.