1 Day 0 - The Big Picture Questions

1.1 “When Artificial Intelligence Unintentionally Learns Non-Compliant Behavior”

You are the Chief Compliance and Ethics Officer at the mid-sized company M3 (Match-Making-Madness), which has been in business for about ten years. Your company connects freelancers with potential clients and vice versa. Freelancers and potential clients needing solutions each sign up (for a fee) on your app and identify areas of expertise, hourly rate, project description, budget, etc. Your marketing materials note “the personal touch” of your company in vetting users of your app and also note that all suggested matches, while initially suggested by a computer algorithm, are reviewed and then picked by a person to be sure they are a good fit. There are plenty of other companies like yours, but none that offer such a boutique, high-end personalized service.

Recently, you’ve come to discover that the IT department has been “experimenting” with a new machine-learning-based match-making algorithm that they have trained on all of the prior matches from the past ten years, in effect predicting what the human matchmakers will ultimately decide in order to make things more “efficient.”

Unbeknownst to the IT department (or at least to those involved in the new algorithm), a year and a half ago M3 had to fire and replace many of the human “matchmakers” because they had been matching their freelancing friends and acquaintances with clients, ignoring the more objective suggestions of the computer algorithm. A set of valuable clients that had received sub-standard freelancing work because of this problem came close to litigation. Thanks to your stellar attorneys (and your negotiation skills!), they calmed down when you reiterated your commitment to address the issue by more carefully vetting and training your human matchmakers.

After you informed the IT department of this history, they disclosed that six months ago they had already replaced the prior algorithm with their new one, assuming it was fine because a human matchmaker was still making the final decision. They also told you that, until about one minute ago, they had been really proud that their new machine learning algorithm’s suggestions had been approved by the human matchmakers nearly 99.9% of the time over the last six months.

After you take a deep breath:

(1) What would you do immediately?

(2) Who should be informed and how?

(3) What internal safeguards and processes would you put in place going forward?