What’s a black box?
Black box AI isn’t as complex as some experts make it out to be. Imagine you have 1,000,000 different spices and 1,000,000 different herbs, and you only have a couple of hours to crack Kentucky Fried Chicken’s secret recipe. You’re pretty sure you have all the ingredients, but you’re not sure which eleven herbs and spices you should use. You don’t have time to guess, and it would take billions of years or more to manually try every combination. This problem can’t realistically be solved by brute force, at least not in any ordinary kitchen.

But imagine you had a magic chicken fryer that did all the work for you in seconds. You could pour all your ingredients into it and then give it a piece of KFC chicken to compare against. Since a chicken fryer can’t “taste” chicken, it would rely on your taste buds to confirm whether it had managed to recreate the Colonel’s chicken. It spits out a drumstick, you take a bite, and you tell the fryer whether the piece you’re eating now tastes more or less like KFC’s than the last one you tried. The fryer goes back to work, trying combination after combination, until it gets the recipe right and you tell it to stop.

That’s basically how black box AI works. You have no idea how the magic fryer came up with the recipe – maybe it used 5 herbs and 6 spices, maybe it used 32 herbs and 0 spices – but it doesn’t matter. All we care about is using AI to do something humans could do, but much faster.
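If you want to see the shape of that feedback loop in code, here’s a rough Python sketch. Everything in it is hypothetical: the ingredient pool is shrunk to a thousand items so it runs in seconds, taste_score() is a made-up stand-in for your taste buds, and the “swap one ingredient, keep whatever tastes better” strategy is just one crude way a black box might search. The point is what the program never produces: an explanation of why the final recipe works.

```python
import random

# Hypothetical demo pool: far smaller than the 2,000,000 herbs and spices
# in the analogy, so the loop finishes quickly.
INGREDIENTS = [f"ingredient_{i}" for i in range(1_000)]
SECRET_RECIPE = set(random.sample(INGREDIENTS, 11))  # the Colonel's eleven

def taste_score(candidate):
    """Stand-in for the human taster: how many ingredients match the secret.
    The fryer never sees this logic; it only hears 'better' or 'worse'."""
    return len(candidate & SECRET_RECIPE)

def magic_fryer(rounds=50_000):
    """Blind search: swap one ingredient at a time, keep whatever tastes closer."""
    best = set(random.sample(INGREDIENTS, 11))
    best_score = taste_score(best)
    for _ in range(rounds):
        candidate = set(best)
        candidate.remove(random.choice(sorted(candidate)))  # drop one ingredient
        candidate.add(random.choice(INGREDIENTS))           # try another at random
        score = taste_score(candidate)
        if len(candidate) == 11 and score > best_score:     # "tastes more like KFC"
            best, best_score = candidate, score
    return best

recipe = magic_fryer()
print(f"Matched {taste_score(recipe)} of 11 secret ingredients")
# We end up with a working recipe, but no explanation of why these ingredients.
```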
The downside of transparency
This is fine when we’re using black box AI to determine whether something is a hotdog or not, or when Instagram uses it to determine whether you’re about to post something that might be offensive. It’s not fine when we can’t explain why an AI sentenced a black man with no priors to more time than a white man with a criminal history for the same offense.

The answer is transparency. If there is no black box, then we can tell where things went wrong. If our AI sentences black people to longer prison terms than white people because it’s over-reliant on external sentencing guidance, we can point to that problem and fix it in the system. As legal expert Andrew Burt recently wrote in Harvard Business Review:

The AI gold rush of the 2010s led to a Wild West situation in which companies can package their AI any way they want, call it whatever they want, and sell it in the wild without regulation or oversight. Companies that have made millions or billions selling products and services related to biased, black box AI have managed to entrench themselves in the same position as the health insurance and fossil fuel industries: their very existence is threatened by the idea that they might be regulated to keep them from harming the greater good.
Can we regulate?
Simply put: No. Even if we develop fully transparent algorithms, the lawyers will make sure we never learn any more about why a commercial system is biased than we would if these systems remained in black boxes. As Axios’ Kaveh Waddell recently wrote:

We also can’t rely on businesses themselves to end the practice. Our push to eliminate black box systems simply means companies can’t “blame the algorithm” anymore, so they’ll hide their work entirely. With transparent AI, we’ll get opaque developers. Instead of choosing not to develop dual-use or potentially dangerous AI, they’ll simply lawyer up. As Burt puts it in his Harvard Business Review article:

When things go wrong and AI runs amok, the lawyers will be there to tell us the most company-friendly version of what happened. Most importantly, they’ll protect companies from having to share how their AI systems work. We’re trading a technical black box for a legal one. Somehow, this seems even more unfair.