
Towards A More Transparent AI


One cornerstone of making AI work is machine learning – the ability for machines to learn from experience and data, and to improve over time as they learn. In fact, it’s been the explosion in research and application of machine learning that’s made AI the hotbed of interest, investment, and application that it is today. Fundamentally, machine learning is all about giving machines lots of data to learn from, and using sophisticated algorithms that can generalize from that learning to data the machine has never seen before. In this manner, the machine learning algorithm is the recipe that teaches the machine how to learn, and the machine learning model is the output of that learning that can then generalize to new data.
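To make the distinction between the algorithm (the recipe) and the model (the learned output) concrete, here is a minimal sketch using scikit-learn; the dataset and algorithm choice are illustrative assumptions, not anything prescribed by a particular product or vendor.

```python
# Minimal sketch: the *algorithm* is the learning recipe; the *model* is
# what that recipe produces after seeing data, and the model is the thing
# we expect to generalize to data it has never seen before.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_unseen, y_train, y_unseen = train_test_split(X, y, random_state=0)

algorithm = LogisticRegression(max_iter=1000)  # the recipe for learning
model = algorithm.fit(X_train, y_train)        # the output of that learning

# The model is judged on data it never saw during training.
print("accuracy on unseen data:", model.score(X_unseen, y_unseen))
```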

Regardless of the algorithm used to create the machine learning model, there is one fundamental truth: the machine learning model is only as good as its data. Bad data results in bad models. In many cases, these bad models are easy to spot because they perform poorly. For example, if you built a machine learning model to identify cats in images and it identifies butterflies as cats or fails to spot obvious cats, we know that there’s something wrong with the model.
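A badly performing classifier of this kind usually shows up in something as simple as a confusion matrix over held-out examples. The labels below are hypothetical stand-ins for a cat detector’s output, not real model results.

```python
# Hypothetical cat / not-cat predictions from a poor model, compared against
# the true labels. The off-diagonal counts make the problem obvious.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = ["cat", "cat", "cat", "not_cat", "not_cat", "not_cat"]
y_pred = ["cat", "not_cat", "not_cat", "cat", "not_cat", "cat"]  # a bad model

print(confusion_matrix(y_true, y_pred, labels=["cat", "not_cat"]))
print("accuracy:", accuracy_score(y_true, y_pred))
```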

There’s many reasons why a model could perform poorly. The input data could be riddled with errors or poorly cleansed. The various settings and configurations for the model (“hyperparameters”) could be set improperly yielding substandard results. Or perhaps the data scientists and ML engineers that trained the model selected a subset of available data that had some sort of inherent bias in it, resulting in skewed model results. Maybe the model just wasn’t trained enough, or had issues of overfitting or underfitting resulting in poor results. Indeed, there are actually many ways that a resulting model could be substandard.

What if, instead of a cat classification model, we’ve built a facial recognition model, and we’re using it for security purposes? If the model misidentifies individuals, is the fault an improper configuration of the model, a poorly trained model, bad input data, or perhaps a biased data set selected to train the model in the first place? If we are supposed to depend on this model, how can we trust it, knowing that there are so many ways for it to fail?

The Problem of Transparency

In a typical application development project, we have quality assurance (QA) and testing processes, tools, and technologies that can quickly spot bugs or deviations from established programming norms. We can run our applications through regression tests to make sure that new patches and fixes don’t cause more problems, and we have ways to continuously test our capabilities as we continuously integrate them with increasingly complex combinations of systems and application functionality.
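For contrast, this is what a conventional regression test looks like: the expected behavior is pinned down exactly and re-checked after every change. The function and assertions below are hypothetical, shown only to illustrate the kind of determinism ordinary code allows.

```python
# A hypothetical function with an exact specification, plus a regression
# test that can be re-run after every patch to catch new breakage.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0

if __name__ == "__main__":
    test_apply_discount()
    print("regression tests passed")
```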

But here is where we run into some difficulties with machine learning models. They’re not code per se, so we can’t just examine them to see where the bugs are. If we knew how the learning was supposed to work in the first place, well, then we wouldn’t need to train them with data, would we? We’d just code the model from scratch and be done with it. But that’s not how machine learning models work. We derive the functionality of the model from the data, using algorithms that attempt to build the most accurate model we can from the data we have, so that it generalizes to data the system has never seen before. We are approximating, and when we approximate we can never be exact. So we can’t just bug-fix our way to the right model. We can only iterate. Iterate with better data. Better hyperparameter configurations. Better algorithms. More horsepower. Those are the tools we have.
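In practice, that iteration often takes the shape of a search over hyperparameters rather than a bug fix. A minimal sketch, assuming scikit-learn and synthetic data:

```python
# Iterating on a model by searching over hyperparameters with
# cross-validation, instead of "debugging" the model directly.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [3, None]},
    cv=5,
)
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
print("best cross-validated accuracy:", round(search.best_score_, 3))
```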

If you’re the builder of the model, then you have those tools at your disposal to make your models better. But what if you’re the user or consumer of that model? What if that model that you are using is performing poorly? You don’t have as much control over rebuilding that model to begin with. However, even more significantly, you don’t know why that model is performing poorly. Was it trained with bad data? Did the data scientists pick a selective or biased data set that doesn’t match your reality? Were the wrong hyperparameter settings chosen that might work well for the developers but not for you? 

The problem is that you don’t know the answers to any of those questions. There is no transparency. As a model consumer, you just have the model: use it or lose it. Your choice is to accept the model or go ahead and build your own. As the market shifts from model builders to model consumers, this is increasingly an unacceptable answer. The market needs more visibility and more transparency to be able to trust models that others are building. Should you trust the model that the cloud provider is offering? What about the model embedded in the tool you depend on? What visibility do you have into how the model was put together and how it will be iterated? The answer right now: little to none.

Problem #1: Unexplainable Algorithms

One issue that has been well acknowledged is that some machine learning algorithms are unexplainable. That is to say, when the model has made a decision or come to some conclusion, such as a classification or a regression, there is little visibility into how it came to that conclusion. Deep learning neural networks, the current celebutante of machine learning, particularly suffer from this problem. For example, when an image recognition model recognizes a turtle as a rifle, why does the model do so? We don’t actually know, and therefore those looking to take advantage of model weaknesses can exploit such vulnerabilities, throwing these deep learning nets for a loop. While there are numerous efforts underway to add elements of explainability to deep learning algorithms, we are not yet at the point where such approaches are seeing widespread adoption.
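One family of those explainability efforts is model-agnostic probing: perturbing the inputs of a black-box model and watching how its output changes. The sketch below uses permutation importance on a small neural network as an illustration of the idea; it is not specific to deep learning, and the data and model here are assumptions made purely for the example.

```python
# Model-agnostic probing: shuffle one input feature at a time and measure
# how much the black-box model's score drops. Features whose shuffling
# hurts the most are the ones the model leans on.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X, y)

result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```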

Not all machine learning algorithms suffer from the same explainability issues. Decision trees are by their nature explainable, although when they are used in ensemble methods such as Random Forests, we lose elements of that explainability. So the first question any model consumer should ask is how explainable is the algorithm that was used to…
