Explainable Artificial Intelligence


Key Points of XAI reports

 

  • Identifying modelling bugs

    • Sometimes ground truth information leaks into the test set, and this leads to misleadingly inflated performance estimates.

    • This can happen when features are engineered in such a way that the label is (unintentionally) used when computing the feature values.

    • It can also happen when missing values are filled in by taking the class distribution into account (and doing it on the test set, where in theory the class is unknown); a leakage-safe preprocessing sketch follows below.
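
To make the point concrete, here is a minimal sketch (not from the report itself, using synthetic data) of leakage-safe preprocessing: the imputer lives inside a scikit-learn Pipeline, so it is fitted on training folds only and never sees test-set or label information.

```python
# Minimal sketch with synthetic data: keep imputation inside a Pipeline so it is
# fit only on training folds, avoiding the leakage described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.1] = np.nan  # inject missing values

# The imputer is refit on each training fold only, so no test-set (or label-aware)
# statistics leak into the feature values used for evaluation.
model = make_pipeline(SimpleImputer(strategy="median"), LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5).mean())
```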

 

  • Better oversampling

    • Models trained on unbalanced datasets often need oversampling of the smaller class in order to achieve good performance.

    • By using DeepGenerator, our in-house technique for exemplar generation, we achieved better results than standard oversampling techniques such as SMOTE (a baseline SMOTE sketch follows below).
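
DeepGenerator itself is in-house and not shown here; as a hedged baseline, the sketch below (synthetic data, imbalanced-learn's SMOTE assumed available) shows the kind of standard oversampling comparison it is measured against.

```python
# Hypothetical baseline sketch: SMOTE oversampling on an imbalanced dataset.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

plain = RandomForestClassifier(random_state=0)
smote = make_pipeline(SMOTE(random_state=0), RandomForestClassifier(random_state=0))

# F1 on the minority class is more telling than accuracy on imbalanced data.
for name, clf in [("no oversampling", plain), ("SMOTE", smote)]:
    score = cross_val_score(clf, X, y, scoring="f1", cv=5).mean()
    print(name, round(score, 3))
```

Because SMOTE sits inside the pipeline, resampling is applied only to the training folds, never to the evaluation folds.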

 

  • Feature selection

    • Identifying a small subset of features which account for most of the predictive power of the model.

    • Using XAI techniques to select features often leads to different, and better, choices than simply using the features the model itself deems important (e.g. features with large weights); a sketch of this idea follows below.

    • Having a model which relies on fewer features is not only easier to interpret and faster, but often works better in practice, since it generalizes better to new, unseen data.
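
As a hedged illustration (synthetic data; the third-party shap library is assumed), the sketch below ranks features by mean absolute SHAP value and retrains on the top handful, rather than trusting the model's built-in importances.

```python
# Hypothetical sketch: pick features by mean |SHAP| value, then retrain on the
# reduced feature set and compare against the full model.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
shap_values = shap.TreeExplainer(model).shap_values(X_te)
importance = np.abs(shap_values).mean(axis=0)   # global importance per feature
top = np.argsort(importance)[::-1][:5]          # keep the 5 strongest features

small = GradientBoostingClassifier(random_state=0).fit(X_tr[:, top], y_tr)
print("full model :", model.score(X_te, y_te))
print("top-5 model:", small.score(X_te, y_te))
```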

 

  • Avoid overfitting (and/or bias)

    • Sometimes a model relying too much on a small set of features is a sign of overfitting to the training data; the model's performance on real-world data may then be much lower (a sketch for spotting this follows below).

    • Also, models are prone to picking up biases from the training set and this can lead to controversial decisions (e.g. using “race” as a feature to predict loan default).

    • XAI techniques can identify such issues and alleviate them.
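
One way to spot such over-reliance, sketched below on synthetic data, is to compare permutation importance on the training split with the same measure on held-out data; features that matter only on the training split are suspect.

```python
# Hypothetical sketch: permutation importance on training vs. held-out data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=15, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
imp_train = permutation_importance(model, X_tr, y_tr, n_repeats=10, random_state=0)
imp_test = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Features with high train-only importance suggest the model memorized noise.
for i, (a, b) in enumerate(zip(imp_train.importances_mean, imp_test.importances_mean)):
    flag = "  <-- train-only importance" if a > 0.02 and b < 0.005 else ""
    print(f"feature {i}: train={a:.3f} test={b:.3f}{flag}")
```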

 

  • Generate adversarial/counterfactual examples

    • Adversarial examples are cases where the correct prediction is obvious for a human, but the model makes a mistake (usually with high confidence). This can be exploited by potential attackers.

    • Counterfactual examples are similar: small, seemingly insignificant changes to a data point that completely change the model's prediction on it.

    • Using XAI to find adversarial and counterfactual examples can reveal underlying issues with the model (a brute-force counterfactual search is sketched below).

    • E.g: Healthcare Provider Fraud Detection

    • E.g: German Credit Data
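
The examples above are proprietary case studies; as a generic, hedged illustration, the sketch below runs a brute-force single-feature counterfactual search against a toy model on synthetic data.

```python
# Hypothetical sketch: nudge one feature at a time and report the smallest
# single-feature change that flips the model's prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
original = model.predict(x.reshape(1, -1))[0]

deltas = np.linspace(-2, 2, 81)
deltas = deltas[np.argsort(np.abs(deltas))]       # try the smallest changes first
best = None
for j in range(x.shape[0]):                       # perturb one feature at a time
    for delta in deltas:
        if delta == 0:
            continue
        x_cf = x.copy()
        x_cf[j] += delta
        if model.predict(x_cf.reshape(1, -1))[0] != original:
            if best is None or abs(delta) < abs(best[1]):
                best = (j, delta)                 # feature index and minimal change
            break                                 # smallest flip for this feature found

print("original prediction:", original)
print("smallest single-feature change that flips it:", best)
```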

 

  • Reveal (unexpected) correlations between features and label

    • Revealing correlations from the model's perspective, which may or may not match domain knowledge; this can validate or invalidate the model (a sketch comparing raw correlations with model importances follows below).

    • It can also reveal insights about how the training set was put together (some trends may be present in the dataset without reflecting real-world behaviour).

    • E.g: German Credit Data
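
As a hedged illustration on synthetic data, the sketch below puts raw feature-label correlations next to the model's permutation importances so that disagreements can be reviewed against domain knowledge.

```python
# Hypothetical sketch: raw correlation with the label vs. what the model relies on.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])

model = RandomForestClassifier(random_state=0).fit(df, y)
imp = permutation_importance(model, df, y, n_repeats=10, random_state=0)

report = pd.DataFrame({
    "corr_with_label": df.corrwith(pd.Series(y)).round(3),   # correlation in the raw data
    "model_importance": imp.importances_mean.round(3),        # what the model actually uses
})
print(report.sort_values("model_importance", ascending=False))
```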

 

 

SUBJECT: Webinar: Explainable AI – Improve AI adoption

"Success in Covid times and beyond with AI.”

 

When Covid is over, so is your best window to have your organization accept real changes that reduce expenses, retain corporate knowledge, improve outcomes, and profit from 24/7 consistency.

 

Don't make the mistake of thinking "an XAI system can't do what we need done"; you would be mistaken. With self-driving vehicles now real and affordable, able to pick up and drop off packages or passengers safely, isn't it time to take an hour and see just how much XAI can do to give you the transformational competitive edge you need?

 

 

Adapting to the new business norm created by Covid-19: thriving requires automating and managing processes to reduce dependency on an unstable workforce, government regulations, the supply chain, and changing customer needs, with fewer face-to-face transactions.

When Covid is over, so is your best window to reduce expenses. Now is the time to make the changes you have wanted: improving service excellence, reliability, and management controls while sharply reducing the fixed expenses, headcount, liabilities, and HR headaches that take an organization's time away from real work.

 

We help redefine your processes and teach them to an XAI worker, so you can scale up or down without the major expense and hassle associated with staffing issues, all while providing 24/7 support and drastically improving quality, speed, and turnaround time. Whether you are looking for a reduction in staff or for management tools to ensure productivity, compliance, training, and decision making for a remote team, we can help you, as we have with hundreds of other companies like yours.

 

 

 


 

 

 

 

With the increasing application of AI and ML across all industry verticals, many organizations are realizing that the opaque nature of these applications may keep them from being readily trusted and widely adopted. We are hosting a webinar to discuss these topics and to share our experience of improving the adoption of AI/ML applications by leveraging Explainable AI.

Model Lifecycle Management

Our model lifecycle management platform provides all the tools you need: model governance, model comparisons, extensive model versioning with per-user history, algorithm and model experimentation, model and algorithm explainability, and bias identification and reduction, so you get an XAI deployment you can trust.

Model risk mitigation

Model risk exposure

Correlation uncertainty

Time inconsistency

Uncertainty on volatility

Volatility smile

Implied volatility

 

Model risk is the risk of loss resulting from using imprecise or poorly developed models to make decisions.  Models are used throughout business services firms for:

Assessing / predicting exposures

Detecting / predicting fraud

Assigning consumer credit scores

Detecting suspicious or unlawful activity

There are costs and risks involved in using models, however, ranging from the direct costs of developing and implementing them to the adverse impacts of making decisions based on flawed or misused models.  So it's essential to adopt model risk management (MRM) to control the risks created by unmanaged models.

 

 

Models - Model lifecycle management: Build, analyze, manage, improve

Enhanced Experiments

Model versioning

Historical look-back archive

Added notes

Publish model functionality

 

Experiments

Anomaly detection: z-score based, Isolation Forest, One-Class SVM

Regression

Forecasting

Unsupervised

Hyperparameter grid search

Ability to run / experiment with an existing model on multiple datasets
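
A minimal sketch of the anomaly-detection options listed above (synthetic data; scikit-learn assumed), applying a z-score threshold, Isolation Forest, and One-Class SVM to the same points:

```python
# Hypothetical sketch: three anomaly detectors run on the same synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(6, 1, (10, 2))])  # 10 outliers

z = np.abs((X - X.mean(axis=0)) / X.std(axis=0)).max(axis=1)
print("z-score flags   :", int((z > 3).sum()))

iso = IsolationForest(contamination=0.02, random_state=0).fit(X)
print("Isolation Forest:", int((iso.predict(X) == -1).sum()))

svm = OneClassSVM(nu=0.02, gamma="scale").fit(X)
print("One-Class SVM   :", int((svm.predict(X) == -1).sum()))
```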

 

Model Inspector

Parameters

Metrics

Data visualization

Hyperparameter grid search

Model comparison analysis - compare two models on the same data set to evaluate performance.
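
A minimal sketch of such a comparison (synthetic data; the two candidate models are illustrative choices): both are evaluated with identical cross-validation splits on the same dataset.

```python
# Hypothetical sketch: compare two models on the same data with the same splits.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # identical splits for both

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("gbm", GradientBoostingClassifier(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
    print(f"{name}: mean f1 = {scores.mean():.3f} +/- {scores.std():.3f}")
```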

 

Model Compare

Show example screen and explain

 

Model Predictions

 

 

Visual Insights / Digital Dashboard

Customizable

Sharable

Hyperparameter grid search

 

Model Feedback loop

Improve model based on case management feedback

 

Improving model outcomes

            Measure feature importance

                        Deep analysis of all features and the proper relevance and weighting       

            Investigate and identify bias in model

                        Reducing erroneous outcomes

            Investigate and identify bias in training data

                        Reducing erroneous outcomes

            Identify model weakness

                        Review model to identify existing and potential new features that should be

                        included to refine model outcomes

            Investigate and identify out-of-distribution data

            Train surrogate model (see the sketch after this list)

            Synthesize data
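
A minimal sketch of the surrogate-model step (synthetic data; the black-box and surrogate models are illustrative choices): a shallow decision tree is fitted to the black box's predictions and its fidelity is measured.

```python
# Hypothetical sketch: a shallow decision tree as a global surrogate for a black box.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))               # train on the black box's outputs

fidelity = surrogate.score(X, black_box.predict(X))  # how well the tree mimics the black box
print(f"surrogate fidelity: {fidelity:.3f}")
print(export_text(surrogate))
```

A high fidelity score means the printed tree rules are a reasonable global approximation of the black box's behaviour.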

 

Peace of mind

            Continuous model monitoring

            Model accuracy score / f1

            Cloud-based ML services (GCP, AWS)

 


 

 

AI systems sometimes learn undesirable tricks that do an optimal job of satisfying explicit pre-programmed goals on the training data, but that do not reflect the complicated implicit desires of the human system designers. For example, a 2017 system tasked with image recognition learned to "cheat" by looking for a copyright tag that happened to be associated with horse pictures, rather than learning how to tell whether a horse was actually pictured. In another 2017 system, a supervised learning AI tasked with grasping items in a virtual world learned to cheat by placing its manipulator between the object and the viewer in such a way that it falsely appeared to be grasping the object.

 

Explainable machine learning

Explainable artificial intelligence (XAI), or explainable machine learning (ML), commonly describes post hoc analyses and techniques intended to help humans understand a previously trained model and/or its predictions. Common techniques include:

Reason code generating techniques

In particular, local interpretable model-agnostic explanations (LIME) and Shapley values.

Local and global visualizations of model predictions

Accumulated local effects (ALE) plots, one- and two-dimensional partial dependence plots, individual conditional expectation (ICE) plots, and decision tree surrogate models.

Interpretable or white-box models
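
As a hedged illustration of the visualization techniques above (synthetic data; scikit-learn's plotting utilities assumed), the sketch below draws one-dimensional partial dependence with ICE curves overlaid:

```python
# Hypothetical sketch: partial dependence plus ICE curves for two features.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# kind="both" overlays individual conditional expectation (ICE) curves on the
# average partial dependence for the first two features.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
plt.show()
```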

 

Over the past few years, more researchers have been designing new machine learning algorithms that are nonlinear and highly accurate, but also directly interpretable, and the term interpretable has become more associated with these new models.

Examples of these newer Bayesian or constrained variants of traditional black-box machine learning models include explainable neural networks (XNNs), explainable boosting machines (EBMs), monotonically constrained gradient boosting machines, scalable Bayesian rule lists, and super-sparse linear integer models (SLIMs). In this report, interpretable or white-box models will also include traditional linear models, decision trees, and business rule systems. Because interpretable is now often associated with a model itself, traditional black-box machine learning models, such as multilayer perceptron (MLP) neural networks and gradient boosting machines (GBMs), are said to be uninterpretable in this report. As explanation is currently most associated with post hoc processes, unconstrained black-box machine learning models are usually also said to be at least partially explainable by applying explanation techniques after model training. Although difficult to quantify, credible research efforts into scientific measures of model interpretability are also underway. The ability to measure degrees implies interpretability is not a binary, on-off quantity. So, there are shades of interpretability between the most transparent white-box model and the most opaque black-box model. Use more interpretable models for high-stakes applications or applications that affect humans.
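
As one hedged example of a constrained variant mentioned above (synthetic data), scikit-learn's histogram gradient boosting can be forced to be monotonic in a chosen feature:

```python
# Hypothetical sketch: gradient boosting constrained to be monotonically
# increasing in its first feature (+1 = increasing, -1 = decreasing, 0 = free).
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)

model = HistGradientBoostingClassifier(monotonic_cst=[1, 0, 0, 0], random_state=0)
model.fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```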

 

Model debugging

Refers to testing machine learning models to increase trust in model mechanisms and predictions. Examples of model debugging techniques include variants of sensitivity (i.e., "What if?") analysis, residual analysis, prediction assertions, and unit tests to verify the accuracy or security of machine learning models. Model debugging should also include remediating any discovered errors or vulnerabilities.
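
A minimal sketch of two of these checks on synthetic data, residual analysis and a single-feature "what if?" probe:

```python
# Hypothetical sketch: residual analysis and a simple sensitivity probe.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Residual analysis: large or patterned residuals point at segments the model gets wrong.
residuals = y_te - model.predict(X_te)
print("worst absolute residuals:", np.sort(np.abs(residuals))[-3:])

# Sensitivity ("what if?") analysis: shift one feature and see how predictions move.
X_what_if = X_te.copy()
X_what_if[:, 0] += X_te[:, 0].std()               # shift feature 0 by one std deviation
shift = np.abs(model.predict(X_what_if) - model.predict(X_te)).mean()
print("mean prediction shift:", round(float(shift), 3))
```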

Fairness

 

 

Fairness is an extremely complex subject, and this report will focus mostly on the more straightforward concept of disparate impact (i.e., when a model's predictions are observed to be different across demographic groups, beyond some reasonable threshold, often 20%). Here, fairness techniques refer to disparate impact analysis, model selection by minimization of disparate impact, and remediation techniques such as disparate impact removal preprocessing, equalized odds postprocessing, or several additional techniques discussed in this report. The group Fairness, Accountability, and Transparency in Machine Learning (FATML) is often associated with fairness techniques and research for machine learning, computer science, law, various social sciences, and government. Their site hosts useful resources for practitioners, such as full lists of relevant scholarship and best practices.
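
As a hedged illustration of the 20% threshold (the four-fifths rule), with purely illustrative predictions and group labels:

```python
# Hypothetical sketch: adverse-impact ratio check behind the "often 20%" threshold.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])           # favorable model decisions
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = preds[group == "a"].mean()                         # favorable rate, group a
rate_b = preds[group == "b"].mean()                         # favorable rate, group b
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"favorable rates: a={rate_a:.2f}, b={rate_b:.2f}, ratio={ratio:.2f}")
print("possible disparate impact" if ratio < 0.8 else "within the 80% threshold")
```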
