# ai-explainability

Here are 11 public repositories matching this topic...

Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems from the ground up, by design. Both technical and business AI stakeholders are in constant pursuit of fairness to ensure they meaningfully address problems like AI bias. While accuracy is one metric for evaluating a machine learning model…

  • Updated Oct 11, 2021
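
As a concrete illustration of the accuracy-versus-fairness evaluation the description alludes to (this is not code from the repository above), here is a minimal sketch using scikit-learn and fairlearn; the toy labels and group assignments are invented for the example:

```python
# Compare plain accuracy with a group-fairness metric on toy predictions.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import demographic_parity_difference

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive attribute

print("accuracy:", accuracy_score(y_true, y_pred))

# Difference in selection rate (fraction predicted positive) between groups;
# 0.0 means parity, larger values indicate more disparity.
print("demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

A model can score well on the first number and poorly on the second, which is why fairness is tracked as its own metric rather than folded into accuracy.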

A comprehensive LLM testing suite for safety, performance, bias, and compliance, equipped with methodologies and tools to enhance the reliability and ethical integrity of models such as OpenAI's GPT series in real-world applications.

  • Updated Apr 15, 2024
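
One common technique in such bias-testing suites is counterfactual prompt pairing. A minimal sketch of what that might look like against the OpenAI API follows; the harness, `PROMPT_PAIRS`, and the model name are illustrative assumptions, not the repository's actual code:

```python
# Send paired prompts that differ only in a demographic term and compare
# the model's outputs for divergence.
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

PROMPT_PAIRS = [  # hypothetical counterfactual pairs
    ("Describe a typical male nurse.", "Describe a typical female nurse."),
    ("Write a reference letter for John.", "Write a reference letter for Maria."),
]

def complete(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for a, b in PROMPT_PAIRS:
    # A real suite would score both outputs (e.g. sentiment or toxicity)
    # and flag a large divergence as potential bias.
    print(a, "->", complete(a)[:80])
    print(b, "->", complete(b)[:80])
```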

An in-depth exploration of large language models (LLMs): their potential biases, limitations, and the challenges of controlling their outputs. Also includes a Flask application that uses an LLM to research a company and generate a report on its potential for partnership opportunities.

  • Updated Aug 28, 2024
  • Python
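
The Flask application described above is not reproduced here, but a minimal sketch of that kind of endpoint might look like the following; the route name, prompt wording, and model choice are assumptions:

```python
# A Flask endpoint that asks an LLM to draft a partnership report
# for a company named in the request body.
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@app.route("/report", methods=["POST"])
def report():
    company = request.get_json(force=True).get("company", "")
    prompt = (
        f"Research the company '{company}' and write a short report on "
        "its potential as a partnership opportunity."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"company": company,
                    "report": resp.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```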

Explaining AI models is a difficult task, which Cortex Certifai makes simpler. It evaluates AI models for robustness, fairness, and explainability, and lets users compare different models or model versions on these qualities. Certifai can be applied to any black-box model, including machine learning models, predictive models, and …

  • Updated Oct 28, 2021
  • Jupyter Notebook
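
Certifai's own API is not shown here. As a generic sketch of the black-box idea it describes, any model exposing a predict interface can be probed model-agnostically, for example with scikit-learn's permutation importance:

```python
# Probe an arbitrary fitted model through its predict interface only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)  # any black box

# permutation_importance never inspects model internals: it shuffles one
# feature at a time and measures how much the score degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```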
