FAIRLY Product Features A to Z

AI

What is an AI?
AI, short for "AI system", is an abstraction used to identify a comprehensive intelligent system that serves functional or business purposes within an organization, function, or business unit. An AI system can contain multiple user roles, reports, controls, models, and datasets.

Who should create an AI?
An AI Owner should create an AI. In most organizations, the product owner or project owner should be the AI Owner.

Who has access to an AI?
The user who created the AI will be assigned the role of AI Owner by default and will have access to the AI they create.

An administrator will be able to access any AI created by any AI Owner in the same organization.

A member in the same organization will be able to see an AI only if they are the AI owner themselves or if they have been assigned to the AI explicitly by the AI owner.

How to create an AI?
Please see Step 1: Create an AI in the Quick Start Guide.

AI Center

What is AI Center?
AI Center is an inventory of AIs created by AI Owners. AI Center is located under the Compliance section of the top navigation.

When you click on an AI in the AI Center, you will be brought to the AI Dashboard for the AI selected.

Why is AI Center important?
Most organizations struggle to track where AIs are being used across the organization. Having an inventory of AIs is the first step in effectively managing AI risk.

AI Dashboard

What is AI Dashboard?
AI Dashboard is an inventory of roles, reports, controls, models and datasets belonging to a specific AI. Select an AI in the AI Center to access the AI Dashboard.

AI Readiness Planner

What is AI Readiness Planner?
To bring Artificial Intelligence products into production on time with industry-leading quality standards, you must set organization-wide procedures to ensure good governance, controls, transparency, and corrective action. This begins with planning for success. Use the AI Readiness Planner to determine your organization’s current AI readiness for bringing AI-driven products into production.

When you go to the FAIRLY app homepage (click on the FAIRLY logo in the top navigation bar), you will see a link to the AI Readiness Planner.

AI-Based Bias Inspector

What is AI-Based Bias Inspector?
The AI-Based Bias Inspector allows you to upload a CSV file (binary format) of a model results dataset or training dataset and relate it to your existing AI. Members assigned to your AI will also be able to view this data.

See Bias Inspection section for more details.

APIs

Where can I find documentation for the FAIRLY APIs?
Please see FAIRLY Client APIs.

Asenion Client

What is Asenion Client?
FAIRLY Client (also known as Asenion Client) is a project that aims to simplify the data science experience of model development and validation reporting workflows for machine learning models. It contains Python libraries with APIs for capturing quantitative data while ensuring internal policies and external regulatory requirements are satisfied via a checklist of smart controls. This allows AI model developers to integrate AI model risk management processes directly into their development environment.

Please also see FAIRLY Client APIs.

Audit Trail

What is Audit Trail?
Audit trail provides accountability and transparency for AI model governance. Currently, the audit trail functionality captures creators and timestamps for these assets on the FAIRLY server:
AI Inventory
Model Inventory
Dataset Inventory
Report Inventory
User Role Inventory
Report Comments and Approvals

Audit trail data can be accessed from the FAIRLY app web UI or exported from the corresponding database tables and integrated directly with your reporting tools.

Bias Inspection

What is Bias Inspection?
Bias occurs when a model produces biased predictions that are categorically disadvantageous to one or more protected groups. The bias could be introduced by the training data, but also by the parameter weighting and selection introduced by humans as part of the model training process. Importantly, even when protected categories are excluded from the training process, bias can still be introduced through proxy risk.

Bias Inspector is located under the Risk section of the top navigation. It also appears as an add-on in the Report Builder section.

Why is Bias Inspection important?
Models trained on a dataset that has inherent bias will produce a biased algorithm that can amplify those biases. Therefore, it is important to examine both the dataset and the model results for bias before and after model development. Without knowledge of potential biases, the business can be exposed to ethical and legal risk.

How does FAIRLY perform Bias Inspection?
FAIRLY utilizes established industry-standard bias metrics such as disparate impact and equalized odds analysis, so you are not using a black-box proprietary algorithm to examine another black box. These industry standards include the open-source AI Fairness 360 library, and we also support the open-source Fairlearn library.

Furthermore, traditional performance metrics such as recall and AUC are displayed for the selected protected groups to determine whether the model performs better for certain groups.
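
For illustration only, the same kinds of metrics can be reproduced with the open-source Fairlearn and scikit-learn libraries. The sketch below uses hypothetical data and column names; it is not FAIRLY's internal implementation.

import pandas as pd
from fairlearn.metrics import demographic_parity_ratio, equalized_odds_difference
from sklearn.metrics import recall_score, roc_auc_score

# Hypothetical model results dataset: true labels, binary predictions, and a protected attribute.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "gender": ["F", "F", "F", "M", "M", "M", "M", "F"],
})

# Disparate impact: ratio of selection rates between groups (closer to 1.0 means less bias).
di = demographic_parity_ratio(df["y_true"], df["y_pred"], sensitive_features=df["gender"])

# Equalized odds: largest gap in true/false positive rates between groups (closer to 0 means less bias).
eo = equalized_odds_difference(df["y_true"], df["y_pred"], sensitive_features=df["gender"])

# Traditional performance metrics displayed alongside the bias metrics.
print(f"Disparate impact: {di:.2f}")
print(f"Equalized odds difference: {eo:.2f}")
print(f"Recall: {recall_score(df['y_true'], df['y_pred']):.2f}")
print(f"AUC: {roc_auc_score(df['y_true'], df['y_pred']):.2f}")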

How do I interpret the Bias Inspection output?
The bias inspection output consists of three types of charts. First, bar charts display the differential values of the performance metrics; the bar charts show no bias when the values for each bar are equal, i.e. the model treats the different protected categories equally. Second, pie charts display the demographic breakdown of the protected categories. Finally, a correlation matrix measures potential proxy risk: if any protected category has a strong correlation (> 0.5 or < -0.5) with other input variables, there is potential for proxy risk.
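
As an illustration of the proxy risk check, the same kind of correlation matrix can be computed with pandas. The file and column names below are assumptions:

import pandas as pd

# Hypothetical training dataset with a numerically encoded protected category.
df = pd.read_csv("training_data.csv")
protected = "gender_encoded"

# Correlation of the protected column with every other numeric input variable.
corr = df.corr(numeric_only=True)[protected].drop(protected)

# Flag strong correlations (> 0.5 or < -0.5) as potential proxy risk.
proxy_candidates = corr[(corr > 0.5) | (corr < -0.5)]
print("Potential proxy features:", list(proxy_candidates.index))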

How do I use Bias Inspection feature?
Please see Step 2: Bias Inspection in the Quick Start Guide.

Currently, the Bias Inspection feature only supports binary datasets. Support for non-binary datasets will be released in May 2022.

Champion Model

What is a Champion Model?
The concept of Champion Model vs. Challenger Model derives from the SR11-7 Model Risk Management framework. When an AI is being developed, model developers need to train and test multiple models and eventually choose a "winner" based on established requirements and controls. The winner in this case is named the Champion Model.

To select a Champion Model for a specific AI, select an AI in the AI Center to access the AI Dashboard. Then click on the Models panel to access the Model Inventory. Use the "Select" button under the "Champion Model?" column to select a Champion Model.

Cloud Support

Which cloud providers does FAIRLY support?
FAIRLY supports all three major cloud providers for private cloud deployments: AWS, Microsoft Azure, and Google Cloud Platform (GCP).

FAIRLY's public cloud multi-tenant and single tenant SaaS offerings are hosted on GCP.

Control

What is a Control?
A control is a set of requirements configured with a "tag" name. The requirements are applied with settings and thresholds configured for the tag.

Here is a list of the out-of-the-box controls and their descriptions in the FAIRLY Controls Library Reference Guide.

When to select a Control?
An AI Owner can optionally select one or more control bundles when creating an AI. They may need to consult their Compliance team to confirm which control bundle(s) to select.

Why are Controls important?
An AI developer should use the controls as a checklist of requirements when training models and creating Model Development Reports.

An AI Validator should use the controls to ensure policies and procedures are followed by the AI developers and that the reported values captured by the controls can be replicated and do not exceed the thresholds.

An AI Auditor should use the controls to ensure policies and procedures are followed.

Control Bundle

What is a Control Bundle?
A control bundle is a set of controls. An administrator or AI owner can create a custom control bundle or select one of the four pre-configured control bundles.

The four out-of-the-box pre-configured bundles are:

Fairness bundle contains 3 controls: AUC_SCORE, MEAN_ABSOLUTE_ERROR, MEAN_SQUARED_ERROR.

Mortgage Origination bundle contains 7 controls: RECALL_SCORE, ACCURACY_SCORE, DISPARATE_IMPACT, BIN_WIDTH, ROC_CURVE, HYPERPARAMETER_LIST, EQUALIZED_ODDS.

Option Pricing bundle contains 10 controls: RECALL_SCORE, ACCURACY_SCORE, SHARPE_RATIO, DISPARATE_IMPACT, BIN_WIDTH, ROC_CURVE, HYPERPARAMETER_LIST, MARKET_VOLATILITY, EQUALIZED_ODDS, MEAN_SQUARED_ERROR.

Risk bundle contains 3 controls: SHARPE_RATIO, MARKET_VOLATILITY, MEAN_SQUARED_ERROR.

Here is a list of the out-of-the-box controls and their descriptions in the FAIRLY Controls Library Reference Guide.

When to select a Control Bundle?
An AI Owner can optionally select one or more control bundles when creating an AI. They may need to consult their Compliance team to confirm which control bundle(s) to select.

Why are Control Bundles important?
Control bundles allow all stakeholders to have a consistent view of the established requirements based on internal policy and external regulations for an AI.

Control bundles are reusable; they allow standardization of requirements to be applied to different AIs and models across the organization.

Control bundles are configurable and can be customized without IT overhead, allowing faster response time to changing internal policies and external regulations. Please contact support@fairly.ai if you are interested in creating custom control bundles.

Control Inventory

What is Control Inventory?
Control Inventory is an inventory of controls belonging to a specific AI. Select an AI in the AI Center to access the AI Dashboard, then click on the Controls panel to access the Control Inventory.

Controls Library

What is Controls Library?
Controls Library is a collection of controls under the Governance section of the top navigation.
It provides the name, description and specification of each control.

Here is a list of the out-of-the-box controls and their descriptions in the FAIRLY Controls Library Reference Guide.

Custom Control

What is Custom Control?
A Custom Control is an out-of-the-box control with configurable custom specifications. Please contact support@fairly.ai if you are interested in this advanced feature.

See also the FAIRLY Controls Library Reference Guide.

Custom Control Bundle

What is Custom Control Bundle?
A Custom Control Bundle is a configurable control bundle containing a group of out-of-the-box controls and/or custom controls. Please contact support@fairly.ai if you are interested in this advanced feature.

See also the FAIRLY Controls Library Reference Guide.

Custom Policy Package

What is Custom Policy Package?
A Custom Policy Package is a configurable policy package containing a group of out-of-the-box controls and/or custom controls, plus out-of-the-box report templates and/or custom report templates. Please contact support@fairly.ai if you are interested in this advanced feature.

See also the Policy Library.

Custom Report Template

What is Custom Report Template?
A Custom Report Template is a configurable report template containing a group of out-of-the-box controls and/or custom controls, plus out-of-the-box report templates and/or custom report templates. Please contact support@fairly.ai if you are interested in this advanced feature.

See also Report Center.

Data Drift

What is Data Drift?
Data drift is a significant change in the data distribution compared to the data used to train the model. Data drift can be caused by the evolution of business processes or by industry events that create discontinuities in the underlying phenomena. It does not affect the formatting of the data, but the data at its core and what it represents.

Data Drift is located under the Risk section of the top navigation. It also appears as an add-on in the Report Builder section.

Why is Data Drift important?
Data drift leads to the degradation of a model's performance on new data. Data in production are significantly different from the data used to train the model, resulting in less accurate predictions. The model "does not know" how to predict correctly on these new data, since its training did not include comparable observations.

How does FAIRLY evaluate Data Drift?
Data drift can be evaluated in several ways, starting with statistical approaches on individual columns: certain statistical quantities (median, mode, quantiles) are computed to verify whether individual features have shifted materially. For instance, if the average age of the user demographic the model targets has changed drastically since the training data was recorded, the model's performance may be impacted.

Categorical change: the threshold to determine whether the percent change in a categorical column is significant is 30% by default.

Mean: to determine whether the change in the average of a continuous column is significant, FAIRLY performs a T-test with a p-value of 0.05.

Length: FAIRLY's Data Drift also checks the lengths of each column in the reference and current datasets to ensure that they have not changed by more than 25%.

Variance: to determine whether any change in variance between the reference and current datasets is significant, FAIRLY performs an F-test with a p-value of 0.05.

KS: FAIRLY performs a KS-test with a p-value of 0.05 to determine whether any continuous column has a statistically significant change in its distribution.

Median: FAIRLY checks the median of each continuous column to ensure that it hasn't changed by more than 25%.
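
These checks can be approximated outside the product with pandas and SciPy. The sketch below mirrors the thresholds described above for one continuous column and one categorical column; it is an illustration under stated assumptions, not FAIRLY's internal code, and the reading of the 30% categorical rule is one possible interpretation.

import pandas as pd
from scipy import stats

def continuous_drift(ref: pd.Series, cur: pd.Series, alpha: float = 0.05) -> dict:
    # F-test on the variances: ratio of sample variances against the F distribution.
    f_stat = ref.var(ddof=1) / cur.var(ddof=1)
    p_var = 2 * min(stats.f.sf(f_stat, len(ref) - 1, len(cur) - 1),
                    stats.f.cdf(f_stat, len(ref) - 1, len(cur) - 1))
    return {
        "mean_shift": stats.ttest_ind(ref, cur, equal_var=False).pvalue < alpha,  # T-test on the means
        "variance_shift": p_var < alpha,
        "distribution_shift": stats.ks_2samp(ref, cur).pvalue < alpha,            # KS-test on the distribution
        "length_shift": abs(len(cur) - len(ref)) / len(ref) > 0.25,
        "median_shift": abs(cur.median() - ref.median()) / abs(ref.median()) > 0.25,
    }

def categorical_drift(ref: pd.Series, cur: pd.Series, threshold: float = 0.30) -> bool:
    # Flag drift when any category's proportion has changed by more than the 30% default threshold.
    ref_p = ref.value_counts(normalize=True)
    cur_p = cur.value_counts(normalize=True).reindex(ref_p.index, fill_value=0)
    return bool((((cur_p - ref_p).abs() / ref_p) > threshold).any())

# Example usage with hypothetical reference and current datasets:
# continuous_drift(reference_df["income"], current_df["income"])
# categorical_drift(reference_df["region"], current_df["region"])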

How do I interpret the Data Drift output?
The Data Drift output provides two types of charts. First, it compares the continuous columns: if a statistically significant change has occurred in a column's values, the reference and current values are plotted next to each other, with the specific values displayed in the table below. The same process is applied to the categorical columns.

How do I use Data Drift feature?
Please see Step 2: Data Drift in the Quick Start Guide.

Data Drift supports binary, categorical and continuous variables.

Data Protection (Sensitive Feature Escrow Service)

What is Sensitive Feature Escrow Service?
Sensitive feature escrow service provides on-demand bias testing for data scientists without direct access to sensitive data to ensure fair machine learning. This is a custom feature for customers who require advanced data protection.

Data Validation

What is Data Validation?
Data Validation is a checklist for data preprocessing when a data scientist is creating Machine Learning datasets.

Data Validation is located under the Risk section of the top navigation. It also appears as an add-on in the Report Builder section.

Why is Data Validation important?
Data Validation allows the model development pipeline to be fully documented and identifies potential sources of bias before any models are trained.

How does FAIRLY perform Data Validation?
FAIRLY provides two types of data validation. First, the headers and data types are verified against a configuration file generated from a reference dataset. Second, FAIRLY scans datasets for protected features and applies bias identification techniques such as disparate impact measurement to assess the risk of bias. Additionally, we will record the distributions of important features in the datasets and their respective correlations.

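For illustration, the first type of check (headers and data types against a reference configuration) could look like the sketch below. The file names and schema format are assumptions, not FAIRLY's actual configuration file format.

import json
import pandas as pd

# Hypothetical reference configuration, e.g. {"age": "int64", "income": "float64", "gender": "object"}.
with open("reference_schema.json") as f:
    expected = json.load(f)

df = pd.read_csv("new_dataset.csv")
actual = {col: str(dtype) for col, dtype in df.dtypes.items()}

missing = set(expected) - set(actual)
unexpected = set(actual) - set(expected)
mismatched = {col: (expected[col], actual[col])
              for col in expected.keys() & actual.keys()
              if expected[col] != actual[col]}

print("Missing columns:", missing)
print("Unexpected columns:", unexpected)
print("Data type mismatches:", mismatched)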

Note:
If there are 10 or more unique values in a column, it is considered a continuous variable. If there are fewer than 10 unique values in a column, it is considered a categorical variable.

If there are more than 5 invalid entries in a single column, the results table will only display the first 5 and combine the rest.

For a given categorical variable, the formulas to determine imbalance are as follows:
Lower bound = 1 / (# unique values) / 3
Upper bound = 1 / (# unique values) * 3
If the proportion of any category falls outside of these bounds, then that column is flagged as having significant imbalance.

For a given continuous variable, if a column has a skewness of greater than +3 or less than -3, then it is flagged as having significant imbalance.

Correlation matrix: if any two columns have a correlation greater than +0.8 or less than -0.8, a strong positive or negative correlation will be indicated.

If a dataset has fewer than 5 columns, it is flagged with a warning that the dataset has fewer than 5 columns.

If a dataset has fewer than 500 rows, it is flagged with a warning that the dataset has fewer than 500 rows.
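
The rules above can be reproduced approximately with pandas and SciPy. This is a sketch under the thresholds stated in this note, not FAIRLY's internal implementation.

import pandas as pd
from scipy.stats import skew

def imbalance_checks(df: pd.DataFrame) -> dict:
    flags = {}
    for col in df.columns:
        n_unique = df[col].nunique()
        if n_unique < 10:
            # Categorical rule: any category proportion outside [1/n/3, 1/n*3] is flagged.
            lower, upper = 1 / n_unique / 3, 1 / n_unique * 3
            proportions = df[col].value_counts(normalize=True)
            flags[col] = bool(((proportions < lower) | (proportions > upper)).any())
        elif pd.api.types.is_numeric_dtype(df[col]):
            # Continuous rule: skewness beyond +3 or -3 is flagged.
            flags[col] = abs(skew(df[col].dropna())) > 3

    # Correlation rule: any pair of numeric columns with |correlation| greater than 0.8.
    corr = df.corr(numeric_only=True).abs()
    flags["strong_correlations"] = bool((corr > 0.8).sum().sum() > len(corr))  # count beyond the diagonal

    # Small-dataset warnings.
    flags["fewer_than_5_columns"] = df.shape[1] < 5
    flags["fewer_than_500_rows"] = df.shape[0] < 500
    return flags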

How do I interpret the Data Validation output?
The data validation table below contains a review of all the checks performed and their results. The first set of checks confirms that the data types match the configuration file. The desired output is a pass on each of the checks. If your dataset includes categories such as race or gender, these will be flagged as a warning in the protected category check.

How do I use Data Validation feature?
Please see Step 2: Data Validation in the Quick Start Guide.

Data Validation supports binary, categorical and continuous variables.

Database Support

Which databases does FAIRLY support?
FAIRLY currently supports Postgres and will support MS SQL Server in the next release.

Dataset Inventory

What is Dataset Inventory?
Dataset Inventory is an inventory of datasets belonging to a specific AI. Select an AI in the AI Center to access the AI Dashboard, then click on the Datasets panel to access the Dataset Inventory.

Users can upload a new dataset (in CSV format) by clicking on the "New Dataset" button. These datasets can be used for Data Validation and Data Drift analysis.

Datasets uploaded to the Bias Inspector directly will also appear in the Dataset Inventory.

Datasets created in the Asenion Client using "sync" commands will also appear in the Dataset Inventory.


Executive Dashboard

What is Executive Dashboard?
Executive Dashboard presents a data-driven approach to AI governance. It provides a summary of AIs with their materiality and risk level, as well as the stage they are at in the AI model lifecycle. This feature is most useful for organizations that are ready to manage AI at scale, and it is designed to be used in conjunction with Risk Monitoring. If you are interested in learning more about this feature, please contact support@fairly.ai.

Explainability

What is explainability for feature bias?
Feature bias occurs when a limited number of features greatly influence the output of a model. It is the hypersensitivity of a model to a specific feature. Feature bias can occur when the model is not adapted to the task it needs to perform, when data are not representative of the real world or when the data itself contains biases.

Explainability Tool is located under the Risk section of the top navigation. It also appears as an add-on in the Report Builder section.

Why is evaluating feature bias important?
Unwanted feature bias can be ethically problematic: a model mainly considering the sex or the ethnicity of an individual could be perceived as sexist or racist. On the technical side, a model basing its predictions on just two features is less robust: what happens when those features are not available for an individual?

How does FAIRLY evaluate feature bias?
Feature bias can be detected by establishing the contribution of each feature when the model makes a prediction. FAIRLY leverages the concepts of SHAP (Shapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and the Gini index.

SHAP: Shapley Additive exPlanations is an interpretability method based on Shapley values, introduced by Lundberg and Lee (2017) to explain individual predictions of any machine learning model. The Shapley value is a concept from game theory used to determine the contribution of each player in a coalition or cooperative game.

LIME: Local Interpretable Model-agnostic Explanations is a technique that approximates any black box machine learning model with a local, interpretable model to explain each individual prediction.

Gini: the Gini index (or coefficient) is a synthetic indicator that captures the level of inequality for a given variable and population. It varies between 0 (perfect equality) and 1 (extreme inequality). Between 0 and 1, the higher the Gini index, the greater the inequality.
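
For illustration, the SHAP values behind the Beeswarm chart referenced below can be computed with the open-source shap library. This is a minimal sketch using a public dataset and a simple XGBoost model as assumptions, not FAIRLY's internal pipeline.

import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Hypothetical setup: a public dataset and a simple gradient-boosted model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

explainer = shap.Explainer(model, X)   # Shapley-value explainer for the trained model
shap_values = explainer(X)             # per-observation, per-feature contributions

# Global feature sensitivity: skewed distributions indicate strong positive or negative sensitivity.
shap.plots.beeswarm(shap_values)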

How to interpret Explainability outputs?
There are two types of output charts. First, the Beeswarm chart shows the global feature sensitivity by showing the distribution of the SHAP values. Any feature that presents a skewed distribution shows strong negative or positive feature sensitivity. The second chart is the LIME analysis bar chart, which shows the feature sensitivity for a particular observation. A large positive or negative value for a particular feature indicates that it was the dominant contributor to the outcome. If a protected category attribute is the dominant feature, then there is potential for bias.
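
Similarly, a per-observation explanation like the LIME bar chart can be approximated with the open-source lime library. This sketch reuses the hypothetical model and data from the SHAP example above.

from lime.lime_tabular import LimeTabularExplainer

# Build an explainer from the (hypothetical) training data used above.
lime_explainer = LimeTabularExplainer(
    training_data=X.values,
    feature_names=list(X.columns),
    class_names=["malignant", "benign"],
    mode="classification",
)

# Explain a single observation: large positive or negative weights identify the dominant features.
explanation = lime_explainer.explain_instance(X.values[0], model.predict_proba, num_features=5)
print(explanation.as_list())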

How do I use Explainability feature?
Please see Step 2: Explainability in the Quick Start Guide.

Explainability supports binary, categorical and continuous variables.

Model Inventory

What is Model Inventory?
Model Inventory is an inventory of model records belonging to a specific AI. Select an AI in the AI Center to access the AI Dashboard, then click on the Models panel to access the Model Inventory.

Users can create a new model record by clicking on the "New Model" button. These model records can be used for AI-Based Bias Inspection analysis of model results datasets.

Models created in the Asenion Client using "sync" commands will also appear in the Model Inventory.

Model Framework Support

Which model frameworks does FAIRLY support?
FAIRLY currently supports TensorFlow, PyTorch, Keras, Apache Spark, scikit-learn, ONNX, XGBoost, spaCy, and Python, with plans to support more. Please contact support@fairly.ai to discuss your requirements.

Notification

What is a Notification?
A Notification is an alert under the bell-shaped icon in the top right corner of the top navigation bar.

A notification is created when:
User has been invited to an organization: “You have an invitation pending for {ORG name}”
User has been added to an AI: “You have been assigned to the AI {AI Name}”
User role has been changed in the ORG: “Your role in {ORG name} has been changed to {Role name}”
User role has been changed in the AI: “Your role in AI {AI name} has been changed to {Role name}”
When a user accepts an invitation to an ORG: “{User name} has accepted your invitation to {ORG name}”
When a user declines an invitation to an ORG: “{User name} has declined your invitation to {ORG name}”

On-Premise Support

Does FAIRLY support on-prem deployment?
FAIRLY supports on-premises deployment through private cloud deployments as well as enterprise on-premises-as-a-service solutions such as OpenShift.

Organization Dashboard

What is Organization Dashboard?
Organization Dashboard is an inventory of users, their roles (administrator, member) and status (active, inactive) belonging to a specific Organization. Click on the gear icon in the top right corner of the top navigation bar to access the Organization Dashboard.

Administrators can also perform user management functions via the Organization Dashboard.

See also Role-based Access Control.

Policy Library

What is Policy Library?
Policy Library is a collection of policy packages under the Governance section of the top navigation.
It provides the name, report type(s) and controls of each policy.

Here is the list of out-of-the-box policy packages:

SR11-7 Risk policy package: this is a general policy package for AI models that have profit and loss implications and are required to follow SR11-7 guidelines from the Federal Reserve Board (FRB) and the Office of the Comptroller of the Currency (OCC) in the United States. It can be adapted for similar guidelines in other jurisdictions, such as E-23 in Canada.

Financial Fairness policy package: this is a policy package specific to AI models that have financial fairness implications.

Mortgage Origination Bias policy package: this is a policy package specific to Mortgage Origination AI models that are required by the Fair Lending Act to pass the four-fifths rule.

Policy packages are configurable and can be customized without IT overhead, allowing faster response time to changing internal policies and external regulations. Please contact support@fairly.ai if you are interested in creating custom policy packages.

Quick Bias Inspector

What is Quick Bias Inspector?
The Quick Bias Inspector allows users to upload a CSV file (binary format) of a model results dataset or training dataset without linking it to an existing AI. This allows for efficient bias inspection of a dataset before an AI project even begins, in order to evaluate whether one should use the dataset at all. Because it is not linked to any AI, no one else in the same organization will have access to view these results, by design.

See Bias Inspection section for more details.

Report Add-On

What is a Report Add-On?
Report Add-Ons allow users to add additional sections to a report. Once the report is completed, users will be able to download individual reports for each add-on, along with a comprehensive report that includes all the add-on sections.

Report Add-On options are presented as the first step in the report generation workflow.

Who has access to the Report Add-Ons?
Any user who can access the AI and create a report can see the Report Add-On options in the report generation workflow.

The out-of-the-box Report Add-On options are:

Data Validation
Data Drift
Bias Inspection
Explainability
Data Provenance (coming soon)
AI Governance (coming soon)

How to create a report with Report Add-Ons?
Please see Step 3: Report Generation in the Quick Start Guide.

Report Builder

What is Report Builder?
Report Builder is a tool for creating reports for any AI the user is a member of. It combines qualitative information entered via the FAIRLY web UI with quantitative data from the AI model developers' working environment. It has built-in workflow management features, a WYSIWYG editor with inline controls (tags and image tags) and LaTeX integration, micro-comments and micro-approvals per section, as well as configurable templates and add-ons.

Report Builder is located under Compliance -> Report Center -> New Report.

What are the report type packages and who should use them?
There are 4 report type packages out-of-the-box in the FAIRLY App.

Developmental and Performance Testing Report (Lite): this package is good for small scale validations and initial drafts for models that will later need to be in compliance with SR11-7. It is intended for AI model developers.

Model Development Report: this package is for recording the developmental evidence for models that will need to be in compliance with SR11-7. It is intended for AI model developers.

Model Validation Report (Lite): this package will create a basic validation report to be in compliance with SR11-7. This report is compatible with the Model Development Report. It is intended for Validators.

AI Report: This package is for the AI Report. It is intended for Auditors.

Who has access to view a report?
Any user who can access the AI can view any reports associated with that AI.

How to create a report?
Please see Step 3: Report Generation in the Quick Start Guide.

Report Center

What is Report Center?
Report Center is an inventory of reports for all the AIs the user is a member of. Report Center is located under the Compliance section of the top navigation.

Why is Report Center important?
Existing regulations and upcoming AI guidelines and requirements call for increasing documentation as part of risk management for AI. Consistent documentation helps AI model developers ensure they follow best practices, policies, and regulations. Documentation also allows the validation and compliance counterparts to verify and evaluate risk consistently. Having a Report Center to manage the inventory of reports when the number and types of reports are growing exponentially is essential in strengthening AI governance and compliance.

Report Inventory

What is Report Inventory?
Report Inventory is an inventory of reports belonging to a specific AI. Select an AI in the AI Center to access the AI Dashboard, then click on the Reports panel to access the Report Inventory.

Users can create a new report by clicking on the "New Report" button.

Users can also click on a report to see the Report History page to download a finished report or continue to edit an in-progress report.

Report Versioning

What is Report Versioning?
Report Versioning is an inventory of versions belonging to a specific report of a specific AI. Select an AI in the AI Center to access the AI Dashboard, then click on the Reports panel to access the Report Inventory. Selecting an individual report will bring you to the Report History page of this report, where you can view versions of the report.

Users can create a new version of the report by clicking on the "New Version" button.

Users can also download a finished report or continue to edit an in-progress report.

Role-based Access Control

What is Role-based Access Control?
The FAIRLY App contains two levels of Role-based Access Control (RBAC). To authenticate, users must log in with a username and password.

The first level restricts access at the organization level: Administrator and Member. An organization can have one or more administrators and zero or more members.

An administrator can:
- Assign users in the same organization as an administrator or a member in Settings -> Organization Dashboard.
- Activate or deactivate users in the same organization in Settings -> Organization Dashboard.
- View all the AIs, reports, models and datasets of any users in the same organization.

A member can:
- View all the users in the same organization but cannot assign and change their roles in Settings -> Organization Dashboard.
- View only the AIs, reports, models, and datasets they created as an AI Owner themselves, or those they have been granted access to by other AI Owners.

The second level restricts access at the AI level: AI Owner, AI Developer, Validator, Auditor, Approver.
The user who creates an AI is assigned as its AI Owner by default. An AI must belong to an organization.

For Risk features:
All roles can perform any of the 4 risk analyses as long as they have access to the AIs.

For Compliance features:
All roles can access AI Center, Report Center and Schedule as long as they have access to the AIs.

For the Report Generation workflow, any role can comment on and download reports as long as they have access to the AIs. In addition:

An AI Owner can:
- Add or remove other users from an AI.
- Assign or change roles of users of an AI.
- Create new custom report templates.
- Do everything listed below.

An AI Developer can:
- Create Model Development Reports.
- Request validation from Validators.

A Validator can:
- Create Model Validation Reports.
- Review and reject Model Development Reports.
- Request approval from Approvers.

An Approver can:
- Approve reports.

An Auditor can:
- Review reports.

The FAIRLY Client uses a token with access restricted at the AI level and the inherited organization level. To authenticate, a client application must provide a unique API token. This token is auto-generated in the fairly.yaml file.

Custom roles and access can be implemented. Please contact support@fairly.ai for more information.

Risk Monitoring

What is Risk Monitoring?
FAIRLY's Explainable Risk Monitoring aggregates and measures financial, legal, ethical and reputational risk, creating industry-leading explainability for AI risk monitoring. If you are interested in learning more about this advanced feature, please contact support@fairly.ai for a demo.

SAS Integration

What is FAIRLY's SAS Integration?
FAIRLY unveiled our integration with SAS at SAS's 7th Annual Model Risk Conference. FAIRLY presented a Fairness Scorecard that can be integrated with the SAS Model Risk Management Platform. If you are interested in this advanced feature, please contact support@fairly.ai for a demo.

Schedule

What is Schedule?
Schedule is an inventory of compliance-related due dates for all the reports of all the AIs the user is a member of. Schedule is located under the Compliance section of the top navigation.

Security Management

How does Security Management fit into FAIRLY's roadmap?
FAIRLY's product vision includes Trust, Risk, and Security Management through model interpretability and explainability, AI data protection, model operations, and adversarial attack resistance. We are currently working with Professor Ali Dehghantanha, Canada Research Chair in Cybersecurity & Threat Intelligence, in developing a comprehensive solution for AI risk management including cyber threat attribution. If you are interested in learning more, please contact support@fairly.ai for more details.

SSO Support

Does FAIRLY support Single Sign On (SSO)?
SAML 2.0 and OAuth support is coming soon. Custom SSO integration can be implemented. Please contact support@fairly.ai for details.

Tag

What is a Tag?
A Tag is an identifier for a Control. In the Report Builder, users can select two types of tags: Tags or Image Tags. Tags correspond to specific values, like "ACCURACY_SCORE". Image Tags correspond to specific graphs or charts, like the "SHAP_BEESWARM" chart.

Please also see the FAIRLY Controls Library Reference Guide.

User Management

How does user management work in the FAIRLY app and FAIRLY client?
User Management functionality includes Role-based Access Control at the organization level and the AI level.

An organization administrator can add users to an organization via the Organization Dashboard.

A user (both administrator and member) can view pending invitations to an organization and accept the invitation in the Organization Dashboard.

If a user belongs to more than one organization, they can also switch organizations via the Organization Dashboard.

For audit trail purposes, an administrator can deactivate a user but cannot delete a user from the organization.

User Role Inventory

What is User Role Inventory?
User Role Inventory is an inventory of users and their roles belonging to a specific AI. Select an AI in the AI Center to access the AI Dashboard, then click on the Roles panel to access the User Role Inventory.

AI Owners can assign members in the same organization to the AI by clicking on the "Assign Roles" button.

AI Owners can also remove members from the AI.

Workflow Management

What is Workflow Management?
Workflow Management allows users with different roles in an AI to perform different tasks in Report Builder. Please see Role-based Access Control.

Questions, suggestions or found errors on this page? Please send your feedback to support@fairly.ai.