blog - AI & Machine Learning Consulting Services | Xyonix

William Constantine, Deep Dhillon

Explaining a Passenger Survival AI Model Using SHAP for the RMS Titanic

In 1912, the RMS Titanic hit an iceberg in the North Atlantic Ocean about 400 miles south of Newfoundland, Canada, and sank. Unfortunately, there were not enough lifeboats on board to accommodate all passengers, and 67% of those aboard died. In this article, we walk through the use of SHAP values to explain, in detail, how an AI model arrives at its prediction of whether a given passenger will or will not survive.

Practical Applications of AI and NLP for Automated Text Generation

In this article, we explore some practical uses of AI-driven automated text generation. We demonstrate how technologies like GPT-3 can improve your business applications by automatically generating training data, which can be used to bootstrap your machine learning models. We also illustrate some example language transformations, like converting English into legalese or spoken text into written text.

Modern AI Text Generation: An Exploration of GPT-3, Wu Dao 2.0 & other NLP Advances

Within the last year alone, there has been a paradigm shift in model development: research groups are ingesting nearly the entire internet's worth of information to train massive deep learning models capable of performing feats that are fantastic or frightening, depending on your perspective. In this article, we explore an AI compositional technology known as generative modeling and demonstrate its ability to produce realistic, human-like text.
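The generative idea behind these models can be illustrated without any deep learning at all. The sketch below is a drastically simplified bigram language model, not GPT-3 or Wu Dao; the tiny corpus and the `generate` function are invented for illustration. The core loop is the same one large models use: repeatedly sample the next token conditioned on what came before.

```python
import random
from collections import defaultdict

# Toy "training" corpus (hypothetical, for illustration only).
corpus = (
    "the model generates text one word at a time . "
    "the model samples each word from a learned distribution . "
    "each word depends on the words before it ."
).split()

# "Train" a bigram model: record which words follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a successor of the last word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:
            break  # dead end: no observed successor
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the", 8))
```

Models like GPT-3 replace the bigram lookup table with a deep network conditioned on thousands of prior tokens, but the sampling loop above is still the heart of text generation.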

Inside the Black Box: Developing Explainable AI Models Using SHAP

Explainable AI refers to the ability to interpret model outcomes in a way that is easily understood by human beings. We explore why this matters and discuss in detail tools that help shine a light inside the AI "black box" -- we want not just to understand feature importance at the population level, but to quantify feature importance on a per-outcome basis.
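The per-outcome attributions SHAP produces are Shapley values. As a minimal sketch of the underlying math (not the `shap` library itself), the code below computes exact Shapley values for a toy, hypothetical survival-scoring model; the feature names, weights, and baseline are all made up for illustration. Each feature's value is its weighted average marginal contribution over all subsets of the other features.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear "survival" scorer over three encoded features.
def model(features):
    return 0.6 * features["sex"] + 0.3 * features["pclass"] + 0.1 * features["age"]

def shapley_values(model, instance, baseline):
    """Exact Shapley values: each feature's share of the gap between
    model(instance) and model(baseline), averaged over all orderings."""
    names = list(instance)
    n = len(names)
    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Weight for a coalition of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: instance[g] if (g in subset or g == f) else baseline[g]
                          for g in names}
                without_f = {g: instance[g] if g in subset else baseline[g]
                             for g in names}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

baseline = {"sex": 0.35, "pclass": 0.55, "age": 0.5}   # e.g., dataset averages
passenger = {"sex": 1.0, "pclass": 1.0, "age": 0.2}    # one passenger's encoding

phi = shapley_values(model, passenger, baseline)
print(phi)
```

The attributions satisfy the efficiency property: they sum exactly to the difference between this passenger's prediction and the baseline prediction, which is what lets SHAP explain individual outcomes rather than only population-level importance. Libraries like `shap` compute the same quantities efficiently for real models, where brute-force enumeration would be intractable.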