
Management Summary

In modern companies, information in text form can be found in many places in day-to-day business. Depending on the business context, this may include invoices, emails, customer input (such as reviews or inquiries), product descriptions, explanations, FAQs, and applications. Until recently, these information sources could be exploited mainly by humans, since understanding text is a technologically challenging problem for machines.
Due to recent achievements in deep learning, several different NLP (“Natural Language Processing”) tasks can now be solved with outstanding quality.
In this article, you will learn from five practical examples how NLP applications solve various business problems, increasing efficiency and enabling innovation in their respective fields of application.

Introduction

Natural Language Processing (NLP) is undoubtedly an area that has received special attention in the Big Data environment in the recent past. The interest in the topic, as measured by Google search interest, has more than doubled in the last three years. This shows that innovative NLP technologies have long since ceased to be an issue only for big players such as Apple, Google, or Amazon. Instead, a general democratization of the technology can be observed. One of the reasons for this is that, according to an IBM estimate, about 80% of “global information” is not stored in structured databases but exists as unstructured, natural-language text. NLP will play a key role in the future when it comes to making this information usable. Thus, the successful use of NLP technologies will become one of the success factors for digitization in companies.

To give you an idea of the possibilities NLP opens up in the business context today, I will present five practical use cases and explain the solutions behind them in the following.

What is NLP? – A Short Overview

Although NLP had already occupied linguists and computer scientists as a research topic in the 1950s, it led a barely visible existence on the application side throughout the 20th century.

The main reason for this was the limited availability of suitable training data. Although the amount of unstructured data in the form of texts grew exponentially, especially with the rise of the Internet, there was still a lack of data suitable for model training. This is because early NLP models mostly had to be trained in a supervised manner (so-called supervised learning). Supervised learning, however, requires training data annotated with a dedicated target variable. For example, in text classification, the text corpus must be manually labeled by humans before model training.

This changed at the end of the 2010s, when a new generation of artificial neural network models led to a paradigm shift. These so-called “Language Models” are (pre-)trained on huge text corpora by companies such as Facebook and Google: individual words in the texts are randomly masked and then predicted in the course of training. This is so-called self-supervised learning, which no longer requires a separate target variable. In the course of this training, the models learn a contextual understanding of texts.

The advantage of this approach is that the same model can be fine-tuned for various downstream tasks (e.g., text classification, sentiment analysis, named entity recognition) using the learned contextual understanding. This process is called transfer learning. In practice, these pre-trained models can be downloaded, so that only the fine-tuning for the specific application needs to be done with additional data. Consequently, high-performance NLP applications can now be developed with little development effort.

To learn more about Language Models (especially the so-called Transformer Models like “BERT” or “roBERTa”) as well as trends and obstacles in the field of NLP, please read the article on NLP trends by our colleague Dominique Lade. [https://www.statworx.com/de/blog/neue-trends-im-natural-language-processing-wie-nlp-massentauglich-wird/].

The 5 Use Cases

Text Classification in the Recruitment Process

A medical research institute wants to make its recruitment process of study participants more efficient.

For testing a new drug, different, interdependent requirements are placed on the persons in question (e.g., age, general health status, presence/absence of previous illnesses, medications, genetic dispositions, etc.). Checking all these requirements is very time-consuming. Usually, it takes about one hour per potential study participant to view and assess relevant information. The main reason for this is that the clinical notes contain patient information that goes beyond structured data such as laboratory values and medication: Unstructured information in text form can also be found in the medical reports, physician’s letters, and discharge reports. Especially the evaluation of the latter data requires a lot of reading time and is therefore very time-consuming. To speed up the process, the research institute is developing a machine learning model that pre-selects promising candidates. The experts then only have to validate the proposed group of people.

The NLP Solution

From a methodological point of view, this problem is a so-called text classification: based on a text, a prediction is made for a previously defined target variable. To train the model, it is necessary – as usual in supervised learning – to annotate the data, in this case the medical documents, with the target variable. Since a classification problem has to be solved here (suitable or unsuitable study participant), the experts manually assess the suitability for the study for some persons in the pool. If a person is suitable, they are marked with a one (= positive case), otherwise with a zero (= negative case). Based on these training examples, the model can then learn the relationship between the persons’ medical documents and their suitability.

To cope with the complexity of the problem, a correspondingly complex model called ClinicalBERT is used. This is a language model based on BERT (Bidirectional Encoder Representations from Transformers), which was additionally trained on a data set of clinical texts. ClinicalBERT can thus generate so-called representations of all medical documentation for each person. In the last step, the neural network of ClinicalBERT is completed by a task-specific component. In this case, it is a binary classification: for each person, a probability of suitability should be output. Through a corresponding linear layer, the high-dimensional text representation is finally transformed into a single number, the suitability probability. Using gradient descent, the model then learns to predict the suitability probabilities from the training examples.
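To make this concrete, the following is a minimal sketch of such a fine-tuning step with the Hugging Face transformers library. The checkpoint name (a publicly available ClinicalBERT variant), the example notes, and the training details are illustrative assumptions, not the institute’s actual pipeline:

```python
# Minimal sketch: fine-tuning a ClinicalBERT-style model for binary suitability
# classification. Model ID, example texts and labels are illustrative assumptions.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "emilyalsentzer/Bio_ClinicalBERT"  # assumed public ClinicalBERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

# Toy training examples: concatenated clinical notes per person plus expert label
texts = ["Discharge report: no prior cardiac illness, stable lab values ...",
         "Physician's letter: ongoing anticoagulant medication ..."]
labels = torch.tensor([1, 0])  # 1 = suitable, 0 = unsuitable

batch = tokenizer(texts, padding=True, truncation=True, max_length=512,
                  return_tensors="pt")

# One gradient step of the fine-tuning loop (cross-entropy over the two classes)
optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()

# At inference time, the softmax over the logits yields the suitability probability
model.eval()
with torch.no_grad():
    suitability_prob = torch.softmax(model(**batch).logits, dim=-1)[:, 1]
print(suitability_prob)
```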

Further Application Scenarios of Text Classification:

Text classification often takes place in the form of sentiment analysis. This involves classifying texts into predefined sentiment categories (e.g., negative/positive). This information is particularly important in the financial world or for social media monitoring. Text classification can also be used in various contexts where it is vital to sort documents according to their type (e.g., invoices, letters, reminders).
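As a quick illustration, a pre-trained sentiment model can be applied off the shelf via the transformers pipeline API; the checkpoint named here (a standard English SST-2 model) is an assumption for demonstration purposes:

```python
# Off-the-shelf sentiment analysis with a pre-trained English SST-2 checkpoint
# (model name assumed for illustration).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

print(sentiment(["The support team resolved my issue within minutes.",
                 "The invoice was wrong for the third month in a row."]))
# e.g. [{'label': 'POSITIVE', 'score': ...}, {'label': 'NEGATIVE', 'score': ...}]
```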

Named Entity Recognition for Usability Improvement of a News Page

A publishing house offers its readers a large number of articles on various topics on its news page. In the course of optimization measures, the publisher would like to implement a better recommender system, so that suitable (complementary or similar) follow-up articles are suggested for each article. Also, the search function on the landing page is to be improved so that the customer can quickly find the article he or she is looking for.

To create a good data basis for these purposes, the publisher decided to use Named Entity Recognition (NER) to assign automated tags to the texts, improving both the recommender system and the search function. After successful implementation, significantly more suggested articles are clicked on, and the search function has become much more convenient. As a result, the readers spend substantially more time on the page.

The NLP Solution

To solve the problem, one must first understand how NER works:

NER is about assigning words or entire phrases to content categories. For example, “Peter” can be identified as a person, “Frankfurt am Main” is a place, and “24.12.2020” is a time specification. There are also much more complicated cases. For this purpose, compare the following pair of sentences:

  1. In the past, Adam didn’t know how to parallel park. (park = from the verb “to park”)
  2. Yesterday I took my dog for a walk in the park. (park = open green area)

It is perfectly evident to humans that the word “park” has a different meaning in each of the two sentences. However, this seemingly simple distinction is anything but trivial for the computer. An entity recognition model could characterize the two sentences as follows:

  1. “[In the past] (time reference), [Adam] (person) didn’t know how to parallel [park] (verb).”
  2. “[Yesterday] (time reference), [I] (person) took my dog for a walk in the [park] (location).”

In the past, rule-based algorithms would have been used to solve the above NER problem, but here too, the machine learning approach is gaining ground:

The present multiclass classification problem of entity determination is again addressed with the BERT model, which is fine-tuned on an annotated data set in which the entities have been manually labeled. The most comprehensive publicly accessible database of this kind in the English language is the Groningen Meaning Bank (GMB). After successful training, the model can correctly assign previously unseen words to entities based on the sentence context. For example, the model recognizes that prepositions like “in”, “at”, or “after” are often followed by a location, but more complex contextual cues are also used to determine the entity.
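For illustration, applying an already fine-tuned NER model takes only a few lines with the transformers pipeline API; the checkpoint below is a publicly available BERT model fine-tuned on a standard NER corpus (not the GMB), and its name is an assumption:

```python
# Named Entity Recognition with a BERT checkpoint fine-tuned on a public NER corpus
# (model name assumed for illustration).
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "Yesterday Peter took his dog for a walk in a park in Frankfurt am Main."
for entity in ner(text):
    print(entity["word"], "->", entity["entity_group"], round(float(entity["score"]), 2))
# e.g. Peter -> PER, Frankfurt am Main -> LOC
```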

Further Application Scenarios of NER:

NER is a classic information extraction task and a central building block for many other NLP applications, such as chatbots and question-answering systems. NER is also often used for text cataloging, where the type of a text is determined based on the recognized entities.

A Chatbot for a Long-Distance Bus Company

A long-distance bus company would like to improve its accessibility and expand its communication channels with customers. In addition to its homepage and app, it wants to offer a third channel: a WhatsApp chatbot. The goal is for customers to perform specific actions in conversation with the chatbot, such as searching for, booking, and cancelling trips. In addition, the chatbot is intended to provide a reliable way of informing passengers about delays.

With the introduction of the chatbot, not only can existing passengers be reached more quickly, but contact can also be established with new customers who have not yet installed the app.

The NLP solution

Depending on the requirements that are placed on the chatbot, you can choose between different chatbot architectures.

Over the years, four main chatbot paradigms have been tried: In a first generation, the inquiry was checked against well-known patterns and correspondingly adapted, prefabricated answers were returned (“pattern matching”). More sophisticated is so-called “grounding”, in which information extracted from knowledge bases (e.g., Wikipedia) is organized in a network using Named Entity Recognition (see above). Such a network has the advantage that not only stored knowledge can be retrieved, but also unregistered knowledge can be inferred from the network structure. In “searching”, question-answer pairs from the conversation history (or from previously recorded logs) are used directly to find a suitable answer. The use of machine learning models to dynamically generate suitable answers (“generative models”) is the most proven approach.

The best way to implement a modern chatbot with clearly definable competencies for the company is to use an existing framework such as Google Dialogflow. This is a platform for configuring chatbots that combines elements of all the previously mentioned chatbot paradigms. It is configured by specifying parameters such as intents, entities, and actions.

An intent (“user intention”) is, for example, a timetable inquiry. By giving different example phrases (“How do I get from … to … from … to …”, “When is the next bus from … to …”) to a language model, the chatbot can assign even unseen input to the correct intent (see text classification).

Furthermore, different travel locations and times are defined as entities. If the chatbot now captures an intent with matching entities (see NER), an action, in this case a database query, can be triggered. Finally, an intent response with the relevant information is returned, adapted to all the information the user has provided in the chat history (“stateful”).
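A minimal sketch of how such a request could be sent to an already configured Dialogflow ES agent from Python is shown below; the project ID, session ID, and the intent and entity names are placeholder assumptions:

```python
# Minimal sketch: send a user message to a configured Dialogflow ES agent and read
# back the matched intent, the extracted entities and the reply.
# Project ID, session ID and language code are placeholder assumptions.
from google.cloud import dialogflow

PROJECT_ID = "bus-company-agent"   # hypothetical Dialogflow project
SESSION_ID = "whatsapp-user-4711"  # one session per chat conversation

session_client = dialogflow.SessionsClient()
session = session_client.session_path(PROJECT_ID, SESSION_ID)

text_input = dialogflow.TextInput(
    text="When is the next bus from Berlin to Hamburg?", language_code="en")
query_input = dialogflow.QueryInput(text=text_input)

response = session_client.detect_intent(
    request={"session": session, "query_input": query_input})

result = response.query_result
print("Intent:", result.intent.display_name)  # e.g. a timetable-information intent
print("Entities:", result.parameters)         # e.g. departure, destination, date
print("Reply:", result.fulfillment_text)      # intent response sent back to the user
```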

Further Application Scenarios of Chatbots:

There are many possible applications in customer service, depending on the complexity of the scenario, ranging from the automatic preparation (e.g., sorting) of customer requests to the complete processing of a customer inquiry.

A Question-Answering System as a Voice Assistant for Technical Questions About the Automobile

An automobile manufacturer discovers that many of its customers do not get along well with the manuals that come with the cars. Often, finding the relevant information takes too long, or it is not found at all. Therefore, it was decided to offer a Voice Assistant to provide precise answers to technical questions in addition to the static manual. In the future, drivers will be able to speak comfortably with their center console when they want to service their vehicle or request technical information.

The NLP solution

Question-answering systems have been around for decades and are at the forefront of artificial intelligence research. A question-answering system that always finds a correct answer, taking all available data into account, could also be called “General AI”. A significant difficulty on the way to General AI is that the domain such a system needs to know about is unlimited. Question-answering systems, in contrast, deliver good results when the domain is delimited, as is the case with the automotive assistant. In general, the more specific the domain, the better the results that can be expected.

For the implementation of the question-answering system, two types of data from the manual are used: structured data, such as technical specifications of the components and key figures of the model, and unstructured data, such as instructions for action. In a preparatory step, all data is transformed into question-answer form using other NLP techniques (classification, NER). This data is then fed to a version of BERT that has already been pre-trained on a large question-answer data set (“SQuAD”). The model is thus able to answer questions that have already been fed into the system and to provide educated guesses for unseen questions.
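Here is a minimal sketch of such an extractive question-answering step with a SQuAD-fine-tuned checkpoint from the Hugging Face hub; the model name and the manual snippet are illustrative assumptions:

```python
# Extractive question answering with a model fine-tuned on SQuAD
# (checkpoint name and manual text assumed for illustration).
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

manual_snippet = (
    "The tire pressure for the standard 17-inch wheels should be 2.5 bar at the "
    "front axle and 2.7 bar at the rear axle when the vehicle is fully loaded."
)

answer = qa(question="What is the correct tire pressure at the rear axle?",
            context=manual_snippet)
print(answer["answer"], round(answer["score"], 2))  # e.g. "2.7 bar" plus a confidence score
```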

Further Application Scenarios of Question-Answer Systems:

With the help of question-answering systems, company-internal search engines can be extended with additional functionality. In e-commerce, answers to factual questions can be given automatically based on article descriptions and reviews.

Automatic Text Summaries (Text Generation) of Damage Descriptions for a Property Insurance

An insurance company wants to increase the efficiency of its claims settlement department. It has noticed that some claims lead to internal conflicts over responsibility. The reason for this is simple: customers usually describe claims over several pages, and considerable training is needed to judge whether or not a case should be processed by the department. It therefore often happens that a damage description must be read thoroughly only to find that the claim does not need to be processed at all. A system that generates automated summaries is now intended to remedy this situation. As a result of the implementation, claim handlers can make responsibility decisions much faster.

The NLP solution

One can differentiate between two approaches to the text summarization problem: In extraction, the most important sentences are identified in the input text and, in the simplest case, used directly as the summary. In abstraction, a model transforms the text into a newly generated summary text. The second approach is much more complex, since paraphrasing, generalization, or the inclusion of further knowledge is possible here. It therefore has a higher potential to generate meaningful summaries but is also more error-prone. Modern text summarization algorithms use the second approach or a combination of the two.

A so-called sequence-to-sequence model is used to solve the insurance use case; it maps one word sequence (the damage description) to another word sequence (the summary). This is usually a recurrent neural network (RNN) trained on pairs of texts and their summaries. The training process models the probability of the next word given the preceding words (and, additionally, an “inner state” of the model). In this way, the model effectively writes the summary “from left to right” by successively predicting the next word. An alternative approach is to encode the input numerically with the Language Model BERT and have a GPT decoder autoregressively generate the summary based on this numerical representation. In both cases, model parameters can be used to adjust how long the summary should be.
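The following is a minimal sketch of the abstractive approach, using a readily available pre-trained sequence-to-sequence model (BART) as a stand-in for the RNN or BERT-encoder/GPT-decoder setups described above; the checkpoint name and the claim text are assumptions:

```python
# Abstractive summarization with a pre-trained sequence-to-sequence model.
# BART serves only as a readily available stand-in; the claim text is invented.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

claim_description = (
    "On the evening of 12 March a water pipe burst in the basement of our house. "
    "Water spread into the adjoining hobby room and damaged the parquet floor, "
    "two cupboards and several boxes of books. We immediately shut off the main "
    "valve and called an emergency plumber, who repaired the pipe the same night."
)

# max_length / min_length are the model parameters that control how long the summary is
summary = summarizer(claim_description, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```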

Further Application Scenarios of Text Generation:

Such a scenario is conceivable in many places: Automated report writing, text generation based on retail sales data analysis, electronic medical record summaries, or textual weather forecasts from weather data are possible applications. Text generation is also used in other NLP use cases such as chatbots and Q&A systems.

Outlook

These five application examples of text classification, chatbots, question-answer systems, NER, and text summaries show that there are many processes in all kinds of companies that can be optimized with NLP solutions.

NLP is not only an exciting field of research but also a technology whose applicability in the business environment is continually growing.

In the future, NLP will not only be a foundation of a data-driven corporate culture; it already holds considerable innovation potential through direct application, and this is worth investing in.

At STATWORX, we already have years of experience in the development of customized NLP solutions. Here are two of our case studies on NLP: Social Media Recruiting with NLP & Supplier Recommendation Tool. We are happy to provide you with individual advice on this and many other topics.

At STATWORX we are excited that a new promising field of Machine Learning has evolved in recent years: Causal Machine Learning. In short, Causal Machine Learning is the scientific study of Machine Learning algorithms which allow estimating causal effects. Over the last few years, different Causal Machine Learning algorithms have been developed, combining the advances from Machine Learning with the theory of causal inference to estimate different types of causal effects. My colleague Markus has already introduced some of these algorithms in an earlier blog post. As Causal Machine Learning is a rather complex topic, I will write a series of blog posts to slowly dive into this new fascinating world of data science. In my first blog post, I gave an introduction into the topic, focusing on what Causal Machine Learning is and why it is important in practice and for the future of data science. In this second blog post, I will introduce the so-called Causal Forest, one of the most popular Causal Machine Learning algorithms to estimate heterogeneous treatment effects.

Why Heterogeneous Treatment Effects?

In Causal Forests, the goal is to estimate heterogeneity in treatment effects. As explained in my previous blog post, a treatment effect refers to the causal effect of a treatment or intervention on an outcome variable of scientific or political interest, for example, the causal effect of a subsidised training programme on earnings. As individual treatment effects are unobservable, practice focuses on estimating unbiased and consistent averages of the individual treatment effects. The most common such parameter is the average treatment effect, which is the mean of all individual treatment effects in the entire population of interest. However, treatment effects may sometimes vary widely between different subgroups in the population, be it larger or smaller than the average treatment effect. In some cases, it might therefore be more interesting to estimate these different, i.e. heterogeneous, treatment effects.
In most applications it is also interesting to look beyond the average effects in order to understand how the causal effects vary with observable characteristics. (Knaus, Lechner & Strittmatter, 2018)
The estimation of heterogeneous treatment effects can assist in answering questions like: For whom are there big or small treatment effects? For which subgroup does a treatment generate beneficial or adverse effects? In the field of marketing, for example, the estimation of heterogeneous treatment effects can help to optimise resource allocation by answering the question of which customers respond the most to a certain marketing campaign, or for which customers the causal effect of intervention strategies on their churn behaviour is the highest. When it comes to pricing, it might also be interesting to quantify how a change in price has a varying impact on sales among different age or income groups.

Where Old Estimation Methods Fail

Estimating heterogeneous treatment effects is nothing new. Econometrics and other social sciences have long been studying which variables predict a smaller or larger than average treatment effect, which in statistical terms is also known as Moderation. One of the most traditional ways to find heterogeneous treatment effects is to use a Multiple Linear Regression with interaction terms between the variables of interest (i.e. the ones which might lead to treatment heterogeneity) and the treatment indicator. (In this blog post, I will always assume that the data comes from a randomised experiment, such that the assumptions needed to identify treatment effects are valid without further complications.) We then conclude that the treatment effect depends on those variables whose interaction term is statistically significant. For example, if we have only one variable, the regression model would look as follows:

    \[Y = \beta_0 + \beta_1 w + \beta_2 x_1 + \beta_3 (w \cdot x_1),\]

where w is the treatment indicator and x_1 is the variable of interest. In that case, if beta_3 is significant, we know that the treatment effect depends on variable x_1. The treatment effect for each observation can then be calculated as

    \[\beta_1 + \beta_3 \cdot x_1,\]

which is dependent on the value of x_1 and therefore heterogeneous among the different observations. So why is there a need for more advanced methods to estimate heterogeneous treatment effects? The example above was very simple: it only included one variable. Usually, however, we have more than one variable which might influence the treatment effect. To see which variables predict heterogeneous treatment effects, we have to include many interaction terms, not only between each variable and the treatment indicator but also for all possible combinations of variables with and without the treatment indicator. If we have p variables and one treatment, this gives a total number of parameters of:

    \[\displaystyle\sum_{k = 0}^{p + 1} \binom{p + 1}{k}.\]

Since this sum equals 2^{p+1}, with, for example, 5 variables we would already have to include a total of 2^6 = 64 parameters in our Linear Regression Model. This approach suffers from a lack of statistical power and could also cause computational issues. The use of a Multiple Linear Regression also imposes linear relationships unless further interactions with polynomials are included. Because Machine Learning algorithms can handle enormous numbers of variables and combine them in nonlinear and highly interactive ways, researchers have found ways to better estimate heterogeneous treatment effects by combining the field of Machine Learning with the study of Causal Inference.
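To make the traditional approach concrete, here is a minimal sketch of the one-variable interaction regression on simulated data (all numbers invented), recovering the per-observation effect beta_1 + beta_3 * x_1 described above:

```python
# Minimal sketch of the traditional approach: linear regression with a
# treatment-covariate interaction term on simulated data (all values invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5_000
x1 = rng.uniform(size=n)
w = rng.integers(0, 2, size=n)                   # randomised treatment indicator
# assumed true treatment effect increases linearly with x1: effect = 1 + 2 * x1
y = 0.5 + 1.0 * x1 + (1.0 + 2.0 * x1) * w + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"y": y, "x1": x1, "w": w})

# Y = beta_0 + beta_1*w + beta_2*x1 + beta_3*(w*x1)
model = smf.ols("y ~ w * x1", data=df).fit()
print(model.params)                              # beta_1 close to 1, beta_3 close to 2

# Per-observation treatment effect: beta_1 + beta_3 * x1
tau_hat = model.params["w"] + model.params["w:x1"] * df["x1"]
print(tau_hat.describe())
```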

Generalised Random Forests

Over recent years, different Machine Learning algorithms have been developed to estimate heterogeneous treatment effects. Most of them are based on the idea of Decision Trees or Random Forests, just like the one I focus on in this blog post: Generalised Random Forests by Athey, Tibshirani and Wager (2019). Generalised Random Forests follow the idea of Random Forests, and apart from heterogeneous treatment effect estimation, the algorithm can also be used for non-parametric quantile regression and instrumental variable regression. It keeps the main structure of Random Forests, such as recursive partitioning, subsampling and random split selection. However, instead of averaging over the trees, a Generalised Random Forest estimates a weighting function and uses the resulting weights to solve a local GMM model. To estimate heterogeneous treatment effects, the algorithm has two important additional features, which distinguish it from standard Random Forests.

1. Splitting Criterion

The first important difference to Random Forests is the splitting criterion. In a Random Forest, where we want to predict an outcome variable Y, the split at each tree node is performed by minimising the mean squared error of the outcome variable Y. In other words, the variable and value to split on at each tree node are chosen such that the greatest reduction in the mean squared error with regard to the outcomes Y is achieved. Once a tree has been fully grown, its prediction for a new observation x is obtained by letting the observation trickle down from the tree’s root into a terminal node and then taking the average of the outcomes Y of all training observations that fell into the same node. The Random Forest prediction is then calculated as the average of the predicted tree values. In a Causal Forest, we want to estimate treatment effects. As stated by the Fundamental Problem of Causal Inference, however, we can never observe a treatment effect at the individual level. Therefore, the prediction of a treatment effect is given by the difference in the average outcomes Y between the treated and the untreated observations in a terminal node. Without going into too much detail, to find the most heterogeneous but also accurate treatment effects, the splitting criterion is adapted such that it searches for a partitioning where the treatment effects differ the most, including a correction that accounts for how the splits affect the variance of the parameter estimates.

2. Honesty

Random Forests are usually evaluated by applying them to a test set and measuring the accuracy of the predictions of Y using an error measure such as the mean squared error. Because we can never observe treatment effects, this form of performance measurement is not possible for Causal Forests. When estimating causal effects, one therefore evaluates their accuracy by examining the bias, the standard error and the related confidence interval of the estimates. To ensure that an estimate is as accurate as possible, the bias should vanish asymptotically, and the standard error and, thus, the confidence interval should be as small as possible. To enable this statistical inference in their Generalised Random Forest, Athey, Tibshirani and Wager introduce so-called honest trees. To make a tree honest, the training data is split into two subsamples: a splitting subsample and an estimating subsample. The splitting subsample is used to perform the splits and thus grow the tree. The estimating subsample is then used to make the predictions: all observations in the estimating subsample are dropped down the previously grown tree until they fall into a terminal node. The prediction of the treatment effects is then given by the difference in the average outcomes between the treated and the untreated observations of the estimating subsample in the terminal nodes, as sketched below. With such honest trees, the estimates of a Causal Forest are consistent (i.e. the bias vanishes asymptotically) and asymptotically Gaussian, which, together with the estimator for the asymptotic variance, allows valid confidence intervals.
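The toy sketch below illustrates only this honest two-sample idea for a single tree: one subsample grows the tree, the other estimates the leaf-level treatment effects as treated-minus-control mean outcomes. It deliberately ignores the adapted splitting criterion, the subsampling and the forest aggregation of Generalised Random Forests, and the data is invented:

```python
# Toy illustration of "honesty" for a single tree: one subsample grows the tree,
# the other estimates the leaf-level treatment effects. This ignores the adapted
# splitting criterion and the forest aggregation of Generalised Random Forests.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 4_000
X = rng.uniform(size=(n, 10))
w = rng.integers(0, 2, size=n)                        # randomised treatment
tau = 1.0 + 2.0 * X[:, 0]                             # invented heterogeneous effect
y = X[:, 1] + tau * w + rng.normal(scale=0.5, size=n)

# Honest split: one half for growing the tree, one half for estimation
X_split, X_est, w_split, w_est, y_split, y_est = train_test_split(
    X, w, y, test_size=0.5, random_state=0)

tree = DecisionTreeRegressor(max_leaf_nodes=20, min_samples_leaf=100)
tree.fit(X_split, y_split)                            # simplified: splits on Y only

# Drop the estimating subsample down the grown tree and compute, per leaf, the
# difference in mean outcomes between treated and untreated observations.
leaves_est = tree.apply(X_est)
leaf_effects = {
    leaf: y_est[(leaves_est == leaf) & (w_est == 1)].mean()
        - y_est[(leaves_est == leaf) & (w_est == 0)].mean()
    for leaf in np.unique(leaves_est)
}

# The treatment-effect prediction for a new observation is the effect of its leaf
X_new = rng.uniform(size=(5, 10))
print([round(leaf_effects[leaf], 2) for leaf in tree.apply(X_new)])
```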

Causal Forest in Action

To show the advantages of Causal Forests compared to old estimation methods, in the following I will compare the Generalised Random Forest to a Regression with interaction terms in a small simulation study. I use simulated data to be able to compare the estimated treatment effects with the actual treatment effects, which, as we know, would not be observable in real data. To compare the two algorithms with respect to the estimation of heterogeneous treatment effects, I test them on two different data sets, one with and one without heterogeneity in the treatment effect:
Data Set   Heterogeneity      Heterogeneity Variables   Variables           Observations
1          No Heterogeneity   –                         x_1, ..., x_{10}    20'000
2          Heterogeneity      x_1 and x_2               x_1, ..., x_{10}    20'000
This means that in the first data set, all observations have the same treatment effect; in this case, the average treatment effect and the heterogeneous treatment effects are the same. In the second data set, the treatment effect varies with the variables x_1 and x_2. Without going into too much detail here (I will probably write a separate blog post only about causal data generating processes), the relationship between those heterogeneity variables (x_1 and x_2) and the treatment effect is not linear. Both simulated data sets have 20'000 observations containing an outcome variable Y and 10 covariates with values between zero and one. To evaluate the two algorithms, the data sets are split into a train (75%) and a test set (25%). For the Causal Forest, I use causal_forest() from the grf package with tune.parameters = "all". I compare this to an lm() model, which includes all variables, the treatment indicator and the necessary interaction terms between the heterogeneity variables and the treatment indicator:

Linear Regression Model for data set with heterogeneity:

    \[Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_{10} x_{10} + \beta_{11} w +\]

    \[\beta_{12} (w \cdot x_1) + \beta_{13} (w \cdot x_2) + \beta_{14} (x_1 \cdot x_2) + \beta_{15} (w \cdot x_1 \cdot x_2)\]

Linear Regression Model for data set with no heterogeneity:

    \[Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_{10} x_{10} + \beta_{11} w\]

where x_1, ..., x_{10} are the covariates and w is the treatment indicator (i.e. w = 1 if treated and w = 0 if not treated). As already explained above, we usually do not know which variables affect the treatment effect and therefore have to include all possible interaction terms in the Linear Regression Model to see which variables lead to treatment effect heterogeneity. In the case of 10 variables, as we have here, this would mean including a total of 2048 parameters in our Linear Regression Model. However, since the heterogeneity variables are known in the simulated data, I only include the interaction terms for those variables here.
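For readers working in Python rather than R, a rough analogue of this comparison (the post itself uses grf::causal_forest() and lm()) could look as follows, assuming the econml and statsmodels packages; the data-generating process is an invented stand-in for the simulated data described above:

```python
# Rough Python analogue of the comparison (the post itself uses R's grf and lm).
# Assumes the econml and statsmodels packages; the data-generating process below
# is an invented stand-in for the simulated data described in the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from econml.dml import CausalForestDML

rng = np.random.default_rng(1)
n, p = 20_000, 10
X = rng.uniform(size=(n, p))
w = rng.integers(0, 2, size=n)                        # randomised treatment
tau = np.sin(np.pi * X[:, 0] * X[:, 1])               # invented non-linear heterogeneity in x1, x2
y = X[:, 2] + tau * w + rng.normal(scale=0.5, size=n)

train = np.arange(n) < int(0.75 * n)                  # 75% train / 25% test split

# Causal Forest (econml's CausalForestDML as a Python counterpart to grf)
cf = CausalForestDML(discrete_treatment=True, random_state=1)
cf.fit(y[train], w[train], X=X[train])
tau_cf = cf.effect(X[~train]).ravel()

# Linear regression with the interaction terms of the known heterogeneity variables
df = pd.DataFrame({"y": y, "w": w, **{f"x{j + 1}": X[:, j] for j in range(p)}})
lm = smf.ols("y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10"
             " + w + w:x1 + w:x2 + x1:x2 + w:x1:x2", data=df[train]).fit()
tau_lm = (lm.params["w"] + lm.params["w:x1"] * X[~train, 0]
          + lm.params["w:x2"] * X[~train, 1]
          + lm.params["w:x1:x2"] * X[~train, 0] * X[~train, 1])

rmse = lambda est: np.sqrt(np.mean((est - tau[~train]) ** 2))
print("Causal Forest RMSE:", round(rmse(tau_cf), 3))
print("Linear Regression RMSE:", round(rmse(tau_lm), 3))
```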
Data Set           Metric   grf    lm
No Heterogeneity   RMSE     0.01   0.00
Heterogeneity      RMSE     0.08   0.45
Looking at the results, we can see that without heterogeneity, the treatment effect is predicted equally well by the Causal Forest (RMSE of 0.01) and the Linear Regression (RMSE of 0.00). With heterogeneity, however, the Causal Forest is far more accurate (RMSE of 0.08) than the Linear Regression (RMSE of 0.45). As expected, the Causal Forest is better at detecting the underlying non-linear relationship between the heterogeneity variables and the treatment effect than the Linear Regression Model, which can also be seen in the plots below. Thus, even if we already know which variables influence the treatment effect and only need to include the necessary interaction terms, the Linear Regression Model is still less accurate than the Causal Forest due to its lack of modelling flexibility.
[Figure: treatment effect hexplots]

Outlook

I hope that this blog post has helped you to understand what Causal Forests are and what advantages they bring in estimating heterogeneous treatment effects compared to old estimation methods. In my upcoming blog posts on Causal Machine Learning, I will explore this new field of data science further. I will, for example, take a look at the problems of using classical Machine Learning algorithms to estimate causal effects in more detail or introduce different data generating processes to evaluate Causal Machine Learning methods in simulation studies.

References

  • Athey, S., Tibshirani, J., & Wager, S. (2019). Generalised random forests. The Annals of Statistics, 47(2), 1148-1178.
  • Knaus, M. C., Lechner, M., & Strittmatter, A. (2018). Machine learning estimation of heterogeneous causal effects: Empirical monte carlo evidence. arXiv:1810.13237v2.
At STATWORX, we are excited that a new promising field of Machine Learning has evolved in recent years: Causal Machine Learning. In short, Causal Machine Learning is the scientific study of Machine Learning algorithms that allow estimating causal effects. Over the last few years, different Causal Machine Learning algorithms have been developed, combining the advances from Machine Learning with the theory of causal inference to estimate different types of causal effects. My colleague Markus has already introduced some of these algorithms in an earlier blog post. As Causal Machine Learning is a rather complex topic, I will write a series of blog posts to slowly dive into this new fascinating world of data science. This first blog post is an introduction into the topic, focusing on what Causal Machine Learning is and why it is important in practice and for the future of data science.

The Origins of Causal Machine Learning

As Markus has already explained in his earlier blog post, analysis in economic and other social sciences revolves primarily around the estimation of causal effects, that is, the isolated effect of a feature X on the outcome variable Y. An example, which has been investigated by my colleague Matthias, is the causal effect of oil prices on gas prices. Actually, in most cases, the interest lies in so-called treatment effects. A treatment effect refers to a causal effect of a treatment or intervention on an outcome variable of scientific or political interest. In economics, one of the most analyzed treatment effects is the causal effect of a subsidized training program on earnings. Following the potential outcome framework introduced by Rubin (1974), the treatment effect of an individual is defined as follows:

    \[\gamma_i = Y_i(1) - Y_i(0)\]

where Y_i(1) denotes the potential outcome of individual i with treatment and Y_i(0) the potential outcome of individual i without treatment. However, as an individual can either receive the treatment or not, we can only ever observe one of the two potential outcomes for an individual at a given point in time; the individual treatment effect is therefore unobservable. This problem is also known as the Fundamental Problem of Causal Inference. Nevertheless, under certain assumptions, averages of the individual treatment effects can be identified. In randomized experiments, where the treatment is randomly assigned, these assumptions hold naturally, and any aggregation level of individual treatment effects can be identified without further complications. In many situations, however, randomized experiments are not possible, and the researcher has to work with observational data, where these assumptions are usually not valid. Thus, an extensive literature in economics and other fields has focused on techniques for identifying causal effects in cases where these assumptions do not hold.
Prediction and causal inference are distinct (though closely related) problems. Athey, 2017, p. 484
In contrast, the (Supervised) Machine Learning literature has traditionally focused on prediction, that is, producing predictions of the outcome variable Y from the features X. Machine Learning models are designed to discover complex structures in given data and generalize them so that they can be used to make accurate predictions on new data. These algorithms can handle enormous numbers of predictors and combine them in nonlinear and highly interactive ways. They have proven to be hugely successful in practice and are used in applications ranging from medicine to resource allocation in cities.

Bringing Together the Best of Both Worlds

Although economists and other social scientists prioritize precise estimates of causal effects over predictive power, they were intrigued by the advantages of Machine Learning methods, such as precise out-of-sample prediction or the ability to deal with large numbers of features. But as we have seen, classical Machine Learning models are not designed to estimate causal effects, and using off-the-shelf prediction methods from Machine Learning leads to biased estimates of causal effects. Existing Machine Learning techniques had to be modified to exploit the advantages of Machine Learning for consistently and efficiently estimating causal effects – the birth of Causal Machine Learning!
Currently, Causal Machine Learning can be broadly divided into two lines of research, defined by the type of causal effect to be estimated. One line of Causal Machine Learning research focuses on modifying Machine Learning methods to estimate unbiased and consistent average treatment effects. The average treatment effect is the mean of all individual treatment effects in an entire population of interest, and probably the most common parameter analyzed in econometric causal studies. Models from this line of research try to answer questions like: How will customers react on average to a marketing campaign? What is the average effect of a price change on sales? The other line of Causal Machine Learning research focuses on modifying Machine Learning methods to uncover treatment effect heterogeneity. That is, identifying subpopulations (based on features) of individuals who have a larger or smaller than average treatment effect. These models are designed to answer questions such as: Which customers respond the most to a marketing campaign? How does the effect of a price change on sales change with the age of customers?

Decision-Making Questions Need Causal Answers

Although the study of Causal Machine Learning has been mainly driven by economic researchers, its importance for other areas such as business should not be neglected. Companies often reach for classical Machine Learning tools to solve decision-making problems, such as where to set the price or which customers to target with a marketing campaign. However, there is a significant gap between making a prediction and making a decision. To make a data-driven decision, the understanding of causal relationships is key. Let me illustrate this problem with two examples from our daily business.

Example 1: Price Elasticities

At the core of every company’s pricing management is the understanding of how customers will respond to a change in price. To set an optimal price, the company needs to know how much it will sell at different (hypothetical) price levels. The most practicable and meaningful metric answering this question is the price elasticity of demand. Although it might seem straightforward to estimate the price elasticity of demand using classical Machine Learning methods to predict sales as the outcome with the price level as a feature, in practice, this approach does not simply give us the causal effect of price on sales.
There are a number of gaps between making a prediction and making a decision, and underlying assumptions need to be understood in order to optimise data-driven decision making. Athey, 2017, p. 483
Following a similar example introduced by Athey (2017), assume we have historical data on airline prices and the respective occupancy rates. Typically, prices and occupancy rates are positively correlated, as airlines’ usual pricing policies specify raising seat prices when the occupancy rate increases. In this case, a classical Machine Learning model will answer the following question: If on a particular day airline ticket prices are high, what is the best prediction for the occupancy rate on that day? The model will correctly predict that the occupancy rate is likely to be high. However, it would be wrong to infer from this that an increase in prices leads to a higher occupancy rate. From common experience, we know that the true causal effect is quite the opposite – if an airline systematically raised its ticket prices by 10% everywhere, it would be unlikely to sell more tickets.
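This confounding can be reproduced in a few lines of simulation (all numbers invented): a demand shock drives both prices and occupancy, so a purely predictive regression of occupancy on price yields a positive coefficient even though the assumed causal effect of price is negative:

```python
# Small simulation of the airline example: demand drives both price and occupancy,
# so a purely predictive regression finds a positive price coefficient although the
# assumed true causal effect of price on occupancy is negative. Numbers invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 10_000
demand = rng.normal(size=n)                                 # unobserved daily demand shock
price = 100 + 20 * demand + rng.normal(scale=5, size=n)     # airlines raise prices when demand is high
occupancy = 60 + 15 * demand - 0.2 * price + rng.normal(scale=5, size=n)  # true price effect: -0.2

df = pd.DataFrame({"price": price, "occupancy": occupancy})

# Predictive model: regress occupancy on price alone (the demand shock is not observed)
naive = smf.ols("occupancy ~ price", data=df).fit()
print(naive.params["price"])  # positive, despite the assumed causal effect of -0.2
```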

Example 2: Customer Churn

Another common problem that companies like to solve with the help of Machine Learning is the prediction of customer churn (i.e., customers abandoning the firm or its services). Companies are interested in identifying the customers with the highest risk of churn so that they can respond by allocating interventions in the hope of preventing these customers from leaving. Classical Machine Learning algorithms have proven to be very good at predicting customer churn. Unfortunately, these results cannot sufficiently address the company’s resource allocation problem of which customers to best target with intervention strategies. The question of the optimal allocation of resources to customers is of a causal nature: For which customers is the causal effect of intervention strategies on their churn behavior the highest? A study has shown that in many cases, the overlap between customers with the highest risk of churning and customers who would respond most to interventions was much lower than 100%. Thus, treating customer churn as a pure prediction problem and therefore using classical Machine Learning models is not optimal, yielding lower payoffs to companies.

The Wish of Every Data Scientist

Looking beyond these practical examples, we can observe that there is a more profound reason why Causal Machine Learning should be of interest to any data scientist: model generalisability. A Machine Learning model that can capture causal relationships of data will be generalizable to new settings, which is still one of the biggest challenges in Machine Learning.
To illustrate this, I’ll use the example of the rooster and the sun, from “The Book of Why” by Pearl and Mackenzie (2018). A Machine Learning algorithm that is shown data about a rooster and the sun would associate the rising of the sun with the crow of the rooster and may be able to predict when the sun will rise accurately: If the rooster has just crowed, the sun will rise shortly after that. Such a model that is only capable of predicting correlations will not generalize to a situation where there is no rooster. In that case, a Machine Learning model will never predict that the sun will rise because it has never observed such a data point (i.e., without a rooster). If, however, the model captured the true causal relationship, that is, the sun being about to rise causes the rooster to crow, it would be perfectly able to predict that the sun will rise even if there is no rooster.

No True Artificial Intelligence Without Causal Reasoning

Pearl and Mackenzie (2018) go even further, arguing that we can never reach true human-level Artificial Intelligence without teaching machines causal reasoning since cause and effect are the key mechanisms through which we humans process and understand the complex world around us. The ability to predict correlations does not make machines intelligent; it merely allows them to model a reality based on data the algorithm is provided.
The algorithmisation of counterfactuals invites thinking machines to benefit from the ability to reflect on one’s past actions and to participate in this (until now) uniquely human way of thinking about the world. Pearl & Mackenzie, 2018, p. 10
Furthermore, Machine Learning models need the capacity to detect causal effects to ask counterfactual questions, that is, to inquire how some relationship would change given some kind of intervention. As counterfactuals are the building blocks of moral behavior and scientific thought, machines will only be able to communicate more effectively with us humans and reach the status of moral beings with free will if they learn causal and thus counterfactual reasoning.

Outlook

Although this last part has become very philosophical in the end, I hope that this blog post has helped you to understand what Causal Machine Learning is and why it is necessary not only in practice but also for the future of data science in general. In my upcoming blog posts, I will discuss various aspects of this topic in more detail. I will, for example, take a look at the problems of using classical Machine Learning algorithms to estimate causal effects in more detail or compare different Causal Machine Learning algorithms in a simulation study.

References

  • Athey, S. (2017). Beyond prediction: using big data for policy problems. Science, 355, 483-485.
  • Pearl, J., & Mackenzie, D. (2018). The book of why. New York, NY: Basic Books.
  • Rubin, D. B. (1974). Estimating causal effects of treatments in randomised and non-randomised studies. Journal of Educational Psychology, 66(5), 688-701.