
Almost at the same time as our colleagues in Frankfurt, we, the Swiss STATWORX team in Zurich, moved into a new office at the beginning of July.

The new premises in Zurich's hip industrial district provide the perfect setting for further growth and the planned expansion of STATWORX's Swiss office.

Back to the Beginnings

As chance would have it, STATWORX Switzerland was founded directly opposite the new office building. Two years after the founding of STATWORX in Frankfurt, our CEO, Sebastian Heinz, opened this branch here at Turbinenplatz in 2013 to be able to advise the many Swiss companies directly on site.

When the team first expanded in 2018, the location changed, and STATWORX Switzerland worked from the centre of Zurich until the end of June 2020.

In the new office, in the heart of Kreis 5, we now have enough space for the further development of STATWORX Switzerland and the team growth that comes with it.

The modern open space with exposed concrete, wide glass fronts and tropical plants offers an ideal environment for creative and focused work, and the roof terrace is the perfect spot to recharge during breaks.

Switzerland's AI Potential

Over the years, we have already worked with many Swiss companies on their Data Science, Machine Learning and AI challenges. We have been active across industries: whether insurance, financial services, transport, retail or telecommunications, we have gathered experience in almost every sector. The topics have been just as varied and exciting: from forecasting customer churn in the insurance sector and developing a data strategy for public transport, to anomaly detection in telecommunications networks and predicting flight departure delays, we have covered pretty much everything that makes a data scientist's heart beat faster.

In the future, too, we want to use our expertise and broad experience to help drive forward and shape Data Science and AI in Switzerland. We are convinced that our integrative end-to-end approach can make many processes, services and products in companies better and more efficient.

To unlock Switzerland's AI potential, we will be launching a tailor-made portfolio of initiatives and workshops in the near future – stay tuned!

 

At STATWORX, we are excited that a promising new field of Machine Learning has evolved in recent years: Causal Machine Learning. In short, Causal Machine Learning is the scientific study of Machine Learning algorithms that allow the estimation of causal effects. Over the last few years, different Causal Machine Learning algorithms have been developed, combining advances from Machine Learning with the theory of causal inference to estimate various types of causal effects. My colleague Markus has already introduced some of these algorithms in an earlier blog post.

As Causal Machine Learning is a rather complex topic, I am writing a series of blog posts to slowly dive into this fascinating new world of data science. In my first blog post, I gave an introduction to the topic, focusing on what Causal Machine Learning is and why it matters in practice and for the future of data science. In this second blog post, I will introduce the so-called Causal Forest, one of the most popular Causal Machine Learning algorithms for estimating heterogeneous treatment effects.

Why Heterogeneous Treatment Effects?

In Causal Forests, the goal is to estimate heterogeneity in treatment effects. As explained in my previous blog post, a treatment effect refers to the causal effect of a treatment or intervention on an outcome variable of scientific or political interest, for example the causal effect of a subsidised training programme on earnings. As individual treatment effects are unobservable, practice focuses on estimating unbiased and consistent averages of the individual treatment effects. The most common such parameter is the average treatment effect, which is the mean of all individual treatment effects in the entire population of interest. However, treatment effects may sometimes vary widely between different subgroups in the population, being larger or smaller than the average treatment effect. In some cases, it might therefore be more interesting to estimate these different, i.e. heterogeneous, treatment effects.

In most applications it is also interesting to look beyond the average effects in order to understand how the causal effects vary with observable characteristics.

(Knaus, Lechner & Strittmatter, 2018)

The estimation of heterogeneous treatment effects can assist in answering questions like: For whom are there big or small treatment effects? For which subgroup does a treatment generate beneficial or adverse effects? In the field of marketing, for example, the estimation of heterogeneous treatment effects can help optimise resource allocation by answering the question of which customers respond the most to a certain marketing campaign, or for which customers the causal effect of intervention strategies on their churn behaviour is the highest. Or when it comes to pricing, it might be interesting to quantify how a change in price has a varying impact on sales across different age or income groups.

Where Old Estimation Methods Fail

Estimating heterogeneous treatment effects is nothing new. Econometrics and other social sciences have long studied which variables predict a smaller or larger than average treatment effect, which in statistical terms is also known as moderation. One of the most traditional ways to find heterogeneous treatment effects is to use a Multiple Linear Regression with interaction terms between the variables of interest (i.e. the ones which might lead to treatment heterogeneity) and the treatment indicator. (Throughout this blog post, I assume that the data comes from a randomised experiment, so that the assumptions needed to identify treatment effects hold without further complications.) If an interaction term is statistically significant, we conclude that the treatment effect depends on the corresponding variable. For example, if we have only one variable, the regression model would look as follows:

    \[Y = \beta_0 + \beta_1 w + \beta_2 x_1 + \beta_3 (w \cdot x_1),\]

where w is the treatment indicator and x_1 is the variable of interest. In that case, if beta_3 is significant, we know that the treatment effect depends on variable x_1. The treatment effect for each observation can then be calculated as

    \[\beta_1 + \beta_3 x_1,\]

which is dependent on the value of x_1 and therefore heterogeneous among the different observations.
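
To make this concrete, below is a minimal sketch in R of how such an interaction model could be fitted and how the per-observation treatment effect follows from the estimated coefficients. The simulated data and the variable names (y, w, x1) are illustrative assumptions, not taken from the original post.

    # Minimal sketch: detecting treatment effect heterogeneity via an interaction term.
    # The data-generating process below is purely illustrative.
    set.seed(42)
    n  <- 1000
    x1 <- runif(n)                           # variable of interest
    w  <- rbinom(n, 1, 0.5)                  # randomised treatment assignment
    y  <- 1 + 2 * w + 0.5 * x1 + 1.5 * w * x1 + rnorm(n)
    df <- data.frame(y, w, x1)

    fit <- lm(y ~ w * x1, data = df)         # expands to w + x1 + w:x1
    summary(fit)                             # check significance of the w:x1 term

    # Per-observation treatment effect: beta_1 + beta_3 * x1
    tau_hat <- coef(fit)["w"] + coef(fit)["w:x1"] * df$x1
    head(tau_hat)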

So why is there a need for more advanced methods to estimate heterogeneous treatment effects? The example above was very simple; it only included one variable. Usually, however, we have more than one variable which might influence the treatment effect. To see which variables predict heterogeneous treatment effects, we have to include many interaction terms: not only between each variable and the treatment indicator, but also for all possible combinations of variables with and without the treatment indicator. If we have p variables and one treatment, this gives a total number of parameters of:

    \[\displaystyle\sum_{k = 0}^{p + 1} \binom{p + 1}{k}.\]

So, for example, if we had 5 variables, we would have to include a total of 64 parameters in our Linear Regression Model. This approach suffers from a lack of statistical power and could also cause computational issues. The use of a Multiple Linear Regression also imposes linear relationships, unless further interactions with polynomials are included. Because Machine Learning algorithms can handle enormous numbers of variables and combine them in nonlinear and highly interactive ways, researchers have found ways to better estimate heterogeneous treatment effects by combining the field of Machine Learning with the study of causal inference.
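
As a quick sanity check on the formula above, the number of parameters can be computed directly in R; the snippet below is only an illustrative calculation.

    # All interactions among p covariates and one treatment indicator:
    # the sum over k of choose(p + 1, k), which equals 2^(p + 1).
    p <- 5
    sum(choose(p + 1, 0:(p + 1)))   # 64
    2^(p + 1)                       # same result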

Generalised Random Forests

Over recent years, different Machine Learning algorithms have been developed to estimate heterogeneous treatment effects. Most of them are based on the idea of Decision Trees or Random Forests, just like the one I focus on in this blog post: Generalised Random Forests by Athey, Tibshirani and Wager (2019).

Generalised Random Forests follow the idea of Random Forests, and apart from heterogeneous treatment effect estimation, the algorithm can also be used for non-parametric quantile regression and instrumental variable regression. It keeps the main structure of Random Forests, such as the recursive partitioning, subsampling and random split selection. However, instead of averaging over the trees, Generalised Random Forests estimate a weighting function and use the resulting weights to solve a local GMM model. To estimate heterogeneous treatment effects, the algorithm has two important additional features that distinguish it from standard Random Forests.

1. Splitting Criterion

The first important difference to Random Forests is the splitting criterion. In Random Forests, where we want to predict an outcome variable Y, the split at each tree node is performed by minimising the mean squared error of the outcome variable Y. In other words, the variable and value to split at each tree node are chosen such that the greatest reduction in the mean squared error with regard to the outcomes Y is achieved. After each tree partition has been completed, the tree's prediction for a new observation x is obtained by letting it trickle down from the tree's root into a terminal node and then taking the average of the outcomes Y of all training observations that fell into the same node. The Random Forest prediction is then calculated as the average of the predicted tree values.

In Causal Forests, we want to estimate treatment effects. As stated by the Fundamental Problem of Causal Inference, however, we can never observe a treatment effect on an individual level. Therefore, the prediction of a treatment effect is given by the difference in the average outcomes Y between the treated and the untreated observations in a terminal node. Without going into too much detail, to find the most heterogeneous but still accurate treatment effects, the splitting criterion is adapted such that it searches for a partitioning where the treatment effects differ the most, including a correction that accounts for how the splits affect the variance of the parameter estimates.
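
Roughly speaking, and leaving aside the variance correction just mentioned, a candidate split of a parent node P into two children C_1 and C_2 is scored by how strongly the estimated treatment effects in the two children differ, weighted by the child sizes. A simplified version of this criterion from the Generalised Random Forests paper is

    \[\Delta(C_1, C_2) = \frac{n_{C_1} \, n_{C_2}}{n_P^2} \left( \hat{\tau}_{C_1} - \hat{\tau}_{C_2} \right)^2,\]

where n_P, n_{C_1} and n_{C_2} denote the numbers of observations in the parent and the two child nodes, and the tau-hats are the estimated treatment effects in the two children; the split with the largest value of this criterion is chosen.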

tree branches

2. Honesty

Random Forests are usually evaluated by applying them to a test set and measuring the accuracy of the predictions of Y using an error measure such as the mean squared error. Because we can never observe treatment effects, this form of performance measure is not possible in Causal Forests. When estimating causal effects, one therefore evaluates their accuracy by examining the bias, the standard error and the related confidence interval of the estimates. To ensure that an estimate is as accurate as possible, the bias should vanish asymptotically, and the standard error and, thus, the confidence interval should be as small as possible. To enable this statistical inference in their Generalised Random Forest, Athey, Tibshirani and Wager introduce so-called honest trees.

In order to make a tree honest, the training data is split into two subsamples: a splitting subsample and an estimating subsample. The splitting subsample is used to perform the splits and thus grow the tree. The estimating subsample is then used to make the predictions: all observations in the estimating subsample are dropped down the previously grown tree until they fall into a terminal node. The prediction of the treatment effects is then given by the difference in the average outcomes between the treated and the untreated observations of the estimating subsample in the terminal nodes. With such honest trees, the estimates of a Causal Forest are consistent (i.e. the bias vanishes asymptotically) and asymptotically Gaussian, which, together with the estimator for the asymptotic variance, allows the construction of valid confidence intervals.
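
To illustrate how this looks in practice, the grf package grows honest trees by default. The following R sketch shows how one could obtain variance estimates and approximate confidence intervals for the predicted treatment effects; the simulated data set is an illustrative assumption, not part of the original post.

    library(grf)

    # Illustrative simulated data: outcome Y, binary treatment W, 10 covariates X.
    set.seed(123)
    n <- 2000; p <- 10
    X <- matrix(runif(n * p), n, p)
    W <- rbinom(n, 1, 0.5)
    tau <- X[, 1] + X[, 2]                   # hypothetical heterogeneous effect
    Y <- 2 * X[, 3] + tau * W + rnorm(n)

    # honesty = TRUE (the default): within each subsample, one half is used for
    # splitting and the other half for estimating the effects in the leaves.
    cf <- causal_forest(X, Y, W, honesty = TRUE)

    # Out-of-bag predictions with variance estimates give confidence intervals.
    pred  <- predict(cf, estimate.variance = TRUE)
    lower <- pred$predictions - 1.96 * sqrt(pred$variance.estimates)
    upper <- pred$predictions + 1.96 * sqrt(pred$variance.estimates)
    head(data.frame(tau_hat = pred$predictions, lower, upper))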

Causal Forest in Action

To show the advantages of Causal Forests compared to older estimation methods, in the following I compare the Generalised Random Forest to a Regression with interaction terms in a small simulation study. I use simulated data so that the estimated treatment effects can be compared with the actual treatment effects, which, as we know, would not be observable in real data. To compare the two algorithms with respect to the estimation of heterogeneous treatment effects, I test them on two different data sets, one with and one without heterogeneity in the treatment effect:

Data Set | Heterogeneity    | Heterogeneity Variables | Variables    | Observations
1        | No Heterogeneity | –                       | x_1 – x_{10} | 20'000
2        | Heterogeneity    | x_1 and x_2             | x_1 – x_{10} | 20'000

This means that in the first data set, all observations have the same treatment effect. In this case, the average treatment effect and the heterogeneous treatment effects are the same. In the second data set, the treatment effect varies with the variables x_1 and x_2. Without going into too much detail here (I will probably write a separate blog post only about causal data generating processes), the relationship between those heterogeneity variables (x_1 and x_2) and the treatment effect is not linear. Both simulated data sets have 20’000 observations containing an outcome variable Y and 10 covariates with values between zero and one.

To evaluate the two algorithms, the data sets are split into a training set (75%) and a test set (25%). For the Causal Forest, I use the causal_forest() function from the grf package with tune.parameters = "all". I compare this to an lm() model, which includes all variables, the treatment indicator and the necessary interaction terms between the heterogeneity variables and the treatment indicator:

Linear Regression Model for data set with heterogeneity:

    \[Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_{10} x_{10} + \beta_{11} w + \beta_{12} (w \cdot x_1) + \beta_{13} (w \cdot x_2) + \beta_{14} (x_1 \cdot x_2) + \beta_{15} (w \cdot x_1 \cdot x_2)\]

Linear Regression Model for data set with no heterogeneity:

    \[Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_{10} x_{10} + \beta_{11} w\]

where x_1 to x_{10} are the covariates and w is the treatment indicator (i.e. w = 1 if treated and w = 0 if not treated). As already explained above, we usually do not know which variables affect the treatment effect and therefore have to include all possible interaction terms in the Linear Regression Model to see which variables lead to treatment effect heterogeneity. In the case of 10 variables, as we have here, this means we would have to include a total of 2048 parameters in our Linear Regression Model. However, since the heterogeneity variables are known in the simulated data, I only include the interaction terms for those variables here.
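
Below is a rough sketch of this comparison in R. The data-generating process is only a stand-in that mimics the described setup (10 uniform covariates, a non-linear treatment effect driven by x_1 and x_2); it is not the author's actual simulation, so the RMSE values it produces will not match the table below exactly.

    library(grf)

    # Hypothetical data-generating process with non-linear heterogeneity in X1, X2
    set.seed(2020)
    n <- 20000; p <- 10
    X <- matrix(runif(n * p), n, p)
    colnames(X) <- paste0("X", 1:p)
    W   <- rbinom(n, 1, 0.5)
    tau <- sin(pi * X[, 1] * X[, 2])         # true (heterogeneous) treatment effect
    Y   <- drop(X %*% rnorm(p)) + tau * W + rnorm(n)

    train <- sample(n, 0.75 * n)
    test  <- setdiff(seq_len(n), train)

    # Causal Forest with parameter tuning
    cf     <- causal_forest(X[train, ], Y[train], W[train], tune.parameters = "all")
    tau_cf <- predict(cf, X[test, ])$predictions

    # Linear Regression with the necessary interaction terms (X1 and X2 known here)
    df     <- data.frame(Y = Y, W = W, X)
    lm_fit <- lm(Y ~ . + W:X1 + W:X2 + X1:X2 + W:X1:X2, data = df[train, ])

    # Estimated treatment effect = prediction with W = 1 minus prediction with W = 0
    d1 <- df[test, ]; d1$W <- 1
    d0 <- df[test, ]; d0$W <- 0
    tau_lm <- predict(lm_fit, d1) - predict(lm_fit, d0)

    rmse <- function(est, truth) sqrt(mean((est - truth)^2))
    rmse(tau_cf, tau[test])
    rmse(tau_lm, tau[test])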

Data Set         | Metric | grf  | lm
No Heterogeneity | RMSE   | 0.01 | 0.00
Heterogeneity    | RMSE   | 0.08 | 0.45

Looking at the results, we can see that without heterogeneity, the treatment effect is predicted almost equally well by the Causal Forest (RMSE of 0.01) and the Linear Regression (RMSE of 0.00). However, when the treatment effect is heterogeneous, the Causal Forest is far more accurate (RMSE of 0.08) than the Linear Regression (RMSE of 0.45). As expected, the Causal Forest is better at detecting the underlying non-linear relationship between the heterogeneity variables and the treatment effect than the Linear Regression Model, which can also be seen in the plots below. Thus, even if we already know which variables influence the treatment effect and only need to include the necessary interaction terms, the Linear Regression Model is still less accurate than the Causal Forest due to its lack of modelling flexibility.

treatment effect hexplot

Outlook

I hope that this blog post has helped you to understand what Causal Forests are and what advantages they bring in estimating heterogeneous treatment effects compared to old estimation methods. In my upcoming blog posts on Causal Machine Learning, I will explore this new field of data science further. I will, for example, take a look at the problems of using classical Machine Learning algorithms to estimate causal effects in more detail or introduce different data generating processes to evaluate Causal Machine Learning methods in simulation studies.

References

  • Athey, S., Tibshirani, J., & Wager, S. (2019). Generalized random forests. The Annals of Statistics, 47(2), 1148-1178.
  • Knaus, M. C., Lechner, M., & Strittmatter, A. (2018). Machine learning estimation of heterogeneous causal effects: Empirical Monte Carlo evidence. arXiv:1810.13237v2.

At STATWORX, we are excited that a promising new field of Machine Learning has evolved in recent years: Causal Machine Learning. In short, Causal Machine Learning is the scientific study of Machine Learning algorithms that allow the estimation of causal effects. Over the last few years, different Causal Machine Learning algorithms have been developed, combining advances from Machine Learning with the theory of causal inference to estimate different types of causal effects. My colleague Markus has already introduced some of these algorithms in an earlier blog post.

As Causal Machine Learning is a rather complex topic, I will write a series of blog posts to slowly dive into this fascinating new world of data science. This first blog post is an introduction to the topic, focusing on what Causal Machine Learning is and why it is important in practice and for the future of data science.

The Origins of Causal Machine Learning

As Markus has already explained in his earlier blog post, analysis in economics and other social sciences revolves primarily around the estimation of causal effects, that is, the isolated effect of a feature X on the outcome variable Y. An example, which has been investigated by my colleague Matthias, is the causal effect of oil prices on gas prices. In most cases, however, the interest lies in so-called treatment effects. A treatment effect refers to the causal effect of a treatment or intervention on an outcome variable of scientific or political interest. In economics, one of the most analyzed treatment effects is the causal effect of a subsidized training program on earnings.

Following the potential outcome framework introduced by Rubin (1974), the treatment effect of an individual is defined as follows:

    \[\gamma_i = Y_i(1) - Y_i(0),\]

where Y_i(1) indicates the potential outcome of individual i with treatment and, conversely, Y_i(0) denotes the potential outcome of individual i without treatment. However, as an individual can either receive the treatment or not, we can only ever observe one of the two potential outcomes for an individual at one point in time, and the individual treatment effect is therefore unobservable. This problem is also known as the Fundamental Problem of Causal Inference. Nevertheless, under certain assumptions, averages of the individual treatment effects may be identified. In randomized experiments, where the treatment is randomly assigned, these assumptions are naturally valid, and the identification of any aggregation level of individual treatment effects is possible without further complications. In many situations, however, randomized experiments are not possible, and the researcher has to work with observational data, where these assumptions are usually not valid. Thus, an extensive literature in economics and other fields has focused on techniques for identifying causal effects in cases where these assumptions do not hold.
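
The most common of these aggregates, the average treatment effect discussed further below, is simply the expectation of the individual treatment effects over the population of interest:

    \[\tau = E[\gamma_i] = E[Y_i(1) - Y_i(0)].\]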

Prediction and causal inference are distinct (though closely related) problems.

Athey, 2017, p. 484

In contrast, the (Supervised) Machine Learning literature has traditionally focused on prediction, that is, producing predictions of the outcome variable Y from the features X. Machine Learning models are designed to discover complex structures in given data and generalize them so that they can be used to make accurate predictions on new data. These algorithms can handle enormous numbers of predictors and combine them in nonlinear and highly interactive ways. They have proven to be hugely successful in practice and are used in applications ranging from medicine to resource allocation in cities.

Bringing Together the Best of Both Worlds

Although economists and other social scientists prioritize precise estimates of causal effects over predictive power, they were intrigued by the advantages of Machine Learning methods, such as the precise out-of-sample prediction power or the ability to deal with large numbers of features. But as we have seen, classical Machine Learning models are not designed to estimate causal effects, and using off-the-shelf prediction methods from Machine Learning leads to biased estimates of causal effects. The existing Machine Learning techniques had to be modified to use the advantages of Machine Learning for consistently and efficiently estimating causal effects – the birth of Causal Machine Learning!

distracted-economist

Currently, Causal Machine Learning can be broadly divided into two lines of research, defined by the type of causal effect to be estimated. One line of Causal Machine Learning research focuses on modifying Machine Learning methods to estimate unbiased and consistent average treatment effects. The average treatment effect is the mean of all individual treatment effects in an entire population of interest, and probably the most common parameter analyzed in econometric causal studies. Models from this line of research try to answer questions like: How will customers react on average to a marketing campaign? What is the average effect of a price change on sales? The other line of Causal Machine Learning research focuses on modifying Machine Learning methods to uncover treatment effect heterogeneity. That is, identifying subpopulations (based on features) of individuals who have a larger or smaller than average treatment effect. These models are designed to answer questions such as: Which customers respond the most to a marketing campaign? How does the effect of a price change on sales change with the age of customers?

Decision-Making Questions Need Causal Answers

Although the study of Causal Machine Learning has been mainly driven by economic researchers, its importance for other areas such as business should not be neglected. Companies often reach for classical Machine Learning tools to solve decision-making problems, such as where to set the price or which customers to target with a marketing campaign. However, there is a significant gap between making a prediction and making a decision. To make a data-driven decision, the understanding of causal relationships is key. Let me illustrate this problem with two examples from our daily business.

Example 1: Price Elasticities

At the core of every company’s pricing management is the understanding of how customers will respond to a change in price. To set an optimal price, the company needs to know how much it will sell at different (hypothetical) price levels. The most practicable and meaningful metric answering this question is the price elasticity of demand. Although it might seem straightforward to estimate the price elasticity of demand using classical Machine Learning methods to predict sales as the outcome with the price level as a feature, in practice, this approach does not simply give us the causal effect of price on sales.
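
As a reminder of the standard textbook definition (not specific to this post), the price elasticity of demand relates the relative change in the quantity demanded Q to the relative change in the price P:

    \[\varepsilon = \frac{\partial Q / Q}{\partial P / P} = \frac{\partial Q}{\partial P} \cdot \frac{P}{Q}.\]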

There are a number of gaps between making a prediction and making a decision, and underlying assumptions need to be understood in order to optimise data-driven decision making.

Athey, 2017, p. 483

Following a similar example introduced by Athey (2017), assume we have historical data on airline prices and the respective occupancy rates. Typically, prices and occupancy rates are positively correlated, as usual airline pricing policies specify that seat prices are raised when the occupancy rate increases. In this case, a classical Machine Learning model will answer the following question: If on a particular day, airline ticket prices are high, what is the best prediction for the occupancy rate on that day? The model will correctly predict that the occupancy rate is likely to be high. However, it would be wrong to infer from this that an increase in prices leads to a higher occupancy rate. From common experience, we know that the true causal effect is quite the opposite – if an airline systematically raised its ticket prices by 10% everywhere, it would be unlikely to sell more tickets.

Example 2: Customer Churn

Another common problem that companies like to solve with the help of Machine Learning is the prediction of customer churn (i.e., customers abandoning the firm or its service). Companies are interested in identifying the customers with the highest risk of churn so that they can respond by allocating interventions in the hope of preventing these customers from leaving.

Classical Machine Learning algorithms have proven to be very good at predicting customer churn. Unfortunately, these results cannot sufficiently address the company's resource allocation problem of which customers to best target with intervention strategies. The question of the optimal allocation of resources to customers is of a causal nature: For which customers is the causal effect of intervention strategies on their churn behavior the highest? A study has shown that in many cases, the overlap between customers with the highest risk of churning and customers who would respond most to interventions was much lower than 100%. Thus, treating customer churn as a pure prediction problem and therefore using classical Machine Learning models is not optimal, yielding lower payoffs to companies.

The Wish of Every Data Scientist

Looking beyond these practical examples, we can observe that there is a more profound reason why Causal Machine Learning should be of interest to any data scientist: model generalizability. A Machine Learning model that can capture the causal relationships in data will be generalizable to new settings, which is still one of the biggest challenges in Machine Learning.

rooster

To illustrate this, I’ll use the example of the rooster and the sun, from „The Book of Why“ by Pearl and Mackenzie (2018). A Machine Learning algorithm that is shown data about a rooster and the sun would associate the rising of the sun with the crow of the rooster and may be able to predict when the sun will rise accurately: If the rooster has just crowed, the sun will rise shortly after that. Such a model that is only capable of predicting correlations will not generalize to a situation where there is no rooster. In that case, a Machine Learning model will never predict that the sun will rise because it has never observed such a data point (i.e., without a rooster). If, however, the model captured the true causal relationship, that is, the sun being about to rise causes the rooster to crow, it would be perfectly able to predict that the sun will rise even if there is no rooster.

No True Artificial Intelligence Without Causal Reasoning

Pearl and Mackenzie (2018) go even further, arguing that we can never reach true human-level Artificial Intelligence without teaching machines causal reasoning since cause and effect are the key mechanisms through which we humans process and understand the complex world around us. The ability to predict correlations does not make machines intelligent; it merely allows them to model a reality based on the data they are provided.

The algorithmisation of counterfactuals invites thinking machines to benefit from the ability to reflect on one’s past actions and to participate in this (until now) uniquely human way of thinking about the world.

Pearl & Mackenzie, 2018, p. 10

Furthermore, Machine Learning models need the capacity to detect causal effects to ask counterfactual questions, that is, to inquire how some relationship would change given some kind of intervention. As counterfactuals are the building blocks of moral behavior and scientific thought, machines will only be able to communicate more effectively with us humans and reach the status of moral beings with free will if they learn causal and thus counterfactual reasoning.

Outlook

Although this last part has become very philosophical in the end, I hope that this blog post has helped you to understand what Causal Machine Learning is and why it is necessary not only in practice but also for the future of data science in general. In my upcoming blog posts, I will discuss various aspects of this topic in more detail. I will, for example, take a look at the problems of using classical Machine Learning algorithms to estimate causal effects in more detail or compare different Causal Machine Learning algorithms in a simulation study.

References

  • Athey, S. (2017). Beyond prediction: Using big data for policy problems. Science, 355(6324), 483-485.
  • Pearl, J., & Mackenzie, D. (2018). The book of why. New York, NY: Basic Books.
  • Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688-701.

Last Sunday, part of our STATWORX crew went on a day trip to the picturesque Rheingau, the famous wine region known for its Riesling. The trip was planned around the three tourist pillars of the region: wine, castles & hiking.

Weinwanderung-Collage-1

A warm welcome

Full of excitement (which we had already expressed in the days before, by sending each other funny wine and hiking related GIFs), we met at noon at Frankfurt’s central station, where we all took the train to the Rheingau. After a one-hour train journey, sweetened with some delicious self-made puff pastry snacks from our dear colleague Jessi, we arrived at the beautiful village of Oestrich-Winkel. There, our two tour guides Bernadette and Klaus Wolter from RheingauPur picked us up. After a warm welcome to the region, everyone received their own wine glass for the different tastings during the day, and off we went!

Wine tasting can be a real challenge

While walking through the beautiful vineyards, it was hard to believe that we were only a one-hour train ride away from Frankfurt. With the warm sun and the light breeze, the vast landscapes with beautiful castles could evoke feelings of holidays in Tuscany or Provence. On our way, Bernadette and Klaus taught us about the whole wine production process, starting with the right way to maintain a vineyard up to the correct wine-tasting technique. Because some of us had some starting difficulties with this tasting technique (or perhaps they just wanted to refill their glass very quickly 😄), here one more time: Take a big sip of wine in your mouth, close your nose and swirl the wine around a little. Then swallow the wine and open your nose at the same time. This way, you can taste the full flavors of the wine.

Weinwanderung-Collage-2

Gin at a vineyard?

After around three hours of learning, hiking, and of course, also some wine tasting, we arrived at the beautiful Castle Johannisberg. From there, we had a beautiful view of the Rhine and the surrounding vineyards. In the castle’s shop, some of us bought a bottle of the local Gin – yes, you’ve read correctly, they produce not only wine but also some delicious fruity and smooth Gin – a real treat for our STATWORX Gin lovers. After one last glass of delightful Riesling in the warm evening sun, we headed back through the vineyards to Oestrich-Winkel, where we all enjoyed a tasty traditional dinner.

Off to new adventures!

On the train back, we were still talking about all the lovely experiences of the day. Seeing a grape harvesting machine in action is not something you see every day! And even the non-wine drinkers among us had to admit that the wines we had tasted that day were excellent. Although none of us had the impression that we walked that much during the day, we all felt our feet and legs as we got off the train in Frankfurt. With this feeling and the warmth of the sun still on our faces, we all went home already looking forward to the next adventure on one of our STATWORX trips!

Thank you very much, Bernadette and Klaus, for the lovely tour!
