
Data operations is an increasingly important part of data science because it enables companies to feed large amounts of business data back into production effectively. At STATWORX, we therefore operationalize our models and algorithms by translating them into Application Programming Interfaces (APIs). Representational State Transfer (REST) APIs are well suited to be implemented as part of a modern micro-services infrastructure. They are flexible, easy to deploy, scale, and maintain, and they can be accessed by multiple clients and client types at the same time. Their primary purpose is to simplify programming by abstracting the underlying implementation and only exposing the objects and actions needed for further development and interaction. An additional advantage of APIs is that they allow code written in different programming languages or by different development teams to be combined easily. This is because APIs are naturally separated from each other, and communication with and between them is handled via HTTP requests to an IP address or URL, typically using the JSON or XML format. Imagine, for example, an infrastructure where an API written in Python and one written in R communicate with each other and serve an application written in JavaScript.

In this blog post, I will show you how to translate a simple R script, which transforms tables from wide to long format, into a REST API with the R package Plumber and how to run it locally or with Docker. I have created this example API for our trainee program, and it serves our new data scientists and engineers as a starting point to familiarize themselves with the subject.

Translate the R Script

Transforming an R script into a REST API is quite easy. All you need, in addition to R and RStudio, is the package Plumber and optionally Docker. Clients interact with REST APIs by sending REST requests, of which GET, PUT, POST, and DELETE are probably the most commonly used. Here is the code of the example API, which transforms tables from wide to long or from long to wide format:

## transform wide to long and long to wide format
#' @post /widelong
#' @get /widelong
function(req) {
  # library
  require(tidyr)
  require(dplyr)
  require(magrittr)
  require(httr)
  require(jsonlite)

  # post body
  body <- jsonlite::fromJSON(req$postBody)

  .data <- body$.data
  .trans <- body$.trans
  .key <- body$.key
  .value <- body$.value
  .select <- body$.select

  # wide or long transformation
  if(.trans == 'l' || .trans == 'long') {
    .data %<>% gather(key = !!.key, value = !!.value, !!.select)
    return(.data)
  } else if(.trans == 'w' || .trans == 'wide') {
    .data %<>% spread(key = !!.key, value = !!.value)
    return(.data)
  } else {
    # return an error message to the client if no valid transformation is specified
    return('Please specify the transformation: l/long or w/wide')
  }
}

As you can see, it is a standard R function, extended by the special plumber comments @post and @get, which enable the API to respond to these two request types. The path /widelong must be appended to any incoming request because several API functions, each responding to a different path, can be stacked in one API. We could, for example, add another function with the path /naremove to our API, which removes NAs from tables, as sketched below.
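A minimal sketch of what such an endpoint could look like follows here; the function body is an assumption for illustration and not part of the original API:

## remove rows containing NAs from a table
#' @post /naremove
#' @get /naremove
function(req) {
  require(jsonlite)

  # parse the table from the request body
  body <- jsonlite::fromJSON(req$postBody)
  .data <- body$.data

  # drop all rows that contain at least one NA
  .data <- .data[stats::complete.cases(.data), ]
  return(.data)
}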

The R function itself has one argument, req, which is used to receive a (POST) request body. In general, there are two ways to send additional arguments and objects to a REST API: the header and the body. I decided to use only a body and no header at all, which makes the API cleaner and safer and allows us to send larger objects. A header could, for example, be used to set some optional function arguments, but should otherwise be used sparingly.
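To illustrate the header option, here is a minimal sketch of how an optional argument could be passed via a header instead; the header name X-Trans and the server-side line are assumptions for illustration, not part of the API above:

library(httr)

# a small example body (the full request example appears later in this post)
body <- list(.data = data.frame(time = "2009-01-01", X = 1, Y = 2, Z = 3),
             .key = "stock", .value = "price", .select = c("X", "Y", "Z"))

# client side: pass the transformation type via a hypothetical header
raw.result <- POST(url = "http://127.0.0.1:8000/widelong",
                   add_headers("X-Trans" = "long"),
                   body = body, encode = "json")

# server side, inside the plumber function, the header could be read via
# .trans <- req$HTTP_X_TRANS  # plumber maps the header "X-Trans" to HTTP_X_TRANS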

Using a body with the API is also the reason for allowing GET and POST requests (@post, @get) at the same time. While some clients prefer to send a body with a GET request when they are not permanently posting something to the server, many other clients cannot send a body with a GET request at all; in that case, supporting POST requests is mandatory. Typical clients are applications, Integrated Development Environments (IDEs), and other APIs. By accepting both request types, our API therefore gains greater response flexibility.
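For clients that can send a body with a GET request, the httr function VERB() offers a way to do so from R; a minimal sketch, reusing the body from the previous snippet:

library(httr)

# send the request as a GET with a JSON body; many clients cannot do
# this, which is why the API accepts POST requests as well
raw.result <- VERB("GET", url = "http://127.0.0.1:8000/widelong",
                   body = body, encode = "json")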

For the request-response format of the API, I decided to stick with JavaScript Object Notation (JSON), which is probably the most common format. It would also be possible to use Extensible Markup Language (XML) with R Plumber instead. The decision for one or the other will most likely depend on which additional R packages you want to use or on which format the API's clients predominantly use. The R packages used to handle REST requests in my example API are jsonlite and httr. The three Tidyverse packages are used for the table transformation to wide or long format.

Run the API

The finished REST API can be run locally with R or RStudio as follows:

library(plumber)

widelong_api <- plumber::plumb("./path/to/directory/widelongwide.R")
widelong_api$run(host = '127.0.0.1', port = 8000)

Upon starting the API, plumber serves it at the specified IP address and port, and a client, e.g., another R instance, can now begin to send REST requests. Plumber also opens Swagger, a browser tool that is useful for checking whether your API works as intended. Once the development of an API is finished, I would suggest building a Docker image and running it in a container. That makes the API highly portable and independent of its host system, which is especially important since we want to use most APIs in production and deploy them to, e.g., a company server or the cloud. Here is the Dockerfile to build the Docker image of the example API:

FROM trestletech/plumber

# Install dependencies
RUN apt-get update --allow-releaseinfo-change && apt-get install -y \
    liblapack-dev \
    libpq-dev

# Install R packages
RUN R -e "install.packages(c('tidyr', 'dplyr', 'magrittr', 'httr', 'jsonlite'), \
    repos = 'http://cran.us.r-project.org')"

# Add API
COPY ./path/to/directory/widelongwide.R /widelongwide.R

# Make port available
EXPOSE 8000

# Entrypoint
ENTRYPOINT ["R", "-e", \
    "widelong <- plumber::plumb('widelongwide.R'); \
    widelong$run(host = '0.0.0.0', port = 8000)"]

CMD ["/widelongwide.R"]
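To build the image and start a container from it, commands along these lines should work; the image and container names widelong-api and widelong-container are arbitrary choices:

# build the image from the directory containing the Dockerfile
docker build -t widelong-api .

# run the container and map the exposed port 8000 to the host
docker run -d -p 8000:8000 --name widelong-container widelong-api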

Send a REST Request

The wide-long example API can generally respond to any client that sends a POST or GET request with a body in JSON format containing a table in csv format and all the information needed to transform it. One example client is a web application that I have written for our trainee program to supplement the wide-long API.

The application is written in R Shiny, a great R package for turning your static plots and outputs into an interactive dashboard. If you are interested in how to create dashboards in R, check out other posts on our STATWORX Blog.
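The original application is not reproduced here, but a minimal sketch of such a Shiny client could look as follows; the UI controls and the fixed key, value, and select columns are assumptions for illustration:

library(shiny)
library(httr)
library(jsonlite)

ui <- fluidPage(
  fileInput("file", "Upload a csv table"),
  selectInput("trans", "Transformation", choices = c("long" = "l", "wide" = "w")),
  tableOutput("result")
)

server <- function(input, output) {
  output$result <- renderTable({
    req(input$file)
    # build the request body from the uploaded table and the selected transformation
    body <- list(
      .data   = read.csv(input$file$datapath),
      .trans  = input$trans,
      .key    = "stock",
      .value  = "price",
      .select = c("X", "Y", "Z")
    )
    # send the request to the locally running wide-long API and parse the response
    res <- POST(url = "http://127.0.0.1:8000", path = "widelong",
                body = body, encode = "json")
    fromJSON(rawToChar(res$content))
  })
}

shinyApp(ui, server)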

Last but not least, here is an example of how to send a REST request from R or RStudio:

library(httr)
library(jsonlite)
options(stringsAsFactors = FALSE)

# url for local testing
url <- "http://127.0.0.1:8000"

# url for docker container (uncomment to use instead)
# url <- "http://0.0.0.0:8000"

# read example stock data
.data <- read.csv('./path/to/data/stocks.csv')

# create example body
body <- list(
  .data = .data,
  .trans = "w",
  .key = "stock",
  .value = "price",
  .select = c("X","Y","Z")
)

# set API path
path <- 'widelong'

# send POST Request to API
raw.result <- POST(url = url, path = path, body = body, encode = 'json')

# check status code
raw.result$status_code

# retrieve transformed example stock data
.t_data <- fromJSON(rawToChar(raw.result$content))

As you can see, it is quite easy to make REST requests in R. If you need some test data, you could use the stocks example from the tidyr documentation, as shown below.
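A version of that example, written to a csv file so that it matches the read.csv() call above, might look like this:

# example stock data in wide format, as used in the tidyr documentation
stocks <- data.frame(
  time = as.Date('2009-01-01') + 0:9,
  X = rnorm(10, 0, 1),
  Y = rnorm(10, 0, 2),
  Z = rnorm(10, 0, 4)
)
write.csv(stocks, './path/to/data/stocks.csv', row.names = FALSE)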

Summary

In this blog post, I showed you how to translate a simple R script, which transforms tables from wide to long format, into a REST API with the R package Plumber, and how to run it locally or with Docker. I hope you enjoyed the read and learned something about operationalizing R scripts into REST APIs. You are, of course, welcome to copy and use any code from this blog post to create your own REST APIs with R.

Stay tuned and visit our STATWORX Blog again soon.


In the last Docker tutorial, Olli presented how to build a Docker image of R-base scripts with rocker and how to run them in a container. Based on that, I'm going to discuss how to automate the process by using a bash/shell script. Since we usually use containers to deploy our apps at STATWORX, I created a small test app with R Shiny to be stored in a test container. It is, of course, possible to store any other application with this automated script as well. I also created a repository at our blog GitHub, where you can find all files and the test app.

Feel free to test and use any of its content. If you are interested in writing a setup script yourself, note that it is also possible to use alternative programming languages such as Python.

the idea behind it

Setting up and starting a container manually means typing the same sequence of commands into the terminal over and over again, for example:

$ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL   SWARM   DOCKER    ERRORS
Dataiku       -        virtualbox   Stopped                 Unknown   
default       -        virtualbox   Stopped                 Unknown   
ShowCase      -        virtualbox   Stopped                 Unknown   
SQLworkshop   -        virtualbox   Stopped                 Unknown   
TestMachine   -        virtualbox   Stopped                 Unknown   

$ docker-machine start TestMachine
Starting "TestMachine"...
(TestMachine) Check network to re-create if needed...
(TestMachine) Waiting for an IP...
Machine "TestMachine" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.

$ eval $(docker-machine env --no-proxy TestMachine)

$ docker ps -a
CONTAINER ID   IMAGE       COMMAND           CREATED       STATUS                            PORTS                    NAMES
cfa02575ca2c   testimage   "/sbin/my_init"   2 weeks ago   Exited (255) About a minute ago   0.0.0.0:2000->3838/tcp   testcontainer

$ docker start testcontainer
testcontainer

$ docker ps
...

Building and rebuilding Docker images over and over again every time you make changes to your application can get a little tedious, especially if you type in the same old commands all the time. Olli discussed the advantage of creating an intermediary image for the most time-consuming processes, like installing R packages, to speed things up during content creation. That is an excellent practice, and you should try to do this for every viable project. But how about speeding up the containerisation itself? A small helper tool is needed that, once it is written, does all the work for you.

the tools to use

To create Docker images and containers, you need to install Docker on your computer. If you want to test or use all the material provided in this blog post and on our blog GitHub, you should also install VirtualBox, R, and RStudio, if you do not already have them. If you use Windows (10) as your operating system, you also need to install the Windows Subsystem for Linux. Alternatively, you can create your own script files with PowerShell or something similar.

The tool itself is a bash/shell script that builds and runs Docker containers for you. All you have to do to use it is copy the docker_setup executable into your project directory and execute it. The only thing the tool requires from you afterwards is some naming input.

If the execution fails or produces errors for some reason, try running the tool via the terminal:

source ./docker_setup

To replicate this or start a new bash/shell script yourself, open your preferred text editor, create a new text file, place the preamble #!/bin/bash at the very top, and save it. Next, open your terminal, navigate to the directory where you just saved your script, and change its mode by typing chmod +x your_script_name. To test whether it works correctly, you can, for example, add the line echo 'it works!' below your preamble.

#!/bin/bash
echo 'it works!'

If you want to check out all available mode options, visit the wiki; for a complete guide, visit the Linux Shell Scripting Tutorial.

the code that runs it

If you open the docker_setup executable with your preferred text editor, the code might feel a little overwhelming or confusing at first, but it is pretty straightforward.

#!/bin/bash

# This is a setup bash for docker containers!
# activate via: chmod 0755 setup_bash or chmod +x setup_bash
# navigate to wd docker_contents
# execute in terminal via: source ./setup_bash

echo ""
echo "Welcome, you are executing a setup bash script for docker containers."
echo ""

echo "Do you want to use the Default from the global configurations?"
echo ""
source global_conf.sh
echo "machine name = $machine_name"
echo "container = $container_name"
echo "image = $image_name"
echo "app name = $app_name"
echo "password = $password_name"
echo ""

docker-machine ls
echo ""
read -p "What is the name of your docker-machine [default]? " machine_name
echo ""
if [[ "$(docker-machine status $machine_name 2> /dev/null)" == "" ]]; then
    echo "creating machine..." \
        && docker-machine create $machine_name
else
    echo "machine already exists, starting machine..." \
        && docker-machine start $machine_name
fi
echo ""
echo "activating machine..."
eval $(docker-machine env --no-proxy $machine_name)
echo ""

docker ps -a
echo ""
read -p "What is the name of your docker container? " container_name
echo ""

docker image ls
echo ""
read -p "What is the name of your docker image? (lower case only!!) " image_name
echo ""

The main code structure rests on nested if statements. Contrary to a manual Docker setup via the terminal, the script needs to account for many different possibilities and even leave some error margin. The first if statement, for example, shown in the code above, checks whether the requested docker-machine already exists. If the machine does not exist, it is created. If it does exist, it is simply started for usage.

The code elements and commands used are even more straightforward. The echo command prints some information or a blank line for better readability. The read command reads user input and stores it as a variable, which in turn enters all further code instances where it is needed. Most other code elements are Docker commands and are essentially the same as the ones entered manually via the terminal. If you are interested in learning more about Docker commands, check the documentation and Olli's blog post.
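The script also sources a global_conf.sh file for its default values. That file is not shown in this post; a minimal sketch of what it could contain, based on the variables echoed above, might be (the values are hypothetical placeholders):

#!/bin/bash
# global_conf.sh - hypothetical default values for the setup script;
# the actual file in the repository may differ
machine_name="TestMachine"
container_name="testcontainer"
image_name="testimage"
app_name="testapp"
password_name="testpassword"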

the git repository

The focal point of the Git Repository at our blog github is the automated docker setup, but also contains some other conveniences and hopefully will grow into an entire collection of useful scripts and bashes. I am aware that there are potentially better, faster and more convenient solutions for everything included in the repository, but if we view it as an exercise and a form of creative exchange, I think we can get some use out of it.

The docker_error_logs executable allows for quick troubleshooting and storage of log files if your program or app fails to work within your docker container.
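The executable itself is not reproduced here; conceptually, its core is a call like the following, where the container name is read from user input (a hypothetical sketch, not the repository's exact code):

#!/bin/bash
# dump a container's logs, including errors, into a file for inspection
read -p "Which container do you want the logs from? " container_name
docker logs $container_name > "${container_name}_logs.txt" 2>&1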

The git_repair executable is not fully tested yet and should be used with care. The idea is to quickly check whether your local project or repository is connected to a corresponding GitHub repository, given a URL, and if not, to 'repair' the connection. It can further manage git pulls, commits, and pushes for you, but again, please use it carefully.

the next project to come

As mentioned, I plan on further expanding the collection and usefulness of our blog GitHub soon. In the next step, I will add more convenience to the Docker setup by adding a separate file that provides the option to write and store default values for repeated executions. So stay tuned and visit our STATWORX Blog again soon. Until then, happy coding.
