Wednesday, August 26, 2020

Jupyter: JUlia PYThon and R

it's "ggplot2", not "ggplot", but it is ggplot()


Did you know that @projectJupyter's Jupyter Notebook (and JupyterLab) name came from combining three programming languages: JUlia, PYThon and R?

Readers of my blog do not need an introduction to Python. But what about the other two?

Today we will talk about R. Actually, R and Python, on the Raspberry Pi.

R Origin

R traces its origins to the S statistical programming language, developed in the 1970s at Bell Labs by John M. Chambers. He is also the author of books such as Computational Methods for Data Analysis (1977) and Graphical Methods for Data Analysis (1983). R is an open source implementation of that statistical language. It is compatible with S but also has enhancements over the original.


A quick getting started guide is available here:


Installing Python

As a recap, in case you don't have Python 3 and a few basic modules, the installation goes as follows (open a terminal window first):

pi@raspberrypi: $ sudo apt install python3 python3-dev build-essential

pi@raspberrypi: $ sudo pip3 install jedi pandas numpy

Installing R

Installing R is equally easy:


pi@raspberrypi: $ sudo apt install r-recommended


We also need to install a few development packages:

pi@raspberrypi: $ sudo apt install libffi-dev libcurl4-openssl-dev libxml2-dev

This will allow us to install many packages in R. Now that R is installed, we can start it:

pi@raspberrypi: $ R

Installing packages

Once inside R, we can install packages using install.packages('name') where name is the name of the package. For example, to install ggplot2 (to install tidyverse, simply replace ggplot2 with tidyverse):

> install.packages('ggplot2')

To load it:

> library(ggplot2)

And we can now use it. We will use the mpg dataset and plot engine displacement vs highway miles per gallon, with the colour mapped to the vehicle class:

> ggplot(mpg, aes(displ, hwy, colour = class)) + geom_point()


Combining R and Python

We can go about this in two ways: call R from Python, or call Python from R. Here, we will call Python from R.

First, we need to install reticulate (the package that interfaces with Python):

> install.packages('reticulate')

And load it:

> library(reticulate)

We can verify which Python binary reticulate is using:

> py_config()

Then we can use it to execute some Python code. For example, to import the os module and use os.listdir() from R ($ works much like Python's .):

> os <- import("os")
> os$listdir(".")
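
For comparison, the same operation in plain Python looks like this (the R snippet above is a direct translation of it):

```python
import os

# equivalent of os$listdir(".") called from R via reticulate
entries = os.listdir(".")
print(entries)
```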

Or even enter a Python REPL:

> repl_python()
>>> import pandas as pd


Type exit to leave the Python REPL.

One more trick: Radian

We will now exit R (quit()) and install radian, a command-line REPL for R that is fully aware of the reticulate and Python integration:

pi@raspberrypi: $ sudo pip3 install radian

pi@raspberrypi: $ radian

This is just like the R REPL, only better. And you can switch to Python very quickly by typing ~:

r$> ~

As soon as the ~ is typed, radian enters Python mode by itself:

r$> reticulate::repl_python()


Hitting backspace at the beginning of the line switches back to the R REPL:


I'll cover more functionality in a future post.

Francois Dion

Sunday, May 3, 2020

The Democratization of Computer Vision

The Raspberry Pi foundation does it again. There is both a new guide and some new hardware that will create a big impact for students, academia and businesses.

See the details on the Dion Research Blog, The Democratization of Computer Vision

Francois Dion

Sunday, March 11, 2018

Are you smarter than a fifth grader?

"the editorial principle that nothing should be given both graphically and in tabular form has to become unacceptable" - John W. Tukey

Back to school

In the United States, most fifth grade students are learning about a fairly powerful type of visualization of data. In some states, it starts at an even younger age, in the 4th grade. As classwork and homework, they will produce many of these plots:

They are called stem-and-leaf displays, or stem-and-leaf plots. The left side of the vertical bar is the stem, and the right side, the leaves. The key or scale is important as it indicates the multiplier. The top row in the image above has a stem of 2 and leaves 0, 6 and 7, representing 20, 26 and 27. Invented by John W. Tukey in the 1970s (see the statistics section of part II and the classics section of part V of my "ex-libris" series), few people use them once they leave school. Doing stem-and-leaf plots by hand is not the most entertaining thing to do. The original plot was also limited to handling small data sets. But there is a variation on the original display that gets around these limitations.
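
The mechanics are easy to sketch in code. The stem_and_leaf helper below is my own minimal sketch with a fixed stem width of 10 (the classroom version), not code from any package:

```python
from collections import defaultdict

def stem_and_leaf(values):
    """Split each value into a stem (tens) and a leaf (units), like the classroom plot."""
    stems = defaultdict(list)
    for v in sorted(values):
        stems[v // 10].append(v % 10)
    return "\n".join(
        f"{stem} | {' '.join(str(leaf) for leaf in stems[stem])}"
        for stem in sorted(stems)
    )

# the top row of the example above: stem 2, leaves 0, 6 and 7
print(stem_and_leaf([20, 26, 27, 31, 35, 42]))
```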

"Data! Data! Data!"

Powerful? Why did I say that in the first paragraph?

And why should stem-and-leaf plots be of interest to students, teachers, analysts, data scientists, auditors, statisticians, economists, managers and other people teaching, learning or working with data? There are a few reasons, with the two most important being:
  • they represent not only the overall distribution of the data, but the individual data points themselves (or a close approximation)
  • they can be more useful than histograms as data size increases, particularly on long-tailed distributions


An example with annual salaries

We will look at a data set of the salaries for government employees in Texas (over 690,000 values, from an August 2016 snapshot of the data from the Texas Tribune Salary Explorer). From this we create a histogram, one of the most popular plots for looking at distributions. As can be seen, we can't really tell any detail (left: Python pandas hist; right: R hist):

Regardless of the language or software package used, we get one very large bar with almost all the observations and perhaps (as in R or seaborn) a second tiny bar next to it. A box plot (another plot popularized by John Tukey) would have been a bit more useful here, adding some "outlier" dots. And how about a stem-and-leaf plot? We are not going to sort and draw something by hand with close to 700,000 values...
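
You can see why with a quick back-of-the-envelope simulation. This is not the Texas data, just a synthetic long-tailed distribution (the lognormal parameters are made up for illustration):

```python
import random
from collections import Counter

random.seed(42)
# simulate a long-tailed salary distribution: most values modest, a few huge
salaries = [random.lognormvariate(10.6, 0.5) for _ in range(10000)]
salaries.append(5_266_667)  # one extreme outlier, as in the Texas data

# with 10 equal-width bins spanning the full range, almost everything
# lands in the first bin, just like the histograms above
lo, hi = min(salaries), max(salaries)
width = (hi - lo) / 10
bins = Counter(int((s - lo) // width) for s in salaries)
print(bins[0] / len(salaries))  # fraction of the data in the very first bar
```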

Fortunately, I've built a package (Python modules plus a command-line tool) that handles stem-and-leaf plots at that scale (and much, much larger). It is available from github (the code has been open source since 2016) and pypi (pip install stemgraphic).
So how does it look for the same data set?

Now we can see a lot of detail. The scale was automatically found to be optimal at 10,000, with consecutive stems ranging from 0 to 35 (350,000). We can read numbers directly, without having to refer to a color-coded legend or other similar approach. At the bottom we see a value of 0.00 (who works and is considered employed for $0 annual income? Apparently quite a few in this data set) and a maximum of $5,266,667.00 (hint: sports related). We see a median of about $42K, and multiple classes of employees, ranging from non-managerial to middle management, upper management and beyond ($350,000+). We've limited the display here to 500 observations, and that is what the aggregate count in the leftmost column tells us. Notice also how we have a convenient sub-binning going on, allowing us to see which $1,000 ranges are more common. All this from one simple display. And of course we can further trim, zoom, filter or limit what data or slice of data we want to inspect.
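
The stem/leaf split at that scale is just integer division and remainder. Here is a hypothetical miniature of the idea (the salary values below are picked from the text, not taken from the package's output):

```python
# with scale 10000, a salary's stem is its $10,000 bucket, and the leading
# digit of the remainder gives the $1,000 sub-bin mentioned above
scale = 10_000
for salary in [0, 42_000, 350_000, 5_266_667]:
    stem, remainder = divmod(salary, scale)
    leaf = remainder // 1_000
    print(f"{stem:4d} | {leaf}")
```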

Knowing your data (particularly at scale) is a fundamental first step to turning it into insight. Here, we were able to know our data a lot better by simply using the function stem_graphic() instead of hist() (or use the included stem command line tool - compatible with Windows, Mac OS and Linux).

Tune in next episode...

Customers using my software products for data governance, anomaly detection and data quality are already familiar with it. Many other companies, universities and individuals are using stemgraphic in one way or another. For everybody else, hopefully this has raised your interest, you'll master this visualization in no time, and you'll be able to answer the title question affirmatively...

Stemgraphic has another dozen types of visualizations, including some that are interactive, and it goes beyond numbers, adding support for categorical data and for text (as of version 0.5.x). In the coming months I'll talk a bit more about a few of them.

Francois Dion

N.B. This article was originally published on LinkedIn at:

Tuesday, February 27, 2018

Stemgraphic v.0.5.x: stem-and-leaf EDA and visualization for numbers, categoricals and text

 Stemgraphic open source

In 2016 at PyDataCarolinas, I open-sourced my stem-and-leaf toolkit for exploratory data analysis and visualization. Later, in October 2016, I posted the link to the video.


With the 0.5.x releases, I've introduced the categorical and text support. In the next few weeks, I'll be introducing some of the features, particularly those found in the new stemgraphic.alpha module of the stemgraphic package, such as back-to-back plots and stem-and-leaf heatmaps:

But if you want to get started, check out the github repo (especially the notebooks).

Github Repo

Francois Dion

Monday, February 26, 2018

Readings in Communication

"Ex-Libris" part VI: Communication

Part 6 of my "ex-libris" of a Data Scientist is now available. This one is about communication.
When I started this series, I introduced this diagram.

I also quoted John Tukey:
"Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise" 

This is quite important since a data science project has to start with a question and come up with an answer.

But how do we communicate at the time of formulating the question? How about at the time of providing an answer? By any means necessary.

Do check out the full article with the list of books:

"ex-libris" part vi

See also

Part I was on "data and databases": "ex-libris" of a Data Scientist - Part I
Part II was on "models": "ex-libris" of a Data Scientist - Part II
Part III was on "technology": "ex-libris" of a Data Scientist - Part III
Part IV was on "code": "ex-libris" of a Data Scientist - Part IV
Part V was on "visualization". Bonus after that will be on management / leadership.
Francois Dion

Friday, August 11, 2017

Readings in Visualization

"Ex-Libris" part V: Visualization

Part 5 of my "ex-libris" of a Data Scientist is now available. This one is about visualization.

Starting from a historical perspective, particularly of statistical visualization, and covering a few classic must-have books, the article then goes on to cover graphic design, cartography, information architecture and design, and concludes with many recent books on information visualization (specific Python and R books to create these were listed in part IV of this series). In all, about 66 books on the subject.

Just follow the link to the LinkedIn post to go directly to it:

From Jacques Bertin’s Semiology of Graphics

"The shortest sketch tells me more than a long report" ("Le plus court croquis m'en dit plus long qu'un long rapport"), Napoleon I

See also

Part I was on "data and databases": "ex-libris" of a Data Scientist - Part I
Part II was on "models": "ex-libris" of a Data Scientist - Part II
Part III was on "technology": "ex-libris" of a Data Scientist - Part III
Part IV was on "code": "ex-libris" of a Data Scientist - Part IV
Part VI will be on communication. Bonus after that will be on management / leadership.
Francois Dion

I will also have a list of publications in French.
In the near future, I will make a list in Spanish as well.

Tuesday, June 13, 2017

Readings in Programming

"Ex-Libris" part IV: Code

I've made available part 4 of my "ex-libris" of a Data Scientist. This one is about code. 

No doubt, many have been waiting for the list that is most related to Python.  In a recent poll by KDNuggets, the top tool used for analytics, data science and machine learning by respondents turned out to also be a programming language: Python.

The article goes from algorithms and theory, to approaches, to the top languages for data science, and more. In all, almost 80 books in just that part 4 alone. It can be found on LinkedIn:

"ex-libris" of a Data Scientist - Part IV

From Algorithms and Automatic Computing Machines by B. A. Trakhtenbrot

See also

Part I was on "data and databases": "ex-libris" of a Data Scientist - Part I

Part II was on "models": "ex-libris" of a Data Scientist - Part II

Part III was on "technology": "ex-libris" of a Data Scientist - Part III

Part V will be on visualization, part VI on communication. Bonus after that will be on management / leadership.

Francois Dion

I will also have a list of publications in French.
In the near future, I will make a list in Spanish as well.