Workflow-based data analysis with KNIME
Analyze This!
Visualization: Histogram
A histogram, provided by the Histogram node, gives an overview of how many readers fall into each group. The Histogram node can be connected directly to the Cell Replacer from the previous step, but the result looks a bit colorless. The Color Manager node lets you assign a color to the rows of a table based on the value of a column. If the same column is also used for the x-axis of the histogram, KNIME automatically applies the color to the bars (Figure 6).
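If you want to see what this coloring amounts to outside of KNIME, a few lines of D3.js reproduce the idea. The following sketch (plain D3 v4/v5, with invented group data; it is not the actual code behind the Histogram node) draws one bar per group and derives each bar's fill from the value of the grouping column, much as the Color Manager assignment does:

// Illustrative D3 v4/v5 sketch: bars colored by the value of the
// x-axis column, mimicking Color Manager + Histogram in KNIME.
// The data and column names are made up for this example.
var data = [
  { group: "News", readers: 120 },
  { group: "Sports", readers: 80 },
  { group: "Culture", readers: 45 }
];

var width = 300, height = 150;
var x = d3.scaleBand()
    .domain(data.map(function(d) { return d.group; }))
    .range([0, width]).padding(0.2);
var y = d3.scaleLinear()
    .domain([0, d3.max(data, function(d) { return d.readers; })])
    .range([height, 0]);
// One color per distinct value of the grouping column
var color = d3.scaleOrdinal(d3.schemeCategory10).domain(x.domain());

d3.select("body").append("svg")
    .attr("width", width).attr("height", height)
  .selectAll("rect").data(data).enter().append("rect")
    .attr("x", function(d) { return x(d.group); })
    .attr("y", function(d) { return y(d.readers); })
    .attr("width", x.bandwidth())
    .attr("height", function(d) { return height - y(d.readers); })
    .attr("fill", function(d) { return color(d.group); });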
Visualizing Multiple Dimensions
Visualizations not only display the data, they also show whether the discovered clusters have any significance. In recent years, many visualization methods have been implemented in KNIME with the help of JavaScript and the D3.js framework [5].
These visualizations are available in the KNIME JavaScript Views extension, which is where you will also find the Parallel Coordinates visualization. Parallel Coordinates represents the properties of the data with parallel y-axes (Figure 7).
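The underlying technique is easy to sketch in D3 itself: every dimension gets its own vertical axis, and every row becomes a polyline connecting its values on those axes. The following fragment (D3 v4/v5, with invented section names; it is not the code of KNIME's Parallel Coordinates node) shows the core idea:

// Parallel coordinates in a nutshell (illustrative D3 v4/v5 sketch).
var dims = ["politics", "sports", "culture"];   // hypothetical sections
var data = [
  { politics: 0.9, sports: 0.1, culture: 0.4 },
  { politics: 0.2, sports: 0.8, culture: 0.3 }
];
var width = 400, height = 200;

// One x position per dimension, one y scale per dimension
var x = d3.scalePoint().domain(dims).range([0, width]);
var y = {};
dims.forEach(function(dim) {
  y[dim] = d3.scaleLinear().domain([0, 1]).range([height, 0]);
});

var svg = d3.select("body").append("svg")
    .attr("width", width).attr("height", height);

// Each row becomes a polyline across all axes
var line = d3.line();
svg.selectAll("path").data(data).enter().append("path")
    .attr("fill", "none").attr("stroke", "steelblue")
    .attr("d", function(row) {
      return line(dims.map(function(dim) {
        return [x(dim), y[dim](row[dim])];
      }));
    });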
D3.js, the JavaScript library on which most KNIME JavaScript visualizations are based, is one of the most widely used libraries for creating interactive data visualizations in the browser. However, the KNIME user can only use some of its capabilities.
For all cases that KNIME does not yet cover, there is the Generic JavaScript View. When configuring this node, you can enter arbitrary JavaScript and CSS code to compute a colorful image from a table. The code executed by the node has access to the node's input table and the browser's Document Object Model (DOM), and it can generate HTML and SVG elements based on the data.
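A minimal sketch of such a view might look like the following. Note that the exact names of the table accessors depend on the KNIME version; the knimeDataTable object with getColumnNames() and getRows() used here is an assumption based on the node's built-in templates, so check the node description of your installation:

// Sketch for the Generic JavaScript View. The knimeDataTable accessors
// (getColumnNames(), getRows(), row.data) are an assumption based on
// the node's built-in templates; verify against your KNIME version.
var columns = knimeDataTable.getColumnNames();
var rows = knimeDataTable.getRows();

// Build a simple HTML list: one entry per row of the input table
var list = document.createElement("ul");
rows.forEach(function(row) {
  var item = document.createElement("li");
  // row.data holds the cell values in column order
  item.textContent = columns[0] + ": " + row.data[0];
  list.appendChild(item);
});
document.body.appendChild(list);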
For example, you can use the Generic JavaScript View to create a Voronoi diagram (Figure 8), which visualizes the clustered reader groups in 2D. To ensure that the data is in a format suitable for this visualization, you must first reduce the number of dimensions. Up to now, the example has used five dimensions per reader (one for each section); these can be reduced with the PCA (Principal Component Analysis) node. This type of transformation reduces the dimensionality while trying to preserve the variance in the data, so that as little information as possible is lost.
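With the two principal components as x and y coordinates, the Voronoi tessellation itself takes only a few lines. The following sketch assumes D3 v4/v5, where a d3.voronoi() generator is bundled (D3 v6 and later replaced it with d3-delaunay); the points array stands in for the PCA output:

// Voronoi diagram over 2D points (e.g., the first two PCA components).
// Assumes D3 v4/v5 with the bundled d3.voronoi() generator; the points
// are made up for illustration.
var points = [[40, 60], [120, 30], [200, 150], [75, 180], [260, 90]];
var width = 300, height = 220;

var svg = d3.select("body").append("svg")
    .attr("width", width).attr("height", height);

// Compute one polygon per point, clipped to the drawing area
var polygons = d3.voronoi()
    .extent([[0, 0], [width, height]])
    .polygons(points);

svg.selectAll("path").data(polygons).enter().append("path")
    .attr("fill", "none").attr("stroke", "gray")
    .attr("d", function(poly) { return "M" + poly.join("L") + "Z"; });

// Draw the points themselves on top of the cells
svg.selectAll("circle").data(points).enter().append("circle")
    .attr("cx", function(p) { return p[0]; })
    .attr("cy", function(p) { return p[1]; })
    .attr("r", 3);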
By this point, a proper data analysis workflow has been created: from importing the data, through transformation and grouping, to visualizing the results. All this helps to gain interesting insights into the raw data. In a further step, it is now possible to identify, for each reader, articles they have not yet read but that might interest them because of their preferences. A web application could then suggest these articles to the reader.
Keeping Track
With each further step, the workflow threatens to become more complex and confusing. To keep it comprehensible, it is a good idea to encapsulate individual parts in modules that conceal the complexity. KNIME makes this encapsulation possible with so-called meta nodes, which also let you label sections of the workflow with meaningful names. Figure 9 shows a possible restructuring of the workflow using meta nodes.