Introducing Conveyor

Conveyor Logo

Conveyor is a tool for plugging various data types and sources into Elasticsearch. The open-source products offered by Elastic are top notch. Though they are most often used for server logging and search, we saw a much greater potential. After multiple iterations of working with Elasticsearch's primary data input tools (the API, Logstash, and Beats), we decided to build a tool that drastically lowers the complexity of inserting data into Elasticsearch.

We are now using Conveyor as a plugin inside of Kibana to quickly and easily import data from Microsoft SQL Server, text files, and other data sources, and we're just getting started on additional sources. We are confident that if you need to get data into Elasticsearch, Conveyor offers an easy way to import it. You can check whether a source has already been created for the data you have, or you can help us build an even better product and community by contributing a new source.

Keep reading to find out how Conveyor works, or skip to the installation docs to give it a try today.

Important Terms

Before we dive into how it works, let's define some important terms.

How It Works

Conveyor is an orchestration tool; at its core is an API that talks to Elasticsearch and Node-RED. To make interaction with Conveyor even easier and enhance the user experience, we have also built a plugin that works inside of Kibana. Together, the Conveyor API, the Conveyor plugin, Node-RED, Kibana, and Elasticsearch make up the entire system.

System Overview

High Level Conveyor System Diagram

System Components

Basic Workflow

  1. A Developer or Author Creates a Source

A source or data source can be thought of as a template. In fact, a data source uses a templating language to achieve some of its functionality. Imagine a statement for extracting data from SQL Server. Verbally it would go like this:

Execute the query Select * from dbo.sales_orders on Company-SQL-Server every 3 minutes

But written as a template using the Mustache templating language it might look like this:

Execute the query {{query}} on {{server}} every {{timing}}

This is very similar to how a source works, except that rather than templating a sentence, a source templates a Flow inside of Node-RED. For more details on this part of the workflow, start with the Basics of Authoring.
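As a rough illustration (this is not Conveyor's actual implementation), Mustache-style substitution can be sketched in a few lines of Python. The `render` helper below is hypothetical; the template and parameter values come from the example above:

```python
import re

def render(template, params):
    """Replace each {{name}} placeholder with its value from params."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(params[m.group(1)]), template)

template = "Execute the query {{query}} on {{server}} every {{timing}}"
params = {
    "query": "Select * from dbo.sales_orders",
    "server": "Company-SQL-Server",
    "timing": "3 minutes",
}
print(render(template, params))
# Execute the query Select * from dbo.sales_orders on Company-SQL-Server every 3 minutes
```

A real source would use a full Mustache implementation, but the idea is the same: the author writes the template once, and every set of parameter values produces a concrete, runnable statement.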

  2. A Kibana User Provides Parameter Values to Create a Channel

Each of the templated values above is considered a parameter. Parameters can be anything from strings and numbers to files. They are entered on the + Create screen of the Conveyor plugin inside Kibana.

Once these values are submitted, the Conveyor API combines them with the source from above to create a customized Node-RED flow. Inside of Conveyor this is considered a Channel.
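Since a source templates a Node-RED flow rather than a sentence, creating a channel amounts to substituting the user's parameter values into that flow. The sketch below uses a drastically simplified, made-up flow structure (real Node-RED flows carry node ids, wiring, and more) just to show the substitution step:

```python
import json
import re

def render(template, params):
    """Replace each {{name}} placeholder with its value from params."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(params[m.group(1)]), template)

# Hypothetical, heavily simplified flow template -- not the real
# Node-RED flow schema or Conveyor's actual source format.
flow_template = json.dumps([
    {"type": "inject", "repeatSeconds": "{{timing}}"},
    {"type": "mssql", "server": "{{server}}", "query": "{{query}}"},
])

# Substituting the user's parameter values yields a customized flow:
# what Conveyor calls a Channel.
channel_flow = json.loads(render(flow_template, {
    "timing": "180",
    "server": "Company-SQL-Server",
    "query": "Select * from dbo.sales_orders",
}))
print(channel_flow[1]["server"])
# Company-SQL-Server
```

The key design point is that the source author decides the flow's shape once, while each channel only differs in the parameter values baked into it.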

  3. Data is Supplied to the Channel

Now that we’ve created a fully customized channel, we supply data to it and the channel inserts that data into Elasticsearch. How it is inserted, the rate, the target index, and so on are largely controlled by the source design mentioned above and the parameters provided.
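A common way for a flow to land rows in Elasticsearch is the `_bulk` endpoint, which takes newline-delimited JSON. The sketch below builds such a body by hand; the index name and documents are invented for illustration, and whether Conveyor's channels use `_bulk` under the hood is an assumption here:

```python
import json

def bulk_body(index, docs):
    """Build an NDJSON body for Elasticsearch's _bulk endpoint."""
    lines = []
    for doc in docs:
        # Each document is preceded by an action line naming the index.
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

body = bulk_body("sales_orders", [
    {"order_id": 1, "total": 19.99},
    {"order_id": 2, "total": 5.00},
])
# POST this body to http://localhost:9200/_bulk with
# Content-Type: application/x-ndjson
```

Batching like this is why the source design can control the insertion rate: the flow decides how many rows to accumulate before each bulk request.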