Cassandra Query

This component retrieves data from a Cassandra server and loads it into a table. This action stages the data, so the table is reloaded each time. You may then use transformations to enrich and manage the data in permanent tables.

Warning: This component is destructive, as it truncates or recreates its target table on each run. Do not modify the target table structure manually.


Properties

Property | Setting | Description
Name | Text | The descriptive name for the component.
Basic/Advanced Mode | Choice | Basic - This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter properties. In most cases, this will be sufficient.
Advanced - This mode requires you to write an SQL-like query to call data from Cassandra. The available fields and their descriptions are documented in the data model.
Server | Choice | The address of the Cassandra server from which data is to be sourced.
User | Text | The login name for the Cassandra server.
Password | Text | The login password for the Cassandra server.
Database | Text | The name of the database you wish to source data from.
Data Source | Choice | Select a data source from the server.
Data Selection | Choice | Select one or more columns to return from the query.
Data Source Filter | Input Column | The available input columns vary depending upon the Data Source.
| Qualifier | Is - Compares the column to the value using the comparator.
Not - Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", and so on.
| Comparator | Choose one of Equal to, Greater than, Less than, Greater than or equal to, or Less than or equal to.
| Value | The value to be compared.
SQL | Text | A custom SQL-like query, available only in Advanced mode (see the example query following this table).
Combine Filters | Text | Use the defined filters in combination with one another according to either "and" or "or".
Limit | Number | Limits the number of rows that are loaded.
Connection Options | Parameter | A JDBC parameter supported by the database driver. The available parameters are determined automatically from the driver and may change from version to version. They are usually not required, since sensible defaults are assumed.
| Value | A value for the given parameter.
S3 Staging Area | Text | The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket are removed after the load completes; they are not kept.
Schema | Select | Select the table schema. The special value [Environment Default] will use the schema defined in the environment. For more information on using multiple schemas, see this article.
Target Table | Text | Provide a new table name.
Warning: This table will be recreated, dropping any existing table of the same name.
Distribution Style | Select | Even - The default option; distributes rows around the Redshift cluster evenly.
All - Copies rows to all nodes in the Redshift cluster.
Key - Distributes rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance; see the Amazon Redshift documentation for more information.
Sort Key | Select | Optional. Specifies the columns from the input that should be set as the table's sort key.
Sort keys are critical to good performance; see the Amazon Redshift documentation for more information.
Load Options | Multiple Selection | Comp Update: Apply automatic compression to the target table (if ON). Default is ON.
Stat Update: Automatically update statistics when filling a table (if ON). Default is ON. In this case, it is the statistics of the target table that are updated.
Clean S3 Objects: Automatically remove UUID-based objects from the S3 bucket (if ON). Default is ON. Effectively decides whether the staged data is kept in the S3 bucket.
Project | Text | The target BigQuery project to load data into.
Dataset | Text | The target BigQuery dataset to load data into.
Cloud Storage Staging Area | Text | The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data.
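
In Advanced mode, the SQL property accepts an SQL-like query such as the sketch below. The table and column names (users, user_id, country, signup_date) are hypothetical; the WHERE clause and LIMIT illustrate what the Data Source Filter, Combine Filters, and Limit properties would produce in Basic mode.

    SELECT user_id, country, signup_date
    FROM users
    WHERE country = 'GB'
      AND signup_date >= '2016-01-01'
    LIMIT 1000;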


Variable Exports

This component makes the following values available to export into variables:

Source | Description
Component | Name of the component.
Status | Successful or Unsuccessful.
Started At | Time the component began.
Completed At | Time the component finished.
Duration | Duration of the component's run.
Row Count | Number of rows queried by the component.
Message | Any messages yielded by the component (usually empty).
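
As a sketch of how these exports might be used: if Row Count is exported into a job variable (assumed here to be named rowcount_out, referenced with Matillion's ${...} variable syntax), a later SQL component could record it in a hypothetical audit table:

    -- 'load_audit' and 'rowcount_out' are illustrative names, not part of the component
    INSERT INTO load_audit (job_name, rows_loaded, loaded_at)
    VALUES ('cassandra_load', ${rowcount_out}, GETDATE());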


Strategy

Connect to the Cassandra server and issue the query. Stream the results into objects on S3. Then create or truncate the target table and issue a COPY command to load the S3 objects into the table. Finally, clean up the temporary S3 objects.
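
On the Redshift side, the effect is roughly equivalent to the statements below. This is an illustrative sketch only, not the component's literal output: the table, columns, bucket path, and IAM role are hypothetical placeholders, and the COMPUPDATE/STATUPDATE options correspond to the Comp Update and Stat Update load options described above.

    -- Recreate the target table with the configured distribution style and sort key
    CREATE TABLE example_cassandra (
        user_id     BIGINT,
        country     VARCHAR(2),
        signup_date DATE
    )
    DISTSTYLE EVEN
    SORTKEY (signup_date);

    -- Bulk-load the staged S3 objects into the target table
    COPY example_cassandra
    FROM 's3://my-staging-bucket/some-temp-prefix'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
    COMPUPDATE ON
    STATUPDATE ON;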


Example

This example takes data from a Cassandra database and loads it into a Redshift table. The job begins by creating a table on the Redshift server; the Cassandra Query component then takes data from Cassandra, copies it to an S3 bucket, and finally loads it into the Redshift table.

A Create/Replace Table component is used to make the table 'Example_Cassandra', which is then entered into the Cassandra Query component's Target Table property. Details of the Cassandra server and login credentials are added, and a data source is chosen. Since we want all of the data, there is no need to set the 'Data Source Filter' or 'Limit' properties. Similarly, a Sort Key is not required, but is recommended for large tables.

Note that the table must exist before the Cassandra Query component is run. Running the Create/Replace Table component first will create the table; the Cassandra Query component, or the entire job, can then be run to load data from Cassandra into the Redshift table.
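
For reference, the 'Example_Cassandra' table produced by the Create/Replace Table component would be equivalent to a CREATE TABLE statement along these lines; the column names and types are hypothetical and should mirror the columns chosen in Data Selection:

    CREATE TABLE "Example_Cassandra" (
        user_id     BIGINT,
        country     VARCHAR(2),
        signup_date DATE
    );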