Excel Query Component

This component can load data stored in an Office Open XML Excel sheet into a table. It stages the data, so the target table is reloaded each time it runs. You may then use transformations to enrich and manage the data in permanent tables.

By default, data types are guessed by looking at the cell formatting, not the cell contents. This is controlled by the Connection Option "type detection scheme", which can be set to ColumnFormat (the default, which examines the cell formatting), RowScan (which scans 15 rows of data and guesses the data type from the data values), or None (treat everything as text). A second connection option, "row scan depth", controls how many rows are scanned when determining column types. None is often a sensible choice if you intend to parse the values later anyway, or if the types in a single column are mixed.
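For example, to scan a deeper sample of rows when guessing types, both options can be set as Parameter/Value pairs in the Connection Options property. A minimal sketch; the depth of 30 is an arbitrary illustrative value:

    type detection scheme = RowScan
    row scan depth = 30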

The component offers both a Basic and Advanced mode (see below) for generating the Excel query.

Warning: This component is destructive as it truncates or recreates its target table on each run. Do not modify the target table structure manually.


Properties

Property Setting Description
Name Text The descriptive name for the component.
Basic/Advanced Mode Choice Basic - This mode will build a query for you using settings from Data Source, Data Selection and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced - This mode requires you to write an SQL-like query. The worksheets become table names, and the columns are either named A, B, C... or take their names from the first row, depending on the setting of the "Header" connection option.
Excel File S3 Select a .xlsx file from S3. Only Office Open XML (.xlsx) files are supported.
Contains Header Row Choice Yes - The first row of data is the column names.
No - The first row of data is just data. Columns will be named A, B, C...
Cell Range Text By default the whole worksheet is considered. However, you may optionally specify a range of cells instead. For example, A5:E100 would only consider rows 5-100 in columns A-E. Wildcards (*) are also supported, for example A5:E* would consider columns A-E and rows 5 onwards.
Data Source Choice Select a data source. Each sheet in the workbook is exposed as a table.
Data Selection Choice Select one or more columns to return from the query. These may be A, B, C or the first row may be used as a header to provide column names. See the "Header" connection option.
Data Source Filter Input Column The available input columns vary depending upon the Data Source.
Qualifier Is - Compares the column to the value using the comparator.
Not - Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
Comparator Choose a method of comparing the column to the value. Possible comparators include: 'Equal To', 'Greater than', 'Less than', 'Greater than or equal to', 'Less than or equal to', 'Like', 'Null'.
'Equal To' can match exact strings and numeric values while other comparators such as 'Greater than' will work only with numerics. The 'Like' operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The Null operator matches only Null values, ignoring whatever the value is set to.
Not all data sources support all comparators, thus it is likely only a subset of the above comparators will be available to choose from.
Value The value to be compared.
Combine Filters Choice And - Multiple filters must ALL be true for a row to be returned.
Or - Any one of the filters must be true for a row to be returned.
SQL Query Text This is an SQL-like query. The worksheets become table names, and the columns are either named A, B, C... or take their names from the first row, depending on the setting of the "Header" connection option. See the example query after this table. (Property only available in 'Advanced' Mode)
Limit Number By default, all rows are returned, but you can use this to limit the number of rows loaded.
Connection Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are determined automatically from the driver, and may change from version to version.
They are usually not required as sensible defaults are assumed.
Value A value for the given Parameter. The parameters and allowed values for the Excel provider are explained here.
Storage Account Select (Azure Only) Select a Storage Account with your desired Blob Container to be used for staging the data.
Blob Container Select (Azure Only) Select a Blob Container to be used for staging the data.
Staging Select (AWS Only) Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
Existing Amazon S3 Location: Selecting this reveals additional properties for specifying a custom staging area on S3.
S3 Staging Area Text (AWS Only) The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
This property is available when using an Existing Amazon S3 Location for Staging.
Warehouse Select Choose a Snowflake warehouse that will run the load.
Database Select Choose a database to create the new table in.
Type Select Choose between using a standard table or an external table.
Standard: The data will be staged on an S3 bucket before being loaded into a table.
External: The data will be put into an S3 Bucket and referenced by an external table.
Schema Select Select the table schema. The special value, [Environment Default] will use the schema defined in the environment. For more information on using multiple schemas, see this article.
Note: An external schema is required if the 'Type' property is set to 'External'.
Target Table Text Provide a new table name.
Warning: This table will be recreated and will drop any existing table of the same name.
Location Text/Select When using an 'External' type table, provide an S3 bucket path that will be used to store the data. Once on an S3 bucket, the data can be referenced by the external table.
Table Distribution Style Select Even - the default option, distribute rows around the Redshift Cluster evenly.
All - copy rows to all nodes in the Redshift Cluster.
Key - distribute rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance - see the Amazon Redshift documentation for more information.
Table Distribution Key Select This is only displayed if the Table Distribution Style is set to Key. It is the column used to determine which cluster node the row is stored on.
Table Sort Key Select This is optional, and specifies the columns from the input that should be set as the table's sort-key.
Sort-keys are critical to good performance - see the Amazon Redshift documentation for more information.
Sort Key Options Select Decide whether the sort key is of a compound or interleaved variety - see the Amazon Redshift documentation for more information.
Project Text The target BigQuery project to load data into.
Dataset Text The target BigQuery dataset to load data into.
Cloud Storage Staging Area Text The URL and path of the target Google Storage bucket to be used for staging the queried data.
Encryption Select (AWS Only) Decide how the files are encrypted inside the S3 bucket. This property is available when using an Existing Amazon S3 Location for Staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS.
SSE S3: Encrypt the data according to a key stored on an S3 bucket.
KMS Key ID Select (AWS Only) The ID of the KMS encryption key you have chosen to use in the 'Encryption' property.
Load Options Multiple Selection Comp Update: Apply automatic compression to the target table (if ON). Default is ON.
Stat Update: Automatically update statistics when filling a table (if ON). Default is ON. In this case, it is updating the statistics of the target table.
Clean S3 Objects: Automatically remove UUID-based objects on the S3 Bucket (if ON). Default is ON. Effectively decides whether to keep the staged data in the S3 Bucket or not.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is ON.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If OFF, the existing table will be used. Default is ON.
Load Options Multiple Select Clean Cloud Storage Files: (If On) Destroy staged files on Cloud Storage after loading data. Default is On.
Cloud Storage File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, cache queries, and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.
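To illustrate the Advanced mode, the sketch below queries a worksheet directly. It assumes a worksheet named Sheet1, a header row, and the columns sales_rep_name and value used in the examples below; identifier quoting may differ in your environment:

    -- hypothetical worksheet and column names; quoting style may vary
    SELECT "sales_rep_name", "value"
    FROM "Sheet1"
    WHERE "value" > 100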

Variable Exports

This component makes the following values available to export into variables:

Source Description
Time Taken To Stage The amount of time (in seconds) taken to fetch the data from the data source and upload it to storage.
Time Taken To Load The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from storage.

Strategy

The files are downloaded from storage to a temporary area on the Matillion instance. The sheet is then queried and the results streamed into objects on storage. The target table is recreated or truncated as necessary, and a COPY command loads the storage objects into the table. Finally, the temporary storage objects are cleaned up.
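On a Redshift target, for instance, the final step issues the kind of COPY statement sketched below. This is a minimal sketch only: the bucket path, IAM role, and format options are hypothetical, and the real statement is generated by the component:

    -- hypothetical Redshift COPY; names and options are illustrative
    COPY excel_example
    FROM 's3://my-staging-bucket/excel-load/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
    FORMAT AS CSV
    GZIP;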


Example 1

In this example, the Excel Query component is used to create a table populated with sales data from an Excel file (.xlsx). The table data is then passed through a simple filter. Bringing data into a table requires an Orchestration job, while filtering that data requires a Transformation job, seen below to the left and right, respectively.

The Orchestration job requires three components: Start, Excel Query, and Transform Data. Start requires no parameterisation, and Transform Data should simply be given the name of the Transformation job.

The Excel Query component must be given the path of an existing .xlsx file and the name of the table to write the data to. If a table of that name does not exist, Excel Query will create it. If it does exist, Excel Query will overwrite it.

In this example, an .xlsx file is taken from an S3 bucket using the Excel Query component, set up as below. Since we want all of the data, we needn't alter the 'Cell Range', 'Data Source Filter' and 'Limit' properties. The data is written to a table named 'excel_example'; the component can be run immediately by right-clicking it and selecting 'Run Component'. This approach is particularly apt when you wish to import Excel data into a table with no serious alteration of the source material.

Ensure each property has 'OK' status before continuing. After this component has run, a table named 'excel_example' will exist in the target schema and can be used in Transformation jobs. In this example, the excel_example table data is loaded using the Table Input component. Selecting the 'Sample' tab for the Table Input component allows the user to 'Retrieve' rows and a total row count, and we can see from the sample below that the data has been read in correctly. Note that the columns match the names found in the 'Data Selection' property of the Excel Query component.

The Filter component is used to find only the data where Jane is the sales rep. Editing the 'Filter Conditions' property of the Filter component allows a new filter to be added; in this case, one that checks that sales_rep_name is equal to 'Jane'.
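The effect of this condition is equivalent to the predicate in the sketch below, which uses the table and column names from this example:

    -- equivalent of the Filter component's condition in this example
    SELECT *
    FROM excel_example
    WHERE sales_rep_name = 'Jane';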

Finally, the Filter component's Sample tab can be viewed to ensure the table data is being filtered correctly. As expected, the sample shows only rows where Jane is the sales rep.


Example 2

In the previous example, a Filter component was used to take a subsection of data from a table of Excel data. In this example, we see how the Excel Query component can be used to do this directly, without the need for a Transformation job.

Editing the 'Data Source Filter' property of the Excel Query component will bring up a filter similar to that of the Filter component. In the same way, we filter only for rows where the sales rep name is Jane. Note that the component (or job) must be rerun for this new data to overwrite the old data and provide a sample.

An inspection of the resulting table (using a Table Input component to sample the data) shows that the filter has been successful and only rows containing transactions by Jane are imported.

Finally, we decide that we don't need the 'value' column at all and we'd like Excel Query to omit it. This can be done through the Excel Query component's 'Cell Range' property. In this case, we want all rows and all columns except the 'value' column. Thus the cell range:

 A*:D* 

is useful, using the wildcard (*) for rows. Column E is then omitted. Upon entering this Cell Range, the component may report an error in the Data Selection property, as it expects the 'value' column that we have now omitted. Editing the Data Selection property fixes this error. The same error may appear if you attempt to reuse the same Table Input component to view a sample, as it may expect a column that no longer exists, but it can be remedied in the same way.
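Alternatively, the unwanted column can be left out in Advanced mode by simply not selecting it. A sketch, again assuming a worksheet named Sheet1 with a header row; the column names other than sales_rep_name and value are illustrative placeholders:

    -- hypothetical column names apart from sales_rep_name
    SELECT "sales_rep_name", "item", "quantity", "sale_date"
    FROM "Sheet1";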

Again, inspecting the sample data for this table confirms the success of our new Excel Query properties.

