EMR Load Component
Load data into an existing table from objects stored on an EMR cluster.
Many of the configuration settings on this component have sensible defaults, mirroring the defaults provided by Redshift when that option is not specified. Mandatory settings are:
- Target Table Name
- Load Columns
- EMR URL Location
- EMR Object Prefix
- Data File Type
In addition, it is likely you will need to confirm the following settings:
- Compression Method
- Ignore Header Rows
Note: This component requires working AWS credentials with read access to the EMR cluster containing the source data file(s). This is most easily achieved by attaching an IAM role to the instance when launching Matillion ETL for Redshift; however, it can also be managed manually by editing an Environment.
|For more information on all the settings in this component, see the Amazon Redshift COPY syntax.|
|Name||Text||The descriptive name for the component.|
|Schema||Select||Select the table schema. The special value, [Environment Default] will use the schema defined in the environment. For more information on using multiple schemas, see this article.|
|Target Table||Select||Select an existing table to load data into.|
|Load Columns||Select Multiple||One or more columns that exist in the target table.|
|EMR URL Location||Text||The URL of the EMR source path to get the files from. This follows the format emr://myemrclusterid/location, where location is optional.|
|EMR Object Prefix||Text||All files that begin with this prefix will be included in the load into the target table.|
|IAM Role ARN||Text||Supply the ARN of a role that is already attached to your Redshift cluster and has the necessary permissions to access the EMR cluster. This is optional; without it, the credentials of the environment (instance credentials or manually entered access keys) are used. See the Redshift documentation for more information about using a Role ARN with Redshift.|
|Data Type||Select||Available options are: Fixed Width, which requires an additional "Fixed Width Spec"; JSON, which requires an additional "JSON Layout"; and Avro, which requires an additional "AVRO Layout". See the Amazon documentation for details of each format.|
|Delimiter||Text||The delimiter that separates columns. The default is a Comma. A [TAB] character can be specified as "\t".|
|Fixed Width Spec||Text||Loads the data from a file where each column is a fixed width, rather than separated by a delimiter. Each column is described by a name and a length, separated by a colon; each described column is then separated from the next by a comma. For example, given four columns (name, id, age, state) with respective lengths 12, 8, 2 and 2, the spec to convert this data into a table using fixed-width columns would be: name:12,id:8,age:2,state:2. Note that the columns can have any plaintext name. For more information on fixed-width inputs, please consult the AWS documentation.|
|CSV Quoter||Text||Specifies the character to be used as the quote character when using the CSV option.|
|JSON Layout||Text||Defaults to 'auto' which should work for the majority of JSON files if the fields match the table field names. Optionally can specify the URL to a JSONPaths file to map the data elements in the JSON source data to the columns in the target table.|
|AVRO Layout||Text||Defaults to 'auto' which should work for the majority of Avro files if the fields match the table field names. Optionally can specify the URL to a JSONPaths file to map the data elements in the Avro source data to the columns in the target table.|
|Compression Method||Select||Whether the input file is compressed in GZIP format, LZOP format, or not compressed at all.|
|Encoding||Select||The encoding the data is in. This defaults to UTF-8.|
|Remove Quotes||Select||Whether to remove any quotes surrounding data values.|
|Replace Invalid Characters||Text||If there are any invalid Unicode characters in the data, this parameter specifies the single-character replacement for them. Defaults to '?'.|
|Maximum Errors||Text||The maximum number of individual parsing errors that will cause the whole load to fail. Errors up to this number are substituted with null values. This value defaults to 0, but the Amazon default is 1000.|
|Date Format||Text||Defaults to 'auto' - this can be used to manually specify a date format.|
|Time Format||Text||Defaults to 'auto' - this can be used to manually specify a time format.|
|Ignore Header Rows||Text||The number of rows at the top of the file to ignore - defaults to 0.|
|Accept Any Date||Select||If this is enabled, invalid dates such as '45-65-2018' are not considered an error; they are loaded as null values instead.|
|Ignore Blank Lines||Select||If this is set, any blank lines in the input file are ignored.|
|Truncate Columns||Select||If this is set, any instance of data in the input file that is too long to fit into the specified target column width will be truncated to fit instead of causing an error.|
|Fill Record||Select||Allows data files to be loaded when contiguous columns are missing at the end of some of the records. The remaining columns are set to null.|
|Trim Blanks||Select||Removes trailing and leading whitespace from the input data.|
|Null As||Text||This option replaces the specified string with null in the output table. Use this if your data has a particular representation of missing data.|
|Empty As Null||Select||If this is set, empty columns in the input file will become NULL.|
|Blanks As Null||Select||If this is set, blank columns in the input file will become NULL.|
|Comp Update||Select||Controls whether compression encodings are automatically applied during a COPY. This is usually a good idea to optimise the compression used when storing the data.|
|Escape||Select||When this option is specified, the backslash character (\) in input data is treated as an escape character.|
|Stat Update||Select||Governs automatic computation and refresh of optimizer statistics at the end of a successful COPY command.|
|Round Decimals||Select||If this option is set, round any decimals to fit into the column when the number of decimal places in the input data is larger than defined for the target column.|
|Explicit IDs||Select||Whether or not to load data from the EMR Objects into an IDENTITY column. See the Redshift documentation for more information.|
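To illustrate the EMR URL Location format described in the table above (emr://myemrclusterid/location, where the location part is optional), here is a small illustrative Python sketch, not part of Matillion itself, that splits such a URL into its cluster ID and path:

```python
def parse_emr_url(url):
    """Split an emr:// URL into (cluster_id, path); path may be empty."""
    prefix = "emr://"
    if not url.startswith(prefix):
        raise ValueError("expected an emr:// URL")
    # Everything up to the first '/' after the scheme is the cluster ID;
    # the remainder (if any) is the optional location path.
    cluster_id, _, path = url[len(prefix):].partition("/")
    return cluster_id, path

print(parse_emr_url("emr://myemrclusterid/location"))
print(parse_emr_url("emr://myemrclusterid"))
```

The second call shows that a bare cluster URL with no location is also valid, matching the table's note that the location is optional.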
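To make the Fixed Width Spec format concrete, the following Python sketch (the column names and widths are the example ones from the table, and the sample record is invented) slices a line according to a spec string of the name:length,name:length form:

```python
def parse_spec(spec):
    """Parse a fixed-width spec like 'name:12,id:8,age:2,state:2'
    into a list of (column, width) pairs."""
    pairs = []
    for part in spec.split(","):
        name, width = part.split(":")
        pairs.append((name, int(width)))
    return pairs

def slice_line(line, spec):
    """Cut one fixed-width record into a dict keyed by column name."""
    fields = {}
    pos = 0
    for name, width in parse_spec(spec):
        fields[name] = line[pos:pos + width].strip()
        pos += width
    return fields

# A 24-character record: 12 for name, 8 for id, 2 for age, 2 for state.
record = "John Smith  0000123426TX"
print(slice_line(record, "name:12,id:8,age:2,state:2"))
```

This mirrors how a fixed-width load interprets the spec: each record is cut by position rather than by a delimiter character.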
In this example, we have data on Amazon EMR and we want to get it into Redshift for some transformation. To do this, we will be creating a Redshift table, then loading the data from EMR to that table using the EMR Load Component in Matillion ETL. The job canvas is shown below.
Looking at the EMR Load component, the first thing to note is that it requires a valid EMR URL Location where the data is stored - not including a filename. The filename is instead given in 'EMR Object Prefix'. We set the Data File Type to JSON and choose the Target Table Name. The 'Load Columns' property allows us to select specific columns we wish to load and, in this case, we choose to load all columns (this can also be achieved by leaving the property blank). From there, it is largely a case of trusting the EMR Load component's sensible defaults.
Running this job will load the data into a Redshift table that can then be sampled if we use a Table Input component in a Transformation job.
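In the example above, the JSON Layout was left at 'auto', which matches top-level field names to table column names. When the names do not match, a JSONPaths file lists one path per target column, in Load Columns order. The record and paths below are invented, and this Python sketch mimics only the simplest dotted-path behaviour of a JSONPaths file, as a rough illustration of the mapping:

```python
import json

# A hypothetical JSON source record with a nested field.
record = json.loads('{"venue": {"id": 42, "name": "Oval"}, "score": 17}')

# A JSONPaths file lists one path per target-table column, in column order.
jsonpaths = ["$.venue.id", "$.venue.name", "$.score"]

def extract(obj, path):
    """Follow a simple dotted JSONPath like '$.venue.id'
    (no array indexes or filters in this sketch)."""
    value = obj
    for key in path.lstrip("$.").split("."):
        value = value[key]
    return value

row = [extract(record, p) for p in jsonpaths]
print(row)  # one row of load values, in Load Columns order
```

With 'auto', only the top-level fields venue and score would be matched by name; the explicit paths are what allow nested values such as venue.id to land in their own columns.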