Create External Table RS

    Article Summary

    This article is specific to the following platform: Redshift.

    Create External Table

    This component enables users to create a table that references data stored in an S3 bucket. The created table only references externally held data; it does not hold the data itself. It is important that the Matillion ETL instance has access to the chosen external data source.

    Referencing externally held data is valuable when you want to query large datasets without storing that same volume of data on the Redshift cluster.

    External tables are part of Amazon Redshift Spectrum and may not be available in all regions. For a list of supported regions, read Amazon Redshift endpoints and quotas.

    For more information about working with external tables, read Creating external tables for Redshift Spectrum.
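    Behind the scenes, the component issues a Redshift Spectrum CREATE EXTERNAL TABLE statement. A minimal sketch of such a statement is shown below; the schema, table, and bucket names are hypothetical, and the external schema must already exist and be backed by an IAM role with access to the bucket:

    ```sql
    -- Hypothetical example: an external table over pipe-delimited text files in S3.
    -- Only metadata is stored in Redshift; the rows remain in the S3 bucket.
    CREATE EXTERNAL TABLE spectrum_schema.sales (
        sale_id INTEGER,
        amount  DECIMAL(10,2),
        note    VARCHAR(256)
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '|'
    STORED AS TEXTFILE
    LOCATION 's3://example-bucket/sales/'
    TABLE PROPERTIES ('skip.header.line.count'='1');
    ```

    Dropping such a table removes only the table definition; the files in S3 are untouched.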

    Note

    Nested data loads from JSON or Parquet file formats may also be set up using this component via the Define Nested Metadata checkbox in the Table Metadata property.

    Properties

    Redshift Properties

    Name (Text): A human-readable name for the component.

    Schema (Select): Select the table schema. External tables require external schemas; regular schemas will not work. To learn more about external schemas, read 'Configuring The Matillion ETL Client' in the Getting Started With Amazon Redshift Spectrum documentation. The special value [Environment Default] uses the schema defined in the environment. For more information about using multiple schemas, read Schemas.

    New Table Name (Text): The name of the table to create or replace.

    Create/Replace (Select): Determines how the table is created:
    - Create: Creates a new table. This will generate an error if a table with the same name already exists, but will never destroy existing data. This is the default setting.
    - Create if not exists: Creates a new table only if a table of the same name does not already exist; it will not destroy existing data. If the schema of the existing table does not match the schema defined in this component, no attempt is made to correct it. This can lead to errors later in the job if you did not expect a table of that name to exist, or if the existing table has a different schema to the one defined in this component.
    - Replace: Drops any existing table of the same name, then creates a new table. This guarantees that after the component succeeds, the table matches the schema defined in this component; however, any data in the existing table will be lost. Note: because other database objects may depend on this table, DROP ... CASCADE is used, which may also remove those dependent objects.

    Table Metadata: Defines the columns of the new table:
    - Column Name: The name of the new column.
    - Data Type: The type of data the column holds:
      - Text: Holds any type of data, subject to a maximum size.
      - Integer: Suitable for whole numbers (no decimals).
      - Numeric: Suitable for numeric values, with or without decimals.
      - Boolean: Suitable for data that is either true or false.
      - Date: Suitable for dates without times.
      - DateTime: Suitable for dates, times, or timestamps (both date and time).
    - Size: For text types, the maximum length. This is a limit on the number of bytes, not characters; since Amazon Redshift stores all data as UTF-8, any non-ASCII character counts as 2 or more bytes. For numeric types, this is the total number of digits allowed, whether before or after the decimal point.
    - Decimal Places: Relevant only for numeric types; the maximum number of digits that may appear to the right of the decimal point.

    Table Metadata (Nested) (Checkbox): When the Define Nested Metadata checkbox is ticked inside the Table Metadata property, a tree structure can be defined for the metadata.

    Partition (Select Multiple): Mark one or more columns in this table as potential partitions. Partition columns allow queries on large datasets to be optimized when the query filters on those columns. When a partition is created, each value of the partition column maps to a distinct Amazon S3 storage location, so rows of data are stored in a location that depends on their partition column value. For example, a date column is commonly chosen as a partition column, so that all other data is stored according to the date it belongs to. When creating partitioned data using the Add Partition component, it is vital that the partition columns have already been marked using this property.

    Location (Select): The Amazon S3 bucket location for the external table data. The Matillion ETL instance must have access to this data (typically, access is granted according to the AWS credentials on the instance, or because the bucket is public).

    Format (Select): Choose a format for the source file.

    Field Terminator (Text): (TEXTFILE only) The delimiter that separates fields (columns) in the file. Defaults to \A.

    Line Terminator (Text): (TEXTFILE only) The delimiter that separates records (rows) in the file. Defaults to newline. \n can also signify a newline; \r can signify a carriage return.

    Skip Header Rows (Integer): The number of rows at the top of the file to skip. The default setting is an empty field.

    Strip Outer Array (Select): (JSON only) Strips the outer array from the JSON file, enabling JSON files that contain a single, anonymous array to be loaded without error. The default setting is No.
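    To illustrate how the Partition, Location, and Format properties fit together, here is a hedged sketch of the kind of DDL that could result for a date-partitioned table (all schema, table, and bucket names are hypothetical):

    ```sql
    -- Hypothetical partitioned external table. The partition column (sale_date)
    -- is declared in PARTITIONED BY, not in the main column list; each partition
    -- value maps to its own S3 prefix.
    CREATE EXTERNAL TABLE spectrum_schema.sales_by_day (
        sale_id INTEGER,
        amount  DECIMAL(10,2)
    )
    PARTITIONED BY (sale_date DATE)
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '|'
    STORED AS TEXTFILE
    LOCATION 's3://example-bucket/sales/';

    -- Registering one partition (the task the Add Partition component performs):
    ALTER TABLE spectrum_schema.sales_by_day
    ADD PARTITION (sale_date = '2020-01-01')
    LOCATION 's3://example-bucket/sales/2020-01-01/';
    ```

    Until a partition is registered, queries against the table return no rows for that partition value, which is why the partition columns must be marked here before using the Add Partition component.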