SAP NetWeaver Query
This component uses the SAP NetWeaver API to retrieve data and load it into a table. The data is staged, so the table is reloaded on each run. You may then use transformations to enrich and manage the data in permanent tables.
Note: Use of the SAP NetWeaver Query component requires an additional "JCo" connection library (libsapjco3.so) along with its Java wrapper (sapjco3.jar). Log in to the SAP Service Marketplace and access the SAP JCo download software from http://service.sap.com/connectors. If necessary, select the Tools & Services page to display the download page. Download the most recent version of SAP JCo 3.x for Linux, then copy the files to the following location on the Matillion ETL instance, and restart Tomcat.
Without these additional files, you may see errors such as: "SAP JCo library not found: verify the correct jar file is present."
With an incorrect password or another authentication problem, you may see errors such as: "Initialization of repository destination INERP_JCO_DESTIANTION_NAME3 failed".
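To confirm that the JVM can load these files after restarting Tomcat, a quick standalone check can help. The sketch below is not part of Matillion ETL; it assumes only that sapjco3.jar is on the classpath and libsapjco3.so is on the java.library.path:

```java
// Minimal standalone check (a sketch, not part of Matillion ETL) that
// the SAP JCo library is installed correctly.
import com.sap.conn.jco.JCo;

public class JCoCheck {
    public static void main(String[] args) {
        // Fails with NoClassDefFoundError if sapjco3.jar is missing, or with
        // ExceptionInInitializerError if libsapjco3.so cannot be loaded.
        System.out.println("JCo version: " + JCo.getVersion());
    }
}
```

If this prints a version number, both the jar and the native library are in place.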
The component offers both a Basic and an Advanced mode (see below) for generating the SAP NetWeaver API query. Note, however, that although this is exposed in an SQL-like language, the exact semantics can be surprising - for example, filtering on a column can return more data than not filtering on it, an impossible scenario in regular SQL.
There are some special pseudo columns which can be part of a query filter, but are not returned as data. This is fully described in the data model.
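Since the query is issued through a JDBC driver (see the Connection Options property below), the semantics can also be explored outside Matillion. The sketch below is illustrative only: the JDBC URL format and the column names, including the 'BeginDate' pseudo column, are hypothetical placeholders - consult the data model for the real fields.

```java
// Illustrative sketch only: the URL format and all column names (including
// the "BeginDate" pseudo column) are hypothetical placeholders.
// Assumes the SAP NetWeaver JDBC driver jar is on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NetWeaverQuerySketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sapnetweaver:Host=192.168.0.1;User=MYUSER;"
                   + "Password=secret;Client=800";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // A pseudo column may appear in the WHERE clause to narrow the
             // underlying API calls, but is not returned as a data column.
             ResultSet rs = stmt.executeQuery(
                 "SELECT AccountNumber, AccountName FROM Account "
               + "WHERE BeginDate >= '2019-01-01'")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " | " + rs.getString(2));
            }
        }
    }
}
```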
Warning: This component is destructive as it truncates or recreates its target table on each run. Do not modify the target table structure manually.
|Name||Text||The descriptive name for the component.|
|Basic/Advanced Mode||Choice||Basic - This mode will build an SAP NetWeaver query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced - This mode requires you to write an SQL-like query, which is translated into one or more SAP NetWeaver API calls. The available fields and their descriptions are documented in the data model.|
|Host||Text||The hostname or IP of the SAP NetWeaver server.|
|Username||Text||A valid SAP NetWeaver username.|
|Password||Text||A valid SAP NetWeaver password.|
|Client||Text||The client authenticating to the SAP system.|
|Data Source||Choice||Select a data source, for example Account.|
|Data Selection||Choice||Select one or more columns to return from the query.|
|Data Source Filter||Input Column||The available input columns vary depending upon the Data Source.|
Qualifier:
Is - Compares the column to the value using the comparator.
Not - Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
Comparator - Choose one of Equal To, Greater Than, Less Than, Greater Than Or Equal To, Less Than Or Equal To, or Like.
Note: Not all comparators will work with all possible data sources.
|Value||The value to be compared.|
|SQL Query||Text||This is an SQL-like query, written according to the SAP NetWeaver data model. (Property only available in 'Advanced' Mode)|
|Limit||Number||Fetching a large number of results from SAP NetWeaver will use multiple API calls. These calls are rate-limited by the provider, so fetching a very large number may result in errors.|
|Connection Options||Parameter||A JDBC parameter supported by the Database Driver. The available parameters are determined automatically from the driver, and may change from version to version.
Most options are not usually required, as sensible defaults are assumed. However, for SAP NetWeaver it may be necessary to set additional connection options, e.g. "stsurl".
|Value||A value for the given Parameter.|
|Storage Account||Select||(Azure Only) Select a Storage Account with your desired Blob Container to be used for staging the data.|
|Blob Container||Select||(Azure Only) Select a Blob Container to be used for staging the data.|
|Staging||Select||(AWS Only) Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
Existing Amazon S3 Location: Selecting this reveals additional properties to specify a custom staging area on S3.
|S3 Staging Area||Text||(AWS Only) The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
This property is available when using an Existing Amazon S3 Location for Staging.|
|Warehouse||Select||Choose a Snowflake warehouse that will run the load.|
|Database||Select||Choose a database to create the new table in.|
|Type||Select||Choose between using a standard table or an external table.
Standard: The data will be staged on an S3 bucket before being loaded into a table.
External: The data will be put into an S3 bucket and referenced by an external table.|
|Schema||Select||Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article.
Note: An external schema is required if the 'Type' property is set to 'External'.|
|Target Table||Text||Provide a new table name.
Warning: This table will be recreated and will drop any existing table of the same name.
|Location||Text/Select||When using an 'External' type table, provide an S3 bucket path that will be used to store the data. Once on an S3 bucket, the data can be referenced by the external table.|
|Table Distribution Style||Select||
Even - The default option; distribute rows around the Redshift cluster evenly.
All - Copy rows to all nodes in the Redshift cluster.
Key - Distribute rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance - see the Amazon Redshift documentation for more information.|
|Table Distribution Key||Select||This is only displayed if the Table Distribution Style is set to Key. It is the column used to determine which cluster node the row is stored on.|
|Table Sort Key||Select||This is optional, and specifies the columns from the input that should be set as the table's sort key.
Sort keys are critical to good performance - see the Amazon Redshift documentation for more information.|
|Sort Key Options||Select||Decide whether the sort key is of a compound or interleaved variety - see the Amazon Redshift documentation for more information.|
|Project||Text||The target BigQuery project to load data into.|
|Dataset||Text||The target BigQuery dataset to load data into.|
|Cloud Storage Staging Area||Text||The URL and path of the target Google Storage bucket to be used for staging the queried data.|
|Encryption||Select||(AWS Only) Decide how the files are encrypted inside the S3 bucket. This property is available when using an Existing Amazon S3 Location for Staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS.
SSE S3: Encrypt the data according to a key stored on an S3 bucket.|
|KMS Key ID||Select||(AWS Only) The ID of the KMS encryption key you have chosen to use in the 'Encryption' property.|
|Load Options||Multiple Selection||
Comp Update: Apply automatic compression to the target table (if ON). Default is ON.
Stat Update: Automatically update statistics when filling a table (if ON). Default is ON. In this case, it is updating the statistics of the target table.
Clean S3 Objects: Automatically remove UUID-based objects on the S3 Bucket (if ON). Default is ON. Effectively decides whether to keep the staged data in the S3 Bucket or not.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is ON.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If OFF, the existing table will be used. Default is ON.|
|Load Options||Multiple Select||Clean Cloud Storage Files: (If On) Destroy staged files on Cloud Storage after loading data. Default is On.
Cloud Storage File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).|
|Auto Debug||Select||Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.|
|Debug Level||Select||The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, cache queries, and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.
This component makes the following values available to export into variables:
|Time Taken To Stage||The amount of time (in seconds) taken to fetch the data from the data source and upload it to storage.|
|Time Taken To Load||The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from storage.|
Connect to the SAP NetWeaver service and issue one or more API calls. Stream the results into objects on S3, recreate or truncate the target table as necessary, and then use a COPY command to load the S3 objects into the table. Finally, clean up the temporary S3 objects.
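For a Redshift target, the load phase is conceptually similar to the standalone sketch below. All identifiers (connection details, table, bucket, IAM role) are placeholders; Matillion generates and runs the real statements internally.

```java
// Conceptual sketch of the staging-and-COPY load phase for a Redshift
// target. Every identifier here is a placeholder.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CopyLoadSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:redshift://cluster.example.com:5439/dev";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement stmt = conn.createStatement()) {
            // 1. Recreate the target table (the component is destructive).
            stmt.executeUpdate("DROP TABLE IF EXISTS sap_account");
            stmt.executeUpdate(
                "CREATE TABLE sap_account (account_number VARCHAR, account_name VARCHAR)");
            // 2. Bulk-load the staged S3 objects with a single COPY.
            stmt.executeUpdate(
                "COPY sap_account FROM 's3://my-staging-bucket/tmp-prefix/' "
              + "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load' CSV GZIP");
            // 3. The temporary S3 objects are then deleted (the Clean S3
            //    Objects load option).
        }
    }
}
```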
In this example we load in some data using the SAP NetWeaver Query component. The job we will use, shown below, first creates a table that the data is then loaded into.
The properties for the SAP NetWeaver Query component are shown below. An IP address for our server is entered into the 'Host' property, along with a username and password that can access this server. After choosing a data source and including some columns in the data selection, other options can be left at their defaults (usually blank) and the component will choose correct options intelligently.
This job can now be run in its entirety. To run it multiple times without cleaning up the table, it is advisable to switch the Create/Replace Table component to 'Replace' mode.
With the job run, the data is loaded into a table. The output can be checked by creating a Transformation job and loading the table using a Table Input component. Checking the 'Sample' tab will sample the data in the table.