Read from Redshift table using Python component


I have a scenario where I need to read from an existing table and transform each row (each row is a JSON document that I need to flatten based on some business logic). Can you please suggest an approach? I did try the Python component, but we cannot use it in Transformation jobs. Is there a way to pull data and transform it using custom logic?

2 Community Answers

Matillion Agent  

Laura Malins —

Hi Vijay

Can I ask why you need to do it row by row, please? Matillion works best when processing batches of data rather than working at a row-by-row level. If you are able to process batches in one go, you can use a Transformation job with a Table Input component to read the data in. You can then use a Calculator component, possibly with some of the Redshift JSON functions, to extract the elements you need to flatten, and write the result to another table using one of the write components.

If this genuinely doesn’t work for your use case, you can use Python. You’re right that you have to do this in an Orchestration job, but if you use the Jython interpreter you can get a database cursor via the context.cursor() command, then loop through the rows and process them as required. You can write them back to Redshift using an INSERT statement through the cursor.
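As a rough sketch of that approach, the row-level flattening logic might look like the function below. The flatten_row helper and the separator convention are hypothetical examples (the real flattening rules depend on your business logic), and the cursor wiring shown in the comments assumes Matillion's context.cursor() as described above:

```python
import json

def flatten_row(raw_json, sep="_"):
    """Recursively flatten a nested JSON document into a single-level dict.
    Illustrative only; real flattening rules are project-specific."""
    def _flatten(obj, prefix=""):
        flat = {}
        for key, value in obj.items():
            name = prefix + sep + key if prefix else key
            if isinstance(value, dict):
                # Recurse into nested objects, prefixing child keys.
                flat.update(_flatten(value, name))
            else:
                flat[name] = value
        return flat
    return _flatten(json.loads(raw_json))

# Inside a Matillion Python (Jython) script this would be fed from the
# database cursor, roughly:
#   cursor = context.cursor()
#   cursor.execute('SELECT json_col FROM source_table')
#   for (raw,) in cursor.fetchall():
#       flat = flatten_row(raw)
#       ...build and execute an INSERT into the target table from flat...

print(flatten_row('{"id": 1, "user": {"name": "Vijay", "city": "Hyderabad"}}'))
```

Note that fetching everything with fetchall() holds the whole result set in memory; for large tables you would fetch in chunks, which is another reason the batch Transformation-job approach is usually preferable.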

I hope that helps.


Vijay Reddy —

Thanks Laura.

Let me explore the calculator component to see if it suits our need.

