Hi, I am trying to write a Pandas dataframe to a Redshift table via a Python component. Currently we pass the dataframe data through a Matillion grid variable, which significantly slows down the process and leads to out-of-memory issues. Can you please suggest a better approach?
1 Community Answer
Kalyan Arangam —
One option is to increase your instance size, which may give you enough memory for the operation.
Alternatively, have you tried writing the dataframe to a file on disk, uploading that file to S3, and then loading it into Redshift with a COPY command? That avoids holding the whole dataset in a grid variable.