I am following this article: https://redshiftsupport.matillion.com/customer/en/portal/articles/2243961-triggering-etl-from-an-s3-event-via-aws-lambda?b_id=8915
I can see that messages are being consumed by Matillion from my SQS queue, since messages disappear from SQS whenever I "Enable SQS". But it is not triggering my job, nor is it logging any errors. How can I find out where the message went?
5 Community Answers
Veronica Kupetz —
There could be a few different reasons as to why your job is not running. One of them could be due to permissions set on your IAM role that is attached to the Lambda function. This role was created once you set up your Lambda function. The IAM role attached can be found at the bottom of the Lambda function under “Execution Roles”. If you select the role, can you share what policies you have attached to that role?
Also, do you have an Event tied to your S3 bucket/folder that will trigger Lambda? If so, what is the option you have selected for the Event (Put, Post, Copy, etc)? Any additional information on your setup will be helpful. To answer your last question, when testing the function, you can view the SQS dashboard in AWS. Once you select your SQS queue, you can select “Queue Actions” and “View/Delete Messages”.
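For context, the Lambda function from the linked article essentially forwards details of the S3 event to the SQS queue Matillion is listening on. A minimal sketch is below; the field names follow Matillion's documented SQS message layout, but the queue URL, group, project, environment, job, and variable names are all placeholders, not values from this thread:

```python
import json


def build_message(event):
    # Matillion's SQS listener expects a JSON body naming the group, project,
    # version, environment, and job to run. Every name below is a placeholder.
    return {
        "group": "MyGroup",
        "project": "MyProject",
        "version": "default",
        "environment": "live",
        "job": "My Orchestration Job",
        # Each variable value must be a plain string, not a list or object.
        "variables": {
            "file_to_load": event["Records"][0]["s3"]["object"]["key"],
        },
    }


def lambda_handler(event, context):
    # boto3 is bundled in the AWS Lambda Python runtime.
    import boto3

    sqs = boto3.client("sqs")
    # Placeholder queue URL -- use the queue Matillion is listening on.
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/matillion-queue",
        MessageBody=json.dumps(build_message(event)),
    )
```

The IAM role attached to this function needs sqs:SendMessage permission on the target queue, which is one of the permission gaps mentioned above.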
It turns out it didn't like the value that I passed in the variable field; I found that out by looking at the log. I think the outstanding question is why it didn't send the message to the ERROR queue rather than erroring out silently (without notifying in the UI either). How can we prevent this?
"com.matillion.bi.emerald.server.queue.QueueMessage[“variables”]->java.util.LinkedHashMap[“xyz”]) com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.lang.String` out of START_ARRAY token at"
Aha - this could be bad documentation plus a bug: it looks like the Success and Failure queues shouldn't be FIFO queues, since a FIFO queue requires MessageGroupId and MessageDeduplicationId, and there is no place to supply those in this UI.
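To make the FIFO constraint concrete, here is an illustrative helper (not part of Matillion) showing the extra parameters SQS demands when a queue URL ends in ".fifo" - the parameters the success/failure queue UI has no field for:

```python
import hashlib


def build_send_kwargs(queue_url, body):
    """Build keyword arguments for boto3's sqs.send_message.

    FIFO queues (identified by the .fifo suffix) require MessageGroupId
    and, unless content-based deduplication is enabled on the queue,
    MessageDeduplicationId. Since Matillion's success/failure queue UI
    offers no way to supply either, those queues need to be standard
    (non-FIFO) queues.
    """
    kwargs = {"QueueUrl": queue_url, "MessageBody": body}
    if queue_url.endswith(".fifo"):
        # Both IDs below are illustrative choices, not Matillion behaviour.
        kwargs["MessageGroupId"] = "matillion-results"
        kwargs["MessageDeduplicationId"] = hashlib.sha256(body.encode()).hexdigest()
    return kwargs
```

For a standard queue URL the helper returns only QueueUrl and MessageBody, which matches what Matillion actually sends.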