
Profiling memory use of Matillion jobs

I had a general question, so I thought it might make a better community question than a support ticket. We monitor the memory use of our m4.large EC2 instance with CloudWatch metrics, and every so often we find that total host memory usage climbs over 80% and at times even results in out-of-memory errors. Overall the EC2 instance is not pushing capacity, so I suspect we have some jobs that are 'bad apples' over-consuming memory, and those may be candidates for other options (e.g. https://www.matillion.com/blog/offload-large-python-scripts/ ). The trouble is linking memory use to jobs: at any given time there may be several jobs from different developers in different projects running. How do I find the bad apples? Thanks! Blair
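One way to get closer to an answer in the meantime is to sample per-process memory on the instance and line the timestamps up against Matillion's task history. A minimal sketch, assuming a Linux host with `ps` available; the sampling cadence and output destination are illustrative, not anything Matillion-specific:

```python
#!/usr/bin/env python3
"""Log the top memory consumers on the EC2 host with timestamps.

Cross-reference the timestamps in this output against the task history
in the Matillion UI to see which jobs were running when memory climbed.
"""
import shutil
import subprocess
from datetime import datetime, timezone


def top_memory_processes(ps_output, n=5):
    """Parse `ps -eo pid,rss,comm` output and return the n largest
    resident-set sizes as (pid, rss_kib, command) tuples."""
    rows = []
    for line in ps_output.strip().splitlines()[1:]:  # skip the header row
        pid, rss, comm = line.split(None, 2)
        rows.append((int(pid), int(rss), comm))
    return sorted(rows, key=lambda r: r[1], reverse=True)[:n]


def sample_once():
    """Take one sample and print a timestamped line per top consumer."""
    out = subprocess.run(
        ["ps", "-eo", "pid,rss,comm"],
        capture_output=True, text=True, check=True,
    ).stdout
    stamp = datetime.now(timezone.utc).isoformat()
    for pid, rss, comm in top_memory_processes(out):
        print(f"{stamp} pid={pid} rss_kib={rss} cmd={comm}")


if __name__ == "__main__" and shutil.which("ps"):
    # Schedule this script (e.g. a one-minute cron entry, an assumption
    # about cadence) and redirect output to a log file for correlation.
    sample_once()
```

On a Matillion host most of the interesting memory lives inside the single Tomcat `java` process, so this mainly tells you *when* pressure spikes; you still need the task history (or the heap-dump route below) to pin it on a job.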

1 Community Answer

Matillion Agent  

Dan D'Orazio —

Hi Blair -

These can be tricky to troubleshoot, so I’m quite interested to see if there are better suggestions from the community. In my experience, the .hprof files created by the Out Of Memory condition, along with the server log and a rough time the event occurred, can help us pinpoint the culprit. Our Diagnostic Data Policy support page answers some common questions about capturing this information, how to send it to us, and what we do with it.
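Once you have the .hprof files, a quick way to shortlist suspects is to match each dump's file timestamp against the job run windows from the task history. A minimal sketch; the dump directory and the shape of the job-history data are assumptions for illustration:

```python
"""Match .hprof heap-dump timestamps against job run windows."""
from datetime import datetime
from pathlib import Path


def jobs_running_at(dump_time, job_runs):
    """Return names of jobs whose [start, end] window contains dump_time.

    job_runs: iterable of (job_name, start_datetime, end_datetime).
    """
    return [name for name, start, end in job_runs if start <= dump_time <= end]


def dumps_with_suspects(dump_dir, job_runs):
    """Map each .hprof file in dump_dir to the jobs active when it was
    written (using file mtime as a proxy for the OOM time)."""
    report = {}
    for path in Path(dump_dir).glob("*.hprof"):
        dump_time = datetime.fromtimestamp(path.stat().st_mtime)
        report[path.name] = jobs_running_at(dump_time, job_runs)
    return report
```

Any job that appears next to every dump is a strong candidate for the kind of offloading the question links to.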

Best -
Dan
