MONyog Repeatedly Killed by SIGSEGV

Jan 8, 2016

After running stably for months, we upgraded to MONyog 6.5 before Christmas, and the process has died 3 times in 4 weeks. Each time it logged a segmentation fault. We're running on CentOS 6.5:

[root@(server) MONyog]# cat /etc/centos-release 
CentOS release 6.5 (Final)
[root@(server) MONyog]# uname -a
Linux (server.fqdn) 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@(server) MONyog]# rpm -qa | grep MONyog

The error was logged 3 minutes after a failed call to the AWS API to fetch logs from an RDS instance, and a curl timeout seems to be involved in the crime. MONyog had been logging a similar failure every 5 minutes; something about that RDS database isn't happy, but an unreachable endpoint shouldn't be crashing my entire database monitoring application!

[6.5] [2016-01-07 18:55:59] [Server: (endpointname)] populatemysql.cpp(471) ErrCode:-1 ErrMsg:RDS Error log not present
[6.5] [2016-01-07 18:58:56] linservicemgr.cpp(106) ErrCode:11 ErrMsg:Stopping MONyog: Received signal SIGSEGV -- Segmentation fault!

We're running the enterprise version, if that's relevant. I'll dig out our customer login details and send over the core dump, but I wanted to post this publicly so anyone else hitting the same issue can see it's not a one-off!