Ever seen this before?

Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

I had that after a MySQL server in my replication loop went down. When it came back up, the next server in line reported this replication state:

Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave

That was quite logical, since the end of the binary log had been corrupted by external circumstances.
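Before touching anything, it's worth confirming which file is actually damaged, as the error text suggests. Roughly something like this; the relay log name and path are just placeholders, yours will differ:

SHOW SLAVE STATUS\G
-- note Relay_Log_File / Relay_Log_Pos (slave side) and
-- Relay_Master_Log_File / Exec_Master_Log_Pos (master side)

# then, on the shell, let mysqlbinlog walk the suspect file;
# it stops with a parse error at the corrupted event if the log is damaged
mysqlbinlog /var/lib/mysql/relay-bin.000042 > /dev/null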
This should be a simple
STOP SLAVE;
CHANGE MASTER TO
MASTER_LOG_FILE='bin.000nnn',
MASTER_LOG_POS=4; -- 4 is the first valid position, right after the 4-byte binlog magic header
START SLAVE;
on the node that had stopped replicating, but this is when the 1236 error kicked in. As is so often the case with 1236, the node that had gone down hadn't updated its binary log index file (servername-bin.index in this case, yours might have a different prefix), so I had to add the missing binary log to the index file by hand. One more thing to remember: restart the MySQL server after updating the index file. Replication should then happily resume once you issue START SLAVE; on the next MySQL server in the replication ring.
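For reference, the index file edit looked roughly like this for me; the server name, datadir path, log numbers and the restart command are placeholders for my setup, so adjust them to yours:

# the index file is simply a list of the server's binary logs, one per line
cat /var/lib/mysql/servername-bin.index
./servername-bin.000121
./servername-bin.000122

# append the binary log that is missing from the list;
# match the path format of the existing lines in your index file
echo './servername-bin.000123' >> /var/lib/mysql/servername-bin.index

# restart MySQL so it rereads the index file (use whatever your distro provides)
/etc/init.d/mysql restart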
PS: take care, CHANGE MASTER seems to flush tables or do something similar rather than simply setting a few variables, so depending on the load on your server it can take several minutes.
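Once the CHANGE MASTER has finished and you've issued START SLAVE; on that node, a quick look at SHOW SLAVE STATUS tells you whether both replication threads really came back:

SHOW SLAVE STATUS\G
-- Slave_IO_Running: Yes
-- Slave_SQL_Running: Yes
-- Seconds_Behind_Master should then start dropping back towards 0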