Wow, the libpd guys make it possible to put Pure Data patches in your iPhone app. I’m definitely going to make an app based on this, if nothing more than just for the heck of it. Time to brush up on my PD skills; I’ve been using Max/MSP for too long ;-) See the article


Hatim has written a very nice blog post about Roo and Spring Security, including source code and going into a good level of detail.


I don’t know if you’ve noticed yet, but I’m a big fan of both Roo and the guys behind Roo. Big thanks to Alan & James: following the forum post and issues ROO-1537 and ROO-1538, as of git version a474dc7b95613fae564f0e0fa50d89a6818bd753 (tested today, one day later), my scripts run flawlessly through the tests! :-) Which means we can continue with Tiles. Stay tuned!


The guys at SpringSource still have a lot of links pointing to STS 3.5.0.M3, so finding 3.5.0.RC1 was a bit hard. If you’re looking for it, go here: http://www.springsource.com/products/eclipse-downloads


Ever seen this before?

Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

I had that after a MySQL server in my replication loop went down. When it came back up, the next server in line gave this replication state:

'Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave'

which was quite logical, since the end of the bin-log had been corrupted due to external circumstances.
This should be a simple

STOP SLAVE;
CHANGE MASTER TO
  MASTER_LOG_FILE='bin.000nnn',
  MASTER_LOG_POS=1;
START SLAVE;
on the node that had stopped replicating, but this is when the 1236 error kicked in. As is often the case with 1236, the node that had gone down hadn’t updated its binary log index file (servername-bin.index in this case; yours might have a different prefix), so I had to manually add the missing log file to the index file. One more thing to remember: restart the MySQL server after updating the index file. Then replication should happily resume once you hit START SLAVE; on the next MySQL server in the replication ring.
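The index-file fix above can be sketched like this. It’s a minimal sketch using a temporary directory as a stand-in for the real MySQL data directory, so it’s safe to run; the “servername-bin” prefix and the log file names are assumptions — substitute your own datadir and prefix:

```shell
# Rebuild the binary log index from the log files actually on disk.
# A temp dir stands in for /var/lib/mysql so this is harmless to try.
datadir=$(mktemp -d)
touch "$datadir"/servername-bin.000101 "$datadir"/servername-bin.000102

# The index file is just a plain-text list of the log files, one per
# line, typically written with a leading "./" relative to the datadir.
( cd "$datadir" && ls servername-bin.[0-9]* | sed 's|^|./|' ) \
  > "$datadir"/servername-bin.index

cat "$datadir"/servername-bin.index
```

After fixing the real index file, remember to restart the MySQL server before issuing START SLAVE; again.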
PS: take care, the CHANGE MASTER seems to flush the tables or something rather than simply set some variables, so depending on the load on your server this might take several minutes.