We investigate the cross-lingual transferability of backdoor attacks in instruction-tuned large language models. Our findings reveal that backdoors can transfer across languages even when the trigger and target are in different languages, posing significant security risks for multilingual AI systems. We propose defense mechanisms to mitigate these vulnerabilities.