We introduce Mixtures of In-Context Learners (MiCL), a novel approach that combines multiple in-context learning strategies to improve few-shot performance. Our method automatically selects and weights these strategies based on the input context, leading to more robust and adaptable language model behavior across diverse tasks.
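The weighting idea can be illustrated with a minimal sketch. Assume each "expert" is the same language model prompted with a different subset of demonstrations, and that each expert yields a probability distribution over candidate labels; the expert distributions, weights, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scalar weights.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mixture_predict(expert_probs, weights):
    # Combine per-expert label distributions using softmax-normalized
    # mixture weights (one learnable scalar per expert).
    w = softmax(weights)
    n_labels = len(expert_probs[0])
    return [
        sum(w[i] * expert_probs[i][j] for i in range(len(expert_probs)))
        for j in range(n_labels)
    ]

# Two hypothetical experts, each conditioned on a different demonstration
# subset, emitting distributions over three candidate labels.
experts = [
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
]
weights = [1.0, 0.0]  # illustrative mixture weights

probs = mixture_predict(experts, weights)
prediction = max(range(len(probs)), key=lambda j: probs[j])
print(prediction)  # the mixture favors the first expert's top label: 0
```

In practice the weights would be fit on a validation set (e.g. by gradient descent), letting the mixture down-weight demonstration subsets that are noisy or unhelpful for the current input.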