One form of AI resistance is data poisoning, which aims to sabotage the functionality of large language models.