:tada: Our latest benchmark of zeroth-order optimization methods for LLM fine-tuning has been released on arXiv. The code is also available!