I'm learning backend development and currently building a service that sends the same prompt to multiple LLMs and compares their response latencies, so I'd love to use this model for that.
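A minimal sketch of the latency-comparison idea in Python: fan the same prompt out to several models in parallel, time each call, and rank the results. The model names and stub callables below are placeholders — in a real backend you would swap in actual API client calls.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def time_model_call(name, call, prompt):
    """Send a prompt to one model and measure wall-clock latency."""
    start = time.perf_counter()
    response = call(prompt)
    latency = time.perf_counter() - start
    return {"model": name, "latency_s": latency, "response": response}

def compare_latencies(models, prompt):
    """Send the same prompt to every model concurrently and
    return the results sorted fastest-first."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [
            pool.submit(time_model_call, name, call, prompt)
            for name, call in models.items()
        ]
        results = [f.result() for f in futures]
    return sorted(results, key=lambda r: r["latency_s"])

# Stand-in model callables (replace with real API client calls):
def slow_model(prompt):
    time.sleep(0.05)
    return "slow reply"

def fast_model(prompt):
    return "fast reply"

if __name__ == "__main__":
    ranked = compare_latencies(
        {"model-a": slow_model, "model-b": fast_model}, "Hello"
    )
    for r in ranked:
        print(f"{r['model']}: {r['latency_s']:.3f}s")
```

Using `perf_counter` rather than `time.time` avoids clock-adjustment skew, and the thread pool keeps slow models from serializing the benchmark.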