Will the code/scripts be released?
Hi, the details in https://huggingface.co/spaces/LLM360/TxT360 are amazing. Will you release an open-source project so others can improve on it?
Hi, @Leon-Leee
We are glad that you find this blog post helpful. We are preparing the code base, and if everything goes smoothly we should be able to release the code next week.
Best,
Hector L
Hi, is everything going all right?
Hi @Leon-Leee,
Thanks for your patience. We are doing a final review of the code base before the release. We'll let you know once the code is ready.
Best,
Liping
Hello! I was wondering if the source code for the software you used to synthesize TxT360_QA is available anywhere.
I have cloned https://github.com/LLM360/TxT360.git but did not find it there.
Thank you for all you do!
Hi @ttkciar,
This part of the code is relatively simple; our code is mainly tied to our cluster setup and is somewhat messy. We probably only need to share our prompts: @omkarenator, right?
Also, I believe this version of the dataset has a bug: QA pairs were not created for longer documents. Our model was trained with this data, but that is something to highlight too.
Yes, @ttkciar, the code mainly contains a distributed Slurm job submission pipeline, which is specifically tied to our internal infrastructure. Here's the prompt we use for generation:
Write QA pairs based on this document, making them as challenging as possible. Ensure the output follows the format "Q: {{question}}\nA: {{answer}}" Additionally, make sure that the questions and answers are strictly based on the provided text. \n"""\n{text}\n"""\n
`{text}` is substituted with the actual document.
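For anyone reproducing this outside the authors' cluster, here is a minimal sketch of how the shared prompt could be filled and how the expected `Q:`/`A:` output could be parsed. The helper names and the parsing regex are my own assumptions, not taken from the TxT360 codebase; the actual model call is omitted.

```python
import re

# The prompt template shared above; {text} is filled with the document body,
# while {{question}}/{{answer}} stay literal after .format().
PROMPT_TEMPLATE = (
    'Write QA pairs based on this document, making them as challenging as '
    'possible. Ensure the output follows the format "Q: {{question}}\\nA: '
    '{{answer}}" Additionally, make sure that the questions and answers are '
    'strictly based on the provided text. \n"""\n{text}\n"""\n'
)

def build_prompt(document: str) -> str:
    """Substitute the document into the template."""
    return PROMPT_TEMPLATE.format(text=document)

def parse_qa_pairs(completion: str) -> list[tuple[str, str]]:
    """Extract (question, answer) pairs from a completion that follows the
    'Q: ...\\nA: ...' format the prompt asks for."""
    pattern = re.compile(r"Q:\s*(.+?)\s*\nA:\s*(.+?)(?=\nQ:|\Z)", re.S)
    return [(q.strip(), a.strip()) for q, a in pattern.findall(completion)]

# Example: a well-formed (hypothetical) completion and its parsed pairs.
completion = (
    "Q: What is TxT360?\nA: A pretraining dataset.\n"
    "Q: Who released it?\nA: LLM360."
)
pairs = parse_qa_pairs(completion)
# → [("What is TxT360?", "A pretraining dataset."), ("Who released it?", "LLM360.")]
```

The generated prompt would then be sent to whichever LLM endpoint you use, and `parse_qa_pairs` applied to its completion.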
Thank you both! You are right, that is really straightforward. Nonetheless, I appreciate your willingness to share it with me.