Crypto LLM
Collection
Repository naming convention: {experiment name}_{experiment condition}_{key length, for case a}_{pt = pre-training / ft = continued pre-training, for cases a and b}_{ABCI Job ID}
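As an illustration, the naming fields can be joined mechanically; a minimal sketch, where every field value below (experiment name, condition, key length, job ID) is hypothetical:

```python
def build_repo_name(experiment, condition, key_length, phase, job_id):
    """Join the five naming fields with underscores.

    phase is "pt" (pre-training) or "ft" (continued pre-training).
    All concrete values used in the example below are hypothetical.
    """
    return "_".join([experiment, condition, str(key_length), phase, str(job_id)])


print(build_repo_name("exp5", "vigenere", 8, "ft", "41234567"))
# exp5_vigenere_8_ft_41234567
```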
Place the data under {lingua_repo_path}/data/cryptollm_exp5/ (see here). The generator is invoked following apps/main/generate.py:

```python
import time

from omegaconf import OmegaConf

# These imports assume the lingua repo layout; in the actual
# apps/main/generate.py these names are defined in, or imported by,
# the file itself.
from lingua.args import dataclass_from_dict
from apps.main.generate import (
    PackedCausalTransformerGenerator,
    PackedCausalTransformerGeneratorArgs,
    load_consolidated_model_and_tokenizer,
)


def main():
    # Load CLI arguments (overrides) and combine with a YAML config
    cfg = OmegaConf.from_cli()
    gen_cfg = dataclass_from_dict(
        PackedCausalTransformerGeneratorArgs, cfg, strict=False
    )
    print(cfg)

    model, tokenizer, _ = load_consolidated_model_and_tokenizer(cfg.ckpt)
    generator = PackedCausalTransformerGenerator(gen_cfg, model, tokenizer)

    # Allow multiple prompts
    prompts = []
    while True:
        prompt = input("Enter a prompt (or press enter to finish): ")
        if not prompt:
            break
        prompts.append(prompt)

    # Start generation
    start_time = time.time()
    generation, loglikelihood, greedy = generator.generate(prompts)
    end_time = time.time()

    # Calculate tokens per second
    total_tokens = sum(len(tokenizer.encode(gen, False, False)) for gen in generation)
    tokens_per_second = total_tokens / (end_time - start_time)

    # Display the results
    for i, gen in enumerate(generation):
        print(f"\nPrompt {i+1}: {prompts[i]}")
        print(f"Generated Text: {gen}")
    print(f"\nTokens per second: {tokens_per_second:.2f}")
```

The generate.py above can be run with the following command:

```shell
python -m apps.main.generate ckpt={model_repo_path}/consolidated
```
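The `ckpt={model_repo_path}/consolidated` argument is consumed by `OmegaConf.from_cli()`, which turns `key=value` command-line tokens into config fields. A stdlib-only sketch of that idea (a simplification: the real OmegaConf also handles dotted keys, lists, and type inference):

```python
def parse_cli_overrides(argv):
    """Loosely mimic how OmegaConf.from_cli turns key=value args into a config.

    Each argument of the form "key=value" becomes a dict entry; values are
    kept as strings here, unlike OmegaConf, which infers types.
    """
    cfg = {}
    for arg in argv:
        key, _, value = arg.partition("=")
        cfg[key] = value
    return cfg


cfg = parse_cli_overrides(["ckpt=/path/to/consolidated"])
print(cfg["ckpt"])  # /path/to/consolidated
```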