---
language: en
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- project-management
- communication
- lora
- peft
- phi-3.5
pipeline_tag: text-generation
---

# PMCommunicator

PMCommunicator is a LoRA fine-tune of [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) (3.8B parameters), specialized for generating professional project management communications.

Given a project context (produced by PMPlanner and PMReasoner), it generates stakeholder-ready prose: kickoff emails, status reports, risk escalation memos, executive summaries, board updates, and project closeout reports.

## Model Details

| Property | Value |
|---|---|
| Base model | microsoft/Phi-3.5-mini-instruct (3.8B) |
| Fine-tuning method | LoRA (PEFT) |
| LoRA rank / alpha | 16 / 32 |
| Trainable parameters | 25M of 3.82B (0.65%) |
| Training data | 28,000+ PM communication examples |
| Validation loss | 0.0105 |
| License | MIT |
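
The LoRA hyperparameters above map onto a PEFT config roughly like the sketch below. Only the rank and alpha come from this card; the target modules and dropout are assumptions. That said, adapting all four of Phi-3.5-mini's projection blocks at r=16 works out to roughly 25M trainable parameters, which is consistent with the table.

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter config. r and lora_alpha are
# taken from the table above; everything else is an assumed default.
lora_config = LoraConfig(
    r=16,                    # LoRA rank (from the table)
    lora_alpha=32,           # scaling factor (from the table)
    target_modules=["qkv_proj", "o_proj", "gate_up_proj", "down_proj"],  # assumed
    lora_dropout=0.05,       # assumed; not stated in this card
    bias="none",
    task_type="CAUSAL_LM",
)
```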

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "pmcore/pmcommunicator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system = (
    "You are PMCommunicator, an expert project manager and communications specialist. "
    "Generate professional, stakeholder-ready project communications based on the provided "
    "project context. Be specific — use the actual project name, numbers, and timeline. "
    "Write in clear business English. Output only the communication document itself."
)
user = (
    "Project: Cloud migration, 50 legacy apps, 18 months, $8M budget. 3 phases planned.\n\n"
    "Write a weekly status report for stakeholders."
)

# Build the prompt in Phi-3.5's chat format.
prompt = f"<|system|>\n{system}<|end|>\n<|user|>\n{user}<|end|>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# A low temperature keeps the output close to the supplied project facts.
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.3, do_sample=True)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
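
If you would rather not hand-write the special tokens, the tokenizer's chat template should produce the same prompt, assuming the repo inherits Phi-3.5's standard template. A minimal sketch, reusing `tokenizer`, `model`, `system`, and `user` from above:

```python
# Equivalent prompting via the chat template (assumed to match Phi-3.5's).
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": user},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, temperature=0.3, do_sample=True)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```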

## Full Pipeline

Use PMCommunicator as part of the full PMCore pipeline for best results.
See [PMCore on GitHub](https://github.com/snavazio/pmcore).
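
In that pipeline, PMPlanner and PMReasoner supply the project context that PMCommunicator turns into prose. The exact interfaces live in the PMCore repo; the sketch below only illustrates the shape of the chain, with `pm_planner` and `pm_reasoner` as hypothetical stand-ins for the upstream components.

```python
# Hypothetical glue code; the real pipeline API is defined in the PMCore repo.
# Reuses `tokenizer`, `model`, and `system` from the Usage snippet above.
def generate_communication(context: str, request: str) -> str:
    """Render a stakeholder communication from upstream project context."""
    prompt = f"<|system|>\n{system}<|end|>\n<|user|>\n{context}\n\n{request}<|end|>\n<|assistant|>\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.3, do_sample=True)
    return tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

# plan = pm_planner.plan(project_brief)   # illustrative names, not the real API
# risks = pm_reasoner.assess(plan)
# report = generate_communication(plan + "\n" + risks,
#                                 "Write a weekly status report for stakeholders.")
```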