---
language: en
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
  - project-management
  - communication
  - lora
  - peft
  - phi-3.5
pipeline_tag: text-generation
---

# PMCommunicator

PMCommunicator is a LoRA fine-tune of [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
(3.8B parameters) specialized for generating professional project management communications.

Given a project context (produced upstream by PMPlanner and PMReasoner), it generates
stakeholder-ready prose: kickoff emails, status reports, risk escalation memos, executive
summaries, board updates, and project closeout reports.

## Model Details

| Property | Value |
|---|---|
| Base model | microsoft/Phi-3.5-mini-instruct (3.8B) |
| Fine-tuning method | LoRA (PEFT) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Trainable params | 25M / 3.82B (0.65%) |
| Training data | 28,000+ PM communication examples |
| Validation loss | 0.0105 |
| License | MIT |
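
The LoRA hyperparameters above map directly onto a PEFT `LoraConfig`. The sketch below is a hypothetical reconstruction of such a setup, not the published training script; in particular, the target modules and dropout are assumptions:

```python
# Hypothetical reconstruction of a matching LoRA setup (not the published
# training config). Rank and alpha come from the table above; target
# modules and dropout are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")

lora_config = LoraConfig(
    r=16,               # LoRA rank (from the table above)
    lora_alpha=32,      # LoRA alpha (from the table above)
    lora_dropout=0.05,  # assumption
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["qkv_proj", "o_proj"],  # assumption: Phi-3 attention projections
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # should report roughly 0.65% trainable
```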

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "pmcore/pmcommunicator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in half precision and spread layers across available devices
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = (
    "You are PMCommunicator, an expert project manager and communications specialist. "
    "Generate professional, stakeholder-ready project communications based on the provided "
    "project context. Be specific — use the actual project name, numbers, and timeline. "
    "Write in clear business English. Output only the communication document itself."
)
user = "Project: Cloud migration, 50 legacy apps, 18 months, $8M budget. 3 phases planned.\n\nWrite a weekly status report for stakeholders."

# Phi-3.5 chat format: system / user / assistant turns delimited by <|end|>
prompt = f"<|system|>\n{system}<|end|>\n<|user|>\n{user}<|end|>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad-token warning
)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
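
If the repository hosts the LoRA adapter rather than merged weights, you can instead attach it to the base model with PEFT. This is a sketch under that assumption; skip it if the published weights are already merged:

```python
# Sketch: attach the LoRA adapter to the base model with PEFT.
# Assumes pmcore/pmcommunicator hosts an adapter, not merged weights.
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

base_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

model = PeftModel.from_pretrained(base, "pmcore/pmcommunicator")
# Optionally fold the adapter into the base weights for faster inference
model = model.merge_and_unload()
```

Merging removes the small adapter overhead at inference time; leave out `merge_and_unload()` if you want to keep the adapter separable from the base model.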

## Full Pipeline

Use PMCommunicator as part of the full PMCore pipeline for best results.
See [PMCore on GitHub](https://github.com/snavazio/pmcore).