---
language: fr
license: apache-2.0
tags:
- mistral
- lora
- peft
- project-management
- resource-allocation
datasets:
- custom
---

# French Project Resource Allocator

This model is fine-tuned from Mistral-7B to allocate resources for projects based on project descriptions, duration, complexity, sector, and identified tasks.

## Model Description

The model takes project information as input and outputs:
- Required skills
- Allocated employees
- Distribution by skills

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("your-username/french-project-resource-allocator")
model = AutoModelForCausalLM.from_pretrained(
    "your-username/french-project-resource-allocator",
    torch_dtype=torch.float16,  # half precision keeps the 7B model within typical GPU memory
    device_map="auto"
)

# Example input
project_info = {
    "Nom du projet": "Développement application mobile",
    "Description": "Création d'une application mobile pour la gestion des stocks",
    "Durée (mois)": 6,
    "Complexité (1-5)": 4,
    "Secteur": "Logistique",
    "Tâches Identifiées": "Analyse des besoins, Conception UI/UX, Développement backend, Développement frontend, Tests, Déploiement"
}

# Build prompt
def build_prompt(project_info):
    prompt = (f"Nom du projet: {project_info['Nom du projet']}\n"
              f"Description: {project_info['Description']}\n"
              f"Durée (mois): {project_info['Durée (mois)']}\n"
              f"Complexité (1-5): {project_info['Complexité (1-5)']}\n"
              f"Secteur: {project_info['Secteur']}\n"
              f"Tâches Identifiées: {project_info['Tâches Identifiées']}\n\n"
              "### Instruction:\n"
              "Fournis les informations en format JSON pour:\n"
              "- Compétences Requises\n"
              "- Employés Alloués\n"
              "- Répartition par Compétences\n\n"
              "### Réponse:\n")
    return prompt

# Generate response
prompt = build_prompt(project_info)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=800)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("### Réponse:")[-1].strip())
```
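The text after `### Réponse:` should contain JSON, but generative models sometimes wrap it in extra prose. A small helper (hypothetical, not part of this repository) that extracts and parses the first JSON object from the reply:

```python
import json
import re

def extract_json(reply: str):
    """Return the first JSON object found in the model reply, or None if absent/invalid."""
    # Grab everything between the first '{' and the last '}' in the reply.
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

# Example with a hypothetical reply string
reply = 'Voici la réponse:\n{"Employés Alloués": 3}'
print(extract_json(reply))
```

Returning `None` rather than raising keeps the caller's control flow simple when the model produces malformed output.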

## Training

This model was fine-tuned from Mistral-7B using LoRA on a custom dataset of project descriptions and resource allocations.