# French Project Resource Allocator
This model is fine-tuned from Mistral-7B to allocate resources for projects based on project descriptions, duration, complexity, sector, and identified tasks.
## Model Description
The model takes project information as input and outputs:
- Required skills
- Allocated employees
- Distribution by skills
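For illustration, the output can be treated as a JSON object whose keys mirror the French prompt fields. The schema below is an assumption based on the prompt format, not a published specification:

```python
import json

# Hypothetical model output; field names follow the French prompt
# ("Compétences Requises", "Employés Alloués", "Répartition par Compétences"),
# but the exact schema depends on the fine-tuning data.
sample_response = """
{
  "Compétences Requises": ["Développement mobile", "UI/UX", "Backend", "QA"],
  "Employés Alloués": 5,
  "Répartition par Compétences": {
    "Développement mobile": 2,
    "UI/UX": 1,
    "Backend": 1,
    "QA": 1
  }
}
"""

allocation = json.loads(sample_response)
# Sanity check: the per-skill distribution should sum to the headcount
assert sum(allocation["Répartition par Compétences"].values()) == allocation["Employés Alloués"]
print(allocation["Compétences Requises"])
```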
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("your-username/french-project-resource-allocator")
model = AutoModelForCausalLM.from_pretrained(
    "your-username/french-project-resource-allocator",
    device_map="auto"
)

# Example input
project_info = {
    "Nom du projet": "Développement application mobile",
    "Description": "Création d'une application mobile pour la gestion des stocks",
    "Durée (mois)": 6,
    "Complexité (1-5)": 4,
    "Secteur": "Logistique",
    "Tâches Identifiées": "Analyse des besoins, Conception UI/UX, Développement backend, Développement frontend, Tests, Déploiement"
}

# Build the instruction prompt from the project fields
def build_prompt(project_info):
    prompt = (
        f"Nom du projet: {project_info['Nom du projet']}\n"
        f"Description: {project_info['Description']}\n"
        f"Durée (mois): {project_info['Durée (mois)']}\n"
        f"Complexité (1-5): {project_info['Complexité (1-5)']}\n"
        f"Secteur: {project_info['Secteur']}\n"
        f"Tâches Identifiées: {project_info['Tâches Identifiées']}\n\n"
        "### Instruction:\n"
        "Fournis les informations en format JSON pour:\n"
        "- Compétences Requises\n"
        "- Employés Alloués\n"
        "- Répartition par Compétences\n\n"
        "### Réponse:\n"
    )
    return prompt

# Generate a response; use model.device so this works wherever
# device_map="auto" placed the weights (GPU or CPU)
prompt = build_prompt(project_info)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=800)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Keep only the text after the "### Réponse:" marker
print(response.split("### Réponse:")[-1].strip())
```
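The model may emit extra text around the JSON. A small helper like the one below (an illustrative addition, not part of the model's API) can pull the first JSON object out of the generated text:

```python
import json
import re

def extract_json(text):
    """Extract the first JSON object from generated text.

    Returns None when no valid JSON object is found. This is a simple
    sketch; production code may need more robust handling of nested or
    malformed output.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

# Example on a response mixing prose and JSON
raw = 'Voici la réponse:\n{"Employés Alloués": 5}\nFin.'
print(extract_json(raw))  # {'Employés Alloués': 5}
```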
## Training
This model was fine-tuned from Mistral-7B using LoRA on a custom dataset of project descriptions and resource allocations.
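The exact training configuration is not published. As a rough sketch, a typical LoRA setup for Mistral-7B with the `peft` library might look like the following; every hyperparameter here is an assumption, not the configuration actually used:

```python
from peft import LoraConfig

# Illustrative LoRA configuration (all values are assumptions;
# the actual setup for this model is not published)
lora_config = LoraConfig(
    r=16,                         # low-rank dimension of the adapters
    lora_alpha=32,                # scaling factor applied to the adapters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

This config fragment would be passed to `peft.get_peft_model` together with the base model before training.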