# Startup Team Advisor - QLoRA-Finetuned Phi-2 Model

This model is a fine-tuned version of Microsoft's Phi-2, optimized for analyzing startup candidates, evaluating founding teams, and generating optimal team compositions.
## Model Details

- Base Model: Microsoft Phi-2 (2.7B parameters)
- Finetuning Method: QLoRA (4-bit quantization with Low-Rank Adaptation)
- Domain: Startup team formation and analysis
- Use Cases:
  - Candidate skill assessment
  - Team composition analysis
  - Optimal founding team generation
  - Skill gap identification
## Usage

This model is deployed as a Hugging Face Inference Endpoint with a custom handler that provides three main operations:
### 1. Candidate Analysis

```json
{
  "inputs": {
    "operation": "analyze_candidate",
    "candidate": {
      "name": "Jane Doe",
      "skills": ["JavaScript", "React", "Node.js"],
      "experience": [
        {"title": "Senior Developer", "company": "Tech Company", "years": 3}
      ],
      "education": [
        {"institution": "Stanford", "degree": "MS Computer Science"}
      ]
    }
  }
}
```
### 2. Team Analysis

```json
{
  "inputs": {
    "operation": "analyze_team",
    "team": [
      {
        "name": "Jane Doe",
        "skills": ["JavaScript", "React", "Node.js"]
      },
      {
        "name": "John Smith",
        "skills": ["Python", "Machine Learning", "Data Science"]
      }
    ],
    "include_startup_comparison": true
  }
}
```
### 3. Team Generation

```json
{
  "inputs": {
    "operation": "generate_team",
    "candidates": [
      /* Array of candidate objects */
    ],
    "requirements": "Create a balanced team for a SaaS startup",
    "team_size": 5
  }
}
```
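All three operations share the same `{"inputs": {"operation": ...}}` envelope, so a small helper can build any of the request bodies above. This is a minimal sketch; the endpoint URL and token shown in the comment are placeholders, not values from this card.

```python
import json

def build_payload(operation, **fields):
    """Wrap an operation name and its fields in the {"inputs": ...}
    envelope expected by the custom handler."""
    return {"inputs": {"operation": operation, **fields}}

payload = build_payload(
    "analyze_team",
    team=[
        {"name": "Jane Doe", "skills": ["JavaScript", "React", "Node.js"]},
        {"name": "John Smith", "skills": ["Python", "Machine Learning", "Data Science"]},
    ],
    include_startup_comparison=True,
)

# To call the deployed endpoint, POST this payload with an auth header, e.g.:
#   requests.post(ENDPOINT_URL,
#                 headers={"Authorization": f"Bearer {HF_TOKEN}"},
#                 json=payload)
print(json.dumps(payload, indent=2))
```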
## Response Format

All operations return responses with the same structure:

```json
{
  "team_analysis": "Detailed text analysis...",
  "model_info": {
    "device": "cuda",
    "model_type": "phi-2-qlora-finetuned"
  }
}
```
The specific analysis field name varies by operation (`team_analysis`, `candidate_analysis`, etc.).
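Because the analysis key changes with the operation, client code can look for any `*_analysis` field rather than hard-coding one name. A minimal sketch of that approach (the helper name is ours, not part of the card's API):

```python
def extract_analysis(response: dict) -> str:
    """Return the analysis text from an endpoint response, whichever
    operation-specific key ("team_analysis", "candidate_analysis", ...)
    it arrived under."""
    for key, value in response.items():
        if key.endswith("_analysis"):
            return value
    raise KeyError("no *_analysis field in response")

resp = {
    "team_analysis": "Detailed text analysis...",
    "model_info": {"device": "cuda", "model_type": "phi-2-qlora-finetuned"},
}
print(extract_analysis(resp))  # -> Detailed text analysis...
```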
## Limitations
- This model works best with detailed candidate profiles
- Processing time increases with the number of candidates
- The model has a fallback mode if quantized loading fails
- Maximum context length is limited to approximately 2048 tokens
## Implementation Details

The model uses:
- 4-bit quantization for efficient inference
- Phi-2 as the base model for high performance with fewer parameters
- Custom prompt templates optimized for team analysis
- Fallback mechanisms for graceful degradation
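The fallback mechanism mentioned above can be sketched as a simple try/except around model loading: attempt the 4-bit quantized load first, and fall back to a non-quantized load if it fails. The loader functions below are stand-ins to show the pattern, not the card's actual handler code.

```python
def load_quantized():
    # Stand-in for a 4-bit load, e.g. via transformers + bitsandbytes.
    # Here we simulate the failure case to demonstrate the fallback path.
    raise RuntimeError("bitsandbytes unavailable")

def load_full_precision():
    # Stand-in for a plain (non-quantized) load of the base model.
    return "full-precision model"

def load_model():
    """Try the quantized load first; degrade gracefully if it fails
    so the endpoint stays up instead of erroring out."""
    try:
        return load_quantized()
    except Exception:
        return load_full_precision()

print(load_model())  # -> full-precision model
```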