likhonhfai committed on
Commit
63eaf53
·
verified ·
1 Parent(s): 49aade2

Add model card

model_card.md ADDED
@@ -0,0 +1,53 @@
# Mysterious Coding Model

This repository contains a specialised AI model for agentic code generation and text generation tasks. The model is inspired by the GPT-OSS series (gpt-oss-20b and gpt-oss-120b) described in the corresponding paper. It is built on the open-source Llama architecture and fine-tuned for programming assistance, conversation and multi-language support.

## Key Features

- **Open source**: released under the Apache-2.0 license.
- **Text and code generation**: supports code completion, bug fixing, refactoring and documentation generation.
- **Efficient storage**: models are stored in the secure, fast safetensors format.
- **Multiple precisions**: includes base FP16 models, 8-bit quantised models and MXFP4 (4-bit microscaling floating-point) variants.
- **vLLM friendly**: compatible with the vLLM high-throughput inference engine for code generation.
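
As a rough illustration of what the 8-bit variants trade off (this is a conceptual sketch, not the repository's actual quantisation code), absmax quantisation maps each float weight to an int8 value through a single scale factor:

```python
# Conceptual absmax 8-bit quantisation, not this repository's actual code.

def quantize_8bit(weights):
    """Map float weights to int8 values using a single absmax scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_8bit(qweights, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in qweights]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_8bit(weights)
approx = dequantize_8bit(q, scale)
# Each recovered weight is within one quantisation step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Each weight now needs one byte instead of two (FP16), at the cost of bounded rounding error; the 4-bit and MXFP4 variants push the same trade-off further.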

## Repository Structure

The repository follows a modular structure that organises base models, quantised variants, adapters and datasets. See `README.md` or `docs/` for a detailed explanation.

- **models/library=safetensors/base/**: the base CodeAI-7B model split across three shards, plus its tokenizer.
- **models/library=safetensors/quantized/**: 4-bit and 8-bit quantised models, including AWQ variants.
- **models/library=safetensors/instruct/**: instruction-tuned models.
- **models/library=safetensors/specialized/**: models specialised for Python, web development, systems programming and data science.
- **models/adapters/**: LoRA and coding-specific adapters.
- **datasets/**: training, evaluation and instruction-tuning datasets.
- **scripts/**: scripts for converting, validating, quantising and merging safetensors models, training adapters and evaluating code generation.
- **evaluation/**: evaluation tasks and benchmarks such as HumanEval and MBPP.
- **tools/**: utility scripts for code formatting, syntax validation and profiling.
- **docs/**: guides and API references.
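
Sharded safetensors checkpoints, like the three-shard base model above, are conventionally accompanied by an index file mapping each tensor name to the shard that stores it. A minimal sketch of resolving that mapping (the tensor names and shard filenames here are hypothetical, not read from this repository):

```python
import json

# Hypothetical shard index in the conventional model.safetensors.index.json
# layout; the actual tensor names and filenames in this repo may differ.
index_json = json.dumps({
    "metadata": {"total_size": 13900000000},
    "weight_map": {
        "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
        "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
        "model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
        "lm_head.weight": "model-00003-of-00003.safetensors",
    },
})

def shard_for(tensor_name, index):
    """Return the shard file that stores a given tensor."""
    return json.loads(index)["weight_map"][tensor_name]

print(shard_for("lm_head.weight", index_json))
# model-00003-of-00003.safetensors
```

Loaders such as `transformers` consult this index automatically, so the split is transparent at inference time.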

## Intended Uses & Limitations

This model is intended for research and development of coding assistants. It can be used to generate or complete code snippets, explain code, fix bugs and assist with documentation. Users should review and test generated code before use. The model may produce incorrect or insecure code for complex tasks and should not be relied on for safety-critical systems.
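
One lightweight way to test generated code before trusting it is a functional check in the spirit of HumanEval-style evaluation: run the completion against known input/output cases in an isolated namespace. This harness is illustrative only (the "generated" completion is hard-coded), not the repository's evaluation scripts:

```python
# Run a model-generated completion against unit tests before trusting it.
# The "generated" source below is hard-coded for illustration.
generated = """
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"""

def passes_tests(candidate_src, func_name, cases):
    """Exec the candidate in a fresh namespace and check input/output cases."""
    namespace = {}
    try:
        exec(candidate_src, namespace)
        func = namespace[func_name]
        return all(func(*args) == expected for args, expected in cases)
    except Exception:
        return False

cases = [((0,), 0), ((1,), 1), ((10,), 55)]
print(passes_tests(generated, "fibonacci", cases))  # True
```

A real harness should also sandbox execution and enforce timeouts, since untrusted generated code may loop forever or have side effects.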

## How to Use

You can load the model with `transformers` or `vllm` for inference. Below is an example using `transformers` (ensure you have `safetensors`, `transformers` and `torch` installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from this repository
model = AutoModelForCausalLM.from_pretrained("likhonhfai/mysterious-coding-model", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("likhonhfai/mysterious-coding-model")

# Generate a completion for a code prompt
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For faster inference on large models, you can use the [vLLM](https://github.com/vllm-project/vllm) engine.

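A minimal sketch of offline batch inference with vLLM's Python API; it assumes a GPU machine with vLLM installed and that this checkpoint loads under vLLM's Llama support:

```python
# Requires a GPU host with vLLM installed; assumes this checkpoint is
# supported by vLLM's Llama model implementation.
from vllm import LLM, SamplingParams

llm = LLM(model="likhonhfai/mysterious-coding-model")
params = SamplingParams(max_tokens=64, temperature=0.2)
outputs = llm.generate(["def fibonacci(n):"], params)
print(outputs[0].outputs[0].text)
```

vLLM batches and schedules requests automatically, so passing a list of prompts to `generate` is the idiomatic way to get high throughput.
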
## Citing

If you use this model in your research, please cite the arXiv preprint [2508.10925] and this repository.