llama-3.1-8b-it-ssf_generator
Model description
SSF-Generator is part of an implementation of the SocialStoryFrames formalism, which is intended to study storytelling practices and reader response on social media, e.g., perceived intent, causal explanation, affective responses.
SSF-Generator, specifically, is designed to generate contextually plausible inferences about reader response to social media storytelling across the 10 taxonomy dimensions listed below.
See the paper for more details: https://arxiv.org/abs/2512.15925.
Taxonomy Dimensions
The SocialStoryFrames taxonomy comprises 10 dimensions of reader response to social media storytelling:
- Overall Goal: The communicative intent of the comment or post within the broader conversation
- Narrative Intent: The purpose of the storytelling within the post or comment
- Author Emotional Response: The emotional state the author would experience while or after telling their story
- Character Appraisal: Reader judgments of the narrator's or other characters' actions or states
- Causal Explanation: Explanatory inferences readers make to understand aspects of the story
- Prediction: Predictions readers make about future states or actions in the world of the story
- Stance: The reader's position or overall opinion in response to a main idea, argument, or point advocated for by the author
- Moral: The moral values or themes highlighted by the story
- Narrative Feeling: Affective responses evoked in readers by the narrative content (characters' situations and events)
- Aesthetic Feeling: Aesthetic responses evoked in readers by the narrative form, techniques, or style
Inference Templates
- overall_goal_inference: Many readers from this subreddit would think that the author's overall goal in posting/commenting this text was to {{SHORT VERB PHRASE DESCRIBING OVERALL GOAL}}.
- narrative_intent_inference: Many readers from this subreddit would think that the author told the story in their post/comment to {{SHORT VERB PHRASE DESCRIBING NARRATIVE INTENT}}.
- author_emotional_response_inference: Many readers from this subreddit would think that telling the story in their post/comment would cause the author of the post/comment to feel {{SHORT NOUN PHRASE DESCRIBING EMOTION}}.
- character_appraisal_inference: While or after reading the story within this text, many readers from this subreddit would {{EITHER "positively", "negatively", or "neutrally"}} judge {{EITHER "narrator" OR IDENTIFIER/NAME OF OTHER CHARACTER FROM STORY}}.
- causal_explanation_inference: While or after reading the story within the post/comment, many readers from this subreddit would think that {{SHORT DESCRIPTION OF SITUATION/STATE/ACTION FROM STORY}} could be explained by {{SHORT EXPLANATION}}.
- prediction_inference: While or after reading the story within the post/comment, many readers from this subreddit would predict that {{EITHER "the narrator" or NAME/IDENTIFIER OF OTHER CHARACTER OR THING FROM STORY}} might {{SHORT DESCRIPTION OF ACTION OR STATE}}.
- stance_inference: After reading the story within this text, many readers from this subreddit would {{EITHER "support", "counter", or "be neutral to"}} the author's opinion: {{SHORT DESCRIPTION OF AUTHOR'S OPINION/STANCE}}.
- moral_inference: While or after reading the story within the post/comment, many readers from this subreddit would think that the moral of the story is {{MORAL/THEME}}.
- narrative_feeling_inference: While or after reading the story within the post/comment, the narrative content (i.e., the characters' situation and events) would spur many readers from this subreddit to feel {{FEELING/EMOTION}}.
- aesthetic_feeling_inference: While or after reading the story within the post/comment, the narrative form/techniques such as {{BRIEF DESCRIPTION OF TECHNIQUE OR FORMAL ELEMENT}} would spur many readers from this subreddit to feel {{FEELING}}.
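To illustrate how these templates are rendered, here is a minimal sketch of filling two of the slots. The dictionary and helper names are hypothetical; the actual prompt-builder utilities live in the SocialStoryFrames GitHub repository.

```python
# Hypothetical sketch of rendering SSF inference templates.
# The official prompt builders are in the social-story-frames repo.

TEMPLATES = {
    "overall_goal_inference": (
        "Many readers from this subreddit would think that the author's "
        "overall goal in posting/commenting this text was to {slot}."
    ),
    "narrative_intent_inference": (
        "Many readers from this subreddit would think that the author told "
        "the story in their post/comment to {slot}."
    ),
}


def fill_template(dimension: str, slot: str) -> str:
    """Render the inference sentence for one taxonomy dimension."""
    return TEMPLATES[dimension].format(slot=slot)


print(fill_template("overall_goal_inference", "vent frustration"))
```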
Typical Workflow
SSF-Generator is designed to work in a two-stage pipeline:
- SSF-Generator: given a story and its conversational context, generates free-text inferences about reader response (e.g., an overall goal inference: "Many readers from this subreddit would think that the author's overall goal in posting/commenting this text was to clarify misconceptions about whether food delivery services require contracts with restaurants based on their personal experience.")
- SSF-Classifier: maps those inferences onto fine-grained taxonomy labels (e.g., ["provide_info_support", "persuade_debate", "provide_experiential_accounts"])
This provides both information-dense natural language descriptions and structured categorical labels for downstream analysis.
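The two-stage data flow can be sketched with stand-in functions. Everything here is a hypothetical placeholder for illustration: the real stages call SSF-Generator and SSF-Classifier as shown in the repository's demo notebook.

```python
# Minimal sketch of the two-stage SSF pipeline with stand-in functions.
# In practice, each stage runs a fine-tuned Llama model (see the repo).

def generate_inferences(story: str, community: str) -> dict:
    # Stand-in for SSF-Generator: one free-text inference per dimension.
    return {
        "overall_goal_inference": (
            "Many readers from this subreddit would think that the author's "
            "overall goal in posting/commenting this text was to share advice."
        ),
    }


def classify_inferences(inferences: dict) -> dict:
    # Stand-in for SSF-Classifier: map each free-text inference
    # onto fine-grained taxonomy labels.
    return {dim: ["provide_info_support"] for dim in inferences}


story = "A first-person account of working for a food delivery service."
inferences = generate_inferences(story, community="r/some_subreddit")
labels = classify_inferences(inferences)
```

The output pairs each dimension's information-dense natural-language inference with structured labels, which is what makes both qualitative reading and quantitative analysis possible downstream.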
How to Use
This model uses dimension-specific prompt templates that incorporate story text, community context, and conversational context. We strongly recommend using the prompt builder utilities from the SocialStoryFrames GitHub repository.
GitHub Repository: https://github.com/joel-mire/social-story-frames
In the repo, see main_demo.ipynb for an example of preparing the prompts and running SSF-Generator.
Intended uses & limitations
This model's primary purpose is to generate free-text inferences about reader response to social media stories. These inferences can optionally be classified with SSF-Classifier to facilitate downstream analysis.
In general, SSF-Generator, and the broader SocialStoryFrames framework of which it is a part, are designed for English-language online conversations and may not generalize well to communities requiring specialized domain knowledge, highly polarized groups, or contexts with idiosyncratic reader reactions. The underlying corpus on which SSF-Generator was trained excludes extremely toxic or sexually explicit content, which may reduce robustness on such inputs and skew judgments toward more positive reactions.
See the paper for additional details about the intended use for the framework, including SSF-Generator, as well as important limitations.
Training and evaluation data
The model was trained on the train split of joelmire/ssf-corpus.
Training procedure
SSF-Generator was trained via LoRA supervised fine-tuning (SFT) on top of meta-llama/Meta-Llama-3.1-8B-Instruct, distilling reference inferences generated by GPT-4o.
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 25
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 24
- total_eval_batch_size: 24
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
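The total batch sizes above follow from multiplying the per-device batch size by the number of devices (this assumes gradient_accumulation_steps = 1, which is not stated in the card):

```python
# Effective batch sizes implied by the hyperparameters above,
# assuming no gradient accumulation (an assumption, not stated).
per_device_train_batch_size = 8
per_device_eval_batch_size = 8
num_devices = 3

total_train_batch_size = per_device_train_batch_size * num_devices  # 24
total_eval_batch_size = per_device_eval_batch_size * num_devices    # 24
```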
Training results
We evaluated SSF-Generator through human studies (N=382) that assessed both the GPT-4o reference inferences used for its training and the model's own direct outputs.
Overall, 94% of ratings were deemed plausible, and 78% were deemed very or somewhat likely. This indicates that SSF-Generator is fairly proficient at inferring probable reader response to social media stories across diverse contexts.
For detailed training metrics, see training_loss.png and train_results.json.
Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- PyTorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.1
Related Resources
- SSF-Classifier: joelmire/llama3.1-8b-it-ssf-classifier - Classify inferences (use after this generator)
- SSF-Corpus Dataset: joelmire/ssf-corpus - Training data with full taxonomy details
- GitHub Repository: https://github.com/joel-mire/social-story-frames
- Demo Notebooks: See demos/main_demo.ipynb to quickly get started downloading SSF-Corpus and running SSF-Generator and SSF-Classifier.
License
This model is derived from meta-llama/Meta-Llama-3.1-8B-Instruct and is subject to the Llama 3.1 Community License.
Citation
@article{Mire2025-od,
  title         = "Social story frames: Contextual reasoning about narrative intent and reception",
  author        = "Mire, Joel and Antoniak, Maria and Wilson, Steven R. and Ma, Zexin and Ganti, Achyutarama R. and Piper, Andrew and Sap, Maarten",
  journal       = "arXiv [cs.CL]",
  month         = dec,
  year          = 2025,
  archivePrefix = "arXiv",
  primaryClass  = "cs.CL"
}