Reconcile the Irreconcilable

This model tries to reconcile the views of Hegel and Ayn Rand on a given philosophical topic.

Model Details

Model Description

The purpose of the model is to give the views of Hegel and Ayn Rand on a given philosophical topic, and then to write a paragraph reconciling their (most likely contrary) views. The model was multitask fine-tuned from bloomz-3b. Data for the fine-tuning was generated using ChatGPT (GPT-4).
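As a rough illustration, a fine-tuning record would pair a topic with the three target sections, formatted to match the inference prompt shown below. This is a hypothetical sketch of my own (the helper name and field contents are not part of the model card), consistent with that prompt template:

```python
def build_training_example(topic, hegel_view, rand_view, reconciliation):
    # Hypothetical: assemble one training example using the same section
    # headers as the inference prompt (### Topic / ### Hegel / etc.).
    return (
        "### INSTRUCTION\n"
        "Below is a philosophy topic. Please write Hegel's view on the topic, "
        "Ayn Rand's view on the topic and a reconciliation of their views."
        f"\n\n### Topic:\n{topic}\n"
        f"\n\n### Hegel:\n{hegel_view}\n"
        f"\n\n### Ayn Rand:\n{rand_view}\n"
        f"\n\n### Reconciliation:\n{reconciliation}\n"
    )

example = build_training_example(
    "Free will",
    "Freedom is the self-determination of Spirit...",
    "Free will is axiomatic to a rational, volitional consciousness...",
    "Both treat freedom as self-directed rationality...",
)
print(example.splitlines()[0])  # → ### INSTRUCTION
```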

Direct Use

I have not been able to get the model working in the hosted widget on this site. So, annoyingly, you need to copy and paste the following code into a notebook.

  1. Download the adapter and connect it to the base model.
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ntedeschi/reconcile_the_irreconcilable"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=False, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
  2. Set up the prompt and query function:
from IPython.display import display, Markdown

def make_inference(topic):
  # Build the prompt as one plain string; backslash line continuations inside
  # an f-string would leak the source indentation into the prompt itself.
  prompt = (
      "### INSTRUCTION\nBelow is a philosophy topic. Please write Hegel's view "
      "on the topic, Ayn Rand's view on the topic and a reconciliation of their views."
      f"\n\n### Topic:\n{topic}\n"
      "\n\n### Hegel:\n"
      "\n\n### Ayn Rand:\n"
      "\n\n### Reconciliation:\n"
  )
  batch = tokenizer(prompt, return_tensors='pt')
  # Move inputs to the model's device (device_map='auto' may have placed it on GPU).
  batch = {k: v.to(model.device) for k, v in batch.items()}

  with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=512)

  display(Markdown(tokenizer.decode(output_tokens[0], skip_special_tokens=True)))
  3. Make an inference by giving a philosophy topic. For example:
philosophy_topic = "Mind body dualism"
make_inference(philosophy_topic)
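If you only want the reconciliation paragraph rather than the full decoded output, you can split the generated text on its section headers. A minimal sketch (the helper name and sample text are my own, not part of the model's API):

```python
def extract_section(decoded_text, header="### Reconciliation:"):
    """Return the text that follows a given section header, or None if absent."""
    _, sep, tail = decoded_text.partition(header)
    return tail.strip() if sep else None

# Illustrative output shape, mirroring the prompt's section headers.
sample = (
    "### Topic:\nMind body dualism\n"
    "### Hegel:\n...\n"
    "### Ayn Rand:\n...\n"
    "### Reconciliation:\nBoth thinkers agree that mind is active."
)
print(extract_section(sample))  # → Both thinkers agree that mind is active.
```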