WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
Paper: arXiv:2306.07906
📃 Paper (KDD 2023) | 💻 Github Repo

Load the model directly with `transformers`:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("zai-org/WebGLM-2B", trust_remote_code=True)
model = AutoModel.from_pretrained("zai-org/WebGLM-2B", trust_remote_code=True)
```
WebGLM-2B aspires to provide an efficient and cost-effective web-enhanced question-answering system using the 2-billion-parameter General Language Model (GLM). It aims to improve real-world application deployment by integrating web search and retrieval capabilities into the pre-trained language model.
WebGLM is built from three components: an LLM-augmented retriever, a bootstrapped generator, and a human preference-aware scorer.
This repo is the implementation of the Bootstrapped Generator.
See our Github Repo for more detailed usage.
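The snippet below is a minimal, hedged sketch of how the Bootstrapped Generator could be prompted with retrieved references and asked to generate an answer. It assumes WebGLM-2B keeps the GLM-style blank-filling generation helpers of the base GLM-2B model (`tokenizer.build_inputs_for_generation`, `tokenizer.eop_token_id`), and the `Reference [i]: ... Question: ... Answer:` template is purely illustrative; the actual prompt construction and inference pipeline are in the Github Repo.

```python
# A minimal sketch, not the official WebGLM pipeline. It assumes WebGLM-2B keeps
# the GLM-style blank-filling helpers of the base GLM-2B model
# (tokenizer.build_inputs_for_generation, tokenizer.eop_token_id); the prompt
# template below is illustrative only, see the Github repo for the real one.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("zai-org/WebGLM-2B", trust_remote_code=True)
model = AutoModel.from_pretrained("zai-org/WebGLM-2B", trust_remote_code=True).eval()

# Hypothetical prompt: retrieved web references, then the question,
# with a [gMASK] slot where the answer should be generated.
references = [
    "The sky appears blue because air molecules scatter short-wavelength light more strongly.",
]
question = "Why is the sky blue?"
prompt = "".join(f"Reference [{i}]: {ref}\n" for i, ref in enumerate(references, start=1))
prompt += f"Question: {question}\nAnswer: [gMASK]"

inputs = tokenizer(prompt, return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=256)
outputs = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))
```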
Alternatively, use a pipeline as a high-level helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="zai-org/WebGLM-2B", trust_remote_code=True)
```
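As a small, hedged follow-up, calling the pipeline on a question returns the model's hidden states as nested lists (one vector per input token); the example text and printed shapes are illustrative only.

```python
# Illustrative call; feature-extraction returns hidden states as nested lists,
# one embedding vector per token of the encoded input.
features = pipe("Why is the sky blue?")
print(len(features[0]), len(features[0][0]))  # token count, hidden size
```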