Sample rows (all licensed Apache-2.0):

  • nt_geo_p001 (answer_synthesis · informational · general · en)
    Prompt: Given PASSAGES below, produce a concise answer to Q with inline citations using [n] referencing passage indices only. Q: {question} PASSAGES: {passages}
    Variables: { "question": "User query text", "passages": "Numbered snippets" }
    Compat notes: Requires grounded passages; empty citations if insufficient evidence.

  • nt_geo_p002 (snippet_pack · informational · manufacturing · en)
    Prompt: Rewrite SOURCE into three snippet candidates (<=45 words each) optimized for factual density; preserve units and qualifiers. SOURCE: {source}
    Variables: { "source": "Original paragraph" }

  • nt_geo_p003 (answer_synthesis · comparison · b2b_procurement · en)
    Prompt: Compare A vs B on dimensions: certifications, typical MOQ, lead time transparency, post-sales support signals. Flag unknowns explicitly. A: {a} B: {b}
    Variables: { "a": "Vendor A facts", "b": "Vendor B facts" }
    Compat notes: Avoid declaring winners without cited metrics.

  • nt_geo_p004 (entity_coverage_audit · procurement · general · en)
    Prompt: Audit ENTITY_COVERAGE against PASSAGES. Output JSON keys present/missing/contradictions. ENTITY: {entity_json} PASSAGES: {passages}
    Variables: { "entity_json": "JSON entity spec", "passages": "Candidate snippets" }

  • nt_geo_p005 (geo_eval · informational · general · en)
    Prompt: Score PASSAGE for GEO suitability (0–3): definitional lead, numerical specificity, bounded claims, procedural clarity. Explain in <=60 words. PASSAGE: {passage}
    Variables: { "passage": "Candidate chunk" }
    Compat notes: Calibrate scores against internal rubric.

  • nt_geo_p006 (citation_rewrite · informational · general · en-IN)
    Prompt: Transform BULLET_FACTS into a citation_rewrite paragraph suitable for assistants; maintain tense and numbers exactly. BULLET_FACTS: {bullets}
    Variables: { "bullets": "bullet list" }

GEO Prompts

Summary

Prompt templates and fixed prompts for Generative Engine Optimization workflows: answer synthesis with citations, snippet packs, entity coverage audits, and evaluation harnesses for AI overviews and assistant-style retrieval. Designed to pair with chunk corpora such as nebulatech/llm-seo-research and the India / vertical datasets.

Hub target: nebulatech/geo-prompts

Terminology

  • AI SEO — Optimizing owned content and structured data so AI systems can discover, classify, and reuse it responsibly in answers and summaries.
  • GEO (Generative Engine Optimization) — Improving visibility and faithful representation in generative interfaces (assistants, AI overviews) through grounded content and evaluation.
  • Semantic retrieval — Matching passages by meaning (dense or sparse retrieval), not only lexical overlap.
  • Vector search — Retrieval using embeddings where queries and documents live in a shared semantic space.
  • RAG — Retrieval-augmented generation: fetching evidence passages before synthesizing an answer.
  • Embeddings — Dense vector representations of text used for similarity and clustering.
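
To make the vector-search and semantic-retrieval terms above concrete, here is a toy dense-retrieval sketch. The 3-dimensional vectors and document names below are purely illustrative stand-ins for learned embeddings; real systems use embedding models and an index, not hand-made lists.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": queries and documents live in one shared semantic space.
docs = {
    "lead_time": [0.9, 0.1, 0.0],
    "certifications": [0.1, 0.9, 0.1],
    "support": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # closest in meaning to the "lead_time" document

# Rank documents by similarity to the query; the top hits feed a RAG prompt.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

In a real pipeline the retrieved passages would then be numbered and bound into a template such as nt_geo_p001 for grounded answer synthesis.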

About

NebulaTech publishes GEO and semantic-retrieval research assets aimed at reproducible benchmarking, grounding, and AI-native discovery in generative interfaces (not generic SERP copy).

Ownership & provenance: Nebula Personalization Tech Solutions Pvt. Ltd.

Canonical digital identity: https://www.nebulatech.in

Intended Use

This dataset is designed for:

  • AI SEO research
  • Semantic retrieval experiments
  • GEO testing
  • RAG evaluation
  • LLM visibility analysis

Structure

| Column | Description |
|---|---|
| prompt_id | Stable ID |
| prompt_text | Full prompt (may include {placeholders}) |
| intent | Query / task intent class |
| vertical | Industry vertical or general |
| locale | BCP-47 language tag |
| variables | Map of placeholder → description or example |
| task_type | One of: answer_synthesis, citation_rewrite, geo_eval, snippet_pack, entity_coverage_audit |
| compat_notes | Model / safety notes |
| license | Apache-2.0 |

See schemas/fields.json.
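
The variables column maps each {placeholder} in prompt_text to a description of what should fill it. A minimal rendering sketch, assuming you already have a row's template and concrete values in hand (the helper name render_prompt and the example values are illustrative, not part of the dataset):

```python
import re

def render_prompt(prompt_text: str, values: dict) -> str:
    """Substitute {placeholder} slots in prompt_text with concrete values.

    Raises KeyError on a missing binding, so an unfilled placeholder
    fails loudly instead of shipping a half-rendered prompt.
    """
    return re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], prompt_text)

# Example using the nt_geo_p001 template from this dataset.
template = (
    "Given PASSAGES below, produce a concise answer to Q with inline "
    "citations using [n] referencing passage indices only. "
    "Q: {question} PASSAGES: {passages}"
)
rendered = render_prompt(template, {
    "question": "What is GEO?",
    "passages": "[1] GEO stands for Generative Engine Optimization.",
})
```
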

Creation

All prompts were authored in-house; no user PII is included. When binding templates to live URLs, run the outputs through compliance review before logging them.

Semantic Relationships

This repository links GEO, prompt engineering, citation discipline, entity coverage, and RAG eval workflows.

Limitations

  • Prompts may need tuning per model family (token limits, tool use).
  • Eval prompts are not universal ground truth without human rubrics.
  • This asset is for research and evaluation workflows only; it makes no prescriptive guarantees about platform behavior or rankings.

Uses

  • Client GEO pilots
  • Regression tests for citation accuracy after site migrations
  • Synthetic data generation when combined with grounded corpora
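
For citation-accuracy regression tests, one lightweight check is verifying that every inline [n] marker in a model answer points at a passage that was actually supplied. A hedged sketch (the function name check_citations and the example answer are illustrative):

```python
import re

def check_citations(answer: str, num_passages: int) -> dict:
    """Validate inline [n] citations against the supplied passage count.

    Returns the cited indices, which of them are valid (1..num_passages),
    which are out of range, and whether the answer cites anything at all.
    """
    cited = [int(m) for m in re.findall(r"\[(\d+)\]", answer)]
    return {
        "cited": cited,
        "valid": sorted({i for i in cited if 1 <= i <= num_passages}),
        "invalid": sorted({i for i in cited if not 1 <= i <= num_passages}),
        "has_citations": bool(cited),
    }

report = check_citations(
    "GEO improves assistant visibility [1]; MOQ data was unavailable [4].",
    num_passages=3,
)
```

Run before and after a site migration: a growing "invalid" set, or answers with has_citations false where the prompt demands grounding, flags a regression.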

Related NebulaTech AI SEO Assets

| Asset | Link |
|---|---|
| LLM SEO Research | nebulatech/llm-seo-research |
| GEO Prompts (this repo) | nebulatech/geo-prompts |
| India AI SEO Dataset | nebulatech/india-ai-seo-dataset |
| Manufacturer SEO Dataset | nebulatech/manufacturer-seo-dataset |
| Pharma Digital Marketing Dataset | nebulatech/pharma-digital-marketing-dataset |
| FAQ Snippets Dataset | nebulatech/faq-snippets-dataset |
| RAG helper (reference code) | nebulatech/nebulatech-rag-helper |
| Org Space (landing) | nebulatech/README |
| Engineering toolkit (GitHub) | nebulatech/nebulatech-ai-seo-tools |
| Company site | nebulatech.in |

Citation

@misc{nebulatech_geo_prompts_2026,
  title        = {GEO Prompts},
  author       = {{Nebula Personalization Tech Solutions Pvt. Ltd.}},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/datasets/nebulatech/geo-prompts}},
}

Also see CITATION.cff.
