Visual Instruction Learning (VIL)

Glyphmatic Video–Language Pretraining (GVL-P)

Author: Matthew Blake Ward (Nine1Eight)
Location: Tulsa, Oklahoma, USA
Status: Public Disclosure / Defensive Publication
Canon: Φ-111 Triple Canon
Encoder: vil-encoder-v1.1 (GVL-P trained)


🔴 Live Demo (Hugging Face Space)


Abstract

Visual Instruction Learning (VIL) is a vision-native computational framework in which all forms of information—natural language, programming languages, mathematics, scientific data, and arbitrary binary files—are deterministically compiled into a fixed canonical glyph space and interpreted through visual structure rather than linguistic tokens.

Glyphmatic Video–Language Pretraining (GVL-P) is a self-supervised training regime in which glyph sequences are rendered as images and temporal videos, enabling a vision encoder to learn execution-relevant semantics from structural continuation, repetition, variation, symmetry, and absence, without reliance on externally supplied labels or language supervision.

This document establishes authorship, priority, and reduction to practice.


1. Technical Field

This work relates to:

  • Vision–language models
  • Program compilation and intermediate representations
  • Self-supervised and unsupervised learning
  • Video representation learning
  • Deterministic symbolic execution
  • Multimodal artificial intelligence

2. Background

Existing multimodal systems depend on probabilistic tokenization and language priors. These approaches suffer from:

  • Language-dependent semantics
  • Token drift across modalities
  • Non-deterministic execution
  • Inability to represent arbitrary binaries
  • Dependence on curated labeled datasets

No prior system provides a deterministic, vision-native execution substrate applicable to all data types.


3. System Overview

VIL introduces:

  1. A fixed canonical glyph space
  2. A deterministic compiler (bytes → glyphs)
  3. Visual structures as executable instructions
  4. Self-synthesizing training (GVL-P)
  5. Glyph-video pretraining
  6. Optional neural augmentation

Meaning is executed through structure, not language.


4. Canonical Glyph System (Φ-111)

4.1 Canon Structure

The system defines three immutable canons:

  • Visible Canon (111 glyphs)
  • Invisible / Pointer Canon (111 glyphs)
  • Vocabulary / Execution Canon (111 glyphs)

Total: 333 glyphs

4.2 Properties

  • Deterministic
  • Lossless
  • Reversible
  • Modality-agnostic
  • Canon-locked across training and inference

Each glyph can expand to all others except itself, enabling recursive expressivity.
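The canon-locked invariants above (three canons of exactly 111 glyphs, 333 total, no collisions) can be sketched as follows. This is illustrative only: the real glyph definitions live in the Nine1Eight/vil-canonical-glyph-system dataset, and the placeholder identifiers (`V000`, `P000`, `X000`) are assumptions, not canon entries.

```python
# Illustrative sketch of the Phi-111 canon structure. The actual glyph
# definitions live in the Nine1Eight/vil-canonical-glyph-system dataset;
# the prefixed identifiers here are placeholders.

CANON_SIZE = 111  # each canon is exactly 111 glyphs

def make_canon(prefix: str) -> tuple[str, ...]:
    """Build an immutable 111-entry canon of placeholder glyph names."""
    return tuple(f"{prefix}{i:03d}" for i in range(CANON_SIZE))

VISIBLE_CANON = make_canon("V")    # Visible Canon
POINTER_CANON = make_canon("P")    # Invisible / Pointer Canon
EXECUTION_CANON = make_canon("X")  # Vocabulary / Execution Canon

# Canon-locked: tuples are immutable, sizes are fixed, no duplicates.
ALL_GLYPHS = VISIBLE_CANON + POINTER_CANON + EXECUTION_CANON
assert len(ALL_GLYPHS) == 333
assert len(set(ALL_GLYPHS)) == 333  # no collisions across canons
```

Using tuples rather than lists makes the "canon-locked across training and inference" property explicit: the structures cannot be mutated after definition.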


5. Deterministic Compilation

5.1 Inputs

  • Natural language
  • Programming languages
  • Mathematics
  • Scientific data
  • Arbitrary binary files

5.2 Method

  1. Input → raw bytes
  2. Bytes → large integer
  3. Integer → base-111 digits
  4. Digits → glyph indices

No tokenization. No vocabulary learning. No probability.
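The four steps above can be sketched as a round-trip in a few lines. This is a minimal sketch under stated assumptions: the `0x01` sentinel byte (added so leading zero bytes survive the integer round-trip) and the function names are illustrative choices, not part of the disclosed canon.

```python
# Hedged sketch of the deterministic compilation pipeline:
# input bytes -> large integer -> base-111 digits -> glyph indices.
# GLYPH_BASE matches the 111-glyph canon size; the 0x01 sentinel is an
# illustrative choice to preserve leading zero bytes, not a canon rule.

GLYPH_BASE = 111

def compile_to_glyphs(data: bytes) -> list[int]:
    """Deterministically map raw bytes to base-111 glyph indices."""
    n = int.from_bytes(b"\x01" + data, "big")  # sentinel keeps leading zeros
    digits = []
    while n:
        n, d = divmod(n, GLYPH_BASE)
        digits.append(d)
    return digits[::-1]  # most-significant digit first

def decompile_from_glyphs(glyphs: list[int]) -> bytes:
    """Invert the mapping losslessly: glyph indices back to original bytes."""
    n = 0
    for d in glyphs:
        n = n * GLYPH_BASE + d
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:]  # strip the 0x01 sentinel

# Deterministic, lossless, reversible -- no tokenizer, no probability.
assert decompile_from_glyphs(compile_to_glyphs(b"hello")) == b"hello"
```

Because the mapping is pure integer arithmetic, any input (text, code, or arbitrary binary) compiles to the same glyph sequence every time, and the inverse recovers the bytes exactly.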


6. Visual Instruction Representation

Glyph sequences are rendered as:

  • Static collages (spatial execution)
  • Temporal videos (instruction evolution)

Structural Semantics

  • Ordering → execution flow
  • Repetition → identity lock
  • Variation → motion grammar
  • Symmetry → temporal loop
  • Absence → negative constraint
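A static collage can be sketched as a row-major grid placement, so that sequence order becomes left-to-right, top-to-bottom execution flow as described above. The square-grid policy and function name are illustrative assumptions, not a canon specification.

```python
# Minimal sketch of laying out a glyph sequence as a static collage:
# glyphs are placed row-major on a square-ish grid, so ordering in the
# sequence reads as spatial execution flow. The grid policy is an
# illustrative assumption.
import math

def collage_layout(glyphs: list[int]) -> list[tuple[int, int, int]]:
    """Return (row, col, glyph_index) placements on a square-ish grid."""
    side = math.ceil(math.sqrt(len(glyphs)))
    return [(i // side, i % side, g) for i, g in enumerate(glyphs)]
```

A renderer would then draw the glyph image for each index at its (row, col) cell; a temporal video is the same idea with one collage per frame.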

7. Vision Encoder (Optional)

A neural vision encoder may be attached.

  • ViT-style architecture
  • No language tokens
  • No prompts
  • Optional and replaceable
  • Canon semantics remain authoritative

The encoder learns execution-aware embeddings, not words.
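How a ViT-style encoder would consume a rendered collage can be sketched by the patch arithmetic: the image is cut into fixed-size non-overlapping patches, each becoming one visual token, with no language tokens anywhere. The 224-pixel image and 16-pixel patch sizes are common ViT defaults assumed for illustration, not values fixed by this disclosure.

```python
# Sketch of ViT-style patch tokenization of a rendered glyph collage.
# Image and patch sizes are illustrative assumptions (common ViT defaults).

def num_patches(image_size: int = 224, patch_size: int = 16) -> int:
    """Number of non-overlapping square patches the encoder sees."""
    assert image_size % patch_size == 0, "patch size must divide image size"
    return (image_size // patch_size) ** 2
```

Under these defaults a collage yields 196 visual tokens, each an execution-aware embedding position rather than a word.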


8. Glyphmatic Video–Language Pretraining (GVL-P)

GVL-P is fully self-supervised.

Training Signals

  • Partial glyph collage → next glyph
  • Masked glyphs → reconstruction
  • Glyph video → structural continuation

No labels. No captions. No language supervision.
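The masked-glyph signal can be sketched as a deterministic masking rule: hide some glyph positions and keep the originals as reconstruction targets. The mask token id (-1) and the stride-based masking policy are illustrative assumptions; a real training run would likely mask randomly per batch.

```python
# Sketch of the masked-glyph self-supervision signal: some positions are
# hidden and the encoder must reconstruct them from visual structure alone.
# The mask id (-1) and stride-based rule are illustrative choices.

MASK = -1

def mask_glyphs(glyphs: list[int], stride: int = 4) -> tuple[list[int], dict[int, int]]:
    """Mask every `stride`-th glyph; return masked sequence and targets."""
    masked, targets = [], {}
    for i, g in enumerate(glyphs):
        if i % stride == stride - 1:
            masked.append(MASK)
            targets[i] = g  # reconstruction target at this position
        else:
            masked.append(g)
    return masked, targets
```

Next-glyph prediction and video continuation follow the same pattern: the target always comes from the glyph sequence itself, never from labels or captions.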


9. Adapter Training

Optional LoRA / adapters may be attached:

  • Vision encoder layers only
  • Canon remains immutable
  • Enables specialization without drift
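The "specialization without drift" property above can be sketched with the standard low-rank-adaptation identity: the frozen base weight W is never modified, and a trainable rank-r update B·A is added on top. The pure-Python matrices and names (W, A, B) are illustrative; a real adapter would sit on encoder layers via a deep-learning framework.

```python
# Conceptual sketch of low-rank adaptation (LoRA) on an encoder weight
# while the base stays frozen: effective weight = W + B @ A, where only
# A and B train. Pure-Python matrices keep the sketch dependency-free;
# all names are illustrative.

def matmul(X, Y):
    """Plain matrix product of nested-list matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def madd(X, Y):
    """Elementwise sum of two equally-shaped matrices."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def lora_effective_weight(W, A, B):
    """Effective weight = frozen W + trainable low-rank update B @ A."""
    return madd(W, matmul(B, A))
```

Because W (and, analogously, the canon) is never written to, removing the adapter recovers the original behavior exactly.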

10. Execution Semantics

VIL does not automatically execute decoded binaries.

Execution refers to:

  • Structural interpretation
  • Constraint propagation
  • Visual instruction semantics

All real execution is user-controlled.


11. Intended Use

  • Vision-native reasoning
  • Multimodal research
  • Program visualization
  • Deterministic symbolic AI
  • Self-supervised video learning

12. Limitations

  • No automatic system calls
  • No language generation
  • Requires vision encoder for embeddings
  • Visual resolution bounds expressivity

13. Dataset Binding

This model is canonically bound to:

Nine1Eight/vil-canonical-glyph-system

All glyph definitions, mappings, and validation originate there.


14. Authorship & Priority

Conceived, authored, and reduced to practice by:

Matthew Blake Ward (Nine1Eight)
Tulsa, Oklahoma, USA


15. Citation

Ward, Matthew Blake. "Visual Instruction Learning (VIL) and Glyphmatic Video–Language Pretraining (GVL-P)." Public Disclosure, Tulsa, Oklahoma, USA.

16. Legal Notice

This document constitutes a public technical disclosure. All derivative systems trace to this disclosure and its author.


Status

  • Canon finalized
  • Dataset published
  • Encoder trained
  • Space deployed
  • Claims disclosed