---
title: Documentation
emoji: 🏆
colorFrom: gray
colorTo: indigo
sdk: gradio
sdk_version: 6.5.1
app_file: app.py
pinned: false
---
[https://fastht.ml](https://github.com/Web4application/Aura_Full_Project.xlsl)

`fastht.ml` accesses the local and remote repositories, gathers data, and populates the paperwebht.ml site with docs.

[Example of the kind of diagrams to include](https://arxiv.org/pdf/2405.01535)
# Paperweb documentation

Reference this documentation at [huggingface.co/QUBUHUB/Paperweb](https://github.com/Web4application/Brain).

[Configure paperwebht.ml](https://huggingface.co/docs/hub/spaces-config-reference)
## What are the paperweb platform and auraxlsl?

They are defined according to [https://fastht.ml](https://fastht.ml) and [Aura_Full_Project.xlsl](https://github.com/Web4application/Aura_Full_Project.xlsl).
We are helping the community work together towards the goal of advancing Machine Learning 🔥.
The Hugging Face Hub is a platform with over 2M models, 500k datasets, and 1M demos in which people can easily collaborate in their ML workflows. The Hub works as a central place where anyone can share, explore, discover, and experiment with open-source Machine Learning.
No single company, including the Tech Titans, will be able to “solve AI” by themselves – the only way we'll achieve this is by sharing knowledge and resources in a community-centric approach. We are building the largest open-source collection of models, datasets, and demos on the Hugging Face Hub to democratize and advance ML for everyone 🚀.
We encourage you to read the [Code of Conduct](https://huggingface.co/code-of-conduct) and the [Content Guidelines](https://huggingface.co/content-guidelines) to familiarize yourself with the values that we expect our community members to uphold 🤗.
## What can you find on the platform?
The paperweb hosts web-based papers and scientific workbooks in repositories, which are version-controlled buckets that can contain all your files. 💾 See [this paper](https://arxiv.org/pdf/2405.01535) for an example.
On it, you'll be able to upload and discover...
- Models: _hosting the latest state-of-the-art models for LLM, text, vision, and audio tasks_
- Datasets: _featuring a wide variety of data for different domains and modalities_
- Spaces: _interactive apps for demonstrating ML models directly in your browser_
The Hub offers **versioning, commit history, diffs, branches, and over a dozen library integrations**!
All repositories build on [Xet](./xet/index), a new technology to efficiently store Large Files inside Git, intelligently splitting files into unique chunks and accelerating uploads and downloads.
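To make the chunking idea concrete, here is an illustrative sketch of content-defined chunking, the general technique behind chunk-based storage backends like Xet. This is NOT Xet's real algorithm: the rolling hash, mask, and minimum chunk size below are invented for clarity.

```python
# Toy content-defined chunker: cut data wherever a simple rolling hash of
# the bytes matches a bitmask. Boundaries depend only on the content, so
# identical regions of two files tend to produce identical chunks.

def chunk_bytes(data: bytes, mask: int = 0x3F, min_size: int = 8) -> list:
    """Split data into chunks at content-defined boundaries."""
    chunks, start, acc = [], 0, 0
    for i, b in enumerate(data):
        acc = (acc * 31 + b) & 0xFFFFFFFF  # toy rolling hash
        if i - start + 1 >= min_size and (acc & mask) == mask:
            chunks.append(data[start:i + 1])  # boundary found: close chunk
            start, acc = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing chunk
    return chunks

payload = b"hello world, this is a test payload " * 20
chunks = chunk_bytes(payload)
print(len(chunks), "chunks; repeated content yields repeated, dedupable chunks")
```

Because boundaries are derived from content rather than fixed offsets, editing one region of a file only shifts nearby chunk boundaries, so unchanged chunks can be reused to accelerate uploads and downloads.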
You can learn more about the features that all repositories share in the [**Repositories documentation**](./repositories), and about analyzing hosted data in the [paperweb dataset-viewer guide](https://huggingface.co/docs/dataset-viewer/analyze_data).
## Models
You can discover and use dozens of thousands of open-source ML models shared by the community. To promote responsible model usage and development, model repos are equipped with [Model Cards](./model-cards) to inform users of each model's limitations and biases. Additional [metadata](./model-cards#model-card-metadata) about info such as their tasks, languages, and evaluation results can be included, with training metrics charts even added if the repository contains [TensorBoard traces](./tensorboard). It's also easy to add an [**inference widget**](./models-widgets) to your model, allowing anyone to play with the model directly in the browser! For programmatic access, a serverless API is provided by [**Inference Providers**](./models-inference).
To upload models to the Hub, or download models and integrate them into your work, explore the [**Models documentation**](./models). You can also choose from [**over a dozen libraries**](./models-libraries) such as 🤗 Transformers, Asteroid, and ESPnet that support the Hub.
## Datasets
The Hub is home to over 500k public datasets in more than 8k languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. The Hub makes it simple to find, download, and upload datasets. Datasets are accompanied by extensive documentation in the form of [**Dataset Cards**](./datasets-cards) and [**Data Studio**](./datasets-viewer) to let you explore the data directly in your browser. While many datasets are public, [**organizations**](./organizations) and individuals can create private datasets to comply with licensing or privacy issues. You can learn more about [**Datasets here on the Hugging Face Hub documentation**](./datasets-overview).
The [🤗 `datasets`](https://huggingface.co/docs/datasets/index) library allows you to programmatically interact with the datasets, so you can easily use datasets from the Hub in your projects. With a single line of code, you can access the datasets; even if they are so large they don't fit in your computer, you can use streaming to efficiently access the data.
## Spaces
[Spaces](https://huggingface.co/spaces) is a simple way to host ML demo apps on the Hub. They allow you to build your ML portfolio, showcase your projects at conferences or to stakeholders, and work collaboratively with other people in the ML ecosystem.
We currently support two awesome Python SDKs (**[Gradio](https://gradio.app/)** and **[Streamlit](./spaces-sdks-streamlit)**) that let you build cool apps in a matter of minutes. Users can also create static Spaces, which are simple HTML/CSS/JavaScript pages, or deploy any Docker-based application.
If you need GPU power for your demos, try [**ZeroGPU**](./spaces-zerogpu): it dynamically provides NVIDIA H200 GPUs, in real-time, only when needed.
After you've explored a few Spaces (take a look at our [Space of the Week!](https://huggingface.co/spaces)), dive into the [**Spaces documentation**](./spaces-overview) to learn all about how you can create your own Space. You'll also be able to upgrade your Space to run on a GPU or other accelerated hardware. ⚡️
## Organizations
Companies, universities and non-profits are an essential part of the Hugging Face community! The Hub offers [**Organizations**](./organizations), which can be used to group accounts and manage datasets, models, and Spaces. Educators can also create collaborative organizations for students using [Hugging Face for Classrooms](https://huggingface.co/classrooms). An organization's repositories will be featured on the organization’s page and every member of the organization will have the ability to contribute to the repository. In addition to conveniently grouping all of an organization's work, the Hub allows admins to set roles to [**control access to repositories**](./organizations-security), and manage their organization's [payment method and billing info](https://huggingface.co/pricing). Machine Learning is more fun when collaborating! 🔥
[Explore existing organizations](https://huggingface.co/organizations), create a new organization [here](https://huggingface.co/organizations/new), and then visit the [**Organizations documentation**](./organizations) to learn more.
## Security
The Hugging Face Hub supports security and access control features to give you the peace of mind that your code, models, and data are safe. Visit the [**Security**](./security) section in these docs to learn about:
- User Access Tokens
- Access Control for Organizations
- Signing commits with GPG
- Malware scanning
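As a small illustration of the first item, here is a minimal sketch of how a User Access Token is typically attached to a Hub API request. The token string is a placeholder, and the actual request is commented out so the snippet stays offline.

```python
# Sketch: authenticating a Hub API call with a User Access Token.
# "hf_xxx" is a placeholder, not a real credential.

def build_auth_headers(token: str) -> dict:
    """Hub APIs accept the token as a bearer credential."""
    return {"Authorization": f"Bearer {token}"}

headers = build_auth_headers("hf_xxx")
# import requests
# requests.get("https://huggingface.co/api/whoami-v2", headers=headers)
print(headers["Authorization"])
```

Keep tokens out of source control; read them from an environment variable or a secrets store instead of hard-coding them as done here for illustration.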
## Analyze a dataset on the Hub
In the Quickstart, you were introduced to various endpoints for interacting with datasets on the Hub. One of the most useful is the `/parquet` endpoint, which lets you retrieve a dataset stored on the Hub and analyze it. This is a great way to explore a dataset and get a better understanding of its contents.
To demonstrate, this guide will show you an end-to-end example of how to retrieve a dataset from the Hub and do some basic data analysis with the Pandas library.
### Get a dataset
The Hub is home to more than 200,000 datasets across a wide variety of tasks, sizes, and languages. For this example, you’ll use the codeparrot/codecomplex dataset, but feel free to explore and find another dataset that interests you! The dataset contains Java code from programming competitions, and the time complexity of the code is labeled by a group of algorithm experts.
Let’s say you’re interested in the average length of the submitted code as it relates to the time complexity. Here’s how you can get started.
Use the `/parquet` endpoint to convert the dataset to a Parquet file and return the URL to it:
```python
import requests

API_URL = "https://datasets-server.huggingface.co/parquet?dataset=codeparrot/codecomplex"

def query():
    response = requests.get(API_URL)
    return response.json()

data = query()
```

The endpoint responds with JSON like:

```json
{
  "parquet_files": [
    {
      "dataset": "codeparrot/codecomplex",
      "config": "default",
      "split": "train",
      "url": "https://huggingface.co/datasets/codeparrot/codecomplex/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet",
      "filename": "0000.parquet",
      "size": 4115908
    }
  ],
  "pending": [],
  "failed": [],
  "partial": false
}
```
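To continue programmatically, the Parquet URL can be pulled straight out of that JSON. A minimal sketch using the example payload above, inlined so it runs offline with no request made:

```python
# Extract Parquet file URLs from a /parquet response payload.
# This dict is the example response shown above.
response = {
    "parquet_files": [
        {
            "dataset": "codeparrot/codecomplex",
            "config": "default",
            "split": "train",
            "url": "https://huggingface.co/datasets/codeparrot/codecomplex/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet",
            "filename": "0000.parquet",
            "size": 4115908,
        }
    ],
    "pending": [], "failed": [], "partial": False,
}

# Keep the train split's file URLs (larger datasets may have several shards).
train_urls = [f["url"] for f in response["parquet_files"] if f["split"] == "train"]
print(train_urls[0])
```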
### Read the dataset with Pandas
With the URL, you can read the Parquet file into a Pandas DataFrame:
```python
import pandas as pd

url = "https://huggingface.co/datasets/codeparrot/codecomplex/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet"
df = pd.read_parquet(url)
df.head(5)
```
| src | complexity | problem | from |
|---|---|---|---|
| import java.io.*;\nimport java.math.BigInteger… | quadratic | 1179_B. Tolik and His Uncle | CODEFORCES |
| import java.util.Scanner;\n \npublic class pil… | linear | 1197_B. Pillars | CODEFORCES |
| import java.io.BufferedReader;\nimport java.io… | linear | 1059_C. Sequence Transformation | CODEFORCES |
| import java.util.*;\n\nimport java.io.*;\npubl… | linear | 1011_A. Stages | CODEFORCES |
| import java.io.OutputStream;\nimport java.io.I… | linear | 1190_C. Tokitsukaze and Duel | CODEFORCES |
### Calculate mean code length by time complexity
Pandas is a powerful library for data analysis; group the dataset by time complexity, apply a function to calculate the average length of the code snippet, and plot the results:
```python
df.groupby('complexity')['src'].apply(lambda x: x.str.len().mean()).sort_values(ascending=False).plot.barh(color="orange")
```
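The same groupby pattern can be sanity-checked offline on a tiny made-up frame (these rows are invented, not real codecomplex data); the plot call is dropped so nothing needs a display:

```python
import pandas as pd

# Invented miniature of the codecomplex columns used above.
df = pd.DataFrame({
    "src": ["a" * 120, "b" * 80, "c" * 40, "d" * 60],
    "complexity": ["quadratic", "quadratic", "linear", "linear"],
})

# Average source length per complexity class, longest first.
mean_len = (
    df.groupby("complexity")["src"]
    .apply(lambda x: x.str.len().mean())
    .sort_values(ascending=False)
)
print(mean_len)  # quadratic 100.0, linear 50.0
```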