Tobias Jefferson (hwarang241) · 0 followers · 33 following
AI & ML interests
None yet
Recent Activity
liked a Space about 2 months ago: DontPlanToEnd/UGI-Leaderboard
reacted to hesamation's post with ❤️ 5 months ago:
longer context doesn't guarantee better responses; it can even hurt your LLM/agent. a 1M context window doesn't automatically make models smarter, because it's not about the size, it's how you use it. here are 4 types of context failure and why each one happens:

1. context poisoning: if a hallucination finds its way into your context, the agent will rely on that false information to make its future moves. for example, if the agent hallucinates the "task description", all of its planning to solve the task will also be corrupt.

2. context distraction: when the context becomes too bloated, the model focuses too much on it rather than coming up with novel ideas or following what it learned during training. as the Gemini 2.5 Pro technical report points out, as context grows well beyond 100K tokens, "the agent showed a tendency toward favoring repeating actions from its vast history rather than synthesizing novel plans".

3. context confusion: everyone lost it when MCPs became popular; it seemed like AGI had been achieved. I suspected something was wrong, and there was: it's not just about providing tools. bloating the context with tool metadata derails the model from selecting the right one. even if you can fit all your tool metadata in the context, as the number of tools grows, the model gets confused about which one to pick.

4. context clash: if you exchange messages with a model step by step, providing information as you go, chances are you get worse performance than if you provide all the useful information at once. once the model's context fills with wrong information, it's harder to guide it toward the right info. agents pull information from tools, documents, user queries, etc., and some of that information may contradict the rest, which is bad news for agentic applications.

check this article by Drew Breunig for a deeper read: https://www.dbreunig.com/2025/06/26/how-to-fix-your-context.html?ref=blog.langchain.com
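the tool-confusion point can be sketched in code: instead of handing the model metadata for every registered tool, an agent can pre-filter to the few tools most relevant to the current query and put only those in the context. a minimal sketch, where the `select_tools` helper and its keyword-overlap scoring are hypothetical illustrations, not part of any real agent framework:

```python
def score(query: str, description: str) -> int:
    """Toy relevance: count query words that also appear in a tool description."""
    return len(set(query.lower().split()) & set(description.lower().split()))


def select_tools(query: str, tools: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k tools whose descriptions best match the query."""
    ranked = sorted(tools, key=lambda name: score(query, tools[name]), reverse=True)
    return ranked[:k]


# Illustrative tool registry: name -> short description (all names invented).
TOOLS = {
    "web_search": "search the web for pages matching a query",
    "calculator": "evaluate arithmetic expressions and math",
    "file_reader": "read a local file and return its text",
    "calendar": "list upcoming events and meetings",
}

# Only the single best-matching tool's metadata would go into the context.
picked = select_tools("evaluate this math expression", TOOLS, k=1)
# → ["calculator"]
```

a real system would use embedding similarity rather than word overlap, but the design point is the same: the model sees a short, relevant tool list instead of a bloated one.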
liked a Space 8 months ago: RiverZ/ICEdit
Organizations
None yet
hwarang241's activity
liked a Space about 2 months ago
UGI Leaderboard 📢 · Running · 1.35k · Uncensored General Intelligence Leaderboard
liked a Space 8 months ago
ICEdit 🖼 · Running on Zero · Featured · 661 · Universal Image Editing is worth a single LoRA
liked 3 models 12 months ago
Dracones/Evathene-v1.3_exl2_2.5bpw · Text Generation · Updated Dec 3, 2024 · 6 · 1
Dracones/Evathene-v1.3_exl2_4.0bpw · Text Generation · Updated Dec 3, 2024 · 6 · 1
sophosympatheia/Evathene-v1.3 · Text Generation · 73B · Updated Dec 10, 2024 · 154 · 38
liked 2 models about 1 year ago
FPHam/StoryCrafter · Updated Dec 18, 2024 · 5
sphiratrioth666/SillyTavern-Presets-Sphiratrioth · Updated Aug 22 · 238
liked 2 models over 1 year ago
TheDrummer/Cydonia-22B-v1 · 22B · Updated Sep 18, 2024 · 35 · 62
DavidAU/Gemma-The-Writer-9B-GGUF · Text Generation · 9B · Updated 9 days ago · 1.12k · 45
liked a dataset over 1 year ago
davanstrien/would-you-read-it · Viewer · Updated Apr 17 · 268 · 253 · 4
liked a model almost 2 years ago
LoneStriker/gemma-7b-GGUF · 9B · Updated Mar 1, 2024 · 153 · 1