chore(doc): add prompt caching to README
README.md
```diff
@@ -31,6 +31,7 @@ https://thinktank.ottomator.ai
 - ✅ Ability to revert code to earlier version (@wonderwhy-er)
 - ✅ Cohere Integration (@hasanraiyan)
 - ✅ Dynamic model max token length (@hasanraiyan)
+- ✅ Prompt caching (@SujalXplores)
 - ⬜ **HIGH PRIORITY** - Prevent Bolt from rewriting files as often (file locking and diffs)
 - ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
 - ⬜ **HIGH PRIORITY** - Load local projects into the app
@@ -42,7 +43,6 @@ https://thinktank.ottomator.ai
 - ⬜ Perplexity Integration
 - ⬜ Vertex AI Integration
 - ⬜ Deploy directly to Vercel/Netlify/other similar platforms
-- ⬜ Prompt caching
 - ⬜ Better prompt enhancing
 - ⬜ Have LLM plan the project in a MD file for better results/transparency
 - ⬜ VSCode Integration with git-like confirmations
```
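For context on the feature this commit checks off: prompt caching lets a provider reuse the processed form of a large, stable prompt prefix (such as a system prompt) across requests instead of re-processing it each time. A minimal sketch of what such a request body can look like, assuming an Anthropic-style `cache_control` marker — the model name and prompt text here are hypothetical placeholders, and no API call is made:

```python
import json


def build_cached_request(system_prompt: str, user_message: str) -> dict:
    """Build a request body that marks the stable system prompt as
    cacheable, so repeated requests can reuse the server-side cache."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # placeholder model name
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # Mark this block as a cacheable prefix.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }


payload = build_cached_request("You are a helpful coding assistant.", "Hi")
print(json.dumps(payload["system"][0]["cache_control"]))
```

Because only the prefix is cached, the per-request user message stays outside the cached block; the savings grow with the size of the shared system prompt.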