Field schema for each comment record (every record below lists one value per field, in this order):

| field | type |
| --- | --- |
| name | string |
| body | string |
| score | int64 |
| controversiality | int64 |
| created | timestamp[us] |
| author | string |
| collapsed | bool |
| edited | timestamp[us] |
| gilded | int64 |
| id | string |
| locked | bool |
| permalink | string |
| stickied | bool |
| ups | int64 |
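The same schema declared programmatically; a minimal sketch assuming PyArrow, with names and types copied verbatim from the table above:

```python
import pyarrow as pa

# One row per Reddit comment; field order matches the table above.
comment_schema = pa.schema([
    ("name", pa.string()),             # comment fullname, e.g. "t1_jf24wgc"
    ("body", pa.string()),             # comment text
    ("score", pa.int64()),
    ("controversiality", pa.int64()),
    ("created", pa.timestamp("us")),
    ("author", pa.string()),
    ("collapsed", pa.bool_()),
    ("edited", pa.timestamp("us")),    # null when the comment was never edited
    ("gilded", pa.int64()),
    ("id", pa.string()),
    ("locked", pa.bool_()),
    ("permalink", pa.string()),
    ("stickied", pa.bool_()),
    ("ups", pa.int64()),
])
```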
t1_jf24wgc
There is, yes. Over on the LLM side of things we call it a...LoRA. The easiest way to get started is to install Oobabooga and LLaMA; it has a simple web UI to get started. One hold-up right now is that you can't train 4 bit models, so you'll need to run the full 8 bit models. If you've got a single 24GB VRAM GPU, that...
5
0
2023-04-05T15:02:56
the_quark
false
null
0
jf24wgc
false
/r/LocalLLaMA/comments/12bzlhu/question_instruction_finetuning_details_for/jf24wgc/
false
5
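For context, "training a LoRA" for an LLM looks roughly like the sketch below, using Hugging Face `transformers` + `peft`; the checkpoint path, rank, and target modules are illustrative assumptions, not the commenter's exact setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative base checkpoint path; swap in whatever LLaMA weights you have locally.
base = "path/to/llama-7b-hf"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    load_in_8bit=True,   # matches the comment: train on 8-bit, since 4-bit training isn't supported yet
    device_map="auto",
)

# LoRA adapter config; these ranks/targets are common defaults, not gospel.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # LoRA trains only a small fraction of the weights
```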
t1_jf24sdz
13B 8bit or 30B 4bit. Examples were generated on 30B 4bit, as noted in the last slide's caption.
1
0
2023-04-05T15:02:11
friedrichvonschiller
false
2023-04-06T03:46:29
0
jf24sdz
false
/r/LocalLLaMA/comments/12b9se3/writing_llama_prompts_for_long_custom_stories/jf24sdz/
false
1
t1_jf24063
It's also much smaller in terms of file size and dependencies.
5
0
2023-04-05T14:56:58
HadesThrowaway
false
null
0
jf24063
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf24063/
false
5
t1_jf23xw6
Oh that's great to hear! It seems to memorize the WikiDoc/Anki cards really well and is then able to serve you information from it. But we still need to do thorough checks of the model to evaluate how factual it is. For example, I did encounter some wrong information when I asked it about specific antibiotic treatmen...
2
0
2023-04-05T14:56:34
SessionComplete2334
false
null
0
jf23xw6
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf23xw6/
false
2
t1_jf23m4w
It still does, but I have made it a lot more tolerable since I added two things: 1. Context fast-forwarding when continuing a prompt, so continuing a previous prompt only needs to process the new tokens. 2. Integrating OpenBLAS for faster prompt ingestion. So it's not perfect, but it's now usable.
5
0
2023-04-05T14:54:23
HadesThrowaway
false
null
0
jf23m4w
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf23m4w/
false
5
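Point 1 ("context fast-forwarding") amounts to reusing the cached prefix: only tokens past the longest shared prefix get re-processed. A toy sketch of the idea, not KoboldCpp's actual code:

```python
def tokens_to_process(cached: list[int], new_prompt: list[int]) -> list[int]:
    """Return only the suffix of new_prompt not already covered by the cache."""
    n = 0
    for a, b in zip(cached, new_prompt):
        if a != b:
            break
        n += 1
    # Tokens [0:n] are already in the KV cache; only ingest the rest.
    return new_prompt[n:]

# Continuing a prompt re-processes only the newly appended tokens:
assert tokens_to_process([1, 2, 3, 4], [1, 2, 3, 4, 5, 6]) == [5, 6]
```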
t1_jf222gk
Interesting, I think I might find the input string in the source code and do a string replace on all punctuation characters to get rid of them.
1
0
2023-04-05T14:44:08
ThePseudoMcCoy
false
null
0
jf222gk
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf222gk/
false
1
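One way to do the string replace described above in Python, stripping all ASCII punctuation in a single pass:

```python
import string

def strip_punctuation(text: str) -> str:
    # Delete every ASCII punctuation character, including periods and apostrophes.
    return text.translate(str.maketrans("", "", string.punctuation))

print(strip_punctuation("It's a test, isn't it?"))  # -> "Its a test isnt it"
```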
t1_jf1zwzk
Ahh, I understand. Thank you so much for the information, it helps contextualize a lot for me!! I've suffered through some rather rare medical conditions for my age and gender, and was testing the 13B LLaMA model with your 13B medLora. I got very good results; it was able to deduce the correct diagnosis and the cor...
3
0
2023-04-05T14:29:40
Inevitable-Start-653
false
null
0
jf1zwzk
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1zwzk/
false
3
t1_jf1zkt8
Might be because of the apostrophes etc. - as the prompt is given to the main program via a command-line parameter, I've noticed many kinds of symbols seem to mess it up.
3
0
2023-04-05T14:27:24
althalusian
false
null
0
jf1zkt8
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1zkt8/
false
3
t1_jf1z59d
What would be the main difference from oobabooga? I guess this one doesn't utilize the GPU; anything else?
2
0
2023-04-05T14:24:27
akubit
false
null
0
jf1z59d
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf1z59d/
false
2
t1_jf1z4a6
Yes, this is the right approach. LoRA weights are basically adapters for the model, hence the weights are only a couple of MB. You will still need the original LLaMA model. We are trying different things at the moment. We saw that models trained with LoRA seem to not go below a loss of 1.5 on our dataset. It seems they ...
3
0
2023-04-05T14:24:15
SessionComplete2334
false
null
0
jf1z4a6
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1z4a6/
false
3
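Loading an adapter the way this comment describes usually looks like the following with `peft`; both paths are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder paths: the original LLaMA checkpoint plus the small LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-13b-hf")
model = PeftModel.from_pretrained(base, "path/to/medalpaca-lora-13b")
# The adapter is only a few MB; all the heavy lifting stays in the base weights.
```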
t1_jf1yxfc
I actually did some tests and asked it what the first 10 words were in the text and I noticed it was getting hung up on periods and apostrophes. Once I removed all punctuation marks and made it just one long sentence it actually worked. Is this actually somewhat normal behavior and I'm just tagging the text wrong o...
3
0
2023-04-05T14:22:58
ThePseudoMcCoy
false
null
0
jf1yxfc
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1yxfc/
false
3
t1_jf1xt8e
Me also! :D I have the training data, but need the small hardware that can push. I see the KoboldCpp, that seem pretty interesting. I am going to try this too, if that works, maybe with the smaller model for the now. Then later we can upgrade when NVIDIA gives us the better hobby SBC.
2
0
2023-04-05T14:15:13
SlavaSobov
false
null
0
jf1xt8e
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf1xt8e/
false
2
t1_jf1whl4
First thing I see is that all the slashes are backwards, so this is a windows implementation? Is there a Linux version?
1
0
2023-04-05T14:06:01
venividiavicii
false
null
0
jf1whl4
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jf1whl4/
false
1
t1_jf1wckn
[deleted]
1
0
2023-04-05T14:05:02
[deleted]
true
null
0
jf1wckn
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/jf1wckn/
false
1
t1_jf1vcf0
I've had really good results with the 4-bit in oobabooga; here are my settings: https://www.reddit.com/r/LocalLLaMA/comments/12cimwv/comment/jf1n7iz/
3
0
2023-04-05T13:57:47
i_wayyy_over_think
false
null
0
jf1vcf0
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1vcf0/
false
3
t1_jf1uu0u
I see, thanks!!
1
0
2023-04-05T13:54:06
reddiling
false
null
0
jf1uu0u
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1uu0u/
false
1
t1_jf1u3eo
Yeah, the chat modes just parse the response, and when they see ### Human (or whatever the stop sequence is for any particular model) they cut off the chat there. But I find it kind of interesting, because the fake human response often asks stuff that's interesting that I'd not have thought to ask. ...
3
0
2023-04-05T13:48:49
i_wayyy_over_think
false
2023-04-05T13:55:15
0
jf1u3eo
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf1u3eo/
false
3
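The parsing this comment describes is just stop-sequence truncation; a minimal sketch (the stop string is the comment's, the function name is mine):

```python
def truncate_at_stop(text: str, stop: str = "### Human") -> str:
    """Cut generation off at the first occurrence of the stop sequence."""
    idx = text.find(stop)
    return text if idx == -1 else text[:idx]

out = "Here's your code. ### Human: what about error handling?"
print(truncate_at_stop(out))  # -> "Here's your code. "
```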
t1_jf1tvi4
Nice!
2
0
2023-04-05T13:47:15
PM_ME_ENFP_MEMES
false
null
0
jf1tvi4
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf1tvi4/
false
2
t1_jf1tu3p
Haven't worked with this interface before. The only issue I have is that after it answers the first question, it asks itself another question related to the first one. Basically: ### Human(me): Write code connecting to the Twilio SMS API. ### Assistant: Here's your code and how it works. But then it writes i...
2
0
2023-04-05T13:46:58
synn89
false
null
0
jf1tu3p
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf1tu3p/
false
2
t1_jf1terk
All the numbers in your comment added up to 69. Congrats! 9 + 4 + 16 + 4 + 9 + 27 = 69
9
0
2023-04-05T13:43:46
LuckyNumber-Bot
false
null
0
jf1terk
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1terk/
false
9
t1_jf1tdt1
I use 9GB for 4-bit, so fp16 will probably be somewhere around 4 × 9 = 36GB
2
0
2023-04-05T13:43:34
Haydern2019
false
null
0
jf1tdt1
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1tdt1/
false
2
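Rule of thumb behind this estimate: memory for weights scales with bits per parameter, so fp16 (16-bit) needs about 4× what a 4-bit quantization needs; the commenter's 9 GB figure includes runtime overhead, so scaling it by bits is only a rough upper bound. A tiny weights-only estimator:

```python
def est_vram_gb(n_params_billion: float, bits: int) -> float:
    """Weights-only memory estimate in GB: bits/8 bytes per parameter."""
    return n_params_billion * bits / 8

print(est_vram_gb(13, 4))   # ~6.5 GB of weights at 4-bit (runtime adds overhead)
print(est_vram_gb(13, 16))  # ~26 GB of weights at fp16
```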
t1_jf1stoi
Thank you so much for the information <3 I've downloaded your LoRAs; I used oobabooga to load the 13B LLaMA model in 8-bit mode and then loaded in your 13B LoRA. Things seem to be working. Is that the intent of the LoRA models, to load them with the LLaMA model? And right now you are training a whole LLaMA model...
1
0
2023-04-05T13:39:26
Inevitable-Start-653
false
null
0
jf1stoi
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1stoi/
false
1
t1_jf1sq3s
I see. Thank you very much.
2
0
2023-04-05T13:38:42
Art10001
false
null
0
jf1sq3s
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jf1sq3s/
false
2
t1_jf1s6av
Of course I can. I'm looking into the possibility of running my own AI and fine-tuning it for my purposes.
2
0
2023-04-05T13:34:37
makakiel
false
null
0
jf1s6av
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf1s6av/
false
2
t1_jf1s5e2
and in the style of an italian mobster: https://i.imgur.com/EzpvaoV.png
7
0
2023-04-05T13:34:25
MoneyPowerNexis
false
null
0
jf1s5e2
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf1s5e2/
false
7
t1_jf1s2f6
Are you a robot or smth? Because your message has nothing to do with mine and it is supposed to be a reply?
-3
0
2023-04-05T13:33:47
SourceCodeplz
false
null
0
jf1s2f6
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1s2f6/
false
-3
t1_jf1rk6g
[deleted]
1
0
2023-04-05T13:29:56
[deleted]
true
null
0
jf1rk6g
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf1rk6g/
false
1
t1_jf1rezs
Yes. We are experimenting a lot with the training, and once we have narrowed down the best approach (LoRA with/wo 8bit, not LoRA) we will scale up to 30b. Maybe even 65b, if we can manage. But the hardware we have access to might not be enough to train a 65b model. Also the data is not cleaned up fully, so training a 6...
3
0
2023-04-05T13:28:49
SessionComplete2334
false
null
0
jf1rezs
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1rezs/
false
3
t1_jf1q7lq
Oh wow that's great thank you!
1
0
2023-04-05T13:19:36
ThePseudoMcCoy
false
2023-04-05T14:13:40
0
jf1q7lq
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1q7lq/
false
1
t1_jf1q20z
Awesome results!
3
0
2023-04-05T13:18:22
PM_ME_ENFP_MEMES
false
null
0
jf1q20z
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf1q20z/
false
3
t1_jf1onxj
Does this mean you are going to upload the 13B and 30B models in addition to the loras you uploaded :3
1
0
2023-04-05T13:07:32
Inevitable-Start-653
false
null
0
jf1onxj
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1onxj/
false
1
t1_jf1okk9
[deleted]
1
0
2023-04-05T13:06:47
[deleted]
true
null
0
jf1okk9
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1okk9/
false
1
t1_jf1nl9v
llama.cpp needs to be [updated](https://github.com/ggerganov/llama.cpp/issues/771#issuecomment-1497351398) to use the correct prompt format.
3
0
2023-04-05T12:58:49
Nextil
false
null
0
jf1nl9v
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1nl9v/
false
3
t1_jf1njbz
Same for me with all 4-bit models running through the oobabooga text UI: vicuna/gpt4-x-alpaca etc. all have bad output. The models running on Hugging Face are usually much better, which I find strange.
3
0
2023-04-05T12:58:22
Necessary_Ad_9800
false
null
0
jf1njbz
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1njbz/
false
3
t1_jf1n7iz
Your settings might be off; I got this first shot. Used the oobabooga webui, downloaded anon8231489123_vicuna-13b-GPTQ-4bit-128g, and launched with `python server.py --wbits 4 --groupsize 128 --model_type llama`. Sampling: temperature=0.7, top_k=40, top_p=0.1, repetition_penalty=1.1764705882352942. Used the default interface (...
21
0
2023-04-05T12:55:41
i_wayyy_over_think
false
2023-04-05T13:37:09
0
jf1n7iz
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf1n7iz/
false
21
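The launch flags and sampling values above, collected in one place; the dict keys follow common text-generation-webui parameter names, which is an assumption on my part:

```python
# Launch command quoted in the comment above (oobabooga text-generation-webui):
#   python server.py --wbits 4 --groupsize 128 --model_type llama

# Sampling settings from the same comment, as a dict you can pass around.
generation_params = {
    "temperature": 0.7,
    "top_k": 40,
    "top_p": 0.1,
    "repetition_penalty": 1.1764705882352942,  # == 1 / 0.85
}
```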
t1_jf1mykx
How much VRAM is needed for Vicuna fp16?
2
0
2023-04-05T12:53:40
reddiling
false
null
0
jf1mykx
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1mykx/
false
2
t1_jf1lgxa
Thanks
1
0
2023-04-05T12:41:13
Clear-Jelly2873
false
null
0
jf1lgxa
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1lgxa/
false
1
t1_jf1ld81
-r "### Human:" ;-)
3
0
2023-04-05T12:40:18
reddiling
false
null
0
jf1ld81
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1ld81/
false
3
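Spelled out: llama.cpp's reverse-prompt flag stops generation and returns control when the model emits the given string. A sketch driving the CLI from Python; the binary name, model path, and prompt are placeholders:

```python
import subprocess

# -r/--reverse-prompt halts generation when the model prints the string,
# so Vicuna can't keep role-playing the "### Human:" turns by itself.
subprocess.run([
    "./main",                                  # llama.cpp binary (placeholder)
    "-m", "models/ggml-vicuna-13b-q4_0.bin",   # placeholder model path
    "-i",                                      # interactive mode
    "-r", "### Human:",
    "-p", "### Human: Hello\n### Assistant:",
])
```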
t1_jf1jbt0
For better or worse this is exactly how AI takes over highly skilled, highly paid human jobs that require years of education. I know of a Med-Tech corporation that has been working with AI to examine x-rays, but the big hitch is that the radiologists won't share their diagnostic data. I wonder why /s. Why pay a sin...
1
0
2023-04-05T12:22:27
c4r_guy
false
null
0
jf1jbt0
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1jbt0/
false
1
t1_jf1igaq
Oh that makes sense, thank you!
2
0
2023-04-05T12:14:31
ThePseudoMcCoy
false
null
0
jf1igaq
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1igaq/
false
2
t1_jf1h8z9
> reverse prompt

How do I insert a reverse prompt?
2
0
2023-04-05T12:03:14
Clear-Jelly2873
false
null
0
jf1h8z9
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1h8z9/
false
2
t1_jf1h1bp
I do not agree and think we need open models, as this is the only way we can somehow control their behavior. There are numerous other applications of these models besides chat bots. They can help you go through data, extract structured information from large documents, aggregate test values, etc. Given the immense a...
11
0
2023-04-05T12:01:11
SessionComplete2334
false
null
0
jf1h1bp
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1h1bp/
false
11
t1_jf1gv40
Because you are running a 4-bit quantized version, whereas Vicuna's web demo is most likely using the full version. Quantization is super useful, but it still comes at a cost. Furthermore, llama.cpp has yet to support GPTQ quantization; it's using a more traditional method, which degrades quality even further.
15
0
2023-04-05T11:59:32
AffordableQC
false
null
0
jf1gv40
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1gv40/
false
15
t1_jf1gpr1
Some reasons for this: 1) Probably a different seed; if you don't give a seed parameter and value on the command line, it uses the epoch timestamp as the seed (if I remember correctly), so even running the exact same command locally again gives a different result because the seed has changed. 2) At least on some models, cha...
2
0
2023-04-05T11:58:07
althalusian
false
2023-04-05T12:02:32
0
jf1gpr1
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1gpr1/
false
2
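Point 1 above in miniature: when no seed is supplied, derive it from the epoch clock, which is why two "identical" runs differ. A toy sketch of that default, not llama.cpp's actual source:

```python
import time

def resolve_seed(seed: int | None) -> int:
    """Mimic the described default: no explicit seed means 'use the current time'."""
    return int(time.time()) if seed is None else seed

# Two runs started a second apart get different seeds, hence different samples;
# pass an explicit seed to make runs reproducible.
```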
t1_jf1gjvx
I have this same problem as well using the oobabooga text-generation-webui 4-bit quantized version. In particular, it hallucinates much more frequently and often gets off-topic. Can quantization be the culprit here? Has anyone ever tried the fp16 version of Vicuna?
4
0
2023-04-05T11:56:35
Haydern2019
false
null
0
jf1gjvx
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1gjvx/
false
4
t1_jf1ge7w
You can already do this, less accurately, on current websites. Have a sore tooth, go to symptom checker, it tells you it may be from 100 causes. A good AI with medical knowledge can tell you if you're having heart burn or a heart attack. That technology should be easily accessible.
3
0
2023-04-05T11:55:05
synn89
false
null
0
jf1ge7w
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1ge7w/
false
3
t1_jf1gb4s
Hey, that's a very cool project (again!). Having only 8 GB VRAM, I wanted to look into the cpp-family of LLaMA/Alpaca tools, but was put off by their limitation of [generation delay scaling with prompt length](https://github.com/ggerganov/llama.cpp/discussions/229). That discussion hasn't been updated in a week. Does ...
4
0
2023-04-05T11:54:16
WolframRavenwolf
false
null
0
jf1gb4s
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf1gb4s/
false
4
t1_jf1elad
Yes, you need to use "### Human:" as a reverse prompt. You should consider opening an issue on the llama.cpp repository if you observe it to be significantly worse.
3
0
2023-04-05T11:37:04
reddiling
false
null
0
jf1elad
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf1elad/
false
3
t1_jf1eiav
No problem, good thinking! If there is maybe a 2-bit LLaMa/Alpaca, we can squeeze it down to 2GB, but I do not know if there can be added speed right now. Maybe if the Nano 2GB can have the SSD for a swap file. If you are having the 4GB Nano, then your performance should be higher, I am thinking.
2
0
2023-04-05T11:36:14
SlavaSobov
false
null
0
jf1eiav
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf1eiav/
false
2
t1_jf1e7lx
According to Bing, "65 billion parameters would require 32.5 GB of RAM", so it seems reasonable that the AGX Orin 64GB can run the 65B LLaMa with room to spare. I would think, from the NVIDIA keynote the other day, that new and better hardware will come soon. They want to push the Generative AI...
1
0
2023-04-05T11:33:08
SlavaSobov
false
null
0
jf1e7lx
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf1e7lx/
false
1
t1_jf1duit
[deleted]
3
0
2023-04-05T11:29:18
[deleted]
true
null
0
jf1duit
false
/r/LocalLLaMA/comments/12cbpqg/short_prompts_are_fun/jf1duit/
false
3
t1_jf1d7ht
I just copied the test code from the Hugging Face page. I did not tweak the settings yet. :D
2
0
2023-04-05T11:22:31
SlavaSobov
false
null
0
jf1d7ht
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf1d7ht/
false
2
t1_jf1d6r5
Any model that uses LLaMA is restricted by the modified GPLv3 license under which it was released by Meta. It specifically states:

> **You will not**, and will not permit, assist or cause any third party to:
>
> a. **use, modify, copy, reproduce**, create derivative works of, or distribute the Software Products (...
1
0
2023-04-05T11:22:17
Nezarah
false
null
0
jf1d6r5
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1d6r5/
false
1
t1_jf1cpzk
oh is that shared ram on the orin?
1
0
2023-04-05T11:17:17
[deleted]
false
null
0
jf1cpzk
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf1cpzk/
false
1
t1_jf1c9wv
If there's one domain that should be left to professionals, this is it! There is no greater anxiety-producing technology than this: typing some symptoms and getting 10 possible diagnoses that make you shit your pants. Your medical professional(s) has/have your chart and history and can immensely influence ...
-2
0
2023-04-05T11:12:21
SourceCodeplz
false
null
0
jf1c9wv
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1c9wv/
false
-2
t1_jf1bxv0
So you are running vanilla 13b in 8bit WSL?
1
0
2023-04-05T11:08:36
Necessary_Ad_9800
false
null
0
jf1bxv0
false
/r/LocalLLaMA/comments/12b9se3/writing_llama_prompts_for_long_custom_stories/jf1bxv0/
false
1
t1_jf1bb0v
> Llama

Why would it be a bad thing?
1
0
2023-04-05T11:01:19
matija2209
false
null
0
jf1bb0v
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1bb0v/
false
1
t1_jf1ajr2
Yup, i.e. a single RTX A6000 Ada is 10k.
1
0
2023-04-05T10:52:26
makakiel
false
null
0
jf1ajr2
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf1ajr2/
false
1
t1_jf1a6so
I believe the official weights are not on Hugging Face anymore and the demo is down, as the model hallucinated too much. But other people replicated the training and uploaded the models. Data and code are still available on the Stanford GitHub. MedAlpaca largely builds on this.
0
0
2023-04-05T10:48:13
SessionComplete2334
false
null
0
jf1a6so
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1a6so/
false
0
t1_jf197nf
I’m still uploading the data and will add everything to huggingface. By the end of the week everything should be available. GPT 3.5 turbo is much better, which is unsurprising given it’s also much larger than the models we tested. I doubt we will beat it. However, you can host medAlpaca on your own hardware and finetu...
3
0
2023-04-05T10:36:14
SessionComplete2334
false
2023-04-05T10:51:05
0
jf197nf
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf197nf/
false
3
t1_jf1850b
Beat me to it! I have had this idea for the last month (I was in the process of moving for an Allied Health position). I planned to feed it case studies and examples of intake assessments, Psychosocial assessments, Mental State Examinations and Court Reports to make it an assistive tool for Social Workers and others in...
2
0
2023-04-05T10:22:22
Nezarah
false
2023-04-05T11:59:19
0
jf1850b
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf1850b/
false
2
t1_jf17qg3
Why would someone use 4 bits, or these? Do you need the three 9GB files?
0
0
2023-04-05T10:17:04
Luckylars
false
null
0
jf17qg3
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf17qg3/
false
0
t1_jf15vyz
Nah the model is really unfiltered, I never had a refusal from that model ever
2
0
2023-04-05T09:51:48
Wonderful_Ad_5134
false
null
0
jf15vyz
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jf15vyz/
false
2
t1_jf15vfn
Why is the top_p set so low?
1
0
2023-04-05T09:51:36
ambient_temp_xeno
false
null
0
jf15vfn
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf15vfn/
false
1
t1_jf14jw0
There aren't any. I use a chat-like system, so all the prompts are delivered via the chat. I still have the logs, so if you want me to post more of what was going on before/after what I already showed, let me know.
1
0
2023-04-05T09:32:45
redfoxkiller
false
null
0
jf14jw0
false
/r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jf14jw0/
false
1
t1_jf14am8
[deleted]
1
0
2023-04-05T09:28:57
[deleted]
true
null
0
jf14am8
false
/r/LocalLLaMA/comments/12awmmz/the_things_a_ai_will_ask_another_ai/jf14am8/
false
1
t1_jf1415f
[deleted]
1
0
2023-04-05T09:25:06
[deleted]
true
null
0
jf1415f
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jf1415f/
false
1
t1_jf13h1m
Me: Can father and mother get married? Vicuna-13B: No, father and mother cannot get married. In most societies, marriage is a legally recognized union between two people of the opposite sex, and parents are typically not allowed to marry their biological children. Incestuous relationships, including marriage, are cons...
2
0
2023-04-05T09:16:54
hasengames
false
null
0
jf13h1m
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf13h1m/
false
2
t1_jf13cxc
omg it's $2000
3
0
2023-04-05T09:15:10
[deleted]
false
null
0
jf13cxc
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf13cxc/
false
3
t1_jf10gk6
I hope OP feels silly now :P Also, more models is somehow a bad thing?
0
0
2023-04-05T08:31:47
lacethespace
false
null
0
jf10gk6
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jf10gk6/
false
0
t1_jf0zm5m
> The idea of randomly generating a penis intrigued me, so I decided to give it a go.
3
0
2023-04-05T08:19:12
NoGrapefruit6853
false
null
0
jf0zm5m
false
/r/LocalLLaMA/comments/12cbpqg/short_prompts_are_fun/jf0zm5m/
false
3
t1_jf0vh6n
Considering everything that happened, for them to come only now with such a shitty response is close to an insult to our intelligence. First time I went to AMD with their top-tier product, and I regret it every single day for their shitty customer care. Feels like I'm more of a beta tester than an actual paying cu...
2
0
2023-04-05T07:18:48
Notfuckingcannon
false
null
0
jf0vh6n
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jf0vh6n/
false
2
t1_jf0v9y4
You can do it on Vicuna though?
1
0
2023-04-05T07:15:49
Wonderful_Ad_5134
false
null
0
jf0v9y4
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf0v9y4/
false
1
t1_jf0ut5t
What redaction of Alpaca? The Alpaca paper and GitHub are up. The Alpaca .pt and .bin files are still up on Hugging Face.
3
0
2023-04-05T07:09:11
ThatLastPut
false
null
0
jf0ut5t
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf0ut5t/
false
3
t1_jf0uqq8
There is the non-woke, unrestricted model if you're interested (only the 4bit ggml quantized version though) [https://huggingface.co/ShreyasBrill/Vicuna-13B](https://huggingface.co/ShreyasBrill/Vicuna-13B)
2
0
2023-04-05T07:08:14
Wonderful_Ad_5134
false
2023-04-05T07:12:38
0
jf0uqq8
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf0uqq8/
false
2
t1_jf0u37h
Unfortunately, they really wanted the model to be woke and useless [https://github.com/lm-sys/FastChat/issues/115#issuecomment-1496654824](https://github.com/lm-sys/FastChat/issues/115#issuecomment-1496654824)
3
0
2023-04-05T06:59:19
Wonderful_Ad_5134
false
null
0
jf0u37h
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf0u37h/
false
3
t1_jf0sojh
[deleted]
3
0
2023-04-05T06:40:17
[deleted]
true
null
0
jf0sojh
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jf0sojh/
false
3
t1_jf0s81z
Nice, thanks for sharing your results. Are you on Windows or Linux? I have Ubuntu 22 server. Edit: nvm, after a server restart it works well :) Output generated in 10.69 seconds (18.61 tokens/s, 199 tokens, context 21)
1
0
2023-04-05T06:34:09
Nondzu
false
2023-04-05T06:46:42
0
jf0s81z
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf0s81z/
false
1
t1_jf0py34
And do you think a **Jetson AGX Orin™ 64GB** can run LLaMA 65B + Alpaca? Even a cluster of 3?
6
0
2023-04-05T06:04:05
makakiel
false
null
0
jf0py34
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf0py34/
false
6
t1_jf0p5sh
I ran gpt4all 7B on an X220 Thinkpad today using the llama.cpp repo, and it responds at a rate of about three words per second (after initializing). So, not bad, and much better than you are saying. But, previously, using the original gpt4all repo, the response was terribly slow (like 10-20 seconds per word). Maybe t...
2
0
2023-04-05T05:54:06
eat-more-bookses
false
null
0
jf0p5sh
false
/r/LocalLLaMA/comments/12bxgmy/help_d/jf0p5sh/
false
2
t1_jf0m6q8
Is it normal for it to take 220 seconds to respond?
2
0
2023-04-05T05:18:31
Yahakshan
false
null
0
jf0m6q8
false
/r/LocalLLaMA/comments/12bxgmy/help_d/jf0m6q8/
false
2
t1_jf0lqsq
I've gotten that error with various attempts at using llama.cpp/alpaca.cpp but honestly don't know what it means. If you Google it, there's not much out there (really just [this thread](https://github.com/antimatter15/alpaca.cpp/issues/188) and a couple like it that don't provide much of use). I assume it's some sort o...
1
0
2023-04-05T05:13:24
AI-Pon3
false
null
0
jf0lqsq
false
/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jf0lqsq/
false
1
t1_jf0l2tg
`$ git fetch` `$ git pull`
7
0
2023-04-05T05:05:51
Not3lliott
false
null
0
jf0l2tg
false
/r/LocalLLaMA/comments/12bqai6/script_to_automatically_update_llamacpp_to_newest/jf0l2tg/
false
7
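The same update, wrapped into a small script; note that `git pull` already implies a fetch, and llama.cpp needs a rebuild for the new code to take effect. The path and the `make` build step are assumptions about a typical checkout:

```python
import subprocess

REPO = "path/to/llama.cpp"  # assumed checkout location

# Pull the latest commits (fetch + merge in one step), then rebuild.
subprocess.run(["git", "-C", REPO, "pull"], check=True)
subprocess.run(["make", "-C", REPO], check=True)
```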
t1_jf0k2d7
I was going to do this exact thing! I have a Jetson Nano that I used for a previous AI project that I'm going to repurpose for this. I'll check out what you've done, thanks!
6
0
2023-04-05T04:54:36
PacManFan123
false
null
0
jf0k2d7
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf0k2d7/
false
6
t1_jf0i4sz
I think I am interested in this...? I have been involved with the image AI models for quite a while now. I want to know if there is an equivalent of LoRA training for stuff like Llama. I want to create a support bot that can be trained on specific items pertaining to my organization. I would assume this is fine tunin...
2
0
2023-04-05T04:34:12
Alpha-Leader
false
null
0
jf0i4sz
false
/r/LocalLLaMA/comments/12bzlhu/question_instruction_finetuning_details_for/jf0i4sz/
false
2
t1_jf0bcqe
This is very interesting! Using something like this to retrieve data from medical reference materials will definitely be how those resources are accessed in the future. This is a great first step. I was curious to see how these results compared to having gpt-3.5-turbo, untrained, take the USMLE practice exams but the ...
4
0
2023-04-05T03:30:26
vff
false
null
0
jf0bcqe
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf0bcqe/
false
4
t1_jf0axhh
Someone already made something like this here https://github.com/project-baize/baize
2
0
2023-04-05T03:26:48
regstuff
false
null
0
jf0axhh
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jf0axhh/
false
2
t1_jf07lzf
I just used all defaults: https://preview.redd.it/ygxfssgmv0sa1.png?width=598&format=png&auto=webp&v=enabled&s=27d49ad6ae3d2127192997f1b58de23428f4112b
1
0
2023-04-05T02:59:16
ed2mXeno
false
null
0
jf07lzf
false
/r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jf07lzf/
false
1
t1_jf06qm2
With the recent redaction of Alpaca and the general sensitive topic of healthcare, I assumed a somewhat gated access would be better. On the other hand this probably does not prevent anything. I haven't considered that people might feel uncomfortable with accessing the models this way, will probably turn it off then.
5
0
2023-04-05T02:52:09
SessionComplete2334
false
null
0
jf06qm2
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf06qm2/
false
5
t1_jf056ng
I’ve never seen a HuggingFace repository where you have to share your email address with the repository authors before accessing the models before. 🤷‍♂️ I might suggest turning off that option, since it’s not clear what you’re planning on doing with the email addresses or how you’ll be protecting them.
18
0
2023-04-05T02:39:38
vff
false
null
0
jf056ng
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf056ng/
false
18
t1_jf040bb
I'm using it in Oobabooga and using TavernAI characters. No issues with a couple hours of convos so far, but haven't tried erotic or anything like that yet.
1
0
2023-04-05T02:30:21
synn89
false
null
0
jf040bb
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf040bb/
false
1
t1_jf03mpo
I've found it to be quite terrible compared to gpt-4-x-alpaca-13b. It often gives 1-word answers and refuses to talk about a lot of subjects. Honestly, it's borderline unusable unless all you want is to ask it how to reverse a string in Python.
3
0
2023-04-05T02:27:23
TeamPupNSudz
false
null
0
jf03mpo
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf03mpo/
false
3
t1_jf03bob
I mean, it tells you what to do. You need "--model_type LLaMA", although I'm pretty sure this was automated by a recent commit. edit: just pull the recent version of the codebase. https://github.com/oobabooga/text-generation-webui/commit/ee4547cd34c0bb3ebbb862db159734135276baaf
3
0
2023-04-05T02:25:02
TeamPupNSudz
false
2023-04-05T02:28:03
0
jf03bob
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf03bob/
false
3
t1_jf02y3x
[deleted]
1
0
2023-04-05T02:22:10
[deleted]
true
null
0
jf02y3x
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf02y3x/
false
1
t1_jf01xig
[deleted]
1
0
2023-04-05T02:14:23
[deleted]
true
null
0
jf01xig
false
/r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jf01xig/
false
1
t1_jezytx4
😂
2
0
2023-04-05T01:51:06
Art10001
false
null
0
jezytx4
false
/r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jezytx4/
false
2
t1_jezysw3
We will trail not too far behind, but at least we will be present.
3
0
2023-04-05T01:50:54
Art10001
false
2023-04-05T02:03:04
0
jezysw3
false
/r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jezysw3/
false
3
t1_jezyqzy
It was bound to happen.
3
0
2023-04-05T01:50:31
Art10001
false
null
0
jezyqzy
false
/r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jezyqzy/
false
3
t1_jezypzh
Same. I'm wondering if we're using bad parameters or if the Demo isn't actually running 13B? Or maybe the 4bit quantized versions are wrecked somehow?
3
0
2023-04-05T01:50:18
lelrofl
false
null
0
jezypzh
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jezypzh/
false
3
t1_jezxsx6
https://youtu.be/iQ3Lhy-eD1s
2
0
2023-04-05T01:43:25
YuhFRthoYORKonhisass
false
null
0
jezxsx6
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jezxsx6/
false
2