Upload data.jsonl
data.jsonl CHANGED (+1 -1)

@@ -268,7 +268,7 @@

268:
{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "f7bb578a1409b1f96aff534ff5ed2bd10502296f", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1527", "iss_label": "", "title": "Add copy to clipboard in plaintext for image details", "body": "Add copy to clipboard in plaintext for image details\r\n\r\nA button we can click to copy to clipboard all of the image details shown in the log output file. If not on the log page then on the app itself.\r\n\r\nThe quick copying of these settings enables us to share our work methods with others in the community more smoothly, thereby assisting them in a more efficient and effective way.\r\n\r\n\r\n\r\n\r\n\r\nWhen I copy the text manually from the log file it looks like a garbled mess. See example below.\r\n\r\n```\r\nPrompt | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. full body\r\n-- | --\r\nNegative Prompt | \u00a0\r\nFooocus V2 Expansion | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. 
full body, intricate, elegant, highly detailed, sharp focus, illuminated, sunny, magical, scenic, artistic, true colors, deep aesthetic, very inspirational, cute, cozy, inspired, original, fine detail, professional, winning, enhanced, polished\r\nStyles | ['SAI Photographic', 'Fooocus V2', 'Artstyle Hyperrealism', 'MRE Artistic Vision']\r\nPerformance | Quality\r\nResolution | (1024, 1024)\r\nSharpness | 3\r\nGuidance Scale | 1.7\r\nADM Guidance | (1.5, 0.8, 0.3)\r\nBase Model | dreamshaperXL_turboDpmppSDEKarras.safetensors\r\nRefiner Model | None\r\nRefiner Switch | 0.5\r\nSampler | dpmpp_sde\r\nScheduler | karras\r\nSeed | 5044578018584347060\r\nVersion | v2.1.853\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/lllyasviel/Fooocus/commit/f7bb578a1409b1f96aff534ff5ed2bd10502296f", "file_loc": {"base_commit": "f7bb578a1409b1f96aff534ff5ed2bd10502296f", "files": [{"path": "fooocus_version.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "modules/async_worker.py", "status": "modified", "Loc": {"(None, 'handler', 116)": {"mod": [400, 401, 780, 782]}}}, {"path": "modules/private_logger.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "(None, 'log', 21)": {"add": [38, 61], "mod": [42, 60]}}}, {"path": "update_log.md", "status": "modified", "Loc": {"(None, None, 1)": {"add": [0]}}}, {"path": "webui.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 14, 111, 512], "mod": [103]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "max_topk": 6, "file_topk": 5, "loctype": {"code": ["modules/private_logger.py", "webui.py", "modules/async_worker.py", "fooocus_version.py"], "doc": ["update_log.md"], "test": [], "config": [], "asset": []}}
269:
{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/2561", "iss_label": "enhancement", "title": "[Feature Request]: Prompt embedded LoRAs", "body": "### Is there an existing issue for this?\r\n\r\n- [x] I have searched the existing issues and checked the recent builds/commits\r\n\r\n### What would your feature do?\r\n\r\nSimilar to how A1111 handles LoRAs by default, I believe there should be an option to embed LoRAs in the prompt by using the following structure:\r\n```csharp\r\n<LORA_NAME:WEIGHT>\r\n```\r\n\r\nThe current workflow works well, but has a few limitations, namely being able to use wildcards and LoRAs together for more dynamic prompts. Additionally, this feature already exists for embeddings, so I reckon adding it for LoRAs should be trivial.\r\n\r\n### Proposed workflow\r\n\r\n1. Enter LoRAs in the prompt using the `<LORA_NAME:WEIGHT>` structure\r\n2. 
Generate images, and LoRAs are loaded for each iteration\r\n\r\n### Additional information\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/2323", "commit_html_url": null, "file_loc": {"base_commit": "3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f", "files": [{"path": "modules/async_worker.py", "status": "modified", "Loc": {"(None, 'handler', 134)": {"add": [435], "mod": [155, 453, 454, 655, 865, 908, 912]}, "(None, 'worker', 19)": {"mod": [47, 50, 51, 72]}, "(None, 'callback', 806)": {"mod": [810]}}}, {"path": "modules/config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23], "mod": [11]}}}, {"path": "modules/sdxl_styles.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5, 7, 12]}, "(None, 'apply_wildcards', 68)": {"mod": [68, 69, 70, 71, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 91, 92, 95]}, "(None, 'get_words', 95)": {"mod": [104]}}}, {"path": "modules/util.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 16], "mod": [1]}, "(None, 'get_files_from_folder', 166)": {"mod": [166, 167, 168, 170, 172, 173, 174, 175, 176, 177, 178, 179, 180, 182]}, "('PromptStyle', None, 358)": {"mod": [358]}, "(None, 'get_enabled_loras', 396)": {"mod": [397]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "max_topk": 11, "file_topk": 4, "loctype": {"code": ["modules/async_worker.py", "modules/sdxl_styles.py", "modules/config.py", "modules/util.py"], "doc": [], "test": [], "config": [], "asset": []}}
270:
{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "8e62a72a63b30a3067d1a1bc3f8d226824bd9283", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1671", "iss_label": "bug (AMD)", "title": "Cannot use image prompts", "body": "I am trying to use 2x images as an image prompt but when I press generate this is what I'm getting (I can generate just fine without image prompts):\r\n\r\nFull console log:\r\n\r\n<code>[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 3\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 1.5\r\n[Parameters] Seed = 953753918774495193\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 6 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\nmodel_type EPS\r\nUNet ADM Dimension 2816\r\nUsing split attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing split attention in VAE\r\nextra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}\r\nBase model loaded: H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['None', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors].\r\nRequested to load SDXLClipModel\r\nLoading 1 new model\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus] Encoding negative #1 ...\r\n[Fooocus] Image processing ...\r\nTraceback (most recent call last):\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 806, in worker\r\n handler(task)\r\n File 
\"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 647, in handler\r\n task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\ip_adapter.py\", line 185, in preprocess\r\n cond = image_proj_model.model(cond).to(device=ip_adapter.load_device, dtype=ip_adapter.dtype)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 117, in forward\r\n latents = attn(x, latents) + latents\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 55, in forward\r\n latents = self.norm2(latents)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File 
\"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\normalization.py\", line 190, in forward\r\n return F.layer_norm(\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\functional.py\", line 2515, in layer_norm\r\n return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, privateuseone:0 and cpu!\r\nTotal time: 37.40 seconds\r\n</code>\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1678", "commit_html_url": null, "file_loc": {"base_commit": "8e62a72a63b30a3067d1a1bc3f8d226824bd9283", "files": [{"path": "extras/ip_adapter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10], "mod": [5]}, "(None, 'load_ip_adapter', 90)": {"mod": [119, 120, 121, 122, 123, 124, 125, 126]}}}, {"path": "fooocus_version.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "max_topk": 3, "file_topk": 2, "loctype": {"code": ["fooocus_version.py", "extras/ip_adapter.py"], "doc": [], "test": [], "config": [], "asset": []}}
271 (-):
{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d57afc88a48359bc1642c2ae30a091f0426eff43", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1063", "iss_label": "", "title": "Faceswap crashes ", "body": "**Describe the problem**\r\nThe program crashes when trying to use an image as prompt and selecting the faceswap advanced option\r\n\r\n**Full Console Log**\r\nRequirement already satisfied: pygit2==1.12.2 in /usr/local/lib/python3.10/dist-packages (1.12.2)\r\nRequirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)\r\nRequirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.21)\r\n/content\r\nfatal: destination path 'Fooocus' already exists and is not an empty directory.\r\n/content/Fooocus\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py', '--preset', 'realistic', '--share']\r\nLoaded preset: /content/Fooocus/presets/realistic.json\r\nPython 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]\r\nFooocus version: 2.1.824\r\nRunning on local URL: http://127.0.0.1:7865/\r\nRunning on public URL: https://fb6371be5d9ced0c1d.gradio.live/\r\n\r\nThis share link expires in 72 hours. 
For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)\r\nTotal VRAM 15102 MB, total RAM 12983 MB\r\n2023-11-29 21:03:50.202601: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-11-29 21:03:50.202658: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-11-29 21:03:50.202708: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-11-29 21:03:52.244376: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nSet vram state to: NORMAL_VRAM\r\nDisabling smart memory management\r\nDevice: cuda:0 Tesla T4 : native\r\nVAE dtype: torch.float32\r\nUsing pytorch cross attention\r\nRefiner unloaded.\r\nmodel_type EPS\r\nadm 2816\r\nUsing pytorch attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing pytorch attention in VAE\r\nextra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}\r\nBase model loaded: /content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors].\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 788 keys at weight 
0.25.\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.\r\nFooocus V2 Expansion: Vocab with 642 words.\r\nFooocus Expansion engine loaded for cuda:0, use_fp16 = True.\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.30 seconds\r\nApp started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://fb6371be5d9ced0c1d.gradio.live/\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 604471590939558783\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 60 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Preparing Fooocus text #1 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full light, gorgeous, amazing, elegant, intricate, highly detailed, dynamic, rich deep vivid colors, beautiful, very inspirational, inspiring, thought, fancy, sharp focus, colorful, epic, professional, artistic, new, charismatic, cool, brilliant, awesome, attractive, shiny, fine detail, pretty, focused, creative\r\n[Fooocus] Preparing Fooocus text #2 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full pretty, attractive, fine detail, intricate, elegant, luxury, elite, dramatic light, highly detailed, cinematic, complex, sharp focus, illuminated, amazing, marvelous, thought, epic, fabulous, colorful, shiny, brilliant, symmetry, great, excellent composition, ambient, dynamic, vibrant colors, relaxed, beautiful\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus Model Management] Moving model(s) has taken 0.11 seconds\r\n[Fooocus] Encoding positive #2 ...\r\n[Fooocus] 
Encoding negative #1 ...\r\n[Fooocus] Encoding negative #2 ...\r\n[Parameters] Denoising Strength = 1.0\r\n[Parameters] Initial Latent shape: Image Space (1152, 896)\r\nPreparation time: 3.60 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.40 seconds\r\n100% 60/60 [00:55<00:00, 1.09it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 60.73 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.01 seconds\r\n100% 60/60 [00:56<00:00, 1.06it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 61.85 seconds\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.57 seconds\r\nTotal time: 131.21 seconds\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 7513856776859948774\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\nextra keys clip vision: ['vision_model.embeddings.position_ids']\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1710", "commit_html_url": null, "file_loc": {"base_commit": "d57afc88a48359bc1642c2ae30a091f0426eff43", "files": [{"path": "fooocus_colab.ipynb", "status": "modified", "Loc": {"(None, None, 15)": {"mod": [15]}}}, {"path": "readme.md", "status": "modified", "Loc": {"(None, None, 127)": {"add": [127]}, "(None, None, 118)": {"mod": [118]}, "(None, None, 124)": {"mod": [124]}}}, {"path": 
"ldm_patched/modules/args_parser.py", "Loc": {"(None, None, None)": [99]}, "base_commit": "cca0ca704a713ab153938e78de6787609c723cad"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "max_topk": 4, "file_topk": 2, "loctype": {"code": ["fooocus_colab.ipynb", "ldm_patched/modules/args_parser.py"], "doc": ["readme.md"], "test": [], "config": [], "asset": []}}
272:
{"organization": "odoo", "repo_name": "odoo", "base_commit": "72ec0050b442214c9be93907fc01a48832243c15", "is_iss": 0, "iss_html_url": "https://github.com/odoo/odoo/issues/7306", "iss_label": "", "title": "[v8.0] Bank statement : Customer Import invoice wizard do not auto-fill the right field", "body": "Step to reproduce:\n\ncreate a customer invoice\ncreate a new bank statement and import this invoice\nclick on 'Reconcile'\nProblem: No match proposition between the bank statement line and the invoice move line can be found since the communication field is '/'. (The invoice number is in the field 'Reference' instead)\n\nSo please the ref must go to communication\n\nThanks\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/odoo/odoo/commit/72ec0050b442214c9be93907fc01a48832243c15", "file_loc": {"base_commit": "72ec0050b442214c9be93907fc01a48832243c15", "files": [{"path": "addons/account/account_bank_statement.py", "status": "modified", "Loc": {"('account_bank_statement_line', 'get_reconciliation_proposition', 537)": {"mod": [575]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["addons/account/account_bank_statement.py"], "doc": [], "test": [], "config": [], "asset": []}}
273:
{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "197287fc303119bf71caf9b3f72280cab08da749", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1147", "iss_label": "", "title": "[Bug]: \u7ffb\u8bd1arxiv\u6587\u6863\u62a5\u9519\uff0c\u65e0\u8bba\u672c\u5730\u81ea\u5df1\u642d\u5efa\u8fd8\u662f\u5b98\u65b9\u5728\u7ebf\u5747\u62a5\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOneKeyInstall (\u4e00\u952e\u5b89\u88c5\u811a\u672c-windows)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n\u5b98\u65b9\u5728\u7ebf\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> Local Message] \u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \"./toolbox.py\", line 165, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 249, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 141, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \"./toolbox.py\", line 507, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"/usr/lib/python3.8/tarfile.py\", line 1608, in open\r\n> raise ReadError(\"file could not be opened successfully\")\r\n> tarfile.ReadError: file could not be opened successfully\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e socks5h://localhost:7890, \u4ee3\u7406\u6240\u5728\u5730\uff1aJapan\r\n\r\n\u672c\u5730\u642d\u5efa\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> [Local Message] 
\u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \".\\toolbox.py\", line 150, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 250, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 139, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \".\\toolbox.py\", line 461, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"D:\\academic-gpt\\installer_files\\env\\lib\\tarfile.py\", line 1811, in open\r\n> raise ReadError(f\"file could not be opened successfully:\\n{error_msgs_summary}\")\r\n> tarfile.ReadError: file could not be opened successfully:\r\n> - method gz: ReadError('invalid header')\r\n> - method bz2: ReadError('not a bzip2 file')\r\n> - method xz: ReadError('not an lzma file')\r\n> - method tar: ReadError('invalid header')\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e socks5h://127.0.0.1:12341, \u4ee3\u7406\u6240\u5728\u5730\uff1aHong Kong - Cloudflare, Inc.\r\n\r\n\u6240\u7ffb\u8bd1\u7684arxiv\u6587\u6863\u7684\u5730\u5740\u4e3a\uff1ahttps://arxiv.org/abs/2112.10551\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/197287fc303119bf71caf9b3f72280cab08da749", "file_loc": {"base_commit": 
"197287fc303119bf71caf9b3f72280cab08da749", "files": [{"path": "shared_utils/handle_upload.py", "status": "modified", "Loc": {"(None, 'extract_archive', 91)": {"mod": [107, 108, 109, 110, 111, 112, 113, 114, 116, 117]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["shared_utils/handle_upload.py"], "doc": [], "test": [], "config": [], "asset": []}}
274:
{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "65317e33af87640b68c84c9f6ee67188b76c6d7a", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/558", "iss_label": "", "title": "\u80fd\u5426\u5229\u7528EdgeGPT\uff0c\u652f\u6301\u8c03\u7528\u5fae\u8f6fBing\u63a5\u53e3", "body": "\u5927\u4f6c\u4eec\u6c42\u6c42\u4e86\uff0c\u770b\u770b\u8fd9\u4e2a\u9879\u76ee\u5427\uff0chttps://github.com/acheong08/EdgeGPT\r\n\u5982\u679c\u53ef\u4ee5\u65b9\u4fbf\u5730\u8c03\u7528Bing\u63a5\u53e3\uff0c\u6216\u8005\u672a\u6765\u7684\u767e\u5ea6\u3001\u963f\u91cc\u7b49\u7b2c\u4e09\u65b9\u63a5\u53e3\uff0c\u5bf9\u4e8e\u6ca1\u6709openAI-key\u4e5f\u6ca1\u6cd5\u672c\u5730\u90e8\u7f72GLM\u7684\u540c\u5b66\u662f\u798f\u97f3\u554a", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/65317e33af87640b68c84c9f6ee67188b76c6d7a", "file_loc": {"base_commit": "65317e33af87640b68c84c9f6ee67188b76c6d7a", "files": [{"path": "config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [65], "mod": [47, 48]}}}, {"path": "request_llm/bridge_all.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21, 119]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "max_topk": 2, "file_topk": 2, "loctype": {"code": ["request_llm/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}}
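Each row of data.jsonl is one standalone JSON object linking a GitHub issue to the files its fix touched (`file_loc.files[*].path`, categorized again in `loctype`). A minimal parsing sketch, assuming only the field names visible in the rows above; the `gold_files` helper and the inline sample (shaped after row 272) are illustrative, not part of the dataset's tooling:

```python
import json

# Inline sample shaped like the records above; real use would iterate
# over open("data.jsonl"), one json.loads per line.
sample = json.dumps({
    "organization": "odoo",
    "repo_name": "odoo",
    "iss_html_url": "https://github.com/odoo/odoo/issues/7306",
    "file_loc": {
        "base_commit": "72ec0050b442214c9be93907fc01a48832243c15",
        "files": [
            {"path": "addons/account/account_bank_statement.py",
             "status": "modified"}
        ],
    },
    "loctype": {"code": ["addons/account/account_bank_statement.py"],
                "doc": [], "test": [], "config": [], "asset": []},
})

def gold_files(record: dict) -> list[str]:
    # Paths of the files changed by the fix linked to this issue.
    return [f["path"] for f in record.get("file_loc", {}).get("files", [])]

record = json.loads(sample)
print(gold_files(record))  # ['addons/account/account_bank_statement.py']
```

The per-category view in `loctype` (code/doc/test/config/asset) partitions the same paths, so `gold_files` and the union of `loctype` values should agree for well-formed rows.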
|
|
|
|
| 268 |
{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "f7bb578a1409b1f96aff534ff5ed2bd10502296f", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1527", "iss_label": "", "title": "Add copy to clipboard in plaintext for image details", "body": "Add copy to clipboard in plaintext for image details\r\n\r\nA button we can click to copy to clipboard all of the image details shown in the log output file. If not on the log page then on the app itself.\r\n\r\nThe quick copying of these settings enables us to share our work methods with others in the community more smoothly, thereby assisting them in a more efficient and effective way.\r\n\r\n\r\n\r\n\r\n\r\nWhen I copy the text manually from the log file it looks like a garbled mess. See example below.\r\n\r\n```\r\nPrompt | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. full body\r\n-- | --\r\nNegative Prompt | \u00a0\r\nFooocus V2 Expansion | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. 
full body, intricate, elegant, highly detailed, sharp focus, illuminated, sunny, magical, scenic, artistic, true colors, deep aesthetic, very inspirational, cute, cozy, inspired, original, fine detail, professional, winning, enhanced, polished\r\nStyles | ['SAI Photographic', 'Fooocus V2', 'Artstyle Hyperrealism', 'MRE Artistic Vision']\r\nPerformance | Quality\r\nResolution | (1024, 1024)\r\nSharpness | 3\r\nGuidance Scale | 1.7\r\nADM Guidance | (1.5, 0.8, 0.3)\r\nBase Model | dreamshaperXL_turboDpmppSDEKarras.safetensors\r\nRefiner Model | None\r\nRefiner Switch | 0.5\r\nSampler | dpmpp_sde\r\nScheduler | karras\r\nSeed | 5044578018584347060\r\nVersion | v2.1.853\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/lllyasviel/Fooocus/commit/f7bb578a1409b1f96aff534ff5ed2bd10502296f", "file_loc": {"base_commit": "f7bb578a1409b1f96aff534ff5ed2bd10502296f", "files": [{"path": "fooocus_version.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "modules/async_worker.py", "status": "modified", "Loc": {"(None, 'handler', 116)": {"mod": [400, 401, 780, 782]}}}, {"path": "modules/private_logger.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "(None, 'log', 21)": {"add": [38, 61], "mod": [42, 60]}}}, {"path": "update_log.md", "status": "modified", "Loc": {"(None, None, 1)": {"add": [0]}}}, {"path": "webui.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 14, 111, 512], "mod": [103]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "max_topk": 6, "file_topk": 5, "loctype": {"code": ["modules/private_logger.py", "webui.py", "modules/async_worker.py", "fooocus_version.py"], "doc": ["update_log.md"], "test": [], "config": [], "asset": []}}
|
| 269 |
{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/2561", "iss_label": "enhancement", "title": "[Feature Request]: Prompt embedded LoRAs", "body": "### Is there an existing issue for this?\r\n\r\n- [x] I have searched the existing issues and checked the recent builds/commits\r\n\r\n### What would your feature do?\r\n\r\nSimilar to how A1111 handles LoRAs by default, I believe there should be an option to embed LoRAs in the prompt by using the following structure:\r\n```csharp\r\n<LORA_NAME:WEIGHT>\r\n```\r\n\r\nThe current workflow works well, but has a few limitations, namely being able to use wildcards and LoRAs together for more dynamic prompts. Additionally, this feature already exists for embeddings, so I reckon adding it for LoRAs should be trivial.\r\n\r\n### Proposed workflow\r\n\r\n1. Enter LoRAs in the prompt using the `<LORA_NAME:WEIGHT>` structure\r\n2. 
Generate images, and LoRAs are loaded for each iteration\r\n\r\n### Additional information\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/2323", "commit_html_url": null, "file_loc": {"base_commit": "3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f", "files": [{"path": "modules/async_worker.py", "status": "modified", "Loc": {"(None, 'handler', 134)": {"add": [435], "mod": [155, 453, 454, 655, 865, 908, 912]}, "(None, 'worker', 19)": {"mod": [47, 50, 51, 72]}, "(None, 'callback', 806)": {"mod": [810]}}}, {"path": "modules/config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23], "mod": [11]}}}, {"path": "modules/sdxl_styles.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5, 7, 12]}, "(None, 'apply_wildcards', 68)": {"mod": [68, 69, 70, 71, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 91, 92, 95]}, "(None, 'get_words', 95)": {"mod": [104]}}}, {"path": "modules/util.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 16], "mod": [1]}, "(None, 'get_files_from_folder', 166)": {"mod": [166, 167, 168, 170, 172, 173, 174, 175, 176, 177, 178, 179, 180, 182]}, "('PromptStyle', None, 358)": {"mod": [358]}, "(None, 'get_enabled_loras', 396)": {"mod": [397]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "max_topk": 11, "file_topk": 4, "loctype": {"code": ["modules/async_worker.py", "modules/sdxl_styles.py", "modules/config.py", "modules/util.py"], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "8e62a72a63b30a3067d1a1bc3f8d226824bd9283", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1671", "iss_label": "bug (AMD)", "title": "Cannot use image prompts", "body": "I am trying to use 2x images as an image prompt but when I press generate this is what I'm getting (I can generate just fine without image prompts):\r\n\r\nFull console log:\r\n\r\n<code>[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 3\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 1.5\r\n[Parameters] Seed = 953753918774495193\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 6 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\nmodel_type EPS\r\nUNet ADM Dimension 2816\r\nUsing split attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing split attention in VAE\r\nextra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}\r\nBase model loaded: H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['None', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors].\r\nRequested to load SDXLClipModel\r\nLoading 1 new model\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus] Encoding negative #1 ...\r\n[Fooocus] Image processing ...\r\nTraceback (most recent call last):\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 806, in worker\r\n handler(task)\r\n File 
\"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 647, in handler\r\n task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\ip_adapter.py\", line 185, in preprocess\r\n cond = image_proj_model.model(cond).to(device=ip_adapter.load_device, dtype=ip_adapter.dtype)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 117, in forward\r\n latents = attn(x, latents) + latents\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 55, in forward\r\n latents = self.norm2(latents)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File 
\"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\normalization.py\", line 190, in forward\r\n return F.layer_norm(\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\functional.py\", line 2515, in layer_norm\r\n return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, privateuseone:0 and cpu!\r\nTotal time: 37.40 seconds\r\n</code>\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1678", "commit_html_url": null, "file_loc": {"base_commit": "8e62a72a63b30a3067d1a1bc3f8d226824bd9283", "files": [{"path": "extras/ip_adapter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10], "mod": [5]}, "(None, 'load_ip_adapter', 90)": {"mod": [119, 120, 121, 122, 123, 124, 125, 126]}}}, {"path": "fooocus_version.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "max_topk": 3, "file_topk": 2, "loctype": {"code": ["fooocus_version.py", "extras/ip_adapter.py"], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d57afc88a48359bc1642c2ae30a091f0426eff43", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1063", "iss_label": "", "title": "Faceswap crashes ", "body": "**Describe the problem**\r\nThe program crashes when trying to use an image as prompt and selecting the faceswap advanced option\r\n\r\n**Full Console Log**\r\nRequirement already satisfied: pygit2==1.12.2 in /usr/local/lib/python3.10/dist-packages (1.12.2)\r\nRequirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)\r\nRequirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.21)\r\n/content\r\nfatal: destination path 'Fooocus' already exists and is not an empty directory.\r\n/content/Fooocus\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py', '--preset', 'realistic', '--share']\r\nLoaded preset: /content/Fooocus/presets/realistic.json\r\nPython 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]\r\nFooocus version: 2.1.824\r\nRunning on local URL: http://127.0.0.1:7865/\r\nRunning on public URL: https://fb6371be5d9ced0c1d.gradio.live/\r\n\r\nThis share link expires in 72 hours. 
For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)\r\nTotal VRAM 15102 MB, total RAM 12983 MB\r\n2023-11-29 21:03:50.202601: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-11-29 21:03:50.202658: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-11-29 21:03:50.202708: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-11-29 21:03:52.244376: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nSet vram state to: NORMAL_VRAM\r\nDisabling smart memory management\r\nDevice: cuda:0 Tesla T4 : native\r\nVAE dtype: torch.float32\r\nUsing pytorch cross attention\r\nRefiner unloaded.\r\nmodel_type EPS\r\nadm 2816\r\nUsing pytorch attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing pytorch attention in VAE\r\nextra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}\r\nBase model loaded: /content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors].\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 788 keys at weight 
0.25.\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.\r\nFooocus V2 Expansion: Vocab with 642 words.\r\nFooocus Expansion engine loaded for cuda:0, use_fp16 = True.\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.30 seconds\r\nApp started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://fb6371be5d9ced0c1d.gradio.live/\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 604471590939558783\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 60 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Preparing Fooocus text #1 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full light, gorgeous, amazing, elegant, intricate, highly detailed, dynamic, rich deep vivid colors, beautiful, very inspirational, inspiring, thought, fancy, sharp focus, colorful, epic, professional, artistic, new, charismatic, cool, brilliant, awesome, attractive, shiny, fine detail, pretty, focused, creative\r\n[Fooocus] Preparing Fooocus text #2 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full pretty, attractive, fine detail, intricate, elegant, luxury, elite, dramatic light, highly detailed, cinematic, complex, sharp focus, illuminated, amazing, marvelous, thought, epic, fabulous, colorful, shiny, brilliant, symmetry, great, excellent composition, ambient, dynamic, vibrant colors, relaxed, beautiful\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus Model Management] Moving model(s) has taken 0.11 seconds\r\n[Fooocus] Encoding positive #2 ...\r\n[Fooocus] 
Encoding negative #1 ...\r\n[Fooocus] Encoding negative #2 ...\r\n[Parameters] Denoising Strength = 1.0\r\n[Parameters] Initial Latent shape: Image Space (1152, 896)\r\nPreparation time: 3.60 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.40 seconds\r\n100% 60/60 [00:55<00:00, 1.09it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 60.73 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.01 seconds\r\n100% 60/60 [00:56<00:00, 1.06it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 61.85 seconds\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.57 seconds\r\nTotal time: 131.21 seconds\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 7513856776859948774\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\nextra keys clip vision: ['vision_model.embeddings.position_ids']\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1710", "commit_html_url": null, "file_loc": {"base_commit": "d57afc88a48359bc1642c2ae30a091f0426eff43", "files": [{"path": "fooocus_colab.ipynb", "status": "modified", "Loc": {"(None, None, 15)": {"mod": [15]}}}, {"path": "readme.md", "status": "modified", "Loc": {"(None, None, 127)": {"add": [127]}, "(None, None, 118)": {"mod": [118]}, "(None, None, 124)": {"mod": [124]}}}, {"path": 
"ldm_patched/modules/args_parser.py", "Loc": {"(None, None, None)": {"mod": [99]}}, "base_commit": "cca0ca704a713ab153938e78de6787609c723cad"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "max_topk": 4, "file_topk": 2, "loctype": {"code": ["fooocus_colab.ipynb", "ldm_patched/modules/args_parser.py"], "doc": ["readme.md"], "test": [], "config": [], "asset": []}}
{"organization": "odoo", "repo_name": "odoo", "base_commit": "72ec0050b442214c9be93907fc01a48832243c15", "is_iss": 0, "iss_html_url": "https://github.com/odoo/odoo/issues/7306", "iss_label": "", "title": "[v8.0] Bank statement : Customer Import invoice wizard do not auto-fill the right field", "body": "Step to reproduce:\n\ncreate a customer invoice\ncreate a new bank statement and import this invoice\nclick on 'Reconcile'\nProblem: No match proposition between the bank statement line and the invoice move line can be found since the communication field is '/'. (The invoice number is in the field 'Reference' instead)\n\nSo please the ref must go to communication\n\nThanks\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/odoo/odoo/commit/72ec0050b442214c9be93907fc01a48832243c15", "file_loc": {"base_commit": "72ec0050b442214c9be93907fc01a48832243c15", "files": [{"path": "addons/account/account_bank_statement.py", "status": "modified", "Loc": {"('account_bank_statement_line', 'get_reconciliation_proposition', 537)": {"mod": [575]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["addons/account/account_bank_statement.py"], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "197287fc303119bf71caf9b3f72280cab08da749", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1147", "iss_label": "", "title": "[Bug]: \u7ffb\u8bd1arxiv\u6587\u6863\u62a5\u9519\uff0c\u65e0\u8bba\u672c\u5730\u81ea\u5df1\u642d\u5efa\u8fd8\u662f\u5b98\u65b9\u5728\u7ebf\u5747\u62a5\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOneKeyInstall (\u4e00\u952e\u5b89\u88c5\u811a\u672c-windows)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n\u5b98\u65b9\u5728\u7ebf\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> Local Message] \u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \"./toolbox.py\", line 165, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 249, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 141, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \"./toolbox.py\", line 507, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"/usr/lib/python3.8/tarfile.py\", line 1608, in open\r\n> raise ReadError(\"file could not be opened successfully\")\r\n> tarfile.ReadError: file could not be opened successfully\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e socks5h://localhost:7890, \u4ee3\u7406\u6240\u5728\u5730\uff1aJapan\r\n\r\n\u672c\u5730\u642d\u5efa\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> [Local Message] 
\u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \".\\toolbox.py\", line 150, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 250, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 139, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \".\\toolbox.py\", line 461, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"D:\\academic-gpt\\installer_files\\env\\lib\\tarfile.py\", line 1811, in open\r\n> raise ReadError(f\"file could not be opened successfully:\\n{error_msgs_summary}\")\r\n> tarfile.ReadError: file could not be opened successfully:\r\n> - method gz: ReadError('invalid header')\r\n> - method bz2: ReadError('not a bzip2 file')\r\n> - method xz: ReadError('not an lzma file')\r\n> - method tar: ReadError('invalid header')\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e socks5h://127.0.0.1:12341, \u4ee3\u7406\u6240\u5728\u5730\uff1aHong Kong - Cloudflare, Inc.\r\n\r\n\u6240\u7ffb\u8bd1\u7684arxiv\u6587\u6863\u7684\u5730\u5740\u4e3a\uff1ahttps://arxiv.org/abs/2112.10551\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/197287fc303119bf71caf9b3f72280cab08da749", "file_loc": {"base_commit": 
"197287fc303119bf71caf9b3f72280cab08da749", "files": [{"path": "shared_utils/handle_upload.py", "status": "modified", "Loc": {"(None, 'extract_archive', 91)": {"mod": [107, 108, 109, 110, 111, 112, 113, 114, 116, 117]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["shared_utils/handle_upload.py"], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "65317e33af87640b68c84c9f6ee67188b76c6d7a", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/558", "iss_label": "", "title": "\u80fd\u5426\u5229\u7528EdgeGPT\uff0c\u652f\u6301\u8c03\u7528\u5fae\u8f6fBing\u63a5\u53e3", "body": "\u5927\u4f6c\u4eec\u6c42\u6c42\u4e86\uff0c\u770b\u770b\u8fd9\u4e2a\u9879\u76ee\u5427\uff0chttps://github.com/acheong08/EdgeGPT\r\n\u5982\u679c\u53ef\u4ee5\u65b9\u4fbf\u5730\u8c03\u7528Bing\u63a5\u53e3\uff0c\u6216\u8005\u672a\u6765\u7684\u767e\u5ea6\u3001\u963f\u91cc\u7b49\u7b2c\u4e09\u65b9\u63a5\u53e3\uff0c\u5bf9\u4e8e\u6ca1\u6709openAI-key\u4e5f\u6ca1\u6cd5\u672c\u5730\u90e8\u7f72GLM\u7684\u540c\u5b66\u662f\u798f\u97f3\u554a", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/65317e33af87640b68c84c9f6ee67188b76c6d7a", "file_loc": {"base_commit": "65317e33af87640b68c84c9f6ee67188b76c6d7a", "files": [{"path": "config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [65], "mod": [47, 48]}}}, {"path": "request_llm/bridge_all.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21, 119]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "max_topk": 2, "file_topk": 2, "loctype": {"code": ["request_llm/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}}