| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:35:53.532546
| 2021-08-26T15:47:16
|
980396265
|
{
"authors": [
"Macgician",
"itsmekingtiger",
"vlasovskikh",
"zapalap"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11916",
"repo": "vlasovskikh/intellij-micropython",
"url": "https://github.com/vlasovskikh/intellij-micropython/issues/175"
}
|
gharchive/issue
|
Backspace and other keys not working as expected in REPL
When using the REPL in PyCharm, my backspace key sends a "<-[k", and if I hit backspace again the "K" is replaced by "<-[K".
Arrow keys are not working either, so corrections are not possible.
In the text editor inside PyCharm, this is not an issue.
Using PuTTY, this is not an issue.
I also get odd output for arrow keys and shift + arrow keys.
Is this a config setting in the plugin? Any solutions?
Miniterm was finally patched with this fix: pyserial/pyserial#351. It looks like this in turn broke the plugin again.
I was able to fix the issue by removing the code from the miniterm fix that had been introduced into microrepl.py
Here's what I did
1. Removed the whole Windows10Console class from microrepl.py
https://github.com/vlasovskikh/intellij-micropython/blob/a068a81922151686142d01d93d4375c836ed92e6/scripts/microrepl.py#L82
2. Replaced the whole condition on line 168 with `term = connect_miniterm(port)`
https://github.com/vlasovskikh/intellij-micropython/blob/a068a81922151686142d01d93d4375c836ed92e6/scripts/microrepl.py#L168
After that, backspace and arrows work again.
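For illustration, the change boils down to collapsing the platform-specific branch into a single call. A minimal sketch in the spirit of scripts/microrepl.py (the exact conditional being removed may differ):

# before (sketch): a Windows 10 console workaround lived in the plugin
# if sys.platform == "win32":
#     term = Windows10Console(...)  # duplicated what pyserial>=3.5 now does itself
# else:
#     term = connect_miniterm(port)

# after: rely on miniterm's own (patched) console handling on every platform
term = connect_miniterm(port)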
@vlasovskikh I think this is quite a serious bug that turns off new users of the plugin. As it is right now the REPL is practically unusable. I can create a PR if you are ok with this.
It works for me; I edited the file located at C:\Users\$USER\AppData\Roaming\JetBrains\$PYCHARM\plugins\intellij-micropython\scripts\microrepl.py.
@zapalap Thanks for investigating it! I'll try your fix in the version I'm about to release.
I'd appreciate any help with testing the update on Windows.
I've reverted my original workaround for #43 now that we seem to have the proper fix in pyserial>=3.5.
|
2025-04-01T04:35:53.535732
| 2021-09-23T08:45:47
|
1005167154
|
{
"authors": [
"marc-portier"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11917",
"repo": "vliz-be-opsci/pyldt",
"url": "https://github.com/vliz-be-opsci/pyldt/issues/2"
}
|
gharchive/issue
|
provide extra ctrl (control) object in the jinja context
having at least the following attributes:
- isLast: bool, True for the last row, else False
- isFirst: bool, True for the first row, else False
- index: integer, 0..(length-1)
- modus: string, "row" or "collection"
maybe also consider:
- settings: access to the settings object?
isLast is going to be an issue --> the iterator length is not known until one has finished consuming it.
A possible hack is to prefetch one item ahead (index+1), still keeping things reasonably streaming / able to scale up; see the sketch below.
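A minimal sketch of that prefetch hack (names hypothetical, not from pyldt): peek one item ahead so isFirst/isLast/index can be emitted while streaming:

from typing import Any, Iterable, Iterator, Tuple

def with_ctrl_flags(items: Iterable[Any]) -> Iterator[Tuple[Any, bool, bool, int]]:
    """Yield (item, isFirst, isLast, index) without knowing the length up front."""
    it = iter(items)
    try:
        prev = next(it)  # prefetch one item ahead
    except StopIteration:
        return
    index = 0
    for current in it:
        yield prev, index == 0, False, index
        prev = current
        index += 1
    yield prev, index == 0, True, index  # the held-back item is the last row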
|
2025-04-01T04:35:53.554943
| 2023-07-04T04:30:31
|
1787123641
|
{
"authors": [
"E1zo",
"WoosukKwon",
"grantbey",
"wjy3326"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11918",
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/349"
}
|
gharchive/issue
|
Error when running vLLM to generate
The code is here:
# encoding: utf-8
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
print("sampling_params", sampling_params)
llm = LLM(model="/media/odin/software/PycharmProjects/OpenBuddy-main/model/openbuddy-openllama-7b-v5-fp16/")
outputs = llm.generate(prompts, sampling_params)
print("outputs", outputs)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
the error is here:
Traceback (most recent call last):
  File "/media/odin/software/PycharmProjects/vllm-main/vllm_test.py", line 2, in <module>
    from vllm import LLM, SamplingParams
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/__init__.py", line 2, in <module>
    from vllm.engine.async_llm_engine import AsyncLLMEngine
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/engine/async_llm_engine.py", line 6, in <module>
    from vllm.engine.llm_engine import LLMEngine
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/engine/llm_engine.py", line 16, in <module>
    from vllm.worker.worker import Worker
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/worker/worker.py", line 8, in <module>
    from vllm.model_executor import get_model, InputMetadata, set_random_seed
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/model_executor/__init__.py", line 2, in <module>
    from vllm.model_executor.model_loader import get_model
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/model_executor/model_loader.py", line 9, in <module>
    from vllm.model_executor.models import (GPT2LMHeadModel, GPTBigCodeForCausalLM, GPTNeoXForCausalLM,
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/model_executor/models/__init__.py", line 1, in <module>
    from vllm.model_executor.models.gpt2 import GPT2LMHeadModel
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/model_executor/models/gpt2.py", line 30, in <module>
    from vllm.model_executor.layers.activation import get_act_fn
  File "/media/odin/software/PycharmProjects/vllm-main/vllm/model_executor/layers/activation.py", line 5, in <module>
    from vllm import activation_ops
ImportError: cannot import name 'activation_ops' from partially initialized module 'vllm' (most likely due to a circular import) (/media/odin/software/PycharmProjects/vllm-main/vllm/__init__.py)
How to fix this error? Thanks!
Hi @wjy3326, could you check if vLLM is installed in your environment? The activation_ops module is created during pip installation. It seems you are using vLLM without installation.
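A quick way to check for this shadowing problem (plain Python, nothing vLLM-specific):

import vllm
# If this prints a path inside your source checkout rather than site-packages,
# Python is importing the local tree, which lacks the compiled activation_ops.
print(vllm.__file__)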
I had put my script inside the vllm folder, which caused the above error. Now I run the script separately, outside the vllm folder, so the above error is gone. Now when I run openbuddy's llama model, there is a new error:
sampling_params SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, temperature=0.8, top_p=0.95, top_k=-1, use_beam_search=False, stop=[], ignore_eos=False, max_tokens=16, logprobs=None)
INFO 07-04 15:04:15 llm_engine.py:59] Initializing an LLM engine with config: model='/media/odin/software/PycharmProjects/OpenBuddy-main/model/openbuddy-openllama-7b-v5-fp16/', dtype=torch.float16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0)
INFO 07-04 15:04:15 tokenizer_utils.py:30] Using the LLaMA fast tokenizer in 'hf-internal-testing/llama-tokenizer' to avoid potential protobuf errors.
Traceback (most recent call last):
  File "/media/odin/software/PycharmProjects/Test/vllm_test.py", line 11, in <module>
    llm = LLM(model="/media/odin/software/PycharmProjects/OpenBuddy-main/model/openbuddy-openllama-7b-v5-fp16/")
  File "/home/odin/miniconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 55, in __init__
    self.llm_engine = LLMEngine.from_engine_args(engine_args)
  File "/home/odin/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 151, in from_engine_args
    engine = cls(*engine_configs, distributed_init_method, devices,
  File "/home/odin/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 93, in __init__
    worker = worker_cls(
  File "/home/odin/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 45, in __init__
    self.model = get_model(model_config)
  File "/home/odin/miniconda3/lib/python3.10/site-packages/vllm/model_executor/model_loader.py", line 47, in get_model
    model.load_weights(
  File "/home/odin/miniconda3/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 248, in load_weights
    for name, loaded_weight in hf_model_weights_iterator(
  File "/home/odin/miniconda3/lib/python3.10/site-packages/vllm/model_executor/weight_utils.py", line 73, in hf_model_weights_iterator
    state = torch.load(bin_file, map_location="cpu")
  File "/home/odin/miniconda3/lib/python3.10/site-packages/torch/serialization.py", line 815, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/odin/miniconda3/lib/python3.10/site-packages/torch/serialization.py", line 1033, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: could not find MARK
How to fix this, thanks!
Hey @wjy3326 I'm getting this error too - did you find a solution since you marked the issue closed?
just simply run the running code separately and not in the vllm folder,
Sorry, I see that I didn't reply specifically to @wjy3326's more recent post. I'm not getting the OP's original error; I'm getting the _pickle.UnpicklingError: could not find MARK error when loading my fine-tuned models.
It loads the original model correctly, but my fine-tuned models don't load. I've checked that they load fine using HuggingFace's AutoModelForCausalLM.from_pretrained method, so I don't think it's a problem with corrupt files.
|
2025-04-01T04:35:53.558760
| 2024-03-26T22:16:45
|
2209439110
|
{
"authors": [
"GeauxEric",
"simon-mo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11919",
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/3647"
}
|
gharchive/issue
|
[Feature]: make _init_tokenizer optional and support initiate LLMEngine without tokenizer
🚀 The feature, motivation and pitch
Currently the generate method supports inference based on prompt_token_ids:
def generate(
    self,
    prompts: Optional[Union[str, List[str]]] = None,
    sampling_params: Optional[SamplingParams] = None,
    prompt_token_ids: Optional[List[List[int]]] = None,
    use_tqdm: bool = True,
    lora_request: Optional[LoRARequest] = None,
) -> List[RequestOutput]:
That means the tokenizer is optional to the LLM engine.
However, initiating an LLM engine always calls _init_tokenizer, which effectively makes the tokenizer required.
The LLM engine cannot be initialized without a valid tokenizer argument.
In our application, we would love to use the LLM's powerful engine for inference but want to keep the tokenizer as a separate service; a usage sketch is below.
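Given the signature above, the desired usage would presumably look like this sketch (the token ids below are made-up placeholders):

from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95)
# Bypass the tokenizer on input by passing pre-tokenized prompts.
outputs = llm.generate(prompts=None,
                       sampling_params=params,
                       prompt_token_ids=[[1, 15043, 3186]])  # hypothetical ids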
Alternatives
No response
Additional context
No response
I think the main blocker is that the tokenizer is also used during decode. See #3635.
|
2025-04-01T04:35:53.563780
| 2024-08-30T13:47:12
|
2497295781
|
{
"authors": [
"JoanFM",
"robertgshaw2-neuralmagic"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11920",
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/8033"
}
|
gharchive/issue
|
[Misc]: Question about Serving with Server API
API Server
I have been digging around vLLM, and I have observed that the API server actually holds a client to an RPC server.
I guess this is needed because of the potentially multi-model nature. But if I were to serve a single model instance, would it be recommended to use an AsyncLLMEngine directly behind my FastAPI app or any other web server?
Thanks for the clarification.
Before submitting a new issue...
[X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
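For context, the single-model setup the question describes would look roughly like this sketch (API names as of vLLM ~v0.5.x; signatures may differ between versions):

import uuid

from fastapi import FastAPI
from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

app = FastAPI()
engine = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(model="facebook/opt-125m"))

@app.post("/generate")
async def generate(prompt: str) -> dict:
    final = None
    # generate() yields incremental RequestOutputs; keep the last one.
    async for output in engine.generate(prompt, SamplingParams(max_tokens=64),
                                        str(uuid.uuid4())):
        final = output
    return {"text": final.outputs[0].text}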
Hey, thanks.
This is quite interesting; however, have you considered simply using a ThreadPool to make the blocking CPU work asynchronous? In the end, most of the time you would be calling libraries that release the GIL, right?
The issue is not asyncio-related.
The issue is that on H100 with small models, the GPU forward part of the inner loop is very fast (this is the only section that releases the GIL), and we have CPU contention between the rest of the inner-loop operations (schedule and process_outputs) and the FastAPI server. So running in a ThreadPool will not fix the issue, because none of the operations causing the CPU contention release the GIL.
Thanks for the reply.
@JoanFM Are you running into some issues? The RPC caused some problems in v0.5.4 that we stabilized in v0.5.5, but if there are any ongoing problems I would greatly appreciate any reproduction instructions
Hey, no, no issues, thanks.
|
2025-04-01T04:35:53.591179
| 2021-03-10T21:56:16
|
828384748
|
{
"authors": [
"aaronshurley",
"danielhelfand",
"ewrenn8",
"vibhas"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11921",
"repo": "vmware-tanzu/carvel-kapp-controller",
"url": "https://github.com/vmware-tanzu/carvel-kapp-controller/issues/133"
}
|
gharchive/issue
|
Add more debug info to failing InstalledPackages
Describe the problem/challenge you have
When debugging an installed package that is failing reconciliation, the failure message just states "Reconcile failed: App failed reconciling", which doesn't give much direction to the user. This message also requires the user to move a level below the Packaging APIs and debug an app directly.
Describe the solution you'd like
Surface more useful information when an InstalledPackage is failing to reconcile due to the underlying app. One idea is to copy any status fields that have a non-zero exit code to the installed package, surfacing the debug info without having to look at the App CR.
[ ] Add debug info in package CR
[ ] Add debug tips to docs
Is there more or other info we should surface on the installed package?
We should also document some of these troubleshooting/debugging tips on our Packaging docs.
Awaiting the results of https://github.com/vmware-tanzu/carvel-kapp-controller/issues/110 before we decide what to do with this issue.
While it is a bit unclear about what the best way to fully close #110 is, there has been some work done in #157 and #159 that we can use to move forward with this issue. The current thought process is to at the very least surface the UsefulErrorMessage introduced in #157 through the InstalledPackage status. This will help present the most recent error of the underlying App CR to end users such that they do not also need to look at the App status after seeing the InstalledPackage failed.
|
2025-04-01T04:35:53.594467
| 2019-12-11T13:37:20
|
536375029
|
{
"authors": [
"Brzhk",
"wwitzel3"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11922",
"repo": "vmware-tanzu/octant",
"url": "https://github.com/vmware-tanzu/octant/issues/472"
}
|
gharchive/issue
|
Allow setting dark theme via an env var or a flag
Pretty self-explanatory, but I'd like to set the default theme to dark through an env var or an argument.
Thanks!
@Brzhk I think this would be nice. We have some other enhancement requests similar to this. I think it is likely that we will tackle them all at the same time when we implement the idea of a local settings file for Octant.
|
2025-04-01T04:35:53.599417
| 2021-01-22T13:28:17
|
792004831
|
{
"authors": [
"cfryanr",
"enj"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11923",
"repo": "vmware-tanzu/pinniped",
"url": "https://github.com/vmware-tanzu/pinniped/issues/352"
}
|
gharchive/issue
|
Design: multiple IDP support
TODO
Started a design document here: https://hackmd.io/bPcs_c2ZR8WnpcuQ73FC-w?view
Done. See https://github.com/vmware-tanzu/pinniped/tree/main/proposals/1406_multiple-idps
|
2025-04-01T04:35:53.644573
| 2020-05-12T22:37:12
|
617008443
|
{
"authors": [
"anilrautvc",
"gnomeontherun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11924",
"repo": "vmware/clarity",
"url": "https://github.com/vmware/clarity/issues/4592"
}
|
gharchive/issue
|
Clarity Stepper Issue
Describe the bug
I am using the Clarity stepper component, version 3.1.2, with 4 pages or steps in the stepper. When I am done with all the steps, I clear the form on the submit-button click; it looks like I am doing something wrong in clearing out the form.
The stepper sits on top of a clr-modal popup. I open the modal and click the close button in the top right-hand corner, and I do this twice after opening the modal. Then, the third time I open the modal popup, I fill in the required field in my first step and click Next; at this point the first step is not collapsed by the accordion and does not move to the second step. In the console I can notice a toggle error.
Even the UI looks pretty distorted.
On the last page of my stepper I have a Summary page that shows the items selected in the previous steps, but the list items from the last 2 steps are not showing up on the summary page.
How to reproduce
Reproduction link (using one of the Clarity StackBlitz templates): https://stackblitz.com/edit/clarity-v3-light-theme-hra5bn
Steps to reproduce the behavior:
1. Go to 'https://stackblitz.com/edit/clarity-v3-light-theme-hra5bn'
2. Click the Create button. Open the modal twice and close the modal dialog twice by clicking the close button in the top right-hand corner.
3. The third time, after opening the modal by clicking the Create button, enter the required details in the 1st step and click the Next button; you can see the toggle error in the console and the UI looks bad.
You can see the toggle error in the browser console and the UI is distorted.
Expected behavior
I should not see any toggle error in the browser console and the UI should not look distorted; the toggling of steps in my stepper should happen without any issues.
The Summary page should list the items selected on the previous pages.
Versions
App:
Angular: 9.1.6
Node: [e.g. 8.10.0]
Clarity: 3.1.2
Device:
Type: Windows
OS: Windows 10
Browser: Chrome
Version: 81.0.4044.138
Additional notes
Here is another StackBlitz sample, an exact copy-paste of the existing Clarity stepper example: https://stackblitz.com/edit/clarity-v3-light-theme-hh77xr.
I need to have the stepper on top of a clr-modal so that I can pop it up with a button click; this is my requirement.
The steps to reproduce are pretty simple: just open the popup by clicking Create and close it by clicking the close button.
Open the popup again, input a name and description, then click Next; you will see the toggle error in the console and the UI does not refresh.
Something is strange about the stepper in a modal; I haven't fully investigated it yet. But I also wonder: if you want a modal, why not use the Wizard, which is a modal-based workflow? That might be a better solution in general here.
Hi @gnomeontherun, as I mentioned in my problem description, the issue was due to the form reset in the clr-stepper on a clr-modal. While searching online I found one of your posts about resetting forms, which seems to have fixed my issue.
https://stackoverflow.com/questions/52459708/how-to-reset-error-state-of-clarity-forms.
But I think we need a proper fix for this, as what you provided in that post is a workaround.
Please keep me posted if you have a proper fix for this now or in the future.
|
2025-04-01T04:35:53.653104
| 2023-11-30T04:38:03
|
2017860079
|
{
"authors": [
"arunmk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11925",
"repo": "vmware/cluster-api-provider-cloud-director",
"url": "https://github.com/vmware/cluster-api-provider-cloud-director/pull/550"
}
|
gharchive/pull-request
|
CAFV-357: Add new v1beta3 API to CAPVCD
Description
Please provide a brief description of the changes proposed in this Pull Request
Added a new API version in anticipation of upcoming changes in CAPVCD, and also as a CoDB since the branch of the current release has been cut.
Checklist
[X] tested locally
[X] updated any relevant dependencies
[ ] updated any relevant documentation or examples
API Changes
Are there API changes?
[X] Yes
[ ] No
If yes, please fill in the below
Updated conversions?
[X] Yes
[ ] No
[ ] N/A
Updated CRDs?
[ ] Yes
[ ] No
[X] N/A
Updated infrastructure-components.yaml?
[X] Yes
[ ] No
[ ] N/A
Updated ./examples/capi-quickstart.yaml?
[ ] Yes
[X] No
[ ] N/A
Updated necessary files under ./infrastructure-vcd/v1.0.0/?
[ ] Yes
[X] No (I did not find v1.0.0 or anything other than v0.5.1)
[ ] N/A
Issue
If applicable, please reference the relevant issue
Fixes #
This change is
@sahithi I have made the changes requested. I will run through the tests soon.
|
2025-04-01T04:35:53.660485
| 2017-04-07T22:25:58
|
220343848
|
{
"authors": [
"govint",
"msterin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11926",
"repo": "vmware/docker-volume-vsphere",
"url": "https://github.com/vmware/docker-volume-vsphere/issues/1153"
}
|
gharchive/issue
|
Rework python code layout and function names to comply with module and naming conventions
The .py code is not properly modularized: its layout in src/ is different from what it ends up as on ESX, which makes it hard to lint and debug, and the naming conventions for internal/external functions are not followed.
__all__ and __init__.py are not defined for the modules. All this defeats much of the lint and IDE code-support tooling. We need to fix it eventually.
Setting future milestone.
|
2025-04-01T04:35:53.661875
| 2017-05-19T10:02:32
|
229928614
|
{
"authors": [
"reasonerjt",
"ywk253100"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11927",
"repo": "vmware/harbor",
"url": "https://github.com/vmware/harbor/issues/2346"
}
|
gharchive/issue
|
Refactor ldap, email API
I assume after this one we cover everything, is that correct?
Yes, it is.
Done.
|
2025-04-01T04:35:53.665714
| 2018-03-13T05:19:25
|
304631585
|
{
"authors": [
"jessehu",
"shinji62"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11928",
"repo": "vmware/harbor",
"url": "https://github.com/vmware/harbor/issues/4399"
}
|
gharchive/issue
|
Harbor tile supporting signature v2
In the last version of the Pivotal tile, 1.4.1, we can add S3-API-compatible object storage.
But most Swift versions do not support the v4 signature; it would be nice to be able to add an option to enable or disable the v4 signature.
Thanks
@shinji62 do you mean adding the "v4auth: false" option? Any other options you need?
Well, I would prefer having a real choice, meaning real Swift support, but v4auth: false should be OK.
Here is the options list from https://docs.docker.com/registry/configuration/. Are 'secure: false' and 'encrypt: false' needed in your case?
s3:
  accesskey: awsaccesskey
  secretkey: awssecretkey
  region: us-west-1
  regionendpoint: http://myobjects.local
  bucket: bucketname
  encrypt: true
  keyid: mykeyid
  secure: true
  v4auth: true
  chunksize: 5242880
  multipartcopychunksize: 33554432
  multipartcopymaxconcurrency: 100
  multipartcopythresholdsize: 33554432
  rootdirectory: /s3/object/name/prefix
Fixed by http://url/6qff in the master branch.
@jessehu Thanks, will let you know once the tile gets released.
|
2025-04-01T04:35:53.693697
| 2015-04-28T07:42:59
|
71513700
|
{
"authors": [
"GhostofGoes",
"avnish30jn",
"clement10601",
"cybervedaa",
"glenc2004",
"houcinedz",
"iahmad-khan",
"prziborowski",
"shyamachilles"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11929",
"repo": "vmware/pyvmomi-community-samples",
"url": "https://github.com/vmware/pyvmomi-community-samples/issues/166"
}
|
gharchive/issue
|
Sample of how to create a VM with all the proper configurations...Nothing fancy
Hello all. I am having a little difficulty understanding the samples, and I'm wondering if anyone can provide a sample of how to create a working VM. This is nothing fancy, but I'm having issues trying to add the NIC, disk and other necessary things to the config to create the VM. The sample code to create a basic VM (the Marvel VMs) works just fine for me, but it does not have all the code needed to make the VM operational. Also, I need to be able to add an ISO image so the VM can boot. But just getting an operational VM with a NIC and disk is the most important thing; the rest I can probably figure out from the other provided examples.
Can anyone help? Here is what I have... It's pretty much a hack right now, just trying to get it to work. The pretty will come later. :-)
#!/usr/bin/env python
# William Lam
# www.virtuallyghetto.com
"""
vSphere SDK for Python program for creating tiny VMs (1vCPU/128MB) with random
names using the Marvel Comics API
"""
import atexit
import hashlib
import json
import random
import time

import requests
from pyVim import connect
from pyVmomi import vim
from tools import cli
from tools import tasks

requests.packages.urllib3.disable_warnings()


def create_dummy_vm(name, service_instance, vm_folder, resource_pool, datastore):
    devices = []
    nic_type = 'E1000'
    net_name = 'VM Network'
    vm_name = name
    datastore_path = '[' + datastore + '] ' + vm_name
    vmx_file = vim.vm.FileInfo(logDirectory=None,
                               snapshotDirectory=None,
                               suspendDirectory=None,
                               vmPathName=datastore_path)
    # nicspec = vim.vm.device.VirtualDeviceSpec()
    # nicspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    # nicspec.device = nic_type
    # nicspec.device.deviceInfo = vim.Description()
    # nicspec.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    # nicspec.device.backing.network = self.get_obj(content, [vim.Network], net_name)
    # nicspec.device.backing.deviceName = net_name
    # nicspec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
    # nicspec.device.connectable.startConnected = True
    # nicspec.device.connectable.allowGuestControl = True
    # devices.append(nicspec)
    config = vim.vm.ConfigSpec(name=vm_name, memoryMB=1024, numCPUs=1,
                               files=vmx_file, guestId='rhel6_64Guest',
                               version='vmx-09', deviceChange=devices)
    print "Creating VM %s" % (vm_name)
    task = vm_folder.CreateVM_Task(config=config, pool=resource_pool)
    tasks.wait_for_tasks(service_instance, [task])


def main():
    name = 'MyVM'
    DS = 'Disk01-4TB'
    service_instance = connect.SmartConnect(host="xx.xx.xx.xx", user="builder",
                                            pwd="xxxxxx", port="443")
    if not service_instance:
        print("Could not connect to the specified host using specified "
              "username and password")
        return -1
    atexit.register(connect.Disconnect, service_instance)
    content = service_instance.RetrieveContent()
    datacenter = content.rootFolder.childEntity[0]
    vmfolder = datacenter.vmFolder
    hosts = datacenter.hostFolder.childEntity
    resource_pool = hosts[0].resourcePool
    # print name, service_instance, vmfolder, resource_pool, DS
    create_dummy_vm(name, service_instance, vmfolder, resource_pool, DS)
    return 0


# Start program
if __name__ == "__main__":
    main()
I tried to create a VM with the same method. On powering it on, it gave the error 'OS not found'.
I used guest ID 'centos64Guest'.
Could you please tell me which guests are supported by my ESXi server, or how to map the existing ISOs to the guestId, if there is any way?
Thanks in advance :)
Hi, thank you for this very helpful code sample. One question I have is: how can I retrieve a list of ALL guestID values that an ESXi instance supports?
I see that in your example above you have specified the guestId as 'rhel6_64Guest'. I am curious to know what the guestId value would be for Windows, CentOS and the various other guest OSs.
Thank you
@avnish30jn: there aren't any ISOs associated with the guestId, so you would have to provide that in some form. The guestId will give you some configuration and will determine which VMware Tools ISO to supply.
@cybervedaa: you can use the EnvironmentBrowser to pull up this information.
Let me give a short example of pulling up that data:
si = connect.SmartConnect(...)  # assume inputs are given to connect, similar to most sample scripts
computeResource = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0]  # first CR of first datacenter
environmentBrowser = computeResource.environmentBrowser
for optionDescriptor in environmentBrowser.QueryConfigOptionDescriptor():
    versionKey = optionDescriptor.key
    configOption = environmentBrowser.QueryConfigOption(key=versionKey)
    if configOption is not None:
        supportedGuests = map(lambda x: x.id, configOption.guestOSDescriptor)
        print("Guests supported for version %s:\n%s" % (versionKey, ', '.join(supportedGuests)))
Reference to the configOption: ConfigOption
Ref to GuestOsDescriptor
Thank you @prziborowski! This is exactly what I was looking for.
Could someone help with how to get the datastore path?
Here's a snippet from my code. Hope it helps.
def get_host(SI, datacenter, hostname):
    """
    Returns the vim.HostSystem object associated with the hostname
    specified. None if the host is not found.
    """
    content = SI.RetrieveContent()
    host_view = content.viewManager.CreateContainerView(datacenter, [vim.HostSystem], True)
    for obj in host_view.view:
        if obj.name == hostname:
            return obj

host = get_host(SI, datacenters[target_datacenter_name], hypervisor)
# Assuming your host has only one datastore
datastore = host.datastore[0].name
# vm_name is whatever you want to call your VM
datastore_path = '[' + datastore + '] ' + vm_name
Some of the code for my project that utilizes vSphere may be helpful, notably the "VM" class built for interacting with vSphere VMs in a more "Pythonish" manner.
https://github.com/GhostofGoes/ADLES/blob/master/adles/vsphere/vm.py
Thank you so much @cybervedaa. This helps. Will look at your full implementation too.
--Shyam
Hi, I want to create a VM from a template, with everything default and getting an IP from DHCP.
How can I do that?
Thanks
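For the clone-from-template part, the usual pyVmomi pattern is CloneVM_Task; a minimal sketch, reusing the folder/pool objects from the sample above, with a hypothetical template name:

from pyVmomi import vim

# Naive lookup by name; real code should use a ContainerView search instead.
template = next(vm for vm in vmfolder.childEntity
                if getattr(vm, "name", None) == "my-template")  # hypothetical name

relospec = vim.vm.RelocateSpec(pool=resource_pool)
clonespec = vim.vm.CloneSpec(location=relospec, powerOn=True)
task = template.CloneVM_Task(folder=vmfolder, name="cloned-vm", spec=clonespec)
tasks.wait_for_tasks(service_instance, [task])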
@iahmad-khan do you have NSX enabled? If not, you need to build your own DHCP server within the layer-2 network (for VM networking). You can generate a MAC address for the network adapter attached to the virtual machine (use vmconf.deviceChange = virtdev.add_virtif_spec(content, network name, mac_list)) and you can use that MAC address to set up an IP-MAC mapping for your DHCP server.
If you have NSX enabled, just use vim.vm.customization.DhcpIpGenerator().
Hi,
Yes, it takes an IP automatically from DHCP when I create a VM from the web client.
I get 'template not found'; how can I get the exact path?
|
2025-04-01T04:35:53.696612
| 2018-10-17T18:56:12
|
371221121
|
{
"authors": [
"jeking3"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11930",
"repo": "vmware/pyvmomi-community-samples",
"url": "https://github.com/vmware/pyvmomi-community-samples/issues/517"
}
|
gharchive/issue
|
In the sample upload_file_to_datastore cookie logic truncates the cookie
See source code:
https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/upload_file_to_datastore.py#L105
Input data:
(Pdb) p client_cookie
'vmware_soap_session="e6e1091d84d035c6a730bae86cda11a033e76c01"; Path=/; HttpOnly; Secure;'
Result:
(Pdb) p cookie_name
'vmware_soap_session'
(Pdb) p cookie_text
' "e6e1091d84d035c6a730bae86cda11a033e76c01"; $Path=/'
Some of the cookie data got dropped.
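For what it's worth, the standard library can parse the header without losing the quoted value; a small sketch using the cookie string from above:

from http.cookies import SimpleCookie

raw = 'vmware_soap_session="e6e1091d84d035c6a730bae86cda11a033e76c01"; Path=/; HttpOnly; Secure;'
jar = SimpleCookie()
jar.load(raw)
# Prints the full session id; nothing is truncated.
print(jar["vmware_soap_session"].value)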
Looks like I was wrong - nevermind.
|
2025-04-01T04:35:53.722330
| 2019-10-26T17:31:07
|
512863262
|
{
"authors": [
"andrewlef",
"vn-ki"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11931",
"repo": "vn-ki/anime-downloader",
"url": "https://github.com/vn-ki/anime-downloader/pull/259"
}
|
gharchive/pull-request
|
Fixed Incorrect mp4upload Parsing & Animeflv downloading
See Issue #258 and Issue #252.
Why did you remove natsuki?
Because, as far as I can tell, animeflv no longer has a content host server called natsuki. The only servers I saw were mega, zippyshare, openload and streamango.
Ok, I made some additional changes to the code and added some error handling with a descriptive error message. That should eliminate certain bug reports.
Also noticed that the build fails the automated test; it builds just fine for me in Python 3.7 on Mac, though.
|
2025-04-01T04:35:53.727354
| 2020-02-01T02:30:05
|
558464353
|
{
"authors": [
"ibmibmibm",
"michaelforney",
"vnmakarov"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11932",
"repo": "vnmakarov/mir",
"url": "https://github.com/vnmakarov/mir/issues/44"
}
|
gharchive/issue
|
Use of reserved identifiers
I noticed that MIR uses reserved identifiers throughout the codebase:
Names beginning with an underscore and capital letter are reserved for the implementation for any use (C11 7.1.3p1). MIR uses many symbols named this way (_MIR_*).
POSIX reserves identifiers ending in _t for use in any header (POSIX.1-2017 name space). MIR names many of its types this way (MIR_*_t).
Is it possible to choose different naming schemes for the identifiers used in MIR so that they don't conflict with those reserved for ISO C and POSIX?
Thank you for reporting this. I think changing the _MIR prefix is not a problem. Entities with such a prefix are used rarely and only internally.
Changing MIR_*_t could be a problem. As I understand it, some people are already using the MIR code, and I cannot yet figure out an alternative naming that I like. I think the probability of conflicts with POSIX headers is extremely low. In any case, I have not yet decided what to do about the MIR_*_t names.
Thank you again for pointing out the issue.
export is also a c++ reserved keyword in struct MIR_item
I missed this. It is serious. I renamed members export/import/forward.
https://github.com/vnmakarov/mir/commit/36c1c43ee406e7eb9b5e4f4407dd48f2e6b83e06#annotation_108525331
Thank you for reporting this issue.
|
2025-04-01T04:35:53.748429
| 2022-08-16T17:21:52
|
1340659941
|
{
"authors": [
"beon9273",
"jburz2001"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11933",
"repo": "vnvlabs/vnv",
"url": "https://github.com/vnvlabs/vnv/issues/7"
}
|
gharchive/issue
|
Possible typo in Injection.h macro?
@beon9273
Am I correct that the ) after STAGE should be removed in the code that is substituted in for INJECTION_LOOP_ITER_D ?
path: /vnv/injection/include/c-interfaces/points/Injection.h
#define INJECTION_LOOP_ITER(PNAME, NAME, STAGE) \
  _VnV_injectionPoint_loop(VNV_STR(PNAME), VNV_STR(NAME), VNV_STR(STAGE), __FILE__, __LINE__);
//FIXME remove ) after STAGE ?
#define INJECTION_LOOP_ITER_D(PNAME, NAME, STAGE) \
  _VnV_injectionPoint_loop(VNV_STR(PNAME), VNV_STR(NAME), STAGE), __FILE__, __LINE__);
Yeah, you are right. We must not use that macro in any of the examples anywhere, I guess.
Good catch.
Well, couldn't we just fix the bug? I'm fine with doing it, I just wanted to check with you beforehand
Yip - go for it!
Sounds good. Thanks for the context, Ben.
I fixed the bug in a recent commit to the main branch. I'll try to write an example that incorporates it soon.
|
2025-04-01T04:35:53.752077
| 2024-04-09T16:44:09
|
2233870515
|
{
"authors": [
"CarlFK",
"MaZderMind",
"danimo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11934",
"repo": "voc/voctomix",
"url": "https://github.com/voc/voctomix/issues/314"
}
|
gharchive/issue
|
Wish: all the inputs muxed into one multi-track output stream
I would like to save all the input streams to disk for editing later (sometimes the live edits aren't good and it is worth the effort to fix them).
I think/hope it would be better if all the tracks were in one big file, as opposed to each input having its own file.
If all the streams were muxed and available on a port, I could hook up a client that saved them to disk.
You can do that in a script reading from multiple Ports. See https://github.com/voc/voctomix/blob/main/example-scripts/ffmpeg/record-mixed%2Bslides%2B8channel-audio-ffmpeg-segmented-timestamps.sh for an example which records mix-out, slides and 8ch Audio into mpeg-ts segments usable with fuse-ts.
Additionally you can use a script like https://github.com/voc/voctomix/blob/main/example-scripts/control-server/generate-cut-list.py to also record the cut-commands and re-create a projectfile of your favourite Video-Editor from that.
Note: that method of ISO/multitrack recording does not guarantee that the timestamps will match up. It was good enough for us, as we only record the slide input, but with proper ISO recording this might turn into an issue.
|
2025-04-01T04:35:53.781595
| 2024-01-31T20:14:17
|
2110889334
|
{
"authors": [
"CodiumAI-Agent",
"arpagon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11935",
"repo": "vocodedev/vocode-python",
"url": "https://github.com/vocodedev/vocode-python/pull/489"
}
|
gharchive/pull-request
|
482 unify version across demos
Summary
The objective is to harmonize version information across all Vocode platforms. This involves updating pyproject.toml and requirements.txt files with the new version numbers and verifying that all demo applications operate correctly with these updates. The focus of this update is on version alignment without the introduction of new features.
fix #482
PR Analysis
🎯 Main theme: This PR is about updating the version of the Vocode library across different applications and making some minor improvements in the code.
📝 PR summary: The PR updates the Vocode version in various applications and makes some minor improvements in the code. It also updates the Python version in the Dockerfile and makes some changes in the synthesizer classes. The PR also adds some comments and improves the organization of the imports in the code.
📌 Type of PR: Enhancement
🧪 Relevant tests added: No
⏱️ Estimated effort to review [1-5]: 3, because the PR involves changes in multiple files and applications. It requires a good understanding of the Vocode library and its usage in different applications.
🔒 Security concerns: No security concerns found
PR Feedback
💡 General suggestions: The PR is well-structured and the changes are logically grouped. However, it would be beneficial to add some tests to ensure that the updated Vocode version works as expected in all applications. Also, it would be helpful to provide more context in the commit messages, explaining why certain changes were made.
🤖 Code feedback:
- relevant file: apps/telegram_bot/main.py
  suggestion: Consider making the synthesize method asynchronous as indicated by the TODO comment. This can improve the performance of the application by not blocking the execution while synthesizing the response. [important]
  relevant line: # TODO make async
- relevant file: apps/telephony_app/main.py
  suggestion: It would be beneficial to handle the case where neither BASE_URL nor NGROK_AUTH_TOKEN is set. Currently, if both are not set, the application may fail without a clear error message. [important]
  relevant line: if not BASE_URL:
- relevant file: apps/telephony_app/speller_agent.py
  suggestion: Consider adding type hints to the respond method parameters. This can improve code readability and make it easier to understand the expected types of the parameters. [medium]
  relevant line: human_input: str,
- relevant file: apps/voice_rag/Dockerfile
  suggestion: It would be good to use a specific version of the Python image in the Dockerfile instead of the bullseye tag. This can ensure that the Docker build is repeatable and consistent across different environments. [medium]
  relevant line: FROM python:3.9-bullseye
✨ Usage guide:
Overview:
The review tool scans the PR code changes, and generates a PR review. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR.
When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:
/review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
With a configuration file, use the following template:
[pr_reviewer]
some_config1=...
some_config2=...
Utilizing extra instructions
The review tool can be configured with extra instructions, which can be used to guide the model to a feedback tailored to the needs of your project.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.
Examples for extra instructions:
[pr_reviewer] # /review #
extra_instructions="""
In the code feedback section, emphasize the following:
- Does the code logic cover relevant edge cases?
- Is the code logic clear and easy to understand?
- Is the code logic efficient?
...
"""
Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.
How to enable/disable automation
When you first install PR-Agent app, the default mode for the review tool is:
pr_commands = ["/review", ...]
meaning the review tool will run automatically on every PR, with the default configuration.
Edit this field to enable/disable the tool, or to change the used configurations
About the 'Code feedback' section
The review tool provides several types of feedback; one of them is code suggestions.
If you are interested only in the code suggestions, it is recommended to use the improve feature instead, since it is dedicated only to code suggestions and usually gives better results.
Use the review tool if you want to get more comprehensive feedback, which includes code suggestions as well.
Auto-labels
The review tool can auto-generate two specific types of labels for a PR:
a possible security issue label, that detects possible security issues (enable_review_labels_security flag)
a Review effort [1-5]: x label, where x is the estimated effort to review the PR (enable_review_labels_effort flag)
Extra sub-tools
The review tool provides a collection of possible feedbacks about a PR.
It is recommended to review the possible options and choose the ones relevant for your use case.
Some of the features that are disabled by default are quite useful and should be considered for enabling. For example:
require_score_review, require_soc2_review, enable_review_labels_effort, and more.
More PR-Agent commands
To invoke the PR-Agent, add a comment using one of the following commands:
/review: Request a review of your Pull Request.
/describe: Update the PR title and description based on the contents of the PR.
/improve [--extended]: Suggest code improvements. Extended mode provides a higher quality feedback.
/ask <QUESTION>: Ask a question about the PR.
/update_changelog: Update the changelog based on the PR's contents.
/add_docs 💎: Generate docstring for new components introduced in the PR.
/generate_labels 💎: Generate labels for the PR based on the PR's contents.
/analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.
See the tools guide for more details.
To list the possible configuration parameters, add a /config comment.
See the review usage page for a comprehensive guide on using this tool.
Persistent review updated to latest commit https://github.com/vocodedev/vocode-python/commit/7114d3f60371dab120e9fb781cbc3be394c3110e
|
2025-04-01T04:35:53.844289
| 2018-07-23T11:18:50
|
343590951
|
{
"authors": [
"sankhakarfa",
"voidpp"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11936",
"repo": "voidpp/PCA9685-driver",
"url": "https://github.com/voidpp/PCA9685-driver/issues/3"
}
|
gharchive/issue
|
Requirements before Pip Install
Please add this line to your readme, before the pip install step:
sudo apt-get install build-essential libi2c-dev i2c-tools python-dev libffi-dev
This is required for the smbus-cffi pip package.
Thx for the contribution.
|
2025-04-01T04:35:53.877847
| 2016-11-27T18:21:42
|
191876040
|
{
"authors": [
"sr3d",
"volmer"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11938",
"repo": "volmer/bootsy",
"url": "https://github.com/volmer/bootsy/issues/245"
}
|
gharchive/issue
|
Bootsy and Carrierwave 1.0.0 compatibility
Bootsy isn't compatible yet with Carrierwave 1.0.0, because in the gemspec file it's locked to 'carrierwave', '~> 0.11'.
Carrierwave 1.0.0 has this version constraint:
gem 'carrierwave', '>= 1.0.0.rc', '< 2.0'
which leads to the incompatibility issue.
Yes, that's because Carrierwave 1.0.0 is not released yet. I created a branch that supports its release candidate if you want to give it a try: https://github.com/volmer/bootsy/tree/carrierwave-1-0-0
Once Carrierwave 1.0.0 is out I'll merge it into master.
|
2025-04-01T04:35:53.882478
| 2020-11-03T16:35:01
|
735469770
|
{
"authors": [
"Cretezy",
"rwjblue",
"shaungrady"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11939",
"repo": "volta-cli/action",
"url": "https://github.com/volta-cli/action/issues/47"
}
|
gharchive/issue
|
Support actions/setup-node's registry-url input
Example:
# Setup .npmrc file to publish to npm
- uses: actions/setup-node@v1
  with:
    registry-url: 'https://registry.npmjs.org'
Creates an .npmrc file with the following:
//registry.npmjs.org/:_authToken=${NODE_AUTH_TOKEN}
registry=https://registry.npmjs.org/
always-auth=true
Thanks for opening the issue! Definitely seems like something we should add.
Any updates on this? Would be very useful!
|
2025-04-01T04:35:53.905538
| 2023-05-22T18:54:27
|
1720256996
|
{
"authors": [
"robertnurnberg"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11940",
"repo": "vondele/chessgraph",
"url": "https://github.com/vondele/chessgraph/pull/2"
}
|
gharchive/pull-request
|
allow SAN input
Allow an input argument --san and fix some typos. Also ran Black over the code.
Just to let you know: the "An" typo is still in the About blurb on GitHub. No PR for that, I'm afraid. ;)
|
2025-04-01T04:35:53.913332
| 2020-03-01T07:57:54
|
573503329
|
{
"authors": [
"darbean",
"vorburger"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11941",
"repo": "vorburger/MariaDB4j",
"url": "https://github.com/vorburger/MariaDB4j/issues/339"
}
|
gharchive/issue
|
The precision of timestamp(6) is lost while inserting data
When I insert '2020-03-01 12:00:00.121212' into a field with timestamp(6), it's always stored as '2020-03-01 12:00:00.000000'. Please check and fix, thanks!
Thank you for your interest in this project.
This issue has nothing to do with MariaDB4j; this project merely wraps MariaDB. You need to take this up with the database itself, not this wrapper.
Can I ask you to please close this issue?
ok
|
2025-04-01T04:35:53.968212
| 2021-11-19T16:15:35
|
1058697012
|
{
"authors": [
"202RaRa",
"voteblake"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11942",
"repo": "voteblake/bpm-is",
"url": "https://github.com/voteblake/bpm-is/issues/2"
}
|
gharchive/issue
|
Not the most accurate.
When I saw this I thought, wait, what's going on; there were no real instructions on how to use it. But once I figured out what was going on, I quickly realized it wasn't that accurate.
I even used my metronome with it to be sure I was tapping consistently/on beat. But regardless of the tempo I was going at, the tempo on the app seemed to rise.
If you can get the tempo to be accurate, you're on to something great. Keep going.
Thanks for taking a look. Do you remember about what you had the metronome set at and about what the tempo read on the site? Knowing the direction and magnitude of the error might help diagnose it. There are probably practical limitations to accuracy from relying on browser API's for time and click information, but I wouldn't be surprised if there is low-hanging fruit to improve the accuracy from where it is now.
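For illustration only (the site runs as WASM in the browser, so this is not its code): a drift-resistant tap-tempo estimate typically takes the median over recent inter-tap intervals, e.g.:

import statistics

def bpm_from_taps(tap_times: list[float], window: int = 8) -> float | None:
    """Estimate BPM from tap timestamps in seconds; the median resists outlier taps."""
    if len(tap_times) < 2:
        return None
    recent = tap_times[-(window + 1):]
    intervals = [b - a for a, b in zip(recent, recent[1:])]
    return 60.0 / statistics.median(intervals)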
No problem. I was kind of excited when I accidentally ran across it. I had my metronome in my DAW set to 120 bpm, and the tempo online would be at 120 for about 8 beats or so, and then it would continue to increase, going up to about 150 or so. I hope this feedback helps a lot.
Appreciate it - that definitely sounds wrong. I may not be able to get it 'perfect' but the tempo running away like that isn't even usable. I forgot to ask, which browser was this in? Since I'm relying on browser API's via WASM this behavior might be browser-specific. I'm off work all next week and will be able to poke at this a bit. Knowing the browser might help me reproduce and troubleshoot.
Yes, exactly; if it at least stuck around 120-130, that would be a whole different world. I was using Google's Chrome browser on my iPhone XS and on my iPad 5th generation; the same browser on both.
|
2025-04-01T04:35:53.970438
| 2015-09-28T19:17:18
|
108727758
|
{
"authors": [
"bwreid",
"tie-rack"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11943",
"repo": "votinginfoproject/Metis",
"url": "https://github.com/votinginfoproject/Metis/pull/286"
}
|
gharchive/pull-request
|
Loggly
Simplify logging, while adding support for logging to Loggly via a linked Docker container.
Remove complicated, unused file-based logging
Remove unused winston-papertrail dependency
Add winston-syslog dependency
Configure and use syslog logging
Pivotal story: 102023084
Looks Good To Me. :+1:
|
2025-04-01T04:35:53.979954
| 2021-09-02T15:19:44
|
986869082
|
{
"authors": [
"benjaminpkane"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11944",
"repo": "voxel51/fiftyone",
"url": "https://github.com/voxel51/fiftyone/pull/1245"
}
|
gharchive/pull-request
|
Configurable session waiting
Adds an integer keyword argument wait to Session.wait() and a corresponding --wait argument to relevant CLI commands.
session.wait(-1) permanently blocks execution until a keyboard interrupt, which is useful in cases where you are running an App session that you want to serve "forever".
session.wait(10) might be more appropriate than the default wait time (3 seconds) for remote sessions over slow internet connections, where refreshing the App via ctrl + R in the browser could take >3 seconds to re-establish a connection.
import fiftyone as fo
dataset, session = fo.quickstart()
# ex: serve forever
session.wait(-1)
# ex: wait 10 seconds before continuing execution
session.wait(10)
Yes, good adjustments. Thanks.
|
2025-04-01T04:35:53.981578
| 2024-07-13T07:36:38
|
2406755077
|
{
"authors": [
"ProbablePrime",
"bredo228"
],
"license": "MIT-0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11945",
"repo": "voxelbonecloud/headless-docker",
"url": "https://github.com/voxelbonecloud/headless-docker/issues/2"
}
|
gharchive/issue
|
Changes for clarity
The repository's layout is becoming confusing and overwhelming due to the variety of examples.
My advice would be to keep the root of the repository as minimal as possible and to present a "known good path", you can then put the other examples that might be more complex in an examples folder.
Extra examples have been moved into https://github.com/voxelbonecloud/headless-docker/tree/main/examples, this should be sorted out now
|
2025-04-01T04:35:53.990182
| 2017-11-26T14:46:12
|
276829870
|
{
"authors": [
"bastelfreak",
"tampakrap"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11946",
"repo": "voxpupuli/puppet-mcollective",
"url": "https://github.com/voxpupuli/puppet-mcollective/pull/358"
}
|
gharchive/pull-request
|
use different delimiter for the sed at the puppet facts cronjob
Using '%' as the delimiter for the sed in the 'puppet facts' cronjob ends up
with the following error:
Subject: Cron root@server puppet facts --render-as yaml |sed 's
/bin/sh: -c: line 0: unexpected EOF while looking for matching `''
/bin/sh: -c: line 1: syntax error: unexpected end of file
Replacing the delimiter with '#' instead works fine
Thanks for this @tampakrap!
|
2025-04-01T04:35:54.045122
| 2022-06-12T08:41:28
|
1268510905
|
{
"authors": [
"vactomas",
"vrtmrz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11947",
"repo": "vrtmrz/obsidian-livesync",
"url": "https://github.com/vrtmrz/obsidian-livesync/issues/77"
}
|
gharchive/issue
|
CouchDB created but then throws remote database is newer or corrupted error.
Information about used configuration
Platform: Windows 11
LiveSync version: 0.11 – downloaded through Obsidian
Database: CouchDB docker – 3.2.2
Reverse proxy: Caddy docker – 2.51
Bug
After setting up the docker container with CouchDB, I have added the correct URL, username and password into the LiveSync settings page. After hitting apply, I got the “Initilize done!” notification and later a notification saying that I should lock the remote database. When I tried to sync my files to the DB, an error saying Remote database is newer or corrupted showed up.
Expected behaviour
After setup, data syncs without issues.
Additional info
I have not tested it on any other platform. Simply after setting up the CouchDB container, I tried to upload the DB with Obsidian without any previous database initiation.
Thank you for making the issue!
May I ask for the information shown when you hit the Check database configuration button?
Did everything pass with a checkmark?
Yeah, it seems like it did pass.
Thank you for your testing!
I'll investigate this!
If any additional information regarding my configuration would help you, just tell me what you need.
Fixed it. It was an issue with db name.
@vactomas
I’m so relieved to hear that! May I ask for the database name?
I have to fix the checking database logic (It has to say 'database name is wrong')
I didn't realise that it needed to be a single word; I had used spaces (e.g. My Vault).
Could you maybe mark the field as mandatory and add a comment there about what the name should look like?
|
2025-04-01T04:35:54.046822
| 2024-12-31T09:15:21
|
2764147871
|
{
"authors": [
"vrugtehagel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11948",
"repo": "vrugtehagel/vim-whitespace-control",
"url": "https://github.com/vrugtehagel/vim-whitespace-control/issues/1"
}
|
gharchive/issue
|
Plugin formats space-indented JSDoc comments
When using tabs as indentation, the following situation gets incorrectly formatted:
/**
* Makes sure VIM Whitespace control knows the file is tabs
*/
function foo(){
const identation = '<-- tab'
}
Gets formatted to
/**
* Makes sure VIM Whitespace control knows the file is tabs
*/
function foo(){
const identation = '<-- tab'
}
Note the dedenting of the JSDoc comment.
Fixed
|
2025-04-01T04:35:54.048366
| 2020-10-13T20:49:45
|
720818568
|
{
"authors": [
"Alireza-Sampour",
"vs666"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11949",
"repo": "vs666/E-Commerce_Database",
"url": "https://github.com/vs666/E-Commerce_Database/issues/1"
}
|
gharchive/issue
|
Add CASCADING in the database file for all referenced attributes.
CASCADING is added so that Delete / Update operations do not violate the integrity constraints.
Hi @vs666, assign this to me.
Be sure to update the dumpfile also @Alireza-Sampour
@vs666 if there is no problem with this pull request please label this issue with hacktoberfest-accepted
|
2025-04-01T04:35:54.053277
| 2017-04-01T14:20:19
|
218698733
|
{
"authors": [
"clartaq",
"vsch"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11950",
"repo": "vsch/flexmark-java",
"url": "https://github.com/vsch/flexmark-java/issues/84"
}
|
gharchive/issue
|
Usage in Clojure
I've recently switched to using flexmark as a markdown processor in a Clojure program. Setting up the "plain vanilla" processor works great. But I'm having an issue activating extensions. Here's some code that doesn't quite work.
(def parser ^Parser$Builder (let [options (MutableDataSet.)
st (StrikethroughExtension/create)
al (ArrayList. [st]) ]
(.set options Parser/EXTENSIONS al)
(.build (Parser/builder options))))
(def renderer ^HtmlRenderer$Builder (.build (HtmlRenderer/builder)))
(defn convert-markdown-to-html
"Convert the markdown formatted input string to html
and return it."
[mkdn]
(let [out (->> mkdn
(.parse parser)
(.render renderer))]
out))
This code produces no warnings or errors. The type hints are to quiet a reflection warning from the IDE. The code works the same with or without them. I know this is not particularly idiomatic Clojure. I've expanded it like this to aid in my (fruitless) debugging attempts.
This input:
Here is normal text
<del>Here is strikthrough with the "del" tag.</del>
~~Here is strikethrough with double tildes.~~
produces this HTML:
<p>Here is normal text</p>
<p><del>Here is strikthrough with the "del" tag.</del></p>
<p></p>
I've tried some of the test case text and observed similar results. (Don't have Java 7 installed yet to actually run the tests.)
Am I mis-using/configuring the extension somehow?
@clartaq, I don't know Clojure but it looks like you are only passing the options to Parser.builder() and not to HtmlRenderer.builder(). If you look at the samples in Java the options are created and the same instance is passed to both parser and renderer builders to make sure both have the same extensions and settings.
The missing extension in the renderer causes the missing strikethrough text: since the custom node is missing a renderer, it is not rendered in the HTML.
That was the issue. I didn't realize that even if the options only contain parser extensions, you have to use them for the renderer too.
Thanks for the help and quick response.
@clartaq, extensions can register all possible extension points in the API. For now this is parser, renderer and formatter. This allows an extension to customize all aspects of the library.
Options passed to the builder are available in the Document node and renderer context. You can pass anything you want through the options mechanism to your extension implementation.
The options were deliberately made universal to allow them to contain all configuration and context information for complete flexibility of configuring core and extensions. Passing context to extension implementation code without intervening code needing to know about it makes it much easier to implement functionality not envisioned by the library.
|
2025-04-01T04:35:54.056554
| 2016-04-30T10:26:12
|
152022494
|
{
"authors": [
"UnrulyNatives",
"vsch"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11951",
"repo": "vsch/laravel-translation-manager",
"url": "https://github.com/vsch/laravel-translation-manager/issues/30"
}
|
gharchive/issue
|
mismatch in class names
It seems that you have a bug in this commit: https://github.com/vsch/laravel-translation-manager/commit/fb5b0b759dd695614ea2b0f42e9967c0718a3152
In one place there we see: $router->pushMiddlewareToGroup('web', 'Vsch\TranslationManager\RouteAfterMiddleware');
and in the file src/RouteAfterMiddleWare.php the class RouteAfterMiddleWare
The names don't match: RouteAfterMiddleware vs RouteAfterMiddleWare
Peter
@UnrulyNatives, thank you. My dev system is OS X which has a case insensitive file system. Got caught a few times with file name case mismatches.
I'll fix it and make a new release.
@UnrulyNatives, version 2.1.1 released. Thank you for the heads up.
at your service - Stansfield
|
2025-04-01T04:35:54.070352
| 2015-12-15T20:31:07
|
122359905
|
{
"authors": [
"Saeven",
"vslavik"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11952",
"repo": "vslavik/poedit",
"url": "https://github.com/vslavik/poedit/issues/229"
}
|
gharchive/issue
|
Can't modify catalog sources path, OS X 10.11.2
First I see the catalog pref panel:
http://imgur.com/ObrtY62
Issues:
Clicking on the little rightward arrow does nothing.
Clicking on a file/folder in the path shows a file finder that has no "Open" or "Select" button, see http://imgur.com/HFfUOTq
Please see http://www.chiark.greenend.org.uk/~sgtatham/bugs.html and try to describe the problem better, because it’s not clear from the above why you think you can’t modify the paths list — no part of the description indicates any attempts to make any modifications to begin with.
So far you only demonstrated that you have a broken file that has hardcoded paths that you manually typed in as absolute paths (hence the missing icons in the path bar — that much I can deduce even though you didn’t even include the file). That Finder would refuse to open such non-existent places is, frankly, to be expected, but I don’t think even that part is accurate:
Clicking on the items in the list doesn’t do anything — there’s no code to do anything. It certainly doesn’t open Finder (which is what your screenshot shows and not the file panel with buttons). It very much looks like everything works as it should and as is common in other apps: the link icon does do something: open Finder in the location (which is nonexistent, so it goes to the root). And for some reason, you’re ignoring the appearance of the Finder window and attributing it to some random clicking afterwards that in actuality did nothing.
If that’s incorrect and if you still believe adding or removing paths (using the standard +/- buttons used by every other app on OS X) is somehow broken, please describe how, exactly, and how to reproduce it.
Closing due to lack of information.
|
2025-04-01T04:35:54.074518
| 2015-03-05T16:24:52
|
59975833
|
{
"authors": [
"dkhamsing",
"muescha",
"vsouza"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11953",
"repo": "vsouza/awesome-ios",
"url": "https://github.com/vsouza/awesome-ios/issues/66"
}
|
gharchive/issue
|
Nomoji :cry:
Hey buddy, doesn't look like :large_orange_diamond: is working.. is there some workaround?
Fun fact: in CONTRIBUTING.md the sign works.
Maybe we can ask @github
@github :cry:
https://twitter.com/__vsouza/status/573838868757221376
I've received an email from James Dennes (GitHub Staff)
We have temporarily disabled Emoji rendering in overly large documents because of performance issues. This is a temporary measure while we work on improving the performance of our rendering pipeline. We will restore this functionality as soon as possible. Apologies for the inconvenience.
Thanks,
James
Ah.. too bad :disappointed:
|
2025-04-01T04:35:54.082894
| 2016-10-20T08:17:15
|
184162903
|
{
"authors": [
"danger-awesome-ios",
"lfarah",
"ufosky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11954",
"repo": "vsouza/awesome-ios",
"url": "https://github.com/vsouza/awesome-ios/pull/1188"
}
|
gharchive/pull-request
|
Add Aojet
Project URL
https://github.com/aojet/Aojet
Description
Add Aojet by @ufosky to Concurrency section.
Why it should be included to awesome-ios (optional)
It would be useful for simplifying concurrency logic implementation.
Checklist
[x] Only one project/change is in this pull request
[x] Addition in chronological order (bottom of category)
[x] Supports iOS 8 or later
[x] Supports Swift 3
[x] Has a commit from less than 2 years ago
[x] Has a clear README in English
1 Error
:no_entry_sign: Found 3 link issues
Link issues by awesome_bot
Line | Status | Link
1447 | 301 | https://github.com/OEASLAN/OEANotification redirects to https://github.com/OEA/OEANotification
2285 | 301 | https://github.com/sxyx2008/awesome-ios-animation redirects to https://github.com/ameizi/awesome-ios-animation
2286 | 301 | https://github.com/sxyx2008/awesome-ios-chart redirects to https://github.com/ameizi/awesome-ios-chart
Generated by :no_entry_sign: danger
Thanks for contributing, @ufosky! 🎉
|
2025-04-01T04:35:54.089454
| 2018-06-18T14:05:17
|
333284493
|
{
"authors": [
"JustasKuizinas",
"ridan",
"rodrigograca31",
"vstirbu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11955",
"repo": "vstirbu/InstagramPlugin",
"url": "https://github.com/vstirbu/InstagramPlugin/issues/104"
}
|
gharchive/issue
|
Upgrade to using cordova-clipboard plugin rather than the universal-clipboard-plugin
The universal-clipboard-plugin barely works and doesn't look like it's maintained anymore. We need to switch to a better clipboard plugin. I would recommend cordova-clipboard.
Thanks for the suggestion, although I was thinking to remove the dependency on the clipboard plugin altogether. The caption functionality is not part of Instagram application's hooks and can be handled easily outside this plugin by the developer.
@vstirbu I agree you should remove clipboard dependency because now for example if you have cordova-clipboard installed you get this error while installing cordova-instagram-plugin
Failed to install 'cordova-universal-clipboard': CordovaError: Uh oh!
"D:...\platforms\android\app\src\main\java
com\verso\cordova\clipboard\Clipboard.java" already exists!
I know 2 years have passed but I just saw this and it seems like a simple fix so I will do it now.
:hugs:
|
2025-04-01T04:35:54.092202
| 2023-11-21T18:14:40
|
2004877529
|
{
"authors": [
"mauricefisher64",
"vstroebel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11956",
"repo": "vstroebel/jfifdump",
"url": "https://github.com/vstroebel/jfifdump/issues/6"
}
|
gharchive/issue
|
APPn segments are allowed to be zero size
The library will error out when the length of a marker is '2' (i.e. the corresponding data segment has length 0), see the code here
https://github.com/vstroebel/jfifdump/blob/main/jfifdump/src/reader.rs
line 64
fn read_length(&mut self) -> Result<usize, JfifError> {
let length = self.read_u16()? as usize;
if length <= 2 {
return Err(JfifError::InvalidMarkerLength(length));
}
Ok(length - 2)
}
But from JPEG spec, length = 2 is legal for APPn box (see B.2.4.6 of ISO+IEC+10918-1-1994.pdf), so it seems that jfifdump should be modified to use:
if length < 2 { ...
I've released 0.5.1 containing your fix.
And sorry for the late reply. I was quite busy last year and completely forgot this issue.
|
2025-04-01T04:35:54.109804
| 2023-07-05T19:08:58
|
1790123513
|
{
"authors": [
"hellofanny"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11957",
"repo": "vtex/faststore",
"url": "https://github.com/vtex/faststore/pull/1874"
}
|
gharchive/pull-request
|
docs: Review Create New Section doc
What's the purpose of this pull request?
Update Create New Section to work with new version.
How to test it?
You should be able to create a new section/component in the CMS following the guide.
I merged because a few important updates should go live ASAP! We can improve this doc later! :)
|
2025-04-01T04:35:54.124765
| 2021-03-29T01:42:48
|
842895952
|
{
"authors": [
"alexvremja",
"namoscato"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11958",
"repo": "vue-gapi/vue-gapi",
"url": "https://github.com/vue-gapi/vue-gapi/issues/161"
}
|
gharchive/issue
|
gmail api not supported?
I need to use the Gmail API to verify the forwarding email filters, which in plain JS can be accessed with this function:
gapi.client.gmail.users.settings.filters.list()
therefore I thought to access it using this.$gapi.client.gmail.users.settings.filters.list(), but it doesn't exist:
actually I found gmail but in a "strange" position:
this.$gapi.clientProvider.client.gapi.client.gmail
I tried to use this object but it does not work.
Can you suggest whether gmail is accessible via vue-gapi, and if yes, how?
Thanks
@alexvremja, you can get a reference to gapi via the promise-based getGapiClient method, i.e.
this.$gapi.getGapiClient().then((gapi) => {
// gapi.client.gmail.users.settings.filters.list()...
});
Works!
great!
lost hours for nothing :-(
Thanks a lot!
|
2025-04-01T04:35:54.201071
| 2017-01-18T17:35:28
|
201641645
|
{
"authors": [
"LinusBorg",
"TotomInc",
"aacassandra",
"atilkan",
"baagi-rebel",
"bigsee",
"darkylmnx",
"david-saint",
"fralonra",
"hoainamcr",
"juniorknx",
"kaankucukx",
"khaled0fares",
"maximilianfixl",
"nfer",
"nicobaguio",
"revolter",
"rimiti",
"routbiplab",
"syntaxhacker"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11959",
"repo": "vuejs-templates/webpack",
"url": "https://github.com/vuejs-templates/webpack/issues/450"
}
|
gharchive/issue
|
Vue-cli compile image src via dynamic property
When I reference the image source via the assets directory it works as expected and compiles the image path to a static one like this: /static/img/img.1e7c8df.jpg
but when I use a dynamic property and pass the value of the property via the parent component,
the image path doesn't compile to the /static/img/img.1e7c8df.jpg path
you have to actually import the image in JS. Then webpack knows about it as a dependency and can manage the path.
import Image from './assets/image.jpg'
// `Image` will now be a string, pointing to '/static/img/img.1e7c8df.jpg'
Does this seem logical or even easy to reason about? Let me explain:
Assuming I have a team page on my web app with 50 teammates, it would be annoying to write a div + h2 + img + some description 50 times.
So I would use a loop; my images would go from 1 to 50, so I would dynamically use :src with the index of the loop.
BUT those images would be in my assets, so are you expecting someone to write 50 imports for his images?
Knowing that we can't loop over an import,
here's a fiddle of what I mean: https://jsfiddle.net/76sythpu/
I think we need to find another solution; importing the asset isn't a good idea if you have many dynamic assets.
Copying a folder of assets to the dist folder may be the only solution for now...
Well, that's how webpack works for dynamic assets. You still have options, though.
Use the /static folder as explained here.
Use a dynamic require to make webpack require all images in one call, as explained here (see the sketch below).
I won't go into much detail (for that we have forum.vuejs.org).
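For the second option, a minimal sketch of such a dynamic require (the directory and file names are hypothetical):
// webpack scans ../assets/img at build time and bundles every matching .png
const images = require.context('../assets/img', false, /\.png$/)
// resolve one of the bundled files by name at runtime; depending on loader
// configuration this yields the URL string or a module object wrapping it
const url = images(`./${name}.png`)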
I hope that I get an answer here, though it's closed.
I can't understand that I can load an image from the Laravel /public/ images directory, but not from /storage/app/public
if the image file name and subfolder are dynamic. Could anybody explain it to me? Please!?
Please use the forum, as I mentioned in my last reply.
This is a closed issue.
I did already
https://forum.vuejs.org/t/get-path-from-storage-directory-inside-vue-component-in-laravel-5-5/21646
<img :src="require(`@/assets/${posts.img}`)" alt="">
🥇
Hi @kaankucukx ,
What if I want to add it to the style tag like
<div class="left" :style="backgroundImage: require(assets/img/${image}.png);">
How should I go about this?
@david-saint Hey,
Go with this.. ;)
<div class="left" :style="`backgroundImage: require(assets/img/${image}.png)`">
Check this by Addy. https://developers.google.com/web/updates/2015/01/ES6-Template-Strings
Thanks @kakahikari
this worked
<div class="left" :style="{backgroundImage: `url(${require(`../assets/img/${image}.png`)})`}">
@kaankucukx may I ask what the $ sign means here? This also worked with my Vue app and I don't know what's happening.
@nicobaguio
Template Strings can contain placeholders for string substitution using the ${ }
you can get more info on link as kaankucukx mentioned: https://developers.google.com/web/updates/2015/01/ES6-Template-Strings
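A tiny example (names hypothetical):
const image = 'logo'
// the ${image} placeholder is substituted, producing '../assets/img/logo.png'
const path = `../assets/img/${image}.png`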
<img :src="require('@/assets/login.png')" alt="">
same
<img<EMAIL_ADDRESS>/>
but shorter
Also worth to look at assetsDir option.
@kaankucukx you saved my morning 🚀
hey! I need some help.
Here is the JSON object passed to the component slides.vue,
and props were passed. When I wanted to retrieve text it worked, but for image retrieval with this.image.url (used images[0] for testing) it just displays nothing 😭😭. But when I log this.image.url it displays
../assets/img1.jpg
but when I pass the actual path <img src="../pathtoimg"> it works.
I don't know why it does not pass this computed return value to the template.
I keep getting the Cannot find module error, even though the path is correct.
Note that you can also use the following syntax to avoid needing to remember the relative paths...
<div class="picture" :style="{ backgroundImage: `url(${require(`@/assets/images/${image}`)})` }">
...where @ signifies your src folder and image is the full filename.
thank you @kaankucukx, your suggestion works like a charm, but the code is not beautiful with the Vue framework
thank you @LinusBorg, your advice works very beautifully
import Image from "../../assets/img/services/s1.png";
export default {
data () {
return {
myPic: Image
}
}
}
<img v-bind:src="myPic" />
tested on vue cli 3
@aacassandra Thank you!
And you mean importing all images is fine? :)
So good luck with that. I believe that is uglier :(
thank you @kaankucukx, you reminded me. I just found out that this way it loads all the images.
I did something like <img :src="require(@/assets/logo.png)" >
@fralonra you are missing quotes on @/assets/logo.png.
Your :src attribute should look like this: :src="require('@/assets/logo.png')".
@TotomInc Thanks for your reply!
That's just a typo. I tried :src="require('@/assets/logo.png')"
file.html
file.js
image:require('@/assets/tutorial/onWhite.jpg'),
@routbiplab
It was my fault for not pointing out what I want to do.
I want to load images dynamically, by setting the image path in props or data.
the alt attribute is working fine but the src attribute is not working. Can anybody explain?????? PLZ!!!
<food-items itemPrice= 239 itemName="Desert" image="./assets/images/desert.jpg" altText="image here from app"></food-items>// App.vue
<img :src="image" :alt="altText"> //component food-item
@baagi-rebel
See here. https://github.com/vuejs-templates/webpack/issues/126.
In short, you should use require or import to tell webpack that this path is a module.
Hi, I can't display an image, this is my code.
<img :src="require('@/assets/images/logo.png')" />
but it just displays this after rendering:
<img data-v-359d76e0="" src="[object Module]" class="">
Do you know why it returns [object Module]? Please help me to resolve it :(
Oh, I found the solution: just install url-loader. OMG, thank you guys so much.
This works for me!! thanks
|
2025-04-01T04:35:54.202968
| 2017-07-18T01:42:25
|
243575662
|
{
"authors": [
"gustaYo",
"kazupon",
"nickmessing"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11960",
"repo": "vuejs/awesome-vue",
"url": "https://github.com/vuejs/awesome-vue/pull/1298"
}
|
gharchive/pull-request
|
added vue-chess-storybook example
Refactoring components to Vue2
@kazupon, it's in Examples category.
@nickmessing ah, sorry 🙇
|
2025-04-01T04:35:54.211070
| 2023-02-17T09:07:23
|
1589002808
|
{
"authors": [
"DrPhil",
"sxzz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11961",
"repo": "vuejs/core",
"url": "https://github.com/vuejs/core/pull/7743"
}
|
gharchive/pull-request
|
fix(ssr): reset current instance if setting up options component errors (fix #7733)
Warning: I'm just a monkey with a wrench - I don't know what I am doing.
Looking at the PR of https://github.com/vuejs/core/pull/6184 I think this might be something close-ish to the right fix. It at least fixes the tests I hallucinated together. 😄
Maybe @danielroe would be interested in reviewing this fix too?
close #7733
/ecosystem-ci run
Just checking that it's not me that we are waiting for. Is there something else I should do to get this merged? There's this one unresolved comment, but I'm not sure what the right resolution is for it. If there's anything else I can do, please let me know.
|
2025-04-01T04:35:54.228556
| 2022-06-03T15:38:28
|
1260057400
|
{
"authors": [
"cexbrayat",
"freakzlike",
"lmiller1990"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11962",
"repo": "vuejs/test-utils",
"url": "https://github.com/vuejs/test-utils/pull/1569"
}
|
gharchive/pull-request
|
chore(find): extend tests with deep nested multiple roots
Extends some tests for findAll with deeply nested multiple root nodes. Created to test #1546. Feel free to merge or close if not necessary.
Neat. In this case do you think we should do a beta for the next release @cexbrayat @freakzlike, or can we just push out 2.0.1? If so, I can do that now (today).
Let's do a release and see then!
|
2025-04-01T04:35:54.231657
| 2019-09-21T14:45:26
|
496663731
|
{
"authors": [
"AdamNimDev",
"Akryum",
"austinbv"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11963",
"repo": "vuejs/vue-apollo",
"url": "https://github.com/vuejs/vue-apollo/issues/788"
}
|
gharchive/issue
|
$apollo.loading is always 0
I'm having trouble linking page/element loading states to Apollo's loading state. The value seems to never update.
I am running a query in a method like this:
async login() {
  await this.$apollo.query({
    query: login,
    fetchPolicy: "network-only"
  });
}
...and my element is linked to the loader like this :loading="$apollo.loading"
I don't have any errors at all during the entire auth process utilising vue-apollo's onLogin method.
Vue cli project using:
"vue-apollo": "^3.0.0-beta.11",
"vue-router": "^3.0.3",
"vuetify": "^2.0.0"
Doing queries manually won't update $apollo.loading. Also in your example maybe you want to do a mutation instead.
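If you do need a manual call, a minimal sketch of tracking the loading state yourself (the isLoading flag and the loginMutation document are hypothetical):
async login() {
  this.isLoading = true
  try {
    // $apollo.mutate() resolves once the server responds
    await this.$apollo.mutate({ mutation: loginMutation })
  } finally {
    // clear the flag whether the call succeeded or failed
    this.isLoading = false
  }
}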
We also noticed that network-only cache policies don't update the apollo loading state
|
2025-04-01T04:35:54.237921
| 2018-05-15T02:21:55
|
323043657
|
{
"authors": [
"archSeer",
"yyx990803"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11964",
"repo": "vuejs/vue-cli",
"url": "https://github.com/vuejs/vue-cli/issues/1294"
}
|
gharchive/issue
|
Can't get any webpack output/webpack-bundle-analyzer output on yarn build -- typescript semantic errors break the build
Version
3.0.0-beta.10
Reproduction link
https://gist.github.com/archSeer/da83151406461966adaa9bd7c4125622
Steps to reproduce
yarn build
What is expected?
⠼ Building for production...
WARNING Compiled with 3 warnings 11:13:50 AM
warning
asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
This can impact web performance.
Assets:
js/vendors~app.0c7b91b4.js (700 KiB)
warning
entrypoint size limit: The following entrypoint(s) combined asset size exceeds the recommended limit (244 KiB). This can impact web performance.
Entrypoints:
app (992 KiB)
css/vendors~app.0.0c2d3117.css
js/vendors~app.0c7b91b4.js
css/app.db2af613.css
js/app.ef588c7d.js
warning
webpack performance recommendations:
You can limit the size of your bundles by using import() or require.ensure to lazy load some parts of your application.
For more info visit https://webpack.js.org/guides/code-splitting/
File Size Gzipped
dist/js/vendors~app.0c7b91b4.js 699.72 kb 224.05 kb
dist/js/app.ef588c7d.js 126.79 kb 22.49 kb
dist/css/app.db2af613.css 157.99 kb 34.97 kb
dist/css/vendors~app.0.0c2d3117.css 7.92 kb 1.83 kb
Images and other types of assets omitted.
DONE Build complete. The dist directory is ready to be deployed.
✨ Done in 15.46s.
What is actually happening?
...
error in /.../views/Tasks.vue
(81,28): Type '{ assignedTasks: never[]; }' is not assignable to type 'User'.
Object literal may only specify known properties, and 'assignedTasks' does not exist in type 'User'.
ERROR Build failed with errors.
error An unexpected error occurred: "Command failed.
Exit code: 1
Command: sh
Arguments: -c vue-cli-service build
Directory: /ui
Output:
".
info If you think this is a bug, please open a bug report with the information provided in "/ui/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I'm using the typescript plugin with the class syntax, and on yarn build, I can't get any useful webpack output in the console (a breakdown of the webpack chunks that you'd usually see on the old cli).
I do have a bunch of errors in typescript, but those are unfixable since a lot of the ecosystem still isn't prepared to work with vue typescript. I figured it might be that the errors swallow up the output (but I couldn't find any ignore errors flag), so what I did next was add the bundle analyzer. It runs fine on yarn serve, but I can't get any output from it on yarn build, I've tried both static and the server setting. Meanwhile the compressor plugins ran just fine.
I think this issue is related https://github.com/wmonk/create-react-app-typescript/issues/171, there's no way to get typescript semantic errors to not halt the build.
In your case it is not a semantic error, but a type error.
But either way this is not fixable in Vue CLI - it has to happen either in ts-loader or in TypeScript itself.
@yyx990803 in the linked issue, they made the type checking step optional. vue-cli could offer a flag for that:
(process.env.NO_EMIT_ON_ERROR ? new ForkTsCheckerWebpackPlugin({
async: false,
watch: paths.appSrc,
tsconfig: paths.appTsConfig,
tslint: paths.appTsLint,
}) : null),
].filter(Boolean),
You can conditionally delete the ForkTSChecker plugin in vue.config.js.
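A minimal sketch of that (assuming the TypeScript plugin registers it under the name 'fork-ts-checker'; the SKIP_TYPE_CHECK variable is hypothetical):
// vue.config.js
module.exports = {
  chainWebpack: (config) => {
    // drop the type-checking plugin so type errors no longer fail the build
    if (process.env.SKIP_TYPE_CHECK) {
      config.plugins.delete('fork-ts-checker')
    }
  }
}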
|
2025-04-01T04:35:54.242002
| 2019-12-20T14:03:46
|
541010863
|
{
"authors": [
"danielbrooks4p",
"sodatea"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11965",
"repo": "vuejs/vue-cli",
"url": "https://github.com/vuejs/vue-cli/issues/4984"
}
|
gharchive/issue
|
Cannot Create Project
Version
4.1.1
Environment info
System:
OS: macOS 10.15.1
CPU: (8) x64 Intel(R) Core(TM) i7-4980HQ CPU @ 2.80GHz
Binaries:
Node: 8.9.4 - ~/.nvm/versions/node/v8.9.4/bin/node
Yarn: 1.9.4 - /usr/local/bin/yarn
npm: 6.9.0 - ~/.nvm/versions/node/v8.9.4/bin/npm
Browsers:
Chrome: 79.0.3945.88
Firefox: 71.0
Safari: 13.0.3
npmGlobalPackages:
@vue/cli: 4.1.1
Steps to reproduce
Using the vue ui command and the environment settings above, follow along with the project setup detailed in Vue School's Vue Router course (https://vueschool.io/lessons/create-a-new-project-with-vue-router-using-the-vue-cli-ui).
What is expected?
A project is set up correctly.
What is actually happening?
The CLI cannot install many dependencies and crashes. Projects get "created" but are never recognized as projects in the UI.
The CLI cannot install many dependencies and crashes. Projects get "created" but are never recognized as projects in the UI.
What's the created project like?
Might be caused by pre-existing NODE_ENV environment variable in your environment. Make sure it's unset. (If it was set to production, devDependencies will fail to install.)
|
2025-04-01T04:35:54.258328
| 2021-12-19T19:58:58
|
1084178711
|
{
"authors": [
"FenderStrat85"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11966",
"repo": "vuejs/vue-cli",
"url": "https://github.com/vuejs/vue-cli/issues/6882"
}
|
gharchive/issue
|
Vue3 and GraphQL leading to linting and compiling errors
Version
5.0.0-rc.1
Reproduction link
github.com
Environment info
Environment Info:
System:
OS: macOS 11.6
CPU: (8) arm64 Apple M1
Binaries:
Node: 16.13.1 - /usr/local/bin/node
Yarn: Not Found
npm: 8.1.2 - /usr/local/bin/npm
Browsers:
Chrome: 96.0.4664.110
Edge: Not Found
Firefox: Not Found
Safari: 15.0
npmPackages:
@vue/apollo-composable: ^4.0.0-alpha.16 => 4.0.0-alpha.16
@vue/babel-helper-vue-jsx-merge-props: 1.2.1
@vue/babel-helper-vue-transform-on: 1.0.2
@vue/babel-plugin-jsx: 1.1.1
@vue/babel-plugin-transform-vue-jsx: 1.2.1
@vue/babel-preset-app: 4.5.15
@vue/babel-preset-jsx: 1.2.4
@vue/babel-sugar-composition-api-inject-h: 1.2.1
@vue/babel-sugar-composition-api-render-instance: 1.2.4
@vue/babel-sugar-functional-vue: 1.2.2
@vue/babel-sugar-inject-h: 1.2.2
@vue/babel-sugar-v-model: 1.2.3
@vue/babel-sugar-v-on: 1.2.3
@vue/cli-overlay: 4.5.15
@vue/cli-plugin-babel: ~4.5.0 => 4.5.15
@vue/cli-plugin-eslint: ~4.5.0 => 4.5.15
@vue/cli-plugin-router: 4.5.15
@vue/cli-plugin-vuex: 4.5.15
@vue/cli-service: ~4.5.0 => 4.5.15
@vue/cli-shared-utils: 4.5.15
@vue/compiler-core: 3.2.26
@vue/compiler-dom: 3.2.26
@vue/compiler-sfc: ^3.0.0 => 3.2.26
@vue/compiler-ssr: 3.2.26
@vue/component-compiler-utils: 3.3.0
@vue/devtools-api: 6.0.0-beta.20.1
@vue/preload-webpack-plugin: 1.1.2
@vue/reactivity: 3.2.26
@vue/reactivity-transform: 3.2.26
@vue/runtime-core: 3.2.26
@vue/runtime-dom: 3.2.26
@vue/server-renderer: 3.2.26
@vue/shared: 3.2.26
@vue/web-component-wrapper: 1.3.0
eslint-plugin-vue: ^7.0.0 => 7.20.0
typescript: 4.5.4
vue: ^3.0.0 => 3.2.26
vue-cli-plugin-apollo: ~0.22.2 => 0.22.2
vue-demi: undefined (0.12.1)
vue-eslint-parser: 7.11.0
vue-hot-reload-api: 2.3.4
vue-loader: 15.9.8 (16.8.3)
vue-router: ^4.0.12 => 4.0.12
vue-style-loader: 4.1.3
vue-template-es2015-compiler: 1.9.1
vuex: ^4.0.2 => 4.0.2
npmGlobalPackages:
@vue/cli: 4.5.15
Steps to reproduce
clone repo
cd client
npm i
npm run serve
What is expected?
In the console in the code editor you will see this error.
INFO Starting development server...
98% after emitting CopyPlugin
ERROR Failed to compile with 1 error 19:49:22
Syntax Error: TypeError: Cannot read properties of undefined (reading 'parseComponent')
You may use special comments to disable some warnings.
Use // eslint-disable-next-line to ignore the next line.
Use /* eslint-disable */ to ignore all warnings in a file.
If you open up localhost:8080, which the project is running on, you will need to set a .env.local file to set the BASE_URL that is used in router/index.js by createWebHistory.
If you go to localhost:8080 you will see this error:
./src/App.vue
Module Error (from ./node_modules/vue-loader/lib/index.js):
[vue-loader] vue-template-compiler must be installed as a peer dependency, or a compatible compiler implementation must be passed via options.
What is actually happening?
I had manually installed apollo and was able to fetch a basic mutation I had built. I closed my development server to work on something and then ran npm run serve and was greeted with the errors shown. I have never encountered this error before and have tried a number of things to solve it, none of which have worked. Any help would be greatly appreciated.
I have tried:
deleting node modules and reinstall,
add vue.config.js as suggested in earlier errors displayed,
installing eslint-plugin-graphql, however I could not implement with eslint,
npm i vue-template-compiler => this actually made things worse and I had to revert to a previous commit,
I am now getting this error:
WARNING Compiled with 8 warnings 20:16:41
warning in ./src/App.vue
"export 'staticRenderFns' was not found in './App.vue?vue&type=template&id=7ba5bd90&'
warning in ./src/views/Home.vue
"export 'staticRenderFns' was not found in './Home.vue?vue&type=template&id=fae5bece&'
warning in ./src/views/items/ItemDetails.vue
"export 'staticRenderFns' was not found in './ItemDetails.vue?vue&type=template&id=bcd52dd4&'
warning in ./src/views/items/Items.vue
"export 'staticRenderFns' was not found in './Items.vue?vue&type=template&id=d9eaf832&'
warning in ./src/views/Login.vue
"export 'staticRenderFns' was not found in './Login.vue?vue&type=template&id=26084dc2&'
warning in ./src/components/NavBar.vue
"export 'staticRenderFns' was not found in './NavBar.vue?vue&type=template&id=4295d220&'
warning in ./src/views/NotFound.vue
"export 'staticRenderFns' was not found in './NotFound.vue?vue&type=template&id=46a88b29&'
warning in ./src/views/Signup.vue
"export 'staticRenderFns' was not found in './Signup.vue?vue&type=template&id=024d905c&'
App running at:
Local: http://localhost:8080/
Network: http://<IP_ADDRESS>:8080/
Note that the development build is not optimized.
To create a production build, run npm run build.
Fixed. I had installed vue-cli-plugin-apollo. Removing this from the package.json, deleting node modules and running npm i fixed the issue.
|
2025-04-01T04:35:54.260842
| 2017-07-29T14:39:35
|
246525172
|
{
"authors": [
"AdrianDuan",
"caugner",
"femaimi9527",
"posva"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11967",
"repo": "vuejs/vue-devtools",
"url": "https://github.com/vuejs/vue-devtools/issues/381"
}
|
gharchive/issue
|
Icon missing in Firefox Developer Edition
My environment
Browser: Firefox Developer Edition 55.0b13 (64 bit)
Vue.js Devtools: 3.1.6
Duplicate of #366
Thanks for signaling it @caugner 🙂
Devtools inspection is not available because it's in production mode or explicitly disabled by the author.
|
2025-04-01T04:35:54.290264
| 2016-03-09T20:41:21
|
139695070
|
{
"authors": [
"andreliem",
"ayyobro",
"obonyojimmy"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11968",
"repo": "vuejs/vue-resource",
"url": "https://github.com/vuejs/vue-resource/issues/234"
}
|
gharchive/issue
|
Using Browserify + Laravel 5, vue-resource never gets included
I'm using Browserify with Laravel 5.2.
I included the vue-resource package into my package.json file as "vue-resource": "^0.7.0".
I included the package into my main Vuejs file like so:
var Vue = require('vue');
Vue.use(require('vue-resource'));
But I still get the error:
Uncaught TypeError: Cannot read property 'get' of undefined at this chunk of code:
methods: {
fetchUsers: function() {
this.$http.get('/my/test/uri', function(accounts) {
this.$set('accounts', accounts);
});
}
}
I have done npm install, npm install vue-resource, and npm install vue-resource --save, but none of these have remedied the issue. I have been making sure that I run gulp to make the changes visible.
Are there any steps I'm missing?
Did you ever resolve this problem? I'm having the same problem and based my setup off the vue-cli browserify installer.
So I'm not sure if there were any regressions with new releases, but I looked at issues from a year ago and there was a fix where you need to make sure Vue is attached to the window.
See this: https://github.com/vuejs/vue-resource/issues/2
Following the recommendation, I have the following setup which now works:
var Vue = require('vue');
window.Vue = Vue
Vue.use(require('vue-resource'))
I don't think we should need to use window, but for now I'll stick with this until I can figure out what's going on.
Also, it doesn't seem to work if you use babel/es6 with imports.
if you are sending post put patch request you must set the HTTP
|
2025-04-01T04:35:54.296567
| 2018-12-04T04:01:34
|
387110138
|
{
"authors": [
"leoyli",
"medmin",
"posva"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11969",
"repo": "vuejs/vue-router",
"url": "https://github.com/vuejs/vue-router/issues/2516"
}
|
gharchive/issue
|
beforeRouteEnter next callback cannot return next(false)
Version
3.0.2
Reproduction link
https://github.com/whizjs/netlify-identity-demo-vue/blob/master/src/components/Protected.vue#L21
Steps to reproduce
https://github.com/whizjs/netlify-identity-demo-vue/blob/master/src/components/Protected.vue#L21
Change the code on Line 21 to "return next(false);" or "return false;" and the router will proceed even if the user is not logged in.
It's very easy to reproduce this bug.
What is expected?
I need the unauthenticated guest to stay in the current page, not redirect to home or any other page.
What is actually happening?
Change the code on Line 21 to "return next(false);" or "return false;" and the router will proceed even if the user is not logged in.
When reporting a bug, please provide a boiled down runnable repro (in a jsfiddle or codesandox). Closing until a valid repro is provided
Just git clone the repo and change one single line of it and you can reproduce the bug.
What else do you expect me to provide ?
@medmin, did you read the reply... he said to use jsfiddle or codesandbox, which is easier and safer for people inspecting your code. Also, your code is problematic; you should read the doc. next does not accept a callback function in your use case.
|
2025-04-01T04:35:54.298940
| 2016-10-25T07:39:10
|
185040846
|
{
"authors": [
"fnlctrl",
"luoyunjiao"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11970",
"repo": "vuejs/vue-router",
"url": "https://github.com/vuejs/vue-router/issues/829"
}
|
gharchive/issue
|
$route.router cannot go to a path that is the same as $route.path but without the query string?
The current path is "/some/path?back=true"
now using this.$route.router.go('/some/path') throws an error as follows:
Uncaught error during transition
Uncaught TypeError: transition.next is not a function
Hi, thanks for filing this issue. Please follow the Issue Reporting Guidelines and provide a live reproduction on jsfiddle, codepen etc. Thanks!
ok, wait a few minutes O(∩_∩)O @fnlctrl
Closing due to inactivity.
|
2025-04-01T04:35:54.312018
| 2017-08-22T06:03:50
|
251846921
|
{
"authors": [
"blackbetty",
"simplesmiler"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11971",
"repo": "vuejs/vue",
"url": "https://github.com/vuejs/vue/issues/6425"
}
|
gharchive/issue
|
Class binding class name b--black or b--gray causes template compilation error
Version
2.4.2
Reproduction link
https://jsfiddle.net/chrisvfritz/50wL7mdz/
Steps to reproduce
Take any element and add
v-bind:class="{ b--black: bool }"
to it. Seems to happen with any class name that contains two hyphens in a row. This is a problem for people using Tachyons CSS
What is expected?
That the b--black class (border = black) should be applied to the containing element
What is actually happening?
a compilation error
I am using Tachyons CSS for styling and trying to conditionally change a border color.
https://github.com/tachyons-css/tachyons-border-colors
a double hyphen seems to be a legitimate class name to me.
Hi!
Like you would do in Javascript, you need to wrap b--black in quotes, because it's not a valid JS identifier.
Demo: https://jsfiddle.net/50wL7mdz/55320/
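It is the same rule as in plain JS object literals; a minimal illustration:
// valid: the key is quoted
const classes = { 'b--black': true }
// invalid: the key is parsed as the expression b - -black, a syntax error
// const broken = { b--black: true }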
D'oh, thank you!
|
2025-04-01T04:35:54.323296
| 2017-12-14T13:14:30
|
282096574
|
{
"authors": [
"Akryum",
"JounQin",
"exse2",
"misq007",
"printercu",
"ramedju",
"yyx990803"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11972",
"repo": "vuejs/vue",
"url": "https://github.com/vuejs/vue/issues/7240"
}
|
gharchive/issue
|
Make it possible to clear/disable templates cache
What problem does this feature solve?
When using <script type="x-template"> for providing template source, it gets cached (https://github.com/vuejs/vue/blob/cfd73c2386623341fdbb3ac636c4baf84ea89c2c/src/platforms/web/entry-runtime-with-compiler.js#L12). When using the same elements for templates, like #order-template on different pages like /orders/1, /orders/2, both components are rendered with the first compiled template.
Even when using different template ids, their content is cached, which is unnecessary and leads to memory bloat: it caches inner html, which is fast to fetch.
The proposed feature will give the ability to opt out of template caching.
What does the proposed API look like?
Vue.clearAllCaches() or Vue.templateCache = false.
Just wondering, why do /orders/1 and /orders/2 have different templates instead of using logic directives in the same template?
That's not how templates are intended to be used. The same component should always have the same template; you should use Vue's own logic directives to display different content based on data.
Thank you, I'll try this.
What if I have localized templates (templates with the same id will have different translations), and want to give the ability to switch language without a full page reload?
@JounQin some features are available only for premium users, so we want some parts of templates not to be visible even in the page source.
@yyx990803 what if the cached wrapper took an optional cache argument to set the cache object explicitly, and Vue.templateCache were passed in this place? This way the template cache would be accessible from user-space and could be cleared manually.
The template cache is something we do not intend to expose to the user, nor should your app logic rely on it. The template itself is a language to express dynamic information and you should work within the template, not outside of it.
If you don't want to expose your template source, you should use a pre-compile step, e.g. with vue-loader.
@yyx990803 thank you! I've just checked vue-loader, it seems to work only with webpack.
Can you please advise how this should be done: the server renders a vue template into <script id='some-template'> with some translations (e.g., action names, labels, etc.). When the user changes locale, the server renders new content (same markup, different translated phrases) into the same script tag. This way, vue will continue rendering this template in the old locale. The only way I see is to use a locale suffix in all template ids, but this will require having a global js variable to be used in the template: "#some-template-#{window.LOCALE}". This looks dirty to me.
Are you saying when changing locale the page does not reload, but simply fetches new templates via Ajax and updates the <script> nodes?
Maybe you need https://github.com/kazupon/vue-i18n
@yyx990803 yes, we use turbolinks, it replaces body with one fetched via xhr.
@Akryum thank you! If I get it right, it's a lib for client-side translations. We are not able to move translation logic to the frontend for now, it's out of scope of current tasks.
@yyx990803 one more issue we've faced is rendering old templates after deploying updates: the server renders an updated vue template but it's not applied until the user reloads the page.
I want to create more complex widgets/components.
I've described one of them here: https://forum.vuejs.org/t/how-to-create-reusable-component-with-external-plugin-behavior-logic/28101
I did it with dynamic templates and I've also noticed the issue with the template cache.
But if on call I change some prop value of this component it works fine...
Is there gonna be some core problem ?
I still consider this a bug. When doing "new Vue({template:'#template1'.." and then purposefully loading a different page where #template1 has different content, doing "new Vue({template:'#template1'.." should respect that and not still use the old template content. This is especially weird when using Vue as an on-demand tool when extending foreign single page apps.
My workaround for now is using "template: $('#template1').text(),"
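A jQuery-free sketch of the same workaround (the template id is taken from the comment above):
new Vue({
  // reading the markup fresh each time and passing the string itself
  // bypasses Vue's id-based template cache
  template: document.getElementById('template1').textContent,
})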
|
2025-04-01T04:35:54.328648
| 2017-10-02T18:12:44
|
262176151
|
{
"authors": [
"chrisvfritz",
"jedahan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11973",
"repo": "vuejs/vuejs.org",
"url": "https://github.com/vuejs/vuejs.org/issues/1170"
}
|
gharchive/issue
|
Style Guide: provide reasoning for all Priority A: Essential items
Awesome work with the style guide!
I especially love the explanation on the first item:
This prevents conflicts with existing and future HTML elements, since all HTML elements are a single word.
On a meta-level, it seems if you are recommending a style as being Priority A/Essential, an explanation of reasoning/why will go a long way to help people understand the value in doing a thing, and not just blindly doing it.
I agree! 🙂 Which Priority A rule(s) are you not seeing sufficient explanation for?
When using the data property on a component (i.e. anywhere except on new Vue), the value must be a function that returns an object.
There is a 'Detailed explanation' button that I totally missed, but having a single line summary of that detailed explanation above it, like
Making the value of data a function that returns an object allows components to maintain separate states instead of sharing the same state.
I think, revisiting this issue, I didn't realize that some have 'Detailed Explanation' blocks, and some don't. There is a lot of information here, maybe a side-by-side design with the detailed explanations in a separate column to the right would help make both more visible.
maybe a side-by-side design with the detailed explanations in a separate column to the right would help make both more visible
I like the idea of making these more visible, but a lot of users are viewing on screens too small for a side-by-side design like that. Maybe changing the background of the collapsed Detailed Explanation blocks on hover would help. What do you think?
That would help.
Maybe having the text of the button for the detailed explanation be the first line of the contents, with an ellipsis if it's too long?
I don't think that'd be possible if we stay with native <details> elements, but I'll play around with it. 🙂
|
2025-04-01T04:35:54.386223
| 2021-07-10T04:14:33
|
941168335
|
{
"authors": [
"Jelledb",
"JohannesRudolph",
"KnorpelSenf",
"Mister-Hope",
"Zhengqbbb",
"hustcer",
"jrcharles",
"liziwl",
"maodou38",
"meteorlxy",
"ramesh-dada",
"taozuhong"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11974",
"repo": "vuepress/vuepress-next",
"url": "https://github.com/vuepress/vuepress-next/pull/277"
}
|
gharchive/pull-request
|
feat(plugin-sitemap): add sitemap plugin (close #337)
Close #337
Additional: CI failed because of coverage, but since this plugin is deeply coupled with the vuepress plugin api, it's hard to add tests.
Got it. I will update in the coming days!
@meteorlxy @Mister-Hope Thank you so much for doing great work. Is there any estimated release date for when this plugin will be released?
I am finishing all the changes, except I kept the zh jsdocs in SitemapOptions.
IMO, as we are providing both chinese and english docs, it's reasonable to keep both languages in the final options provided to users to make sure they get full hints.
Remove them, or we need to add translations to all other comments.
It's good to know that we will support sitemap soon. And I also appreciate @Mister-Hope's sitemap plugin.
I will use your plugin until the official release version is online.
@Mister-Hope Before you merge this can you please take a look at this also https://github.com/vuepress/vuepress-next/issues/353
I would prefer not to support either of the 2 FRs in #353. For my personal reasons, see https://github.com/vuepress/vuepress-next/issues/353#issuecomment-898859697
If you have different ideas and think any of them should be supported, just leave a message.
Why did the pull request bog down?
Any progress on the feature of sitemap?
What are the chances that this lands in the 2.0 release?
Any update here?
I will finish it once my winter vacation begins; I'm just busy being a postgraduate student studying quantum physics.
@meteorlxy Should be ready
Some explanation:
A lot of plugin options have been renamed
modifyTimeGetter is better with a Page arg, see plugin docs example
A short description of sitemap is added.
@Mister-Hope Hi
Just used it today. It feels as good as I imagined, but at the same time there are problems and bugs.
The priority configuration item is in the document (docs/zh/reference/plugin/sitemap.md:85), but I found that there is no such field in the SitemapOptions type, which means the configuration will not take effect. It would be a very useful option for normal pages.
About robots.txt generation options, see: https://developers.google.com/search/docs/advanced/robots/robots_txt
The excludeUrls default option does not work. If I don't add the option manually, the 404 page is not excluded.
The excludeFrontmatter option does not work unless I configure the excludeUrls option.
Hi, thanks for the feedback. I will have a look later at 1, 3 and 4, but I am not catching what you mean about 2. The whole output folder is deployed directly, so if a user sets a base, there is no way for me to generate a robots.txt and place it outside the dest folder; a robots.txt there should be ignored by Google, and I am not sure every search engine will read a subfolder robots.txt. Besides, a sitemap plugin should not try to set allow and disallow for developers. In my case, I keep a valid robots.txt in my project's public folder, and I only want the plugin to append the sitemap link to it. If you have a better idea, please point it out, since I have no idea how to improve it right now.
@Mister-Hope
OK, I just think that if robots.txt is generated automatically, there should be a better place to configure it.
Maybe someone needs to stop search spiders from crawling their website to keep it from being indexed, or wants to target only the Google spider, so we could provide an option to set the user-agent.
If you want to uniformly set which pages are recommended for inclusion and which are not, you need Allow and Disallow.
I will change the logic: only when base is / and the user has a robots.txt in the public folder will the plugin try to add the sitemap URL to it. This should be better.
And for 1, you should set priority in frontmatter.sitemap.priority; it's injected via the spread operator, I think.
const sitemapInfo: SitemapPageInfo = {
changefreq,
links,
...(lastmodifyTime ? { lastmod: lastmodifyTime } : {}),
...frontmatterOptions,
}
Is this option problematic to you?
The missing line in https://github.com/vuepress/vuepress-next/pull/277/commits/f043211d2cea84c83396cf1f91c1123fa4b7d22a should solve both 3 and 4. @Zhengqbbb Thanks for pointing out the bug.
I will change the logic: only when base is / and the user has a robots.txt in the public folder will the plugin try to add the sitemap URL to it. This should be better.
I also think this is the right design; after all, this is a sitemap plugin.
But robots.txt is spider-friendly for sites that don't submit sitemaps. It would be better for the documentation to mention that robots.txt needs to be added to the public folder.
And for 1, you should set priority in frontmatter.sitemap.priority and it's injected in the speard operator I think.
Is this option problematic to you?
But you can see that it is not mentioned in the English document, only in the Chinese one.
I think the significance of this option is that, for md files that do not declare it in frontmatter, it provides a unified source of priority. After all, not everyone is willing to think about the priority of every page.
The missing option has been added to the docs.
A description has been added.
@meteorlxy Could you have another check?
Looking forward to the sitemap plugin!
I can confirm the sitemap plugin is working fine, pulled the code from the PR in as a "vendored" dependency and it works great for our site.
One piece of feedback: I'd like to have the option to set the changefreq option to null/undefined so that it's not part of the sitemap. Since we already have lastmod, this is going to be fine for most sites and sends fewer confusing signals to search engine crawlers.
It's been a really long wait.
You can use vuepress-plugin-sitemap2
Any updates?
Also, I found this site helpful:
https://developers.google.com/search/docs/advanced/crawling/localized-versions#sitemap
Hey, you can use vuepress-plugin-sitemap2
Demo:
https://github.com/Zhengqbbb/vuepress-plugin/blob/b5383fa7d8a548d8306f5ab49e60a4a567281874/docs/.vuepress/config.ts#L42-L45
That plugin is published by me and does the same as this PR.
Could this be merged and a stable version released sooner? Otherwise, now that VitePress is out, its room for value will be severely squeezed and all this effort will have been wasted.
@taozuhong You can take a look at https://github.com/vuejs/vitepress/discussions/548
VitePress might be VuePress 3 as a decision of Vue.js Team. So VuePress 2 has already been in an awkward position. That's why the stable version delayed a lot.
I am updating vuepress-plugin-sitemap2 with every vp2 version, you are free to use that.
Nice to see it. I hope the plugin API becomes more stable than before, so the plugin ecosystem can unleash more power.
Closing, this will be maintained separately
|
2025-04-01T04:35:54.390273
| 2022-02-22T17:40:09
|
1224408273
|
{
"authors": [
"Baroshem",
"OlegKunitsyn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11975",
"repo": "vuestorefront-community/vendure",
"url": "https://github.com/vuestorefront-community/vendure/issues/205"
}
|
gharchive/issue
|
HTTP 500 calling /api/vendure/resetPassword
Reproduction
Create a password-reset token, then try
const {setNew} = useForgotPassword();
await setNew({
tokenValue: 'mytoken',
newPassword: 'mypassword',
});
HTTP request
[{"tokenValue":"mytoken","newPassword":"mypassword"},null]
HTTP response
{"graphQLErrors":[],"networkError":{"name":"ServerError","response":{"size":0,"timeout":0},"statusCode":400,"result":{"errors":[{"message":"Variable \"$token\" of required type \"String!\" was not provided.","locations":[{"line":1,"column":32}]},{"message":"Variable \"$password\" of required type \"String!\" was not provided.","locations":[{"line":1,"column":49}]}]}},"message":"Network error: Response not successful: Received status code 400"}
See https://github.com/vuestorefront/template-magento/blob/main/pages/ResetPassword.vue
@OlegKunitsyn
Thanks for reporting that.
There seems to be an issue with parameter names. The mutation expects a variable named token, while it receives a variable named tokenValue. Would you like to create a pull request with a fix? It should be a rather simple single-line change :)
|
2025-04-01T04:35:54.442323
| 2024-11-14T10:24:41
|
2658340815
|
{
"authors": [
"light-matters",
"vukics"
],
"license": "BSL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11976",
"repo": "vukics/Wigner_Time",
"url": "https://github.com/vukics/Wigner_Time/issues/15"
}
|
gharchive/issue
|
ANALOG_SUFFIXES ?
This variable doesn't seem necessary and probably shouldn't be hardcoded into the timeline module in any case?
If you have a timeline then you can extract the suffixes from the variable names, so that they're always up to date.
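For illustration, a minimal sketch of such an extraction, assuming the timeline is a pandas DataFrame with a 'variable' column and that variable names follow a name__suffix convention (both are assumptions, not the actual Wigner_Time API):
import pandas as pd

def extract_suffixes(timeline: pd.DataFrame) -> set:
    # Collect the distinct suffixes appearing after the last "__" separator
    # in the timeline's variable names (hypothetical naming convention).
    names = timeline["variable"].unique()
    return {n.rsplit("__", 1)[1] for n in names if "__" in n}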
The variable is used in display_new. Plus it could be used for sanitization? (Which is the reason why I defined it in timeline.py)
Not sure what you mean. If these suffixes are extractable from the timeline (and display uses the timeline), then why are they needed here?
More importantly, this seems like a very specific use case. Other users would have more or fewer suffixes, and changing a constant in the module doesn't seem like the right approach. This should be at the user's layer of abstraction. In any case, if you're thinking about ADwin then it should be in that module.
|
2025-04-01T04:35:54.451711
| 2023-07-01T21:07:35
|
1784240926
|
{
"authors": [
"D3vil0p3r",
"phith0n"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11977",
"repo": "vulhub/vulhub",
"url": "https://github.com/vulhub/vulhub/pull/446"
}
|
gharchive/pull-request
|
Create environments.toml
Currently I don't think it is needed to add a description since all the needed info are in the README.md.
@phith0n the structure of each item appears like the following:
[[container]]
app = "ActiveMQ"
path = "activemq/CVE-2015-5254"
[[container.name]]
en = "Apache ActiveMQ Deserialization"
zh = "Apache ActiveMQ 反序列化漏洞"
[[container.cve]]
id = "CVE-2015-5254"
nvd = "https://nvd.nist.gov/vuln/detail/CVE-2015-5254"
In case one lab has more than one CVE, it should appear as:
[[container]]
app = "Struts2"
path = "struts2/s2-015"
[[container.name]]
en = "S2-015 Remote Code Execution"
zh = "S2-015 远程代码执行漏洞"
[[container.cve]]
id = "CVE-2013-2134"
nvd = "https://nvd.nist.gov/vuln/detail/CVE-2013-2134"
[[container.cve]]
id = "CVE-2013-2135"
nvd = "https://nvd.nist.gov/vuln/detail/CVE-2013-2135"
So we have a table array named container and subtables for each property; for example, the container.name subtable contains the en and zh languages. Then container.cve contains some info about the CVE (currently the id and NVD link). For those containers with no CVE, I just set it as an empty string "" in order not to break any parsing by a custom script.
The properties outside the subtables should be at the beginning of the main table, otherwise they will be seen as part of the subtables.
Example of Python code:
import toml

# load the manifest and iterate over every container entry
data = toml.load("/home/user/environments.toml")
for cve_lab in data['container']:
    print(cve_lab['name'][0]['en'])  # English title (name is an array of tables)
    print(cve_lab['cve'])
    print(cve_lab['cve'][0]['id'])
    print(cve_lab['path'])
Hi @D3vil0p3r
I think the design is too difficult to understand.
My philosophy is to design a configuration where users can understand what each field does at first sight, without having to read any documentation. For example, the current configuration is very simple:
[{
"name_en": "A simple CVE title",
"name_zh": "A simple CVE title",
"app": "example",
"cve": "CVE-2023-0001",
"path": "path/to/vuln"
},...]
Since one lab can have more than one CVE, we can just use:
[{
"name_en": "A simple CVE title",
"name_zh": "A simple CVE title",
"app": "example",
"cve": ["CVE-2023-0001", "CVE-2023-0002"],
"path": "path/to/vuln"
},...]
The NVD link has no need to exist in the configuration, because you can simply build an NVD link from the CVE id, like 'https://nvd.nist.gov/vuln/detail/' + env[0]['cve']. As the saying goes, "Entities should not be multiplied unnecessarily".
In TOML format:
[[environments]]
name_en = "ActiveMQ Deserialization Vulnerabilities"
name_zh = "ActiveMQ 反序列化漏洞"
app = "ActiveMQ"
cve = ["CVE-2023-0001", "CVE-2023-0002"]
path = "path/to/vuln"
I even think we don't need to put the Chinese title in the file; it's a previous design that I will slowly change.
So it should appear like:
[[environments]]
name = "ActiveMQ Deserialization Vulnerabilities"
app = "ActiveMQ"
cve = ["CVE-2023-0001"]
path = "path/to/vuln"
I will define cve as a list of strings, also for cases with a single CVE, in order to make it easier to script and extract data.
@phith0n fixed. Let me know if it is good.
Yep, we can allow 3 types for the CVE field: string, array of strings, and null.
In my point of view, most environments have only one CVE id, so a plain string can be the default:
[[environment]]
name = "Apache ActiveMQ Deserialization"
cve = "CVE-2015-5254"
app = "ActiveMQ"
path = "activemq/CVE-2015-5254"
LGTM, thanks for your contribution.
LGTM, thanks for your contribution.
You are welcome. And thank you for your project. I integrated it on Athena Cyber Hub Athena OS tool.
|
2025-04-01T04:35:54.485118
| 2020-03-30T11:06:13
|
590200750
|
{
"authors": [
"aniston",
"jrester",
"vx3r"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11978",
"repo": "vx3r/wg-gen-web",
"url": "https://github.com/vx3r/wg-gen-web/issues/27"
}
|
gharchive/issue
|
Suggestion , CLIENTS UI as Tabular list.
Hello,
I like your idea and project, thanks for the good work.
A suggestion towards the CLIENTS UI: I think it would be more useful to have the CLIENTS tab as a tabular list rather than each client being shown in card form with the QR code alongside.
This is because the CLIENT and SERVER tabs would normally be administered by an administrator, not by the actual client user. This way screen real estate can be better utilized; also, since the administrator is not the actual user, there is no direct need for the QR code upfront on screen, and a link should suffice (or a mouseover event).
Just a thought; sadly I'm not a programmer, so I can't help out.
as said before good work ;)
Best Regards,
Aniston
Hi @aniston
Thank you for using my app.
I personally like cards but I can add a client tab as well, you will be able to switch the view.
Let me a couple of days to add this (:
wow, thanks for the quick response.
Will test and give feedback when you have implemented something.
I also prefer the card view, but for a large number of clients the view gets quite confusing. A list would improve this a lot and make it much easier to take a look at all clients.
But if there is support for multiple users maybe a hybrid approach would work better. So that every user has a card view of the different devices that belong to him and an administrator view which is a list of all the users and their devices.
What do you think?
@jrester, from my point of view the administrator looking at a table of clients is fully justified. The card view looks neat, but drilling down and finding IPs, names, emails etc. (for whatever reason) is what an administrator would often do, apart from just adding, removing or configuring clients.
I had mentioned that the client's QR code would not be of much use to an admin, as it is not something he would use to actually administer the users, but that is my perception of the use case. I must say that a QR code alongside a name is undoubtedly shiny and gives the panel a modern-day approach.
Lastly, a highlighted line (if possible in the UI) would make drilling down through many users' details much easier.
Sadly, as said in my first post, I am zero with Go and Gin and I still do not know even the skeletal approach to the whole environment used in the Docker container, but that is my deficiency; I'm willing to learn with some useful pointers ;)
I agree that an admin view with a table of clients is better, but I'm not really working on a multi-user implementation right now. I still don't know what we need for the multi-user part.
Authelia forward user and group in http headers
Wg Gen Web for admin
server configs
table of clients
Wg Gen Web for user
card view of user devices
And last part, add and edit allowed only on admin side.
Does this make sense?
Also, do you think that Authelia usage will impact users? My initial goal was to make it really easy and not force people to use anything (iptables, nftables or whatever) for the config generation.
Maybe I need to leave the choice of running with or without Authelia (multi-user or not).
Thank you all for your feedback
@aniston I totally agree with you!
I added development instructions to the README. Hope that helps you to get started!
@vx3r I also don't like the idea of being forced to use an extra service like Authelia, especially because not everybody needs multiple users and access rights management. Do you think it might be better to include the authentication part directly in the app? Through something like casbin or some simple self-made implementation. I have never worked with it but it looks promising. What do you think?
But maybe a first step would be the logical separation of clients into devices which can be associated with different users, because often you have multiple clients for different devices that all belong to one user. In a later step, access rights management and multi-user support could be implemented.
@jrester Thanks for the starter tips in the README.md. I got the backend and frontend running, but this is outside the docker container on a Windows machine. Is there any way to modify the running docker container as a development machine, so that I can save it back as a new docker image? Not sure if that makes sense, but for now at least I have something rolling to play with :+1:
@vx3r, in the README.md there is a small mix-up in the download links under "Directly without docker", i.e. Backend downloads the frontend files and vice versa :) Sending a PR to correct it; please check if that is correct.
As for the discussion on Authelia, I have not used it as yet, what I understand from linking Authelia with wg-gen-web depends on the many use cases, which can differ completely, for eg:
in a multi-app scenario where the WG client user probably needs SSO (single sign-on) to go from one app to another, this can be useful, but I still do not see how that could benefit an administrator. I don't know if you mean to let the client directly log into the web interface?
in a pure VPN link structure (my case) the client has nothing to do with SSO or Authelia; the client is happy to just get to the other side of the tunnel (WG in this case). There is no more use for the auth token or SSO object once the VPN is up, as there are no apps involved within or beyond the scope of the tunnel, just networking resources. In my case I'm trying to use wg-gen-web to add, edit, activate, and keep track of client configurations and IPs, basically administration, but in no way does the client get to see the web interface; when required, he gets the config via e-mail or messenger services.
I see a good use as an administration tool. Here I wonder whether an admin would let clients/users generate their own configs, which would eventually make a mess of the network topology! Admins mostly prefer to send the client configuration via e-mail. I do not think the goal should be multi-client, but multi-admin would definitely help; then again, logging would be needed to see which admin made what changes and where, and I feel this is not required at the moment, as it just complicates the usefulness currently.
All said it's good to know scenarios from other use cases to get a broader picture.
@aniston So you are running the development server inside the docker container? It is possible to commit the container and save it to an image, but normally you want to build the image using the Dockerfile, because later you want to publish the changes as source code, not as a container.
Have you tried using WSL? That way you can modify the code in VSCode and run the dev server inside WSL without the use of docker.
I saw the use of multi-user in a scenario where you have multiple users, each of them having their own devices, and you as the administrator don't want to cope with the management of all the different devices each user has. So I would just give a user the permission to create and delete clients so they can handle everything themselves. Not sure if that makes sense or if I put too much trust in my clients :)
But I still believe the logical division of clients into users and devices would be beneficial. @vx3r, is it fine with you if I take a shot at this? Without all the clients-accessing-the-web-interface stuff, just for the admin.
Sorry for the late reply, I was quite busy.
I don't really want to implement user management in the application. I thought Authelia could be a quick win; it also has two-factor authentication, which I personally use a lot.
Regarding the separation between users and devices I agree that we need to implement that first.
I was actually thinking of letting clients generate N devices, but you are right, it can be a mess with the IP addresses (I use 3 ranges to go out in different locations).
We must decide if a user can have access to the web portal (to just see devices infos) or only admins will have access to manage everything.
@vx3r
I'd vote for only admins having access to the web portal to manage things and sort out customer configs. As an example, we use many VPN clients with our customers and we (as the client side connecting to their servers) are never allowed into their VPN management systems. If the VPN fails or has problems, the administrator or customer manager is our e-mail or telephone contact point and the issue gets resolved. This goes for all our customers' VPN servers, from Citrix, Sophos, Cisco, OpenVPN and IPsec to some other players too.
Currently I'm implementing our own WG VPN for a small customer and find it useful not to have the customer see the web portal directly, only sending his client config via e-mail. This is good and safer without the added bloat of client login infrastructure and security, and it avoids more logins and more password management parallel to the actual VPN usage.
@jrester
Yes, that is my case, and I see what you mean about publishing the changes as source code; I had a mix-up of ideas there.
No, I've never used WSL, but at the moment I still have to get my head around the Go language and the Gin framework, which will take me a long time. Thanks for the tips though, at least I was able to get started :)
I think I will cut the dev threads short here, as they are going off topic from the original issue, in fairness to anyone trying to follow it.
OK, I will implement OAuth2 and Authelia. I personally use Authelia, so I need it.
Only admin will be able to connect to the interface and create clients.
OAuth2 OIDC is implemented; for any issue, please reopen.
|
2025-04-01T04:35:54.517152
| 2017-11-18T00:23:08
|
275034380
|
{
"authors": [
"deltaskelta",
"notjrbauer",
"w0rp"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11979",
"repo": "w0rp/ale",
"url": "https://github.com/w0rp/ale/issues/1145"
}
|
gharchive/issue
|
gometalinter - Lint only opened files
Is it possible to revert https://github.com/w0rp/ale/commit/e721f851b41b8f6f31067ae2a137019e1cb5546c and reintroduce this with a config option?
I ask because this changes the previously expected behavior drastically. Previously, we linted open files/buffers. Now, we lint the entire directory.
I also believe the path provided to the quickfix window from the linting error is incorrect: /foo/bar/file.go turns into /foo/bar/foo/bar/file.go. To reproduce, try navigating to the file failing the lint from quickfix (using gometalinter).
@deltaskelta Do you recall me asking if we could add an option for this? Could we add an option for this?
@notjrbauer Feel free to create a pull request if you know how to handle this.
I'll try and make an option for this. Should the default be to lint only the file or let gometalinter lint ./?
@notjrbauer In the past, it was linting everything and then matching a regex to only show the errors from the current file, so it wasn't set to lint open files, but only the current open file.
About the paths: they only show the full path to the file when the working directory has been changed to one different from the directory the file is in. I have never seen any errors in the path myself, so I don't know how to reproduce the bug.
I reverted the changes. You can come back later with something better.
|
2025-04-01T04:35:54.520033
| 2019-07-06T20:50:29
|
464889527
|
{
"authors": [
"jordanmendler",
"w0rp"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11980",
"repo": "w0rp/ale",
"url": "https://github.com/w0rp/ale/issues/2633"
}
|
gharchive/issue
|
Enable all linters/fixers through an option?
I just started using Ale and it's great. Definitely open to contributing if I find issues or make improvements locally. Something I am wondering is that the default group of linters used is minimal relative to what is supported. Tried to search and can't find an answer. Is there a reason not to just enable all linters and/or fixers for all languages? I often switch between languages and I would think any linter/fixer good enough for someone to install on their box would also be good enough that they would want to use it with ale. I can obviously parse the source code to find all linters for all languages and then create my own config tying each to the appropriate language, but it feels like I am missing something here and reinventing the wheel. Is there an option to "enable all" or some reason why that is a bad idea? What about something like a broader default set to include the best of breed for more languages?
Another question: has anyone put together a script to install all available linters/fixers for a given language, or for all languages? Again, it seems like duplicated effort that everyone is doing individually if they develop in many different languages and want something broader than just their 1 or 2 go-to linters, e.g. for each specified language, install all linters using the local tooling. It's outside the core scope of ALE, but it is definitely relevant as an optional script.
ALE enables all linters by default, except those that can either potentially damage files on your system, or consume too much CPU time. Text linters are also disabled by default, as they can be very annoying. See :help g:ale_linters for the exceptions that are applied.
ALE is not responsible for installing external programs. I think a good idea would be to build a complementary plugin which makes it easy to install external programs that ALE can use.
|
2025-04-01T04:35:54.843965
| 2015-11-10T18:07:51
|
116167483
|
{
"authors": [
"g-ortuno",
"jgraham"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11981",
"repo": "w3c/testharness.js",
"url": "https://github.com/w3c/testharness.js/pull/163"
}
|
gharchive/pull-request
|
Function to add a cleanup function to every test in the file.
In Web Bluetooth tests we would like to run the same clean up function after each test. Using test.add_cleanup() in each test is very tedious and error prone. The function added in this PR allows developers to easily add the same cleanup function to all their tests.
@scheib @jyasskin @inexorabletash
How much better is this than using add_result_callback to do the same thing?
argh. I hadn't realized add_result_callback could be used to achieve the same result. Closing this.
|
2025-04-01T04:35:55.172441
| 2020-08-06T13:29:57
|
674315680
|
{
"authors": [
"hober",
"jwrosewell",
"lknik"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11982",
"repo": "w3ctag/security-questionnaire",
"url": "https://github.com/w3ctag/security-questionnaire/issues/92"
}
|
gharchive/issue
|
Incompatible with guidance on other concerns
The W3C guidance in relation to issues of markets and people's choices states “W3C does not … in any way restrict competition. W3C's policy is that its activities are conducted to the highest ethical standards and in compliance with all applicable antitrust and competition laws and regulations.”
The document groups entities into first and third party based on a singular view of a person's willingness or ability to trust or understand entities that operate within each of those groups. The document does not recognise that entities within those different groups may operate within the same market and therefore will compete with one another. If a technical standard, or a particular implementation that progresses as a technical standard after it has been widely deployed, is assessed against this document, then it is possible that the technical standard will restrict competition. Given the lack of browser diversity, but the wide diversity of web stakeholders and entities, it is highly likely such an outcome will occur, and has occurred, in practice.
The mitigations proposed may only be possible for some players within a market and not others. Gaining people’s consent for something is far easier when it is combined with the acceptance of terms associated with an essential service like the setup of an operating system, or use of a mapping product. Large vertically integrated companies that also operate web browsers will find implementing such mitigations easy in practice. However smaller players that are not vertically integrated will find such mitigations impossible.
Other mitigations may be more or less practical based on financial strength, available engineering skills, available engineers, and legacy solutions, among other factors.
This is one example of documents produced within the W3C that incorporate a specific and narrow view of a single issue without considering all the issues that all 4,000,000,000 stakeholders in the web care about. Other examples include Mitigating Browser Fingerprinting in Web Specifications and the Target Privacy Threat Model.
To resolve this conflict the W3C could adopt a single policy covering all issues. All W3C documents would then have policy positions removed and would simply reference the single unified W3C policy document. This remedy would not only deal with the issues raised in relation to this document, but also improve horizontal review as all matters of policy will be crystal clear and defined once. They would not be open to interpretation by individual participants.
As external stakeholders such as Partnership for Responsible Addressable Media (PRAM), the UK Competition and Market Authority (CMA) and European Commission among others, take a more active interest in the work of the W3C such an approach will also support better engagement with these stakeholders.
To resolve this conflict the W3C could adopt a single policy covering all issues. All W3C documents would then have policy positions removed and would simply reference the single unified W3C policy document.
I don't think that the security and privacy questionnaire is an appropriate target for a petition aimed at the W3C as an organisation. Perhaps try the AB representatives instead?
I agree with @lknik; this request appears out of scope for this document. Were W3C to change its policies in a way that affected this questionnaire, we would of course update the questionnaire to reflect such a change.
I will progress the issue via the AB as advised by @lknik.
|
2025-04-01T04:35:55.186798
| 2022-12-09T09:56:30
|
1486453964
|
{
"authors": [
"CLAassistant",
"alxs"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11983",
"repo": "w3f/Grants-Program",
"url": "https://github.com/w3f/Grants-Program/pull/1345"
}
|
gharchive/pull-request
|
Terminate Tribal Protocol
https://github.com/w3f/Grants-Program/pull/979#issuecomment-1326700327
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T04:35:55.216799
| 2023-08-12T04:14:16
|
1847729939
|
{
"authors": [
"pySilver",
"zerolab"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11985",
"repo": "wagtail/wagtail",
"url": "https://github.com/wagtail/wagtail/issues/10784"
}
|
gharchive/issue
|
Ability to retire deprecated image formats to avoid unrecoverable errors
Is your proposal related to a problem?
It is really hard to retire an image format that was used in a number of pages, even with a data migration. Once the format is removed but there are places where it was used, we cannot safely resolve the issue:
Attempting to edit content where such an unavailable format is used fails with a KeyError at wagtail.images.formats.get_image_format
A web page that uses an unavailable format produces a server error with the same KeyError at wagtail.images.formats.get_image_format
Describe the solution you'd like
get_image_format should be improved with a fallback format, such as fullwidth or a user-defined one (settings? hook?);
A warning can be produced so that at least image.url is captured in the logs if one wants to investigate.
Describe alternatives you've considered
Actually, there are none.
I'm ready to provide a PR on this, but I need some input on the fallback. Should it be fullwidth or maybe the first available format? Should we produce a warning in the logs?
I'm not sure we should introduce any new settings or hooks for this, as it is an edge case. Maybe we can change wagtail.images.formats.Format and add is_default there, so when falling back we use the first is_default format?
@pySilver marking as needing design decision. A fallback such as fullwidth makes sense. At the same time it would be good to log these so the team can go and fix things.
@zerolab Thanks. Yes, I personally think there should be a way to have fullwidth disabled while still being able to fall back to some format. That's why is_default sounds like a good idea.
Considering we have unregister_image_format, we most certainly want a mechanism that will just work. On one hand, having a setting gives developers full control (but it is yet another setting 🙈); on the other, a persistent Format instance with a reasonable name (like wagtail-fallback-format) may be better.
@zerolab What if we simply extend register_image_format so it takes an instance of a Format and an optional boolean fallback that designates whether the format should be used as a fallback? Then, in wagtail/images/formats.py, we can add another global variable FALLBACK_FORMAT: str that is modified by register_image_format/unregister_image_format.
This way we can attempt to use the fallback in get_image_format and produce a recoverable, nicely explained error in case no fallback format is registered.
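A minimal sketch of what that fallback in get_image_format could look like (FALLBACK_FORMAT and the formats registry dict are the hypothetical names from this discussion, not existing Wagtail API):
import logging

logger = logging.getLogger(__name__)

def get_image_format(name):
    try:
        return formats[name]
    except KeyError:
        if FALLBACK_FORMAT is None:
            # recoverable, nicely explained error when no fallback is registered
            raise KeyError(
                f"Image format '{name}' is not registered and no fallback "
                f"format has been registered via register_image_format()"
            )
        # warn so the offending image format name can be captured in logs
        logger.warning("Unknown image format %r, falling back to %r", name, FALLBACK_FORMAT)
        return formats[FALLBACK_FORMAT]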
@zerolab I've created the PR #10787
|
2025-04-01T04:35:55.223327
| 2023-06-09T08:16:33
|
1749384966
|
{
"authors": [
"MikeMavrok",
"gasman",
"laymonage"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11986",
"repo": "wagtail/wagtail",
"url": "https://github.com/wagtail/wagtail/pull/10525"
}
|
gharchive/pull-request
|
Fix incorrect import of total_ordering
Fixes our tests failing against Django's main branch as of https://github.com/django/django/pull/16958/commits/ee36e101e8f8c0acde4bb148b738ab7034e902a0.
Based on the description of that commit and Python's docs:
Note
While this decorator makes it easy to create well behaved totally ordered types, it does come at the cost of slower execution and more complex stack traces for the derived comparison methods. If performance benchmarking indicates this is a bottleneck for a given application, implementing all six rich comparison methods instead is likely to provide an easy speed boost.
and given that we only have two instances of total_ordering within the codebase, I think it's worth having a good first issue to replace total_ordering with our own comparison functions implementation. Will create one shortly. Edit: it's #10526.
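For reference, "implementing all six rich comparison methods" in place of the decorator looks roughly like this (a generic sketch, not the actual Wagtail classes):
class Version:
    def __init__(self, key):
        self.key = key

    # With functools.total_ordering, only __eq__ and __lt__ would be needed;
    # spelling out all six avoids the decorator's wrapper overhead.
    def __eq__(self, other):
        return self.key == other.key

    def __ne__(self, other):
        return self.key != other.key

    def __lt__(self, other):
        return self.key < other.key

    def __le__(self, other):
        return self.key <= other.key

    def __gt__(self, other):
        return self.key > other.key

    def __ge__(self, other):
        return self.key >= other.key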
Please check the following:
[x] Do the tests still pass?[^1]
[x] Does the code comply with the style guide?
[x] Run make lint from the Wagtail root.
Looks like the issue is bigger than that, as _delegate_text no longer exists on the Promise object. Will take a look later.
@laymonage I want to make my first PR in Wagtail; could I have a chance to start working on this?
Hi @MikeMavrok, not quite sure what you refer to by "this", as this is a PR, not an issue.
The problem with _delegate_text I mentioned above seems to stem from telepath, a separate package used within Wagtail.
I'd assume you meant you'd like to work on #10526? If so, feel free to have a go! If you can create a PR for that issue, I'll close this PR as it will no longer be needed.
Have now released the fixed version of telepath as v0.3.1, so the pinning can be removed.
Thanks @gasman, I took the opportunity to unpin django-taggit as well.
|
2025-04-01T04:35:55.230427
| 2022-06-08T21:57:49
|
1265357572
|
{
"authors": [
"lb-",
"rohitsrma",
"vsalvino"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11987",
"repo": "wagtail/wagtail",
"url": "https://github.com/wagtail/wagtail/pull/8650"
}
|
gharchive/pull-request
|
Add icon attribute to settings models to be shown in header
Currently, when specifying an icon to register a custom setting, the icon is only shown in the menu. It is not shown in the header. The header is hard-coded to show cogs. However Wagtail's own settings do show a custom icon in the header. This makes any user-defined settings feel a bit second-class and inconsistent. https://docs.wagtail.org/en/stable/reference/contrib/settings.html#appearance
This change adds an icon attribute to the settings model, which is used in the header.
I totally understand that this is not a good solution, but hopefully this is a starting point to get this UI inconsistency fixed. I would prefer that @register_setting() automatically wires up the icon in both the menu and the header - but that seems a bit impossible based on the current implementation.
@vsalvino would you be able to add a unit test that covers this?
Sure, was planning to add a test, but assumed this approach would not be usable. If this approach looks good then I'll go ahead and add the test.
I think it makes sense. It's forwards compatible - users won't see anything different until the header icon is also added.
Maybe - it's worth changing the name to header_icon on the settings class. This aligns with what we do for the generic class based views.
I'm going to suggest we close this PR and raise a new issue to reflect the goals of this PR.
It's been a while since this original approach. Thanks @vsalvino, but I think there will be some nicer ways to achieve this now that icons are better supported with the shared breadcrumbs/header usage.
Feel free to close, @lb-. Ideally the icon should be specified once and used throughout the UI. Hopefully this is possible now, since this PR was made several versions ago.
@lb- I would like to work on this. Could you please provide some guidance on how to approach this?
@rohitsrma I have created a new issue out of this PR https://github.com/wagtail/wagtail/issues/11790 with some links to related issues and a few notes on suggested approaches. Feel free to reach out to myself or others (inc. @vsalvino ) on this issue if you feel like giving this a go and have some ideas. I would recommend you just try to 'hack' something together locally that works and then refine to a point that may make a good proposal.
Additionally, be sure to think about testing, and you may even want to submit a PR to the Bakerydemo repo with a few different settings icons for the built-in settings models so it's easier for the team to validate this.
Finally, be sure to read the full thread on https://github.com/wagtail/wagtail/issues/9652 as it relates to icons for models generally.
I will close this PR for now as a new approach is needed with a bit more discussion.
@lb- thanks for the guidance. I'll check out new issue created.
|
2025-04-01T04:35:55.257676
| 2022-03-01T03:45:07
|
1154825508
|
{
"authors": [
"fracpete"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11988",
"repo": "waikato-ufdl/wai-annotations-core",
"url": "https://github.com/waikato-ufdl/wai-annotations-core/issues/3"
}
|
gharchive/issue
|
Creating file splits from batches
Projects quite often collect data in batches. In order to build models that are built with subsets from each of the batches, a tool for generating file lists of train/test or train/test/val splits would be very useful (currently achieved with ADAMS workflows).
Tool:
wai-annotations split-batches
Parameters:
-i/--input DIR [DIR...]
-f/--files GLOB [GLOB...]
--seed SEED (default: 1)
--split-names SPLIT NAME [SPLIT NAME ...]
--split-ratios RATIO [RATIO ...]
--extension EXT (default: .list)
-o/--output DIR
Process:
- initialize random number generator with SEED
- for each input DIR:
- find files that match all the GLOBs
- randomize file list
- split files according to RATIOs
- write splits to files in output directory, using dir name as file name, split name as suffix and specified extension
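In Python terms, the per-directory step might look roughly like this (a sketch of the process above; all names are illustrative):
import random
from pathlib import Path

def split_batch(batch_dir, globs, names, ratios, extension, output_dir, seed=1):
    rng = random.Random(seed)  # initialize random number generator with SEED
    files = sorted({f for g in globs for f in Path(batch_dir).glob(g)})
    rng.shuffle(files)  # randomize file list
    total = sum(ratios)
    start = 0
    for i, (name, ratio) in enumerate(zip(names, ratios)):
        # the last split takes the remainder so no file is lost to rounding
        end = len(files) if i == len(names) - 1 else start + round(len(files) * ratio / total)
        subset = files[start:end]
        start = end
        out = Path(output_dir) / f"{Path(batch_dir).name}-{name}{extension}"
        out.write_text("".join(f"{f}\n" for f in subset))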
implemented (in slightly different form) in ed8894897e917c06ca7c42e3f559de7b28b51373
|
2025-04-01T04:35:55.295281
| 2023-03-23T11:12:49
|
1637316910
|
{
"authors": [
"danisharora099",
"fryorcraken"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11991",
"repo": "waku-org/js-waku",
"url": "https://github.com/waku-org/js-waku/pull/1259"
}
|
gharchive/pull-request
|
feat: use nwaku docker instead of building binaries
Problem
When testing nwaku interop or running tests locally, the nwaku repo needs to be cloned (the first time) and built (along with the Nim compiler), which takes a lot of time.
Solution
Using tagged nwaku images directly within the js-waku context using dockerode, which would allow for:
easily switching between nwaku versions without rebuilding binary
no need to clone nwaku within the repo
significantly reduce time it takes for CI to run (=$)
spawn/run multiple tests async (can leverage in future)
TODO:
[x] fix CI for go-waku (use statusteam/go-waku:latest)
Notes
Resolves https://github.com/waku-org/js-waku/issues/1192
Testing against go-waku is currently done against the latest go-waku image instead of the pinned version that was previously used. This is because the current infra does not automatically publish docker images for tagged releases. cc @richard-ramos
Or, if we want the docker method to replace the binary method, then what does it look like when I want to run the tests with go-waku or nwaku compiled locally? Currently, the only docker images supported are the published ones.
As discussed in a call, an acceptable outcome would be to add a doc that explains how to run a local instance of nwaku or go-waku with the js-waku CI. I imagine something like:
cd nwaku; make wakunode2
docker build .
set some env to point to locally built docker and use it to run js-waku tests.
|
2025-04-01T04:35:55.296588
| 2023-09-21T08:25:46
|
1906404963
|
{
"authors": [
"fbarbu15"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11992",
"repo": "waku-org/js-waku",
"url": "https://github.com/waku-org/js-waku/pull/1591"
}
|
gharchive/pull-request
|
chore: bump nwaku version to 0.20
Problem
Bump nwaku version to 0.20 in the js-waku CI
sorry, misread! LGTM :)
Thanks, please merge :)
|
2025-04-01T04:35:55.346042
| 2024-01-21T18:39:26
|
2092713006
|
{
"authors": [
"Darkhydra8788",
"mrmarkbrown",
"wallofroy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11993",
"repo": "wandaweb/Fooocus-Sagemaker-Studio-Lab",
"url": "https://github.com/wandaweb/Fooocus-Sagemaker-Studio-Lab/issues/7"
}
|
gharchive/issue
|
ImportError: libgthread-2.0.so.0: cannot open shared object file: No such file or directory
The notebook was working fine previously; I didn't make any changes to it, yet I am still getting a libgthread import error.
I am providing the full error output.
(studiolab) studio-lab-user@default:~/Fooocus-Sagemaker-Studio-Lab$ sh start.sh
Cloning into 'Fooocus'...
remote: Enumerating objects: 5229, done.
remote: Counting objects: 100% (310/310), done.
remote: Compressing objects: 100% (299/299), done.
remote: Total 5229 (delta 13), reused 295 (delta 7), pack-reused 4919
Receiving objects: 100% (5229/5229), 32.24 MiB | 51.91 MiB/s, done.
Resolving deltas: 100% (3012/3012), done.
Already up to date.
JSON file created: config.txt
Port 7865 is free.
Enter the token: 2bHAgoL3IVYorJS09GLelw6rAy9_2B8LwRf4gGgbp2haTMvFV
Enter the domain: loudly-eminent-cardinal.ngrok-free.app
Token: 2bHAgoL3IVYorJS09GLelw6rAy9_2B8LwRf4gGgbp2haTMvFV
Domain: loudly-eminent-cardinal.ngrok-free.app
https://loudly-eminent-cardinal.ngrok-free.app
Press Ctrl+C to exit
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus/entry_with_update.py', '--always-high-vram']
Traceback (most recent call last):
File "/home/studio-lab-user/Fooocus-Sagemaker-Studio-Lab/Fooocus/entry_with_update.py", line 46, in <module>
from launch import *
File "/home/studio-lab-user/Fooocus-Sagemaker-Studio-Lab/Fooocus/launch.py", line 24, in <module>
from modules.config import path_checkpoints, path_loras, path_vae_approx, path_fooocus_expansion,
File "/home/studio-lab-user/Fooocus-Sagemaker-Studio-Lab/Fooocus/modules/config.py", line 7, in <module>
import modules.sdxl_styles
File "/home/studio-lab-user/Fooocus-Sagemaker-Studio-Lab/Fooocus/modules/sdxl_styles.py", line 5, in <module>
from modules.util import get_files_from_folder
File "/home/studio-lab-user/Fooocus-Sagemaker-Studio-Lab/Fooocus/modules/util.py", line 6, in <module>
import cv2
File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
bootstrap()
File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
native_module = importlib.import_module("cv2")
File "/home/studio-lab-user/.conda/envs/fooocus/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libgthread-2.0.so.0: cannot open shared object file: No such file or directory
I'm also facing the same issue... Wanda, please help!
Same
|
2025-04-01T04:35:55.347316
| 2024-07-15T18:11:27
|
2409330089
|
{
"authors": [
"jsbroks",
"zacharyblasczyk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11994",
"repo": "wandb/terraform-google-wandb",
"url": "https://github.com/wandb/terraform-google-wandb/pull/146"
}
|
gharchive/pull-request
|
fix!: Index error and missing breaking change
Fixes missing bool check and addresses a breaking change in 3.6.0
This PR is included in version 4.0.0 :tada:
|
2025-04-01T04:35:55.380254
| 2015-03-23T09:49:57
|
63679938
|
{
"authors": [
"YongbaoWang",
"wangruofeng"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11996",
"repo": "wangruofeng/RFSegmentView",
"url": "https://github.com/wangruofeng/RFSegmentView/pull/1"
}
|
gharchive/pull-request
|
Add selected item Action.
It's a common action.
It's indeed useful.
There is a conflict, because this code was released about a year ago, so I changed the original code in my repository.
But thank you for the suggestion all the same!
|
2025-04-01T04:35:55.402604
| 2016-04-22T00:13:19
|
150225017
|
{
"authors": [
"michaelaye",
"pelson",
"warner"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11997",
"repo": "warner/python-versioneer",
"url": "https://github.com/warner/python-versioneer/issues/114"
}
|
gharchive/issue
|
travis errors since using versioneer
I attach the screenshot of Travis' error.
Could it have something to do with the fact that I need to add a test class to the cmdclass dictionary?
I am currently doing it like this:
setup(
    name="planet4",
    version=versioneer.get_version(),
    # this hack just to combine 2 dictionaries, as versioneer automates the
    # cmdclass dict generation.
    # (note: dict.update() returns None, so this actually passes cmdclass=None)
    cmdclass=versioneer.get_cmdclass().update({'test': PyTest}),
Sorry, PEBKAC case.
Glad it worked out for you :-)
BTW, one thing to be aware of with travis and large projects is that they do a shallow clone, and if there have been a lot of commits since the last tag, there won't be enough information for git (or Versioneer) to compute a full version.
I'm thinking of writing a script that repeatedly fetches more and more commits (deepens the shallow clone, maybe 100 commits at a time, doubling each time) until a tag is found, or the whole history has been retrieved. Projects that have this happen often can add a copy to their source trees, and then configure travis to run it as a build step before any builds or tests happen.
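A rough sketch of such a script, assuming git >= 2.11 for the --deepen option (the function name and limits here are made up):
import subprocess

def deepen_until_tag(step=100, max_total=10000):
    # Keep deepening the shallow clone until `git describe` can reach a tag,
    # doubling the number of commits fetched each round.
    fetched = 0
    while fetched < max_total:
        if subprocess.run(["git", "describe", "--tags"],
                          capture_output=True).returncode == 0:
            return True  # a tag is reachable; Versioneer can compute a version
        subprocess.run(["git", "fetch", f"--deepen={step}"], check=True)
        fetched += step
        step *= 2
    return False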
All the more reason to release frequently :wink:
|
2025-04-01T04:35:55.424967
| 2020-06-10T07:01:40
|
635986059
|
{
"authors": [
"bmeurer",
"yurydelendik"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11998",
"repo": "wasdk/wasmparser",
"url": "https://github.com/wasdk/wasmparser/pull/60"
}
|
gharchive/pull-request
|
feat: add support for inline export notation
This adds a new IExportMetadata interface, which provides export
names for functions, globals, memories, and tables, and support for
it in the DevToolsNameGenerator. When such an object is installed
on the WasmDisassembler#exportMetadata property, the disassembly
will not print exports explicitly via (export "name" ...), but
instead put the export names inline with the functions, globals,
memories and tables. This saves space in the disassembly and puts
information about exports close to the relevant places.
Refs: #56
:tada: This PR is included in version 2.2.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T04:35:55.431878
| 2020-12-04T08:45:16
|
756918366
|
{
"authors": [
"saikat107",
"shengqiangzhang",
"wasiahmad"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11999",
"repo": "wasiahmad/NeuralCodeSum",
"url": "https://github.com/wasiahmad/NeuralCodeSum/issues/19"
}
|
gharchive/issue
|
Questions about the python and java datasets.
Hi @wasiahmad ,
The input data of the model in A Transformer-based Approach for Source Code Summarization is a series of tokens, but the input data of my model is an abstract syntax tree (AST), so I need to find the original source code (an executable source code snippet) corresponding to each series of tokens and then parse it into an AST.
I have downloaded the data from their original work, but I found that the size of the dataset used in your paper is different from the size of their original dataset. For example, in the train set of the python dataset, the original size exceeds 100,000, while yours is about 50,000.
I want to compare with your model, so I selected the experiment dataset provided by your paper.
Since the series of tokens cannot be parsed into an AST, I need to find the corresponding original source code from their original work.
Unfortunately, I cannot find the original source code for all of the series of tokens.
If you could provide me with the corresponding original code files (the sizes of your experiment datasets are inconsistent with the original datasets), I believe I can convert them to ASTs and compare the experiment results with yours.
Thank you.
Hi, I understand your need. A few things to note.
The preprocessed python dataset we used is shared by the authors of Bolin et al., 2019 as we were unable to reproduce their results using the dataset we preprocessed.
Note that these datasets are extremely noisy, so you may not be able to use the full data if you use AST-based methods.
We also performed some naive experiments using AST, you can find the details in the paper. We did this only for the Java dataset and you can find the dataset (java_with_sbt.zip) in our provided Google drive link.
The AST extraction from the original Java code is done by our co-author Saikat (https://github.com/saikat107), I have asked him to reply in this thread.
Thanks!
Hi @shengqiangzhang ,
Like @wasiahmad mentioned, we used the same processed dataset as Bolin et al., 2019. However, to the best of my knowledge,
the python dataset is from this paper and can be found here. You can find the description of the raw data here.
I hope that helps. Let me know if you have further questions. Feel free to close the issue if not.
Thanks!
Hi @saikat107 @wasiahmad ,
Thank you for your help, I am trying to transform the input data into AST format
|
2025-04-01T04:35:55.442101
| 2024-07-26T15:39:49
|
2432494672
|
{
"authors": [
"Aditya1404Sal",
"brooksmtownsend",
"vados-cosmonic"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12000",
"repo": "wasmCloud/wasmCloud",
"url": "https://github.com/wasmCloud/wasmCloud/pull/2586"
}
|
gharchive/pull-request
|
feat(host): Timeout component export future based on the provided max_execution_time
Feature or Problem
Related Issues
Closes #2530
Release Information
next
Consumer Impact
Testing
Unit Test(s)
Acceptance or Integration
Manual Verification
Ran a standard wash up with the max-execution-time flag, then initialized a new component, which ran as expected
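The pattern being added, sketched in Python purely as an illustration (the host itself is Rust; the timeout value comes from the provided max_execution_time):

```python
import asyncio

async def handle_export(call, max_execution_time_s):
    # Bound the component's export future; on expiry the invocation is
    # dropped and the host logs an "Elapsed"-style timeout error.
    try:
        return await asyncio.wait_for(call(), timeout=max_execution_time_s)
    except asyncio.TimeoutError as err:
        raise RuntimeError("component handler timed out") from err
```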
This looks nice! I'd love to see what the error looks like when it actually times out.
It would get logged as "ERROR component handler timed out: Elapsed", but so far I haven't been able to get the component handler to time out, even after adding an artificial delay inside the component longer than max-execution-time-ms and stress testing it with a load tester.
Do you have any suggestions @brooksmtownsend
@Aditya1404Sal interesting... just to be sure, what are you testing with? Is it an HTTP app? I would recommend updating to 0.22.0, because there was a bug with that future on older versions of capability providers.
@brooksmtownsend yep, the HTTP tiny-go component. I updated the version of the http-server capability to ghcr.io/wasmcloud/http-server:0.22.0 and still got no luck; maybe it's the way I'm inducing the artificial delay?
I apologise for this question, but I added a time.Sleep() before OutGoing_Response for the delay. I know it's probably wrong, but where else can I add a delay?
@Aditya1404Sal anywhere in that component adding a time.sleep should be enough, if it's not then it might be an indicator that the future isn't able to stop the execution of the component.
Would you mind also trying doing a loop of multiple sleeps? Just to see if blocking is the issue
Yeah sure, I'll try multiple sleep looping and get back to you.
@brooksmtownsend the problem isn't with blocking; even multiple sleep loops didn't work.
There might be an issue with the future polling and the way Timeout is handling the comparison.
I'm working on a fix, so I'll keep you updated.
Hey @Aditya1404Sal now that we have wasmcloud-test-util out, what do you think about automating the testing you were doing?
I'd like to help with this PR (if I even can! the code looks right already), but it would be great to have a test that automates the testing you were doing so that we can be sure it works. This is also a great chance to have you kick the tires on wasmcloud-test-util!
Hey @Aditya1404Sal a great way to start would be using the WasmcloudTestHost like in the example! -- if you see anything wrong with the documentation any updates are also welcome :)
Basically if you can translate what you were doing manually with creating components, etc to a wash-cli test, that would be great
Hey @vados-cosmonic, in the tests, what should I do to induce an artificial delay in the component's exported code, so that exporting the component takes longer than max_execution_time?
@Aditya1404Sal I think that just adding a std::thread::sleep for a longer duration than the max execution time would be enough! Alternatively, a loop with multiple thread sleeps e.g.
```rust
loop {
    std::thread::sleep(std::time::Duration::from_secs(1));
}
```
@brooksmtownsend, thank you for your help! I was confused as to where I should be writing the test component, but it's clear to me now: I just have to develop a component using an http-hello-world template (Rust), add a delayer (like std::thread::sleep), build it into a .wasm file and move it to wasmCloud/target/debug/build/testcomponent-......../out/, after which I can continue writing the test case.
Hey @brooksmtownsend, can we move ahead with integrating the test component into the build setup?
If there are any resources that can guide me, that would be very helpful.
|
2025-04-01T04:35:55.445662
| 2022-06-30T20:43:06
|
1290621965
|
{
"authors": [
"brooksmtownsend",
"connorsmith256"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12001",
"repo": "wasmCloud/wasmcloud-otp",
"url": "https://github.com/wasmCloud/wasmcloud-otp/issues/430"
}
|
gharchive/issue
|
[BUG] Cannot communicate with actors after stopping all instances
Describe the bug
When the last instance of an actor is stopped, the actor's RPC supervisor is terminated. There's a bug in the logic to start actor RPC supervisors that prevents starting one that used to exist
To Reproduce
Steps to reproduce the behavior:
1. Start one or more copies of the echo actor
2. Notice you can successfully invoke the actor
3. Terminate all instances of the actor
4. Notice you can't invoke the actor (because it's not running)
5. Start the echo actor again
6. Notice you still can't invoke the actor
Expected behavior
RPC supervisors should gracefully restart when an actor is restarted
Closed with #429
|
2025-04-01T04:35:55.509793
| 2016-08-04T19:14:45
|
169451619
|
{
"authors": [
"germanattanasio",
"jsstylos",
"kognate",
"nfriedly",
"prasanta303"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12002",
"repo": "watson-developer-cloud/speech-to-text-nodejs",
"url": "https://github.com/watson-developer-cloud/speech-to-text-nodejs/issues/121"
}
|
gharchive/issue
|
[Speech to Text]:Not working in mobile
Hello,
I have deployed the app in Bluemix. The url https://<...>.mybluemix.net is working perfectly on laptops/desktops. It also opens the initial page on mobile, but the buttons for the microphone, file open dialog, etc. appear one below another instead of side by side. The important issue is that the buttons do not work when pressed.
Is there another version created for mobile apps?
If not, please let me know how to run it successfully on mobile.
The demo does not work in mobile safari. There isn't a mobile version of this demo app.
You can use the ios-sdk or the android-sdk to develop speech to text applications for mobile platforms.
We should disable recording or give a warning message when on mobile
On pressing the buttons nothing happens. I am not getting any error message.
This is a known issue and it's because Safari has a URL length limitation that we easily exceed with the Watson token. We will probably fix this during the rebranding
I don't think Safari supports microphone access either (on desktop or mobile - http://caniuse.com/#search=getusermedia). So, it's doubly broken.
Regardless, I agree, we should put up some warning/error message.
@germanattanasio says that this used to work in Chrome on iOS; we should have tests to detect when we break compatibility.
I'm pretty sure this never worked on any iOS browser (since they're all basically just Safari + glitter on iOS). You just have to use the SDK for iOS support.
I agree that more testing is definitely in order, but the nature of this demo makes it harder than most to test. I have a halfway decent test suite set up for the speech JS SDK (it tests almost everything in Chrome and as much as it can in Firefox), so that could be a good starting point. And, once we get this demo transitioned to using the SDK, it will directly benefit from those tests.
Closing this, as the new demo handles things much better. For modern browsers on Android, all features work. On iOS, sample playback works and the microphone button is greyed out and provides a more clear error message when clicked.
|
2025-04-01T04:35:56.085884
| 2021-11-12T23:27:00
|
1052476266
|
{
"authors": [
"coveralls",
"wbaldoumas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12003",
"repo": "wbaldoumas/coding-blog",
"url": "https://github.com/wbaldoumas/coding-blog/pull/20"
}
|
gharchive/pull-request
|
Upgrade To .NET 6.0
Pull request
Proposed changes
Upgrade from .NET 5.0 to .NET 6.0.
Types of changes
[ ] New feature (non-breaking change which adds functionality).
[x] Enhancement (non-breaking change which enhances functionality)
[ ] Bug Fix (non-breaking change which fixes an issue).
[ ] Breaking change (fix or feature that would cause existing functionality to change).
Checklist
[x] I have read the README document.
[x] My change requires a change to the documentation.
[x] I have updated the documentation accordingly.
Pull Request Test Coverage Report for Build <PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 0.0%
Totals
Change from base Build <PHONE_NUMBER>: 0.0%
Covered Lines: 0
Relevant Lines: 286
💛 - Coveralls
|
2025-04-01T04:35:56.093247
| 2020-10-13T09:08:37
|
720031957
|
{
"authors": [
"Shaquu",
"dPacc",
"moklick",
"ryanwr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12004",
"repo": "wbkd/react-flow",
"url": "https://github.com/wbkd/react-flow/issues/577"
}
|
gharchive/issue
|
Grid lines or dots not rendering
While moving the pane, the grid lines or dots do not render at certain points. This was working earlier in version 3.2.4, but the bug appears to be in the latest version.
I already tried to fix this bug. It is really strange what is going on here. We are dynamically creating an SVG which is used as a repeated background image. The SVG only includes a <circle /> element. I can't explain why it is not always rendered and I don't see a pattern. Maybe these are rounding errors?
What is the ETA to fix/merge the PR?
Wondering the same, waiting for it to be merged, if the fix works.
Unfortunately the solution is not working. The PR is still open, because @AndyLnd is still working on it.
I've had a go at fixing this and appears to be working for me, would be good to get opinions/confirmation.
https://github.com/wbkd/react-flow/pull/835
I've had a go at fixing this and appears to be working for me, would be good to get opinions/confirmation.
https://github.com/wbkd/react-flow/pull/835
Thanks @ryanwr! Works perfectly fine :)
released in v8.3.6
|
2025-04-01T04:35:56.096217
| 2020-06-17T18:07:15
|
640641302
|
{
"authors": [
"andys8",
"moklick"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12005",
"repo": "wbkd/react-flow",
"url": "https://github.com/wbkd/react-flow/pull/298"
}
|
gharchive/pull-request
|
README: fitView needs an argument
Working:
fitView({ padding: 10 })
fitView({})
Not working:
fitView()
Fix:
Change the readme (this PR)
Change the type signature
Hey @andys8
thanks for the PR. I would prefer to make this parameter optional but I don't know how to do this for the fitView action. Can you help with this?
I force pushed a change that seems to work. Please have a second look at it ;)
This looks good, thanks.
|
2025-04-01T04:35:56.097754
| 2020-03-23T11:20:34
|
586132518
|
{
"authors": [
"janverb",
"wbolster"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12006",
"repo": "wbolster/emacs-python-black",
"url": "https://github.com/wbolster/emacs-python-black/pull/4"
}
|
gharchive/pull-request
|
Don't break when there's no newline at the end of the buffer
python-black-statement failed when run on the last line with no newline at the end, because it tried to reach past the end of the buffer. This fixes that.
thanks, merged :shipit:
not sure how you can end without a newline at the end of the buffer... i guess it only happens for unsaved buffers, since saving usually adds it. and i guess with this change, black-macchiato will also add it when formatting the statement.
That's exactly right. Thank you!
|
2025-04-01T04:35:56.108289
| 2023-07-21T12:32:16
|
1815730442
|
{
"authors": [
"flxo",
"wcampbell0x2a"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12007",
"repo": "wcampbell0x2a/backhand",
"url": "https://github.com/wcampbell0x2a/backhand/issues/279"
}
|
gharchive/issue
|
FilesystemWriter::write on multiple threads
Hello,
I'm evaluating backhand as a replacement for calling external squashfs tools. There's a significant drop in performance when creating a fs due to the lack of parallelism. My application spends most of its time somewhere in a compression implementation which is executed within a single thread.
Is there currently any chance to utilise more threads for the compression?
Thanks
@flxo
You are right, we currently don't have any methods that spawn multiple threads for writing a new filesystem. You would just need a new lock on the writer in this method, with a rayon::par_iter or something in this spot: https://github.com/wcampbell0x2a/backhand/blob/9fdf0cf8192155b04da9c3b5e34ba6c8047d8e16/src/filesystem/writer.rs#L440.
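The shape of that change, sketched generically in Python (the real code would be Rust with rayon; the compressor and block layout here are stand-ins):

```python
from concurrent.futures import ProcessPoolExecutor
import zlib

def write_data_blocks(blocks, out):
    # Compress blocks in parallel, but keep the writes sequential and in
    # order, which is why the writer itself still needs a single lock/owner.
    with ProcessPoolExecutor() as pool:
        for compressed in pool.map(zlib.compress, blocks):
            out.write(compressed)
```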
Thanks for the update. Probably this needs some refactoring because the compression happens in data_writer.add_bytes(...) and files is borrowed from self.
Thanks for the update. Probably this needs some refactoring because the compression happens in data_writer.add_bytes(...) and files is borrowed from self.
Feel free to submit a MR, I'm quite busy at the moment to do this!
|
2025-04-01T04:35:56.124521
| 2015-06-18T07:28:56
|
89221059
|
{
"authors": [
"JesterEE",
"shinji257",
"welwood08"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12008",
"repo": "wchill/steamSummerMinigame",
"url": "https://github.com/wchill/steamSummerMinigame/pull/277"
}
|
gharchive/pull-request
|
Update autoPlay.user.js
Added lock for the auto-fire cannon
Since wormhole strats don't use the auto-fire DPS, let's remove it as an option.
Do we not still need 1 for the auto lane switching to work?
I haven't been using it today and lane switching is working.
This is breaking something else in the script when it is integrated... I pushed the PR too soon. Looking at it now.
So the function works in the console and correctly removes the AFC from the list. However, if I load it in the firstRun() function, the game updates list is not yet populated (AJAX), so it causes an error and the script fails (the query does not return a value). I don't know your script well enough to know what gets loaded when, so I'm not sure where to implement this function call. The two lines of code are sound; they just need to be placed in the correct spot. I leave that to you.
I tried wrapping it in waitForKeyElements but that didn't work either. The script loads but the cannon option remains.
While what you say is true, it is still needed if you want to get to boss loot, since you need 10 to unlock it. I'd prefer if this was optional like the element upgrade lock is.
Closing to get this off my feed
|
2025-04-01T04:35:56.127429
| 2024-02-26T12:29:27
|
2154063982
|
{
"authors": [
"royteeuwen",
"stefanseifert"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12009",
"repo": "wcm-io/io.wcm.handler.link",
"url": "https://github.com/wcm-io/io.wcm.handler.link/issues/12"
}
|
gharchive/issue
|
Support externalizing links with vanity paths
I'm trying to read through all the specific docs / OSGi configs etc., but I don't think it's currently possible to externalize a URL with the vanity path used in the final externalized URL (as is done by setting the vanityConfig in the PathProcessor in the AEM components Link classes). Am I correct in this, or might I be missing something?
Is blocked by the mentioned issue
super, thanks @stefanseifert . Any idea when you could make a release for this and the url dependency?
will do it this week
|
2025-04-01T04:35:56.161390
| 2024-01-16T09:02:16
|
2083371858
|
{
"authors": [
"callegarimattia"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12010",
"repo": "weareprestatech/hotpdf",
"url": "https://github.com/weareprestatech/hotpdf/pull/37"
}
|
gharchive/pull-request
|
refactor: unified dev deps, update README.md
Unified all dependencies under the [dev] section in pyproject.toml.
The dev environment can now be installed using pip install -e '.[dev]'.
Modified the GitHub Action and added the step to README.md.
Also added the --fix flag to ruff to auto-fix linting errors where possible.
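For reference, the mechanism behind pip install -e '.[dev]' is an optional-dependencies table along these lines (the package list here is hypothetical):

```toml
[project.optional-dependencies]
dev = [
    "pytest",  # test runner (assumed)
    "ruff",    # the linter/formatter mentioned above
]
```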
@krishnasism should we also format the code in the CI with ruff format?
Added test in CI: use as an external package and run a sample test of loading
|
2025-04-01T04:35:56.165616
| 2018-08-08T10:30:40
|
348667623
|
{
"authors": [
"coveralls",
"oleg-koval"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12011",
"repo": "wearereasonablepeople/trembita",
"url": "https://github.com/wearereasonablepeople/trembita/pull/29"
}
|
gharchive/pull-request
|
feat: Add options to UnexpectedErrorCode error for better logging
fixes #28
Pull Request Test Coverage Report for Build 92
12 of 12 (100.0%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 100.0%
Totals
Change from base Build 90: 0.0%
Covered Lines: 32
Relevant Lines: 32
💛 - Coveralls
|
2025-04-01T04:35:56.183908
| 2016-12-09T11:44:35
|
194576921
|
{
"authors": [
"paulbellamy",
"squaremo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12012",
"repo": "weaveworks/flux",
"url": "https://github.com/weaveworks/flux/issues/289"
}
|
gharchive/issue
|
Reconnect timeouts are quite long
We retry connecting every 5s (in fluxd and fluxsvc), but each attempt can take waaaay too long, which means that if we restart authfe/fluxd/etc., the reconnection attempt can take longer than we would like.
but each attempt can take waaaay too long, which means, that if we restart authfe/fluxd/etc
The too-long reconnects are gateway timeouts, I believe. We can tune those down in the proxies, and we can cancel attempts from fluxd after a timeout.
We can tune those down in the proxies
aye, but I can see a user potentially having some other proxy in the way, so it would be nice to time out attempts in fluxd/ctl
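The shape of the fix being discussed, sketched generically in Python (fluxd itself is Go; names are illustrative):

```python
import socket
import time

def reconnect(addr, interval=5.0, attempt_timeout=5.0):
    # Retry every `interval` seconds, and also bound each attempt so a
    # hung proxy can't make a single try drag on for minutes.
    while True:
        try:
            return socket.create_connection(addr, timeout=attempt_timeout)
        except OSError:
            time.sleep(interval)
```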
Fixed by: https://github.com/weaveworks/flux/pull/379
|
2025-04-01T04:35:56.186857
| 2020-11-24T10:28:38
|
749580126
|
{
"authors": [
"atighineanu",
"evrardjp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12013",
"repo": "weaveworks/kured",
"url": "https://github.com/weaveworks/kured/issues/239"
}
|
gharchive/issue
|
Implement lifecycle hook/grace timeout functional test
We already have a PR to test cluster features of kured: https://github.com/weaveworks/kured/pull/183.
We should take this PR as inspiration to include a CI test (if possible in GitHub Actions) that does the following:
- spin up a small kind cluster (2 nodes would be enough; check the existing PR and modify if necessary)
- deploy kured and a pod whose manifest contains a lifecycle preStop hook logging something (needs a new manifest)
- write the reboot sentinel on the worker node (check the existing PR)
- check that the pod with the preStop hook behaved properly (simple code to be written)
- check that kured restarts the node successfully (check the existing PR)
I can tackle this Issue.
If everyone agrees, I would kindly prefer to go the upstream ginkgo way.
How would that work?
|
2025-04-01T04:35:56.188652
| 2018-01-10T07:27:19
|
287338475
|
{
"authors": [
"awh",
"evrardjp",
"winjer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12014",
"repo": "weaveworks/kured",
"url": "https://github.com/weaveworks/kured/issues/9"
}
|
gharchive/issue
|
RBAC support
Thanks very much for this, it's really useful.
There doesn't seem to be any RBAC support yet, and it would be useful. It probably needs to run with its own service account kube-system:kured, and have a role and rolebindings, maybe packaged up as a helm chart?
I'm happy to develop and submit a patch, if you are ok to review it?
@sabbour would you mind giving https://github.com/weaveworks/kured/releases/tag/1.1.0 a try?
I think we can close this.
|
2025-04-01T04:35:56.192394
| 2016-04-19T14:27:21
|
149476983
|
{
"authors": [
"errordeveloper",
"rade"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12015",
"repo": "weaveworks/scope",
"url": "https://github.com/weaveworks/scope/issues/1347"
}
|
gharchive/issue
|
newline char may arrive before what's being pasted
Typing cmd <paste><return> can cause <return> to arrive before <paste> (at least in Chrome).
uwsgi@9c90f04ba0ce:/app$ ping -c1
redis-data-monsterz.marathon.mesosping: missing host operand
Try 'ping --help' or 'ping --usage' for more information.
uwsgi@9c90f04ba0ce:/app$ redis-data-monsterz.marathon.mesos
bash: redis-data-monsterz.marathon.mesos: command not found
uwsgi@9c90f04ba0ce:/app$ ping -c1 redis-data-monsterz.marathon.mesos
PING redis-data-monsterz.marathon.mesos (<IP_ADDRESS>): 56 data bytes
64 bytes from <IP_ADDRESS>: icmp_seq=0 ttl=64 time=0.093 ms
--- redis-data-monsterz.marathon.mesos ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.093/0.093/0.093/0.000 ms
uwsgi@9c90f04ba0ce:/app$ ping -c1
ping: missing host operand
Try 'ping --help' or 'ping --usage' for more information.
uwsgi@9c90f04ba0ce:/app$ redis-data-monsterz.marathon.mesos
bash: redis-data-monsterz.marathon.mesos: command not found
uwsgi@9c90f04ba0ce:/app$ ping -c1 redis-data-monsterz.marathon.mesos
PING redis-data-monsterz.marathon.mesos (<IP_ADDRESS>): 56 data bytes
64 bytes from <IP_ADDRESS>: icmp_seq=0 ttl=64 time=0.079 ms
--- redis-data-monsterz.marathon.mesos ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.079/0.079/0.079/0.000 ms
uwsgi@9c90f04ba0ce:/app$
Do note that there was no newline after uwsgi@9c90f04ba0ce:/app$ redis-data-monsterz.marathon.mesos, I hit the return key there.
This could be just my thick fingers, but please try for yourself, I'm quite sure it's not just me.
Just noticed the same happens in the GCE terminal widget.
Is this possibly the same as #1158, which was fixed in #1648? @errordeveloper could you re-test with a recent (>=0.16.1) scope, please?
No, I think this is to do with input, it looks like a race condition in the library to me. Katacoda uses the same library, and I've seen the same issue there.
I cannot reproduce this, but I am not on a Mac, so the paste keystroke is different.
@errordeveloper can you reproduce this?
|
2025-04-01T04:35:56.195563
| 2022-03-04T18:29:18
|
1159929727
|
{
"authors": [
"jpellizzari"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12016",
"repo": "weaveworks/weave-gitops",
"url": "https://github.com/weaveworks/weave-gitops/issues/1619"
}
|
gharchive/issue
|
Add string filtering interaction to FilterableTable component
https://www.figma.com/file/IVHnM9iyeFWpd11evtY8ux/Weave-GitOps?node-id=5582%3A14936
Screen is WIP: Application Table View - Filters - text search
Also tweak the min-height of the page to avoid weird scrolling issues when there are no rows in the table
|
2025-04-01T04:35:56.205333
| 2016-02-17T11:44:10
|
134258213
|
{
"authors": [
"abuehrle",
"awh",
"bboreham",
"rade"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12017",
"repo": "weaveworks/weave",
"url": "https://github.com/weaveworks/weave/pull/1978"
}
|
gharchive/pull-request
|
Documentation updates for 1.5 release
Right now this is just a rough outline of an operational guide that frames the most likely deployment scenarios that I can think of for the purposes of discussion.
cross-DC deployments may exhibit some additional patterns
Ideally we'd try to avoid mentioning the notion of 'leader', since it can easily leave readers with the wrong impression that weave funnels some of its operations through a single elected node
we may want to mention weave stop earlier, in the Removing a peer section, to explain that it is not the same (and serves a different purpose) to weave reset
in the 'Removing a Peer' sections, it may be worth mentioning that weave forget isn't necessary for correctness.
s/Removing a Node/Removing a Peer/
'Recovering Lost IPAM Space' - we should explain how to discover that space may have been lost
'Recovering Lost IPAM Space' - it would be better to rename, and perhaps split, this section so the title refers to something that has happened / some state, e.g. "peer failure", "removing a peer while partitioned".
weave status during bootstrapping a uniform fixed cluster - that's rather awkward to automate.
It does rather strike me that we'd benefit from connect and forget propagating through the cluster.
Discuss clock skew
Ideally we'd try to avoid mentioning the notion of 'leader', since it can easily leave readers with the wrong impression that weave funnels some of its operations through a single elected node
This is hard!
we may want to mention weave stop earlier, in the Removing a peer section, to explain that it is not the same (and serves a different purpose) to weave reset
I have tackled this by adding a 'Stopping a Peer' section before 'Removing a Peer' - see what you think.
in the 'Removing a Peer' sections, it may be worth mentioning that weave forget isn't necessary for correctness.
Amended.
s/Removing a Node/Removing a Peer/
Amended.
'Recovering Lost IPAM Space' - we should explain how to discover that space may have been lost
'Recovering Lost IPAM Space' - it would be better to rename, and perhaps split, this section so the title refers to something that has happened / some state, e.g. "peer failure", "removing a peer while partitioned".
I've had a stab at this - see what you think.
weave status during bootstrapping a uniform fixed cluster - that's rather awkward to automate.
Removed.
It does rather strike me that we'd benefit from connect and forget propagating through the cluster.
@bboreham unfortunately they have been collapsed due to out-of-date diffs, but I have responded to all your comments!
I think this is approaching MVP status! There are a number of author's notes left in at this stage, but some of them contain useful explicatory material that I am loath to remove; I am minded to either keep them in as hidden inline comments somehow, or perhaps polish some of them for external consumption as quoted blocks. Thoughts?
Bumping to 1.6 pending resolution of #2186 and #2187.
Would you like me to edit these files before you release them?
Would you like me to edit these files before you release them?
@abuehrle absolutely yes - the technical aspects of the content are nearly finalised now - will ping you again shortly.
Is this ready for @abuehrle to edit now?
Is this ready for @abuehrle to edit now?
The plan is:
merge #2305 (this PR assume that is in place)
merge this
@abuehrle can edit
@bboreham addressed your most recent comments - PTAL
|
2025-04-01T04:35:56.231772
| 2015-07-10T06:35:12
|
94227992
|
{
"authors": [
"Martin-Pitt",
"hgl",
"shans"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12018",
"repo": "web-animations/web-animations-next",
"url": "https://github.com/web-animations/web-animations-next/issues/394"
}
|
gharchive/issue
|
display property not overridden by animation in native impl
http://jsbin.com/vekakezuwa/1/edit?html,css,js,output
There is supposed be a teal box displayed, but in chrome, nothing is displayed.
This doesn't relate to the polyfill, but I guess you guys also write the native impl? Guess I will just report it here. Let me know if I should move this to crbugs.
Display blocks are 'not animatable' in CSS animations either, due to spec.
If the polyfill doesn't do it then this should be working as intended.
I know it sucks, because I think display should be perfectly reasonably animatable when given a step easing/timing function with only two states.
According to the spec, it should be animatable, I believe this is a chrome bug.
Hope the native impl can catch up soon. In the meantime, I'd like to suggest that the guarding code test for properties like display and, when browsers can't animate them, let the polyfill kick in.
Thoughts?
It's not clear that this should be animatable.
Setting display can change running animations (animations are cancelled inside display: none subtrees). This makes display somewhat like the animation-* and transition-* properties, which are not animatable because of the fact that changing them can impact the state of running animations.
I posted the previous message before we had chatted on IRC. :)
So have you reached a consensus with Birtles on this topic?
|
2025-04-01T04:35:56.234566
| 2024-08-28T08:30:54
|
2491433124
|
{
"authors": [
"chenjiahan",
"fi3ework",
"shulaoda"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12019",
"repo": "web-infra-dev/rspack",
"url": "https://github.com/web-infra-dev/rspack/pull/7713"
}
|
gharchive/pull-request
|
test(infra): improve e2e test stability
Summary
closes #7683
It seems that the issue is caused by the CI attempting to perform a hot update on the index.js module, but the update fails because it is the entry module.
Checklist
[x] Tests updated (or not required).
[ ] Documentation updated (or not required).
The first pipeline test passed, I will rebase and try again.
But why does it only have a small probability of failure, and can only be reproduced in CI?
But why does it only have a small probability of failure, and can only be reproduced in CI?
I'm not sure yet.
If this doesn't resolve the issue, we can investigate further.
|
2025-04-01T04:35:56.240174
| 2024-12-04T04:13:28
|
2716514956
|
{
"authors": [
"chenjiahan",
"inottn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12020",
"repo": "web-infra-dev/rspack",
"url": "https://github.com/web-infra-dev/rspack/pull/8614"
}
|
gharchive/pull-request
|
feat!: align AssetGeneratorDataUrlFunction with webpack
Summary
close #8538
This is a breaking change, but the previous behavior was inconsistent with webpack.
Checklist
[x] Tests updated (or not required).
[x] Documentation updated (or not required).
How about being compatible with the previous API signature to avoid breaking users' code?
For example, if arg is an object, convert it to source + context.
We can align type declarations with webpack, but the internal implementation is compatible with Rspack <= 1.1.5. The compatibility code can be removed in Rspack v2.0.
Sorry for the delay. If the parameters were directly passed to us by the user, we could maintain compatibility with the old version. However, in this case, the user provides a dataUrl function, and we pass the parameters to it, making compatibility with the old version unfeasible. Let’s put this PR on hold for now and merge it at a more suitable time.
This is not a very common usage, and it makes more sense to align with webpack. Maybe we can include this change in Rspack v1.2.0. @hardfist what do you think
@hardfist cc
After discussion, we decided to include this PR in v1.2.0 to align with webpack.
@inottn Can you rebase the PR again~ ❤️
After discussion, we decided to include this PR in v1.2.0 to align with webpack.
@inottn Can you rebase the PR again~ ❤️
done~
|