| added | created | id | metadata | source | text |
|---|---|---|---|---|---|
2025-04-01T06:38:46.234054
| 2024-07-24T23:57:48
|
2428664642
|
{
"authors": [
"studyztp"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6202",
"repo": "gem5bootcamp/2024",
"url": "https://github.com/gem5bootcamp/2024/pull/45"
}
|
gharchive/pull-request
|
02 09 Sampling
[x] Introduction
[x] SimPoint/LoopPoint
[x] SimPoint analysis
[x] SimPoint checkpointing
[x] SimPoint restoring
[x] ElFie
[x] ElFie example
[x] SMARTS
[x] SMARTS example
[x] Summary
What hasn't been done
Third-party testing of the materials and a pass through the slides
Slides 2 and 50 are waiting for a visualization
@powerjg I removed the config files, removed the SimPoint3.2 source files, removed the checkpoints, and added -img to the image directory.
|
2025-04-01T06:38:46.237006
| 2024-08-12T14:07:39
|
2461110821
|
{
"authors": [
"CEiderEVIDENT",
"fnoGematik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6203",
"repo": "gematik/epa-deployment",
"url": "https://github.com/gematik/epa-deployment/issues/25"
}
|
gharchive/issue
|
Inconsistent documentation InformationService
Within the documentation on the main page it is written:
Information Service (information-service)
....
Limitations:
.....
for record status it will always send HTTP code **200** (OK)
But information/application.yaml says
nested-responses:
  204:
    response:
      statusCode: 204
Running the actual image (deployment-1.0.8) responds with 204, not 200.
Hi @CEiderEVIDENT,
the documentation is updated with release version 1.0.12 - only the README wasn't updated accordingly after the change from 200 to 204 in a previous API release.
--> https://github.com/gematik/epa-deployment/blob/main/README.md#information-service-information-service
Thanks for this finding and best regards,
|
2025-04-01T06:38:46.287711
| 2016-08-05T18:51:25
|
169672545
|
{
"authors": [
"austinmeier",
"cmungall",
"elserj",
"jaiswalp",
"kltm"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6204",
"repo": "geneontology/amigo",
"url": "https://github.com/geneontology/amigo/issues/368"
}
|
gharchive/issue
|
Annotated images in AmiGO
We'd like to have the ability to add annotated images that are displayable via the browser.
From discussion with @cmungall, it sounds like the direction we'd like to head in is something like a "gulp golr-load-image-annotations", where a tab-delimited text file (format TBD) can be loaded into GOLR similar to how GAFs are done.
Parts that I believe would need to be worked on include gulpfile changes to add the loading piece, actual code to load in GOLR, changes/addition to annotation pages in AmiGO to add the image type data, methods to display images (maybe carousel, maybe flat page with tables of images), probably other bits.
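The tab-delimited loading idea above can be sketched as follows. This is a hypothetical illustration only: the column layout (image URL, ontology term, label) and the GOLR field names are assumptions, since the actual format was still TBD in this thread.

```python
import csv
import io
from typing import Iterator, TextIO

def parse_image_annotations(fh: TextIO) -> Iterator[dict]:
    """Turn tab-delimited rows into GOLR-style association documents."""
    reader = csv.reader(fh, delimiter="\t")
    for image_url, term_id, label in reader:
        yield {
            "document_category": "annotation",
            "bioentity": image_url,       # the image plays the bioentity role
            "bioentity_label": label,
            "annotation_class": term_id,  # e.g. a PO: or GO: term
        }

# Example row: an image annotated to a (hypothetical) Plant Ontology term
rows = io.StringIO("http://example.org/leaf1.png\tPO:0025034\tleaf photo\n")
docs = list(parse_image_annotations(rows))
```

A gulp task could then feed such documents to the existing GOLR load machinery.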
This is in conjunction with the image annotation work that @preecej is working on.
Also, time scale is pretty lax. Just creating the issue so that work/thoughts can get started and we can brainstorm on how this will work.
As a first pass we can just overload golr bioentity_association documents (the bioentity would be an image rather than gene or germplasm). We can then reuse all the existing nice faceting mechanisms (e.g. when on the leaf page I can narrow my search dynamically to monocots).
I suggest using the isa-partof closure, but we need more requirements analysis here. If I am interested in leaves generally I will want to see images of subtypes of leaves. I will probably also be interested in images of parts of leaves too, but I'd get a bit uneasy if I jump too many levels of granularity.
Realistically, the code changes would be minimal if using 3rd-party storage. Most of these points could be covered with additional handler code. Adding a page base type is similarly easy (although, maybe that should be abstracted out a bit more).
Also: https://github.com/geneontology/amigo/issues/341
In reference to work being done on Planteome.
(To mark the specific use case for planteome, I'd add it to the planteome amigo tracker and have it blocked with this as the upstream issue.)
I think the first step here is to see what kind of data would be loaded.
There are two possibilities here: an overloaded GAF-like (or whatever) thing that would take a custom loader, or overlaying the image data on top of already-loaded annotation data (what the current demo does).
I suspect that once we have some fairly concrete data running around, or at least a format/approach, the code to get to basic usability would be pretty fast to write.
We may want to experiment with overlays later, but this is simpler. The subject/bioentity is an image denoted by a URL that resolves to a jpg or similar (let's say a thumbnail). Just using the default amigo view, this would behave as any other bioentity. We'd want to then enhance the display a bit; I don't have strong opinions here: just showing the thumbnail, carousel, ...?
@cmungall and I were just talking about this a second ago. So, I think what we will have is some image URL that will display some term(s) so that people can see it and get a non-textual example of the term. I think GAF may be an acceptable input format, we just have to have a new object type of image. Maybe some client code that if the object type is an image to do something like make a thumbnail that links out to the source URL. In other words, instead of At5g20800 in column 2 or 3 of the GAF, have the URL. In column 12, have "image" as the object type, and then figure out how to make it look good in the browser.
The "carousel" has come up a couple of times here, and I don't quite understand--if it is a single object, what are the multiple things carouselling?
Otherwise, if we are literally treating these things as bioentities, then the code to detect an image URL ID would be very easy. Not so easy would be having an ID component like that--it would likely throw a spanner into a lot of things. Preferable might be a standard bioentity document with an additional field that could act as the data overlay in a second loading step; population of the field would trigger the main effects.
This issue came up again in our ontology call this morning. We are discussing some very complex Plant Ontology terms related to inflorescence axes, and these definitions are accompanied by nice line-art diagrams of different types of inflorescences. The textual definitions are complicated and nuanced, but the images make them much clearer. So being able to embed an image would go a long way in clarifying the meaning of these terms. This wouldn't require multiple images/carousel, rather just a single labeled image (from the NY crew).
Image example:
inflorescence_img_example.pdf
I think the addition is simple: add a new field, something like "auxiliary_external_reference_image" that is a remotely accessible PNG, etc. When the field is populated, AmiGO embeds whatever is at that end into the page. Simple.
Now, the hard part is to load that info into the store in the first place, which means we must descend into modifying the loader and loading a new file type (because GAF does not need a new field). That, or doing a second run over the index to populate it (like we did for the geo-spatial setup). If you want something out the door soon, the latter would be very very easy, especially if you don't have that many images.
When you say "new field" that would be a new field in what, exactly?
I think for a temporary solution, the example you outlined would work nicely. To get a sense of scale, I'd say we will likely start with just a single image for each term in the PO (actually it would be fewer, as we would not have images for some categorical terms). I could ask Dennis and his crew to gather the images that he would like to use, and deposit them somewhere on our repo (or elsewhere) with the PO:id to which they should be annotated.
What format would be best? We can just load all the images somewhere, and provide a delimited file with an ID and a URL to the image?
A new field in the Solr schema, as defined by the amigo metadata files.
If one were to move in this direction, and I won't have time really until after the GO meeting, I would tack towards getting all of the images into S3--I think we're talking about a few thousand here? Well, thinking about it, if you have a webserver (probably apache) up for AmiGO anyways, you could always serve it out of the AmiGO static directory or apache as well.
As an experimental load format, let's say a JSON list along the lines of:
[
  {
    "index": "PO:0022008",
    "overlay": {
      "auxiliary_external_reference_image": "http://my.nifty/s3/url"
    }
  }
]
We can reuse this for other overlays in the future; I think that we can probably just reuse most of what was done for the geo-spatial here.
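As a sketch of how that load format could be applied: assuming (hypothetically; this is not the actual amigo/geo-spatial loader) that each term lives in a Solr document keyed by its id, each overlay entry could be translated into a Solr atomic-update document.

```python
import json

def overlays_to_updates(overlay_json: str) -> list:
    """Each overlay entry becomes a partial Solr update keyed on the term id."""
    updates = []
    for entry in json.loads(overlay_json):
        doc = {"id": entry["index"]}
        for field, value in entry["overlay"].items():
            doc[field] = {"set": value}  # Solr's atomic "set" operation
        updates.append(doc)
    return updates

# The example payload from this thread
example = """[
  {"index": "PO:0022008",
   "overlay": {"auxiliary_external_reference_image": "http://my.nifty/s3/url"}}
]"""
updates = overlays_to_updates(example)
```

Posting such documents to the index in a second pass would leave the main load untouched, which matches the "very very easy" incremental option described above.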
Excellent. It will take some time to get images collected, and labeled correctly, so I'll see what can be done, then we can give it a whirl sometime in the semi-near future.
You're going to be in Corvallis this summer for the GO meeting, we can connect at that point.
Thanks for the explanation.
Yes, we can touch bases at Corvallis.
If there is a non-ontology label, or multiple images for a single term, a different overlay (or even strategy) would be necessary.
Yeah, and I'm sure that is in the long-term plan for the image annotation project, but if we could get just simple descriptive images embedded in the term page on the browser, it would clear up a lot of the complex plant anatomy terms rapidly.
Well, yes, let's call this a one-off.
But for image annotations, we should revisit the work that has gone on with geospatial:
https://github.com/geneontology/amigo/issues/341
Reiterating @austinmeier's comments about displaying line diagrams etc. to explain the anatomy terms on term detail pages. I recommend moving this up in priority.
Or, for that matter, thinking about #421, if we had a field that was essentially an overlay catch-all, a multivalued field (e.g. auxiliary_overlays) that could be loaded separately and incrementally, it could take a number of items to be rendered as needed. Each value could be like:
{
  "overlay": "auxiliary_external_reference_image",
  "index": "PO:0022008",
  "type": "image",
  "content": "http://snazzy.uri/foo/png"
}
or
{
  "overlay": "pattern_description",
  "index": "PO:0022008",
  "type": "markdown",
  "content": "*** bleh"
}
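On the display side, a renderer could dispatch on each value's "type" field. A minimal sketch with illustrative HTML shapes (these are not AmiGO's actual templates; unknown types are simply skipped):

```python
def render_overlay(value: dict) -> str:
    """Render one auxiliary_overlays value as an HTML fragment."""
    if value["type"] == "image":
        return f'<img src="{value["content"]}" alt="{value["index"]}"/>'
    if value["type"] == "markdown":
        # a real implementation would run the content through a markdown renderer
        return f'<div class="overlay-markdown">{value["content"]}</div>'
    return ""  # unknown overlay types are skipped rather than breaking the page

# The image example from this thread
html = render_overlay({
    "overlay": "auxiliary_external_reference_image",
    "index": "PO:0022008",
    "type": "image",
    "content": "http://snazzy.uri/foo/png",
})
```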
|
2025-04-01T06:38:46.292399
| 2015-10-28T01:51:13
|
113725824
|
{
"authors": [
"jimhu-tamu",
"paolaroncaglia"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6205",
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/12127"
}
|
gharchive/issue
|
Odd default namespace in 2015-10-24 release
Looking in the header of gene_ontology_ext.obo
default-namespace: file:/Users/tanyaberardini/go_svn/ontology/extensions/ro_pending.obo
where it's usually
default-namespace: gene_ontology
Hi Jim,
As you've probably seen by now, Doug reported the same issue on the go-consortium mailing list, and it was fixed.
For reference, https://github.com/geneontology/go-ontology/issues/12141
Thanks for reporting.
Paola
|
2025-04-01T06:38:46.305175
| 2016-09-05T13:25:28
|
175071254
|
{
"authors": [
"dosumis",
"rebeccafoulger",
"ukemi"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6206",
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/12635"
}
|
gharchive/issue
|
NTR: homotypic vesicle fusion and secretory granule maturation
Two new terms needed for annotation of Syt4 (Synaptotagmin-IV) in PMID:16618809:
homotypic vesicle fusion ; GO:NEW1
The fusion of two vesicle membranes to form a single vesicle.
GOC:bf, GOC:PARL
PMID:16618809
(Given that you’d have to have 2x ‘results_in_fusion_of: vesicle membrane’, I’m not sure I can currently create a decent logical definition for this).
secretory granule maturation ; GO:NEW2
Steps required to transform an immature secretory vesicle into a mature secretory vesicle. Typically proceeds through homotypic membrane fusion and membrane remodelling.
is_a: secretory granule organization ; GO:0033363
developmental maturation (GO:0021700) that results_in_organization_of: secretory granule ; GO:0030141
GOC:bf, GOC:PARL
PMID:16618809
is_a: child: dense core granule maturation ; GO:1990502
Note a similar logical definition could be added to dense core granule maturation ; GO:1990502
developmental maturation (GO:0021700) that results_in_organization_of: dense core granule ; GO:0031045
Thanks.
Including @dosumis because I'm sure he must have thought about this with respect to synaptic vesicles.
My hesitation with any of the 'homotypic' terms, including the cell-cell adhesion terms, is at what level we define the two entities as being the same. In the case of cell adhesion, if two neurons bind one another, but they are distinct subtypes of neurons, is this homotypic? I have the same concerns with this term. In the ontology there is a rich substructure under membrane-bounded vesicle. At what point do we make the cutoff for what is homotypic and what is heterotypic?
For the maturation term, we have 'synaptic vesicle maturation' and 'dense core granule maturation' as is_a children of 'vesicle organization'. We also have 'secretory granule organization' as a child. It seems like there are some patterns here of which we could take advantage. 'Synaptic vesicle maturation' is defined as 'developmental maturation' results_in_organization_of SOME 'synaptic vesicle' and is asserted as a subclass of 'vesicle organization'. There is also a term for 'synaptic vesicle coating' that has no relation to the maturation term. Any insights David?
I'd be reluctant to support a homotypic/heterotypic distinction if we can avoid it - for the reasons that @ukemi outlined above.
CC @Pimmelorus - General question about dense core vesicles here. Any comments? Could you see a need for a DCV maturation term for the Synapse work?
I see the problem with the 'homotypic' wording. How about making it more explicit in the GO term? It would go alongside the more specific instances of vesicle fusion:
E.g
vesicle fusion ; GO:0006906
--[isa]vesicle fusion with vesicle ; GO:NEW
--[isa]vesicle fusion to plasma membrane ; GO:0099500
--[isa]vesicle fusion with vacuole ; GO:0051469
--[isa]vesicle fusion with endoplasmic reticulum ; GO:0048279
That looks fine. I thought you were referring to vesicles of the same type.
Me too. This looks ok.
Added:
[Term]
+id: GO:0061782
+name: vesicle fusion with vesicle
+namespace: biological_process
+def: "Fusion of the membrane of a transport vesicle with a target membrane on another vesicle." [GOC:bf, GOC:PARL, PMID:16618809]
+synonym: "vesicle to vesicle fusion" EXACT [GOC:dph]
+synonym: "vesicle-vesicle fusion" EXACT [GOC:dph]
+is_a: GO:0006906 ! vesicle fusion
+created_by: dph
+creation_date: 2016-09-06T13:29:49Z
+
Thank you! The authors were talking about two vesicles of the same type fusing, but vesicle-vesicle fusion is enough for me!
Added logical def to dense core granule term and:
[Term]
+id: GO:0061792
+name: secretory granule maturation
+namespace: biological_process
+def: "Steps required to transform an immature secretory vesicle into a mature secretory vesicle. Typically proceeds through homotypic membrane fusion and membrane remodelling."
|
2025-04-01T06:38:46.349973
| 2016-11-09T09:50:20
|
188204940
|
{
"authors": [
"ValWood",
"dosumis",
"hdrabkin",
"mcourtot",
"nataled",
"pgaudet",
"ukemi"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6207",
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/12787"
}
|
gharchive/issue
|
Modified proteins proposal
Relevant GH tickets:
NTR: ubiquitinated protein binding
Parentage of glycoprotein binding
Question: adding new children of GO:0035064 methylated histone binding?
The proposal was not well received by annotators at the GO meeting. The annotators thought that these terms were too important biologically for them not to be in the ontology. We need to come up with alternatives.
1. Continue to add these terms by hand. Unsustainable?
2. Figure out a way to add these terms automatically. Use PRO? ChEBI? PSI-MOD? Will this lead to annotation inconsistency? Probably.
3. Only add a few very high-level terms. What is the specificity cutoff, and what happens when annotators make a case that the very specific term they want is important?
4. Have 'do-not-annotate' terms that are generated automatically as in #1 below and categorize gene products by the entity that is captured in binding annotations.
We discussed three possible semantic interpretations of modified protein binding terms:
https://docs.google.com/document/d/1MAnnOfs-e2LY9MnqdCZscalbxbNUDSJ9pbMZ-f2WS9U/edit#heading=h.vzsyf1ss0k8h
1. Binds a protein that happens to have a modification, but not necessarily the modification site
2. Binds to the modified part of a protein <- Seems to be most broadly supported.
3. Binding to some specific protein partner is dependent on the modification state of the partner even if binding is not to the modified bit. <- Some support. DOS objection: this seems like annotating a property of the bound protein. Also specific to one binding interaction (whereas 2 is likely to be a general function)
Number 2 in this list - binds to the modified part of a protein - had the most support. Example: the SH2 domain confers binding to phospho-tyrosine in the context of a specific peptide motif: https://en.wikipedia.org/wiki/SH2_domain
For terms like this, we could add a comment that it should only be used where there is high confidence that binding is to the modification + protein (simple dependency on phosphorylation of the target is not sufficient, but evidence that localizes the binding by deletion/mutagenesis analysis of the protein is). This still leaves the question of how detailed we should get in specifying the target (see histone binding).
Darren will join us on today's meeting to present his ideas.
This is what I mentioned to a few attendees at the USC GOC meeting. I'll summarize first, then give the reasoning by way of example.
Consider that "ubiquitinated protein binding" could be logically defined as something like ("protein binding to some ubiquitinated protein"). So ("ubiquitinated protein binding to some protein") is the same as ("protein binding to some ubiquitinated protein"). Given the confusion that currently holds when using these terms, this proposal provides the benefit that annotation is simplified. No information is lost, since terms like "ubiquitinated protein binding" are useful grouping terms that can be inferred based on the binding partner indicated.
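Read as OWL, the equivalence being described here (using the has_input pattern that also appears later in this thread) would be roughly, in Manchester syntax - a sketch, not an adopted logical definition:

```
'ubiquitinated protein binding'
    EquivalentTo: 'protein binding'
        and (has_input some 'ubiquitinated protein')
```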
I would suggest that "protein binding" be used as the annotate-to term. Child terms of interest (such as "glycoprotein binding") stay, and new ones minted if desirable, but be marked as do-not-annotate.
Example:
Consider the case of USP15, a protein that binds to ubiquitinated histone H2B. This protein is annotated to GO:0061649 ("ubiquitinated histone binding"). For the moment, pretend that GO:0061649 can be a child of GO:0042393 ("histone binding") and the proposed term ("ubiquitinated protein binding"). The full hierarchy (after reasoning) would be:
protein binding
|_ ubiquitinated protein binding
|  |_ ubiquitinated histone binding
|_ histone binding
   |_ ubiquitinated histone binding
Consider a possible history for the annotation of USP15, from two different labs. Lab one and lab two each publish that USP15 binds protein, but don't know which protein. You'd get:
Lab1: USP15 protein binding some "protein"
Lab2: USP15 protein binding some "protein"
Lab1 then finds that USP15 binds to a ubiquitinated protein, while lab2 discovers that it binds to some histone. Now the annotation becomes:
Lab1: USP15 ubiquitinated protein binding some "ubiquitinated protein"
Lab2: USP15 histone binding some "histone"
Lab1 now finds that the specific type of ubiquitinated protein bound is histone, and Lab2 finds that the histone is ubiquitinated. Revised annotation:
Lab1: USP15 ubiquitinated histone binding some "ubiquitinated histone"
Lab2: USP15 ubiquitinated histone binding some "ubiquitinated histone"
Note that each new bit of information resulted in, for each lab, a change to both the GO term & the target (column 17). That means (for each lab) a total of 4 changes beyond the initial annotation.
A better, easier, more scalable solution would be to annotate ONLY to the parent protein binding term. The other GO terms stay but only as grouping terms. Specificity is given by the target. Thus, the history (for Lab1 only) would become:
A: USP15 protein binding some "protein"
B: USP15 protein binding some "ubiquitinated protein"
C: USP15 protein binding some "ubiquitinated histone"
Only 2 changes were necessary. Pros of this approach include fewer changes, clear guidance on which GO term to use, and sustainability to any level of target specificity ("ubiquitinated histone" becomes "ubiquitinated histone H2B" becomes "histone H2B ubiquitinated at position 120", as opposed to "histone H2B ubiquitinated at position 122"). I can think of only one con, which is that term enrichment becomes useless for specific types of protein binding. Of course, term enrichment could still be achieved, but there would be some overhead in calculating the correct sub-function.
On the call http://wiki.geneontology.org/index.php/Ontology_meeting_2016-11-10 @nataled clarified that in the above
A: USP15 protein binding to some "protein"
B: USP15 protein binding to some "ubiquitinated protein"
C: USP15 protein binding to some "ubiquitinated histone"
A, B and C are annotations made based on different evidence (different papers) through time, not one annotation evolving through time (though this would be possible too, provided the original paper contained the necessary information)
Notes from call (http://wiki.geneontology.org/index.php/Ontology_meeting_2016-11-10)
DOS:
Darren's suggested pattern in OWL: GP (enables) 'protein binding' that has_input some PRO:'ubiquitinated protein'.
This corresponds to semantic interpretation 1: binds to some protein that happens to be ubiquitinated. Binding doesn't have to be dependent on modification. Are we sure we want this?
If we have the terms with logical defs in the ontology, then annotations with extensions will also work. Might be worthwhile if we want to allow more specific pro terms to be used in extensions.
Action item: We need more examples. Ask Sylvan and Pascale to come to call to present examples.
The proposal was not well received by annotators at the GO meeting.
I'd like it to be recorded that this is not true for PomBase curators.
I really like Darren's proposal. This is what we would do anyway. The reason this solution was opposed (by some) was that at the meeting it was not clear to all that PRO could add generic "ubiquitinated protein" terms.
Of course, tools would also need to catch up, but at present, I would not rely on GO to provide comprehensive lists of modified protein binding partners. The annotation would need to catch up in tandem to be realistically useful. So in this situation we have a good opportunity to make the annotations in a more sustainable way without compromising what users can already do.
For example
GO:0051219 phosphoprotein binding
currently has
1 to 25 of 148 for 99 proteins
But I agree, @dosumis, we also need to be clear in these cases whether the binding is modification-dependent (when we make these annotations using PRO IDs we translate has substrate into "active form" for our users):
gene A protein binding "active form" PR:000045540 IPI gene B
http://research.bioinformatics.udel.edu/pro/entry/PR%3A000045540/
(this isn't a real example, but I am sure we have done this for binding)
Hello,
Here's a version of the proposal, reviewed by the GO-editors and by @sylvainpoux
Background:
A number of proteins are known to specifically bind a protein that is modified, while not binding the unmodified form. The binding may occur at the site of the modification, but this is not an absolute requirement for a modification-specific binding; in some cases the binding occurs at another position on the target protein but is nevertheless dependent on the PTM.
Moreover (and very important for annotation), the information is not always complete: in some cases, the position of the binding relative to the modification is known, while in other cases, only the nature of the modification is known but not the binding site positions.
A number of domains are also known to specifically recognize and bind modified proteins, which is extremely important for recruitment of proteins and signaling (for example chromatin or DNA repair). Note that we do not always know whether these domains/proteins bind the modified part, but we usually know that they specifically recognize modified proteins.
The existence of these GO terms is extremely important for users, because it can really guide research. Moreover, resources like InterPro can generate electronic annotation for domains that bind modified proteins. Despite the usefulness of these terms, their ambiguous definitions have led to inconsistent usage. Some terms have been created in GO to reflect modified protein binding: GO:0051219 phosphoprotein binding, GO:0061649 ubiquitinated histone binding, etc. The definition does not clearly state that the binding must be modification-dependent, so it can be interpreted as applying to 'a protein that has the potential of bearing the modification'. This usage of the term is clearly incorrect.
On the other hand, other terms are missing from the ontology and curators use existing terms incorrectly to capture the information. For example, GO:0043130 ubiquitin binding could be used both for proteins that bind ubiquitinated proteins and for free ubiquitin (both cases exist and are biologically important).
Proposal:
1. Create a GO term
Modification-dependent protein binding:
Definition: Binding specifically to a protein that bears a post-translational modification.
Note: Does not bind the protein when not modified.
Annotation guidelines:
- The binding must be compared with and without the PTM in the interaction partner. If it is not shown that the binding is abolished in absence of the modification, this term cannot be used.
2. Create a limited number of child terms for most common PTMs (fewer than 10)
Phosphorylated protein binding (would replace phosphoprotein binding)
Glycosylated protein binding (would replace glycoprotein binding)
Ubiquitinated protein binding (would be a child of ubiquitin binding and we would specify that ubiquitin binding is both for binding free ubiquitin and ubiquitinated proteins)
Methylated protein binding
Acylated protein binding
Acetylated protein binding
3. Other PTMs:
To avoid unsustainable multiplication of terms, we can use 'Modified protein binding' with an ontology term in the extension (ontology to use remains to be defined). Until we choose the ontology, the parent term “modification-dependent protein binding” should be used for annotation.
Thanks, Pascale
Note: Send annotations to everyone to modify
here is list from PRO:
acetylation
ADP-ribosylation
amidation
bromination
cleavage
glycosylation
GPI-anchor
hydroxylation
methylation
phosphorylation
prenylation
sumoylation
ubiquitination
other
don't know how often bromination comes up, and the "cleavage"
Rename:
GO:0050815 phosphoserine binding -> phosphoserine residue binding
GO:0050816 phosphothreonine binding -> phosphothreonine residue binding
GO:0001784 phosphotyrosine binding -> phosphotyrosine residue binding
On the May 9, 2017 annotation call, we agreed that the meaning of these terms was consistent with these changes.
Updated the definition to "Binding specifically to a protein dependent on the presence of a post-translation modification in the target protein. "
[Term]
+id: GO:0140030
+name: modification-dependent protein binding
+namespace: molecular_function
+def: "Binding specifically to a protein dependent on the presence of a post-translation modification in the target protein." [PMID:26060076]
+comment: This term should only be used when the binding is shown to required the post-translational modification: the interaction needs to be tested with and without the PTM. The binding does not need to be at the site of the modification. It may be that the PTM causes a conformation change that allows binding of the protein to another region; this type of modification-dependent protein binding is valid for annotation to this term.
+synonym: "modified protein binding" RELATED []
+is_a: GO:0005515 ! protein binding
+created_by: pg
+creation_date: 2017-05-17T11:50:41Z
+
Fixed ubiquitin protein binding:
id: GO:0031593
-name: polyubiquitin binding
+name: polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently with a polymer of ubiqutin." [GOC:mah]
-synonym: "multiubiquitin binding" RELATED []
-is_a: GO:0043130 ! ubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination of the target protein." [GOC:pg]
+is_a: GO:0140030 ! modification-dependent protein binding
[Term]
id: GO:0036435
-name: K48-linked polyubiquitin binding
+name: K48-polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently and non-covalently with a polymer of ubiquitin formed by linkages between lysine residues at position 48 of the ubiquitin monomers." [GOC:al, PMID:20739285]
-is_a: GO:0031593 ! polyubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination formed by linkages between lysine residues at position 48 in the target protein." [GOC:al, PMID:20739285]
+is_a: GO:0031593 ! polyubiquitin modification-dependent protein binding
created_by: rfoulger
creation_date: 2013-09-18T14:51:06Z
[Term]
id: GO:0070530
-name: K63-linked polyubiquitin binding
+name: K63-polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently and non-covalently with a polymer of ubiquitin formed by linkages between lysine residues at position 63 of the ubiquitin monomers." [GOC:mah, PMID:15556404, PMID:17525341]
-is_a: GO:0031593 ! polyubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination formed by linkages between lysine residues at position 63 in the target protein." [GOC:mah, PMID:15556404, PMID:17525341]
+is_a: GO:0031593 ! polyubiquitin modification-dependent protein binding
[Term]
id: GO:0071795
-name: K11-linked polyubiquitin binding
+name: K11-polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently and non-covalently with a polymer of ubiquitin formed by linkages between lysine residues at position 11 of the ubiquitin monomers." [GOC:sp, PMID:18775313]
-is_a: GO:0031593 ! polyubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination formed by linkages between lysine residues at position 11 in the target protein." [GOC:sp, PMID:18775313]
+is_a: GO:0031593 ! polyubiquitin modification-dependent protein binding
created_by: midori
creation_date: 2010-09-02T02:11:41Z
[Term]
id: GO:0071796
-name: K6-linked polyubiquitin binding
+name: K6-polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently and non-covalently with a polymer of ubiquitin formed by linkages between lysine residues at position 6 of the ubiquitin monomers." [GOC:sp, PMID:17525341, PMID:20351172]
-is_a: GO:0031593 ! polyubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination formed by linkages between lysine residues at position 6 in the target protein." [GOC:sp, PMID:17525341, PMID:20351172]
+is_a: GO:0031593 ! polyubiquitin modification-dependent protein binding
created_by: midori
creation_date: 2010-09-02T02:13:07Z
@@ -587346,7 +587345,7 @@ name: linear polyubiquitin binding
namespace: molecular_function
def: "Interacting selectively and non-covalently with a linear polymer of ubiquitin. Linear ubiquitin polymers are formed by linking the amino-terminal methionine (M1) of one ubiquitin molecule to the carboxy-terminal glycine (G76) of the next." [GOC:bf, GOC:PARL, PMID:23453807]
synonym: "M1-linked ubiquitin chain binding" EXACT [PMID:23453807]
-is_a: GO:0031593 ! polyubiquitin binding
+is_a: GO:0043130 ! ubiquitin binding
created_by: bf
creation_date: 2014-08-06T11:10:26Z
Was the duplication of "and non-covalently" intentional?
probably not :) thanks for pointing out
In fact it was in the old definition ! (as you can see by the '-' sign at the beginning of the line.
I lose. My apologies.
As discussed in
https://github.com/geneontology/go-annotation/issues/1586
I will create
'glycosylated region binding' (similar to 'proline-rich region binding'),
Proposed def: Interacting selectively and non-covalently with a glycosylated region of a protein.
Proposed parents:
protein binding + carbohydrate-derivative protein binding.
Pascale
New term:
+id: GO:0140081
+name: glycosylated region protein binding
+namespace: molecular_function
+def: "Interacting selectively and non-covalently with a glycosylated region of a protein." [GOC:pg]
+is_a: GO:0005515 ! protein binding
+is_a: GO:0097367 ! carbohydrate derivative binding
+created_by: pg
+creation_date: 2017-07-25T10:58:31Z
|
2025-04-01T06:38:46.356403
| 2017-08-09T17:38:06
|
249106560
|
{
"authors": [
"pgaudet",
"rjdodson"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6208",
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/14040"
}
|
gharchive/issue
|
This term is most commonly associated with D. discoideum, but applies to at least 3 additional species of Dictyostelids: D. giganteum, D. purpureum, and P. pallidum. Please let me know if you have additional questions.
Thanks,
Bob Dodson
Name: sexual macrocyst formation
Ontology: process
Synonyms: macrocyst formation, sexual fusion
Definition: The fusion of haploid amoebae cells with matching mating types to form a larger cell, which ingests additional amoebae and forms a cellulose wall. The resulting macrocyst undergoes recombination and meiosis followed by release of haploid amoebae. An example of this process can be found in Dictyostelium discoideum.
Hi Bob !!
Here's the new term:
[Term]
+id: GO:0140084
+name: sexual macrocyst formation
+namespace: biological_process
+def: "The fusion of haploid amoebae cells with matching mating types to form a larger cell, which ingests additional amoebae and forms a cellulose wall. The resulting macrocyst undergoes recombination and meiosis followed by release of haploid amoebae. An example of this process can be found in Dictyostelium discoideum." [PMID:16592095, PMID:20089169]
+synonym: "macrocyst formation" RELATED []
+synonym: "sexual fusion" RELATED []
+is_a: GO:0019953 ! sexual reproduction
+created_by: pg
+creation_date: 2017-08-14T20:11:03Z
Thanks, Pascale
Thanks Pascale!
|
2025-04-01T06:38:46.358898
| 2004-03-29T22:36:33
|
97147306
|
{
"authors": [
"gocentral",
"linuxlovell"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6209",
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/1624"
}
|
gharchive/issue
|
astrocytes are not immune cells
GO:0045321 : immune cell activation (374)
GO:0048143 : astrocyte activation (5)
Astrocytes are glial cells (they only exist in the brain and support neurons; they are not found in the blood), so they are currently mis-classified as immune cells.
Reported by: *anonymous
Original Ticket: "geneontology/ontology-requests/1627":https://sourceforge.net/p/geneontology/ontology-requests/1627
|
2025-04-01T06:38:46.376222
| 2019-03-13T20:42:04
|
420709791
|
{
"authors": [
"ValWood",
"deustp01",
"dsiegele",
"pgaudet",
"ukemi",
"vanaukenk"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6210",
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/17035"
}
|
gharchive/issue
|
ATP synthase parent
proton-transporting ATP synthase activity, rotational mechanism (GO:0046933)
has the parents
cation-transporting ATPase activity
is_a ATPase activity, coupled to transmembrane movement of ions, rotational mechanism
but this is ATP synthase not ATPase
There are 3 types of proton-translocating ATPases with a rotational mechanism: P-type, V-type, and F-type. I think the P-type and V-type ATPases function only as ATPases. But the F-type ATPases can function either as an ATP synthase (where protons crossing a membrane drive synthesis of ATP from ADP + Pi) or as an ATPase (where ATP -> ADP + Pi provides the energy to move protons back across the membrane). The location of GO:0046933 in GO makes sense to me in terms of the biochemistry because all the protein complexes I know of that have proton-transporting ATP synthase activity, rotational mechanism are also proton-translocating ATPases. But that may not fit with the logic of the ontology structure.
Links:
https://en.wikipedia.org/wiki/Proton_ATPase
Stewart et al. (2014) Current Opinion in Structural Biology
https://doi.org/10.1016/j.sbi.2013.11.013
but to annotate the non ATP synthase direction you would use
proton-transporting ATPase activity, rotational mechanism?
(some of the exact synonyms are conflicting too)
Quick note for the assigned curator. The = sign in the definitions means that these reactions are all defined as bidirectional despite their names. At first glance this leads me to agree with @dsiegele.
But we have both reactions in GO?
I think it is really important for representing the biology that we retain this distinction.
ATP synthase creates the energy currency for the cell.
There is an old discussion about having both terms somewhere, but I suspect it is so old it is on SourceForge (I can't find it on GitHub). I seem to remember that the reverse mitochondrial reaction isn't physiologically relevant. So surely we should represent the reaction in the correct direction with appropriate parentage?
Otherwise it's really difficult to model the biology.
We represent the v-ATPase as an ATPase and the mitochondial F0-F1 as an ATP synthase?
EC and Rhea treat all chemical reactions as reversible in principle, if I remember right. @dsiegele ? This one is probably a valid example - given high enough concentrations of what we normally think of as reaction products, the system could be driven to generate what we think of as reaction substrates. Doesn't GO follow this usage, that by default the description of the direction of a molecular function is agnostic as to direction? So if a direction emerges, that can only happen at the process level? Unless there's some way to impose a "physiological direction" attribute on the function from outside of GO? @ukemi ?
Hmm, I see, so how does that work with kinase/phosphatase etc?
It takes a hell of a lot of phosphoprotein, ADP, and patience.
;)
but we don't follow the agnostic as to direction for GO in this case?
Here's an example:
GO:0004713 protein tyrosine kinase activity
Molecular Function
Definition: Catalysis of the reaction: ATP + a protein tyrosine = ADP + protein tyrosine phosphate.
Which takes us back to @ukemi 's comment above:
Quick note for the assigned curator. The = sign in the definitions means that these reactions are all defined as bidirectional despite their names.
The reverse reaction (ATP --> ADP + H+) is physiologically relevant in bacteria. During fermentative growth, i.e. growth without an electron transport chain, some taxa of bacteria use this process to generate a pmf.
Here's an example:
GO:0004713 protein tyrosine kinase activity
OK I misunderstood, I think. @deustp01 you are saying that we don't specify the direction even though we annotate directionally based on the term name?
The reverse reaction (ATP --> ADP + H+) is physiologically relevant in bacteria.
There should be a term for this too.
At present it's really confusing because the entire ancestry is specified as ATPase
see the parent
GO:0019829 cation-transporting ATPase activity
Enables the transfer of a solute or solutes from one side of a membrane to the other according to the reaction: ATP + H2O + cation(out) = ADP + phosphate + cation(in).
until this term when the term name switches to ATP synthase. It should be consistent but I don't want to annotate a mitochondrial F1F0 ATP synthase as an ATPase. It would look wrong.
There is clearly a precedent for forcing directionality when it is biologically important (kinase/phophatase).
In Trypanosoma brucei, the mitochondrial F1Fo ATPase functions differently depending upon the host organism. In the insect host, it functions as an ATP synthase and generates ATP. In the mammalian bloodstream form of T. brucei, the same enzyme functions primarily as an ATPase and is required to maintain the mitochondrial membrane potential.
PMID: 28414727
also, https://doi.org/10.1111/j.1432-1033.1992.tb17278.x
That's OK, we have terms for both activities
GO:0046933 proton-transporting ATP synthase activity, rotational mechanism
or
GO:0046961 proton-transporting ATPase activity, rotational mechanism
coupled to the appropriate process and the lifecycle stage
This is a reason why it is important to represent both directions- because they represent different aspects of biology.
Parentage of both terms demonstrates more clearly why the parentage is incorrect:
After discussion with @ValWood and @pgaudet, we propose to add GO:0016776 'phosphotransferase activity, phosphate group as acceptor' as a parent to the 'proton-transporting ATP synthase' term and remove the ATPase parent. Not having the phosphotransferase parent seems to be missing a biologically relevant MF parent.
For curation, curators would annotate to the ATP synthase and/or ATPase MF terms and, ideally, provide the appropriate biological context to capture when the 'machine' and its subunits enable each type of MF.
We also discussed the 'proton transmembrane transporter activity' MF parent for 'proton-transporting ATP synthase'. According to the definition, this MF seems to fit GO:0022803 'passive transmembrane transporter activity':
"Enables the transfer of a single solute from one side of a membrane to the other by a mechanism involving conformational change, either by facilitated diffusion or in a membrane potential dependent process if the solute is charged."
@pgaudet - does this seem correct to you?
Note that the question remains about which RHEA to xref to for each MF term and also that synonyms will need to be reassigned appropriately.
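The re-parenting proposed above could be sketched as a diff to the term stanza, in the same style used elsewhere in this thread (a hypothetical sketch only — the exact is_a line removed here is illustrative, and the final xrefs and synonyms are still to be decided):

```
[Term]
id: GO:0046933
name: proton-transporting ATP synthase activity, rotational mechanism
-is_a: GO:0019829 ! cation-transporting ATPase activity
+is_a: GO:0016776 ! phosphotransferase activity, phosphate group as acceptor
```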
@pgaudet - does this seem correct to you?
Yes
Note that the question remains about which RHEA to xref to for each MF term and also that synonyms will need to be reassigned appropriately.
I thought we had decided on the directionality of the Rhea reaction for both (or at least for one of them - I think Rhea is missing the synthase?)
Thanks, Pascale
I thought we had decided on the directionality of the Rhea reaction for both (or at least for one of them - I think Rhea is missing the synthase?)
I think we weren't clear on exactly how 'in' and 'out' should be interpreted, but tagging @amorgat here for guidance.
|
2025-04-01T06:38:46.385481
| 2019-07-08T09:12:44
|
465143701
|
{
"authors": [
"ValWood",
"mah11",
"pgaudet",
"ukemi"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6211",
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/17597"
}
|
gharchive/issue
|
MP modulation by symbiont of host defense-related PCD
GO:0034053
modulation by symbiont of host defense-related programmed cell death
should be
is_a
regulation of
GO:0097300 programmed necrotic cell death
@CuzickA
Is it always programmed necrotic cell death, never apoptotic?
Well this is a tricky one. In the plant community they call the death "necrotic death" whatever the mechanism, because "necrotic lesions" are what is observed.
The mechanism isn't always clear. In the cases we looked at, either we don't know, or it is referred to as PCD. Whether or not it is PCD, for plants it is necrotic cell death.
Normally, in response to a pathogen the plant activates the hypersensitive response, a host immune response - the normal defense against biotrophs (localized cell killing).
However this isn't a pathogen process, since the host is doing the activating.
It would make sense that if the pathogen is doing the killing (implied by this particular term "modulation by symbiont of host defense-related PCD") then it is always "necrotic".
It also seems to completely fit this definition:
GO:0097300 programmed necrotic cell death
Definition (GO:0097300 GONUTS page)
A necrotic cell death process that results from the activation of endogenous cellular processes, such as signaling involving death domain receptors or Toll-like receptors. PMID:21760595
because it is always necrotic.
clarified the comment above a little...
Does this term only apply to plants ? If you want to describe 'necrotic cell death' I think we need a new term.
We want to use
GO:0034053 modulation by symbiont of host defense-related programmed cell death
to logically define a phenotype.
As far as I can see, logically
all
GO:0034053
modulation by symbiont of host defense-related programmed cell death
must be
regulation of GO:0097300 programmed necrotic cell death
which is why we are asking for the parent.
Do symbionts regulate host defenses and cause cell death that isn't necrotic? if the pathogen is causing cell death can it be other?
I don't really quite know, but ...
https://doi.org/10.1371/journal.ppat.1000478
and maybe
https://www.ncbi.nlm.nih.gov/pubmed/20191202
https://www.ncbi.nlm.nih.gov/pubmed/12766474
https://www.ncbi.nlm.nih.gov/pubmed/11595833
Hmm. OK yes this doesn't seem quite right.
We can probably use both GO:0034053 and GO:0097300 in the logical defs.
At the moment we aren't creating the logical defs, we are just noting the GO terms we think are most appropriate so they are ready. James is looking into design patterns with Nico.
@CuzickA could you note both of these GO IDs for this one. We can look closer nearer the time, but I will close this ticket.
|
2025-04-01T06:38:46.402711
| 2018-01-25T19:11:22
|
291682241
|
{
"authors": [
"tmushayahama",
"vanaukenk"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6212",
"repo": "geneontology/simple-annoton-editor",
"url": "https://github.com/geneontology/simple-annoton-editor/issues/25"
}
|
gharchive/issue
|
Allow for searching on IDs or accessions
Curators will want to be able to cut and paste database IDs in the Gene Product field.
This currently works in the Add Individual field in the graph editor.
@vanaukenk pasting values is now working. I just put a minimum of 2 characters for the autocomplete to be triggered.
|
2025-04-01T06:38:46.406563
| 2024-01-26T14:57:11
|
2102357438
|
{
"authors": [
"jldparker"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6213",
"repo": "genesiscommunitysuccess/blank-app-seed",
"url": "https://github.com/genesiscommunitysuccess/blank-app-seed/pull/118"
}
|
gharchive/pull-request
|
chore: added root project to settings.gradle.kts
Review Guidance
📓 Related JIRA
https://genesisglobal.atlassian.net/browse/IP-51
🤔 What does this PR do?
Add bullets..
🚀 Where should the reviewer start?
Add file / directory pointers.
📑 How should this be tested?
npx -y @genesislcap/genx@latest init prtest -x --ref YOUR-BRANCH-NAME
✅ Checklist
[ ] I have tested my changes.
[ ] I have added tests for my changes.
[ ] I have updated the project documentation to reflect my changes.
No longer needed
|
2025-04-01T06:38:46.496769
| 2017-08-12T09:12:59
|
249799841
|
{
"authors": [
"alexanderzimmerman"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6214",
"repo": "geo-fluid-dynamics/phaseflow-fenics",
"url": "https://github.com/geo-fluid-dynamics/phaseflow-fenics/issues/57"
}
|
gharchive/issue
|
Hand exact Jacobian to nonlinear solver
I vaguely recall seeing a way to do this once.
This way we no longer have to distinguish between the nonlinear and linearized problems in phaseflow.
Once this is done, we should be able to run all tests with mpirun, and no longer need to run tests in serial.
This is complete as of commit 35fa7f63e59d07ec46c4e723806b50ba7bd7247e
This is a pretty big deal :) I discussed it in more detail at PR #63
|
2025-04-01T06:38:46.560361
| 2024-12-12T10:09:30
|
2735450077
|
{
"authors": [
"martinRenou",
"mfisher87"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6215",
"repo": "geojupyter/jupytergis",
"url": "https://github.com/geojupyter/jupytergis/issues/247"
}
|
gharchive/issue
|
Setup pre-commit-ci
https://github.com/apps/pre-commit-ci
Checks working on this repo: https://results.pre-commit.ci/repo/github/813604742
Autofix automation working on this PR: https://github.com/geojupyter/jupytergis/pull/243
Although I think I prefer autofixes to be off, but that's just me! :)
|
2025-04-01T06:38:46.572402
| 2023-04-10T13:32:59
|
1660792666
|
{
"authors": [
"alesolla",
"disbr007"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6216",
"repo": "geopython/OWSLib",
"url": "https://github.com/geopython/OWSLib/issues/871"
}
|
gharchive/issue
|
AttributeError: module 'logging' has no attribute 'isEnabledFor'
Just started seeing this error pop yesterday from my project that uses owslib. I'm not version-pinned, so I'm assuming this is related to the release from yesterday.
r = wcs.getCoverage(
File "/usr/local/lib/python3.8/dist-packages/owslib/coverage/wcs201.py", line 156, in getCoverage
if log.isEnabledFor(logging.DEBUG):
AttributeError: module 'logging' has no attribute 'isEnabledFor'
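The traceback suggests `log` is bound to the `logging` module itself rather than a `Logger` instance — the module has no `isEnabledFor()` method; only `Logger` objects do. The usual pattern (a sketch of the typical fix, not OWSLib's actual source) is a module-level logger:

```python
import logging

# Hypothetical illustration: `logging.getLogger(...)` returns a Logger
# instance, which *does* have isEnabledFor(); calling isEnabledFor on
# the logging module itself raises the AttributeError seen above.
log = logging.getLogger("owslib.coverage.wcs201")

if log.isEnabledFor(logging.DEBUG):
    log.debug("WCS GetCoverage request details ...")
```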
Hi! Is there any update on when this will be updated in the pypi released version?
|
2025-04-01T06:38:46.597111
| 2020-03-23T17:15:30
|
586376758
|
{
"authors": [
"Zeitsperre",
"cehbrecht",
"coveralls"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6217",
"repo": "geopython/pywps",
"url": "https://github.com/geopython/pywps/pull/527"
}
|
gharchive/pull-request
|
Use bump2version to track version changes
Overview
This PR adds bump2version (a maintained fork of bumpversion) to the repository to manage version changes. This works by setting the canonical version number in setup.cfg and performing the changes based on the semantic versioning scheme. From the command line:
bump2version patch --> +0.0.1
bump2version minor --> +0.1.x # where x is reset to 0
bump2version major --> +1.x.x # where x is reset to 0
bump2version can also do things like tag versions and create commits when tagging. These are not enabled in this PR.
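For context, the configuration bump2version reads from setup.cfg typically looks something like this (a minimal sketch — the version number and tracked files below are assumptions for illustration, not copied from this PR):

```ini
[bumpversion]
; the canonical version; `bump2version patch` rewrites it in every file listed
current_version = 4.2.4
; tagging/committing left disabled, matching this PR
commit = False
tag = False

; one section per file containing the version string to rewrite
[bumpversion:file:setup.py]
```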
Related Issue / Discussion
https://github.com/geopython/pywps/issues/525
Additional Information
https://pypi.org/project/bump2version/
Contribution Agreement
(as per https://github.com/geopython/pywps/blob/master/CONTRIBUTING.rst#contributions-and-licensing)
[x] I'd like to contribute [feature X|bugfix Y|docs|something else] to PyWPS. I confirm that my contributions to PyWPS will be compatible with the PyWPS license guidelines at the time of contribution.
[x] I have already previously agreed to the PyWPS Contributions and Licensing Guidelines
Coverage remained the same at 74.26% when pulling 976fbf90b258a49fd609c5969a1912ab94e1f364 on Zeitsperre:bumpversion into 61e03fe566d9a4cdf52a6b941a404916592e5aac on geopython:master.
test failure on Python 3.7 not related to the PR.
@Zeitsperre thanks :)
|
2025-04-01T06:38:46.694615
| 2022-10-20T23:55:11
|
1417487757
|
{
"authors": [
"RCady",
"gerardroche"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6218",
"repo": "gerardroche/sublime-phpunit",
"url": "https://github.com/gerardroche/sublime-phpunit/issues/102"
}
|
gharchive/issue
|
Add support for Laravel's artisan test command
Laravel's artisan test command offers some styling upgrades on top of PHPUnit's. artisan test passes all arguments directly to PHPUnit.
It would be nice to add Laravel/Artisan detection and adjust the command to utilize this.
I plan to create a Pull Request with the changes in the near future.
Currently you can toggle the --testdox option from the command palette which will give you a similar output.
One way to hack it is the following:
"phpunit.prepend_cmd": ["artisan"],
"phpunit.executable": "test",
"phpunit.options":
{
"colors=never": true,
},
This prepends artisan and replaces the executable with test which gets you artisan test. A bit hacky but works.
The colors option is required because artisan prints colors codes. PHPUnit defaults to --colors=auto which disables colors when run in Sublime Text.
The problem is that artisan still prints some color codes which is similar to the same issue Pest has (see https://github.com/gerardroche/sublime-phpunit/issues/103):
It also prints a TTY warning that I don't understand yet:
Warning: TTY mode requires /dev/tty to be read/writable.
In the next version you will be able to set the executable as a list:
"phpunit.executable": ["artisan", "test"],
"phpunit.options":
{
"colors=never": true,
},
I added the boolean setting phpunit.artisan. Just set it to true to enable the Artisan test runner.
I opened an issue in the Laravel tracker for the color output issues: https://github.com/laravel/framework/issues/46759.
Please open issues about missing syntax highlighting of the output.
|
2025-04-01T06:38:46.695821
| 2016-01-11T14:23:40
|
125954964
|
{
"authors": [
"Hemofektik",
"gerazo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6219",
"repo": "gerazo/loose_quadtree",
"url": "https://github.com/gerazo/loose_quadtree/pull/1"
}
|
gharchive/pull-request
|
Make loose quadtree compatible to MSVC 2015
These are only minor changes but they fix any compiler errors when used in Visual Studio 2015 C++ Toolchain.
Thanks a lot for the fix.
|
2025-04-01T06:38:46.712955
| 2022-12-10T20:59:56
|
1488957600
|
{
"authors": [
"jankoegel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6220",
"repo": "getAlby/lightning-browser-extension",
"url": "https://github.com/getAlby/lightning-browser-extension/pull/1855"
}
|
gharchive/pull-request
|
refactor: Navbar: constrain max width to same width as the content.
Related Rails app PR
https://github.com/getAlby/getalby.com/pull/272
HELP NEEDED / TODOs
[ ] review code
[x] depending on responsiveness needs, consider moving the navigation entries ("websites" etc.) a bit more to the right so they are more centered instead of leaning left.
[x] reviewer: test with an extension development setup (or pair with me / let me know how to set one up)
[x] reviewer: test responsiveness (current extension's header looks as broken to me on smaller breakpoints as with these changes but I'm not sure!?)
Describe the changes you have made in this PR
Navbar: constrain max width to same width as the content.
reason: on desktop the navigation items in the top left and right corners were easy to miss. With this change, everything is closer together and easier to notice.
Type of change
UI improvement
Screenshots of the changes [optional]
Before
After
How has this been tested?
⚠️ I only tested this with my browser's inspector. I don't have a development setup for the extension yet. Ok, I have the development setup for the extension now and everything looked good to me.
I think this is a good adjustment.
This looks a bit weird, but not too problematic I think:
yes, there is a weird space of about 100px display resolution around 1024px where the wallet icon has space on the left and is not flush with the content's left border. in the same space the negative margin on the children is a problem. Didn't investigate further since responsiveness is not a big focus. We could remove the negative margin, at full-res I felt it looks more centered with it than without.
hm, just checked again and it feels like i don't get the children that close to the wallet icon until a much smaller breakpoint on firefox. Does the following GIF match what you're seeing?
looks the same as in Firefox for me on Chrome.
|
2025-04-01T06:38:46.722737
| 2020-03-09T15:31:52
|
577994568
|
{
"authors": [
"gmarec",
"kytwb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6221",
"repo": "getferdi/ferdi",
"url": "https://github.com/getferdi/ferdi/issues/451"
}
|
gharchive/issue
|
Links between services
Hi all
Is it possible to open a link in a registered service?
For example, I have a Trello service and a Slack service. If someone sends me a Trello link in Slack it will open in an external browser by default; is it possible to open it in the Trello service inside Ferdi?
Thanks
@kytwb I've worked a little on this, now I need to use user.js in each service in order to modify all links I want. For example, if I have Trello with id 'baa48d44-e29a-4a55-8709-e4fbf02a39a4' and Slack service :
I can transform links like Card
TO:
window.open('https://trello.com/c/daUQc9Us/89-gestion-des-notifications-menu-écran', 'Ferdi: Trello', 'baa48d44-e29a-4a55-8709-e4fbf02a39a4');
Can you give me any direction on how to automatically retrieve a service.id for a given service.name? Is it possible to retrieve this kind of information in user.js?
I can create a pull request, but just to have some insights from you guys ;-)
For the moment all my code is in https://github.com/gmarec/ferdi/tree/develop-service-link
Thanks for your help
@gmarec Thank you for digging into this! 💪🙏
Can you please open a draft pull request? Like this we can more easily checkout, see the diff and collaborate within the pull request 😄 Maybe you can also comment in the diff to pin-point us to where you're missing the ~getServiceIdByServiceName helper?
@kytwb it's ok now to get serviceId by serviceName, thanks. I have opened a draft pull request. Thanks for your help. It needs documentation and a better integration than using user.js but it's working.
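For illustration, a ~getServiceIdByServiceName helper of the kind discussed above might look like this (a sketch; the `services` array shape `{id, name}` is an assumption for illustration, not Ferdi's actual store API):

```javascript
// Hypothetical helper: given a list of configured services,
// look up a service id by its name (case-insensitive).
function getServiceIdByServiceName(services, name) {
  const match = services.find(
    (s) => s.name.toLowerCase() === name.toLowerCase(),
  );
  return match ? match.id : null;
}
```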
|
2025-04-01T06:38:46.725561
| 2020-04-04T13:56:14
|
593874742
|
{
"authors": [
"JohnTHaller",
"poisonborz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6222",
"repo": "getferdi/ferdi",
"url": "https://github.com/getferdi/ferdi/issues/538"
}
|
gharchive/issue
|
Make PAF format release also available as windows portable option
The current portable version for windows simply writes files to a temp directory. While technically this works, this does not fulfill what most users have in mind for a portable app - which is a standalone folder with program files that can be just moved and launched anywhere and it works. I use such apps to easily sync settings between machines.
I still use the old PAF version with the newer versions (just replacing the App/ferdi folder) and it works great. I know using Electron's portable target is easier and updates are also easier to solve, but the PAF format is still more flexible for advanced users, and should be at least optionally there (with caveat warnings).
I'd be happy to create a PortableApps.com Format package and host it on PortableApps.com. Or to create it and let you build/host it for both your users and ours. Or start with the former and transition to the latter as you'd like.
(I'm the creator of PortableApps.com and manage many of our app packages).
|
2025-04-01T06:38:46.734272
| 2016-05-23T11:55:39
|
156263233
|
{
"authors": [
"apoorvam",
"sguptatw"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6223",
"repo": "getgauge/gauge",
"url": "https://github.com/getgauge/gauge/issues/410"
}
|
gharchive/issue
|
With multiple tags for a scenario the previous tags are being overwritten
Expected behavior
If there are multiple tags then it should get appended to the list
Actual behavior
It is getting overwritten
Steps to reproduce
Create a spec
Specification Heading
=====================
tags:top
* something1
c
-----------------------------
tags:scenario3 ppp a b c d e f g h i j k l m n o p q r s t u v w x y z
* something
tags:b
run the spec
The tag associated with c is only b
Gauge version
Gauge version: 0.4.1.nightly-2016-05-18
Plugins
-------
html-report (2.1.1.nightly-2016-05-19)
java (0.4.1.nightly-2016-05-17)
Ideally, tags should be defined only once in a scenario. So if it's defined more than once, we should throw a parse error.
Fix should be available in nightly >= 03.06.2016
|
2025-04-01T06:38:46.742808
| 2018-03-22T09:09:25
|
307558167
|
{
"authors": [
"rhukster",
"stephenvoisey"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6224",
"repo": "getgrav/grav-theme-quark",
"url": "https://github.com/getgrav/grav-theme-quark/issues/12"
}
|
gharchive/issue
|
Missing metadata include
Quick one. Within the base template, there is an include:
{% include 'partials/metadata.html.twig' %}
But no such file exists in partials or elsewhere.
That is in the Grav core now.. under system/templates/partials/
BTW, this was moved there because it's useful for all themes, and rarely needs changing. You can of course override it in your theme, but don't need to provide it yourself now.
Thanks for the clarification, many thanks!
|
2025-04-01T06:38:46.745690
| 2019-04-16T10:08:30
|
433694002
|
{
"authors": [
"Memurame",
"n30nl1ght",
"ricardo118"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6225",
"repo": "getgrav/grav",
"url": "https://github.com/getgrav/grav/issues/2453"
}
|
gharchive/issue
|
Disable tags in codemirror editor.
I only want to allow my client to make text bold in the editor. I want to hide all other tags.
Is this possible?
How can i do this?
I will test it.
|
2025-04-01T06:38:46.747867
| 2016-03-25T00:36:51
|
143395366
|
{
"authors": [
"4evermaat",
"flaviocopes"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6226",
"repo": "getgrav/grav",
"url": "https://github.com/getgrav/grav/issues/749"
}
|
gharchive/issue
|
using underscore vs dashes in page filename
How do I get Grav to use underscores (_) instead of dashes (-) in page filenames? I noticed that Grav uses dashes in the page name when there are spaces, or by default. I prefer underscores. Is there an admin panel option for this?
I see the option to put .html at the end of the page, but spaces in filenames still default to - instead of _. Any ideas?
@4evermaat no there's no such option at the moment. Add it to the Admin Plugin issues to be considered https://github.com/getgrav/grav-plugin-admin/issues/new
|
2025-04-01T06:38:46.751319
| 2015-09-16T15:54:18
|
106803588
|
{
"authors": [
"attiks",
"rhukster"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6227",
"repo": "getgrav/grav",
"url": "https://github.com/getgrav/grav/pull/325"
}
|
gharchive/pull-request
|
Responsive image
This patch adds more support for responsive images, allowing you to do the following:
page.media.images|first.derivatives(320, 960, 100).sizes('(max-width: 26em) 100vw, 50vw').html()
This will output an image tag with srcset for all widths starting from 320px to 960px in steps of 100px, the image will never upscale. The example uses an image with an intrinsic width of 650px.
<img sizes="(max-width: 26em) 100vw, 50vw" style="" src="/images/a/2/d/c/0/a2dc029adaa6460cfae159f06e676439138fac17-its-havenvervoer-breed1.jpeg" srcset="/images/6/a/a/f/d/6aafde2cb0b8bca92368200556db5c48be7318b5-its-havenvervoer-breed1.jpeg 320w, /images/f/5/8/c/7/f58c75299a67eff397d1ef62731ed55e1df9ee9a-its-havenvervoer-breed1.jpeg 420w, /images/9/0/1/7/c/9017c3a9fcbe0e2f2894f377ff9460ba78d34b0d-its-havenvervoer-breed1.jpeg 520w, /images/c/e/d/4/6/ced46633f00d10c91b72dd38b3a82c39a03bb181-its-havenvervoer-breed1.jpeg 620w, /images/a/2/d/c/0/a2dc029adaa6460cfae159f06e676439138fac17-its-havenvervoer-breed1.jpeg 650w">
In the code you'll see we just remember a list of url and width, this has been done to avoid memory problems while using lots of steps.
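The (url, width) bookkeeping described above essentially joins the pairs into one srcset string; a minimal Python sketch of that idea (hypothetical helper name, not Grav's actual PHP implementation):

```python
def build_srcset(derivatives):
    """Join (url, width) pairs into an HTML srcset attribute value.

    Keeping only the URL and pixel width per derivative keeps memory
    usage low even when many step sizes are generated.
    """
    return ", ".join(f"{url} {width}w" for url, width in derivatives)


pairs = [("/images/a-320.jpeg", 320), ("/images/a-420.jpeg", 420)]
print(build_srcset(pairs))
```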
Wow looks great guys. Thanks for the contribution.
Thanks, was hoping for feedback, but this is even better :smile:
I really have no way to improve upon that.. If you could provide a PR to the learn github site to update the docs that would be great.
I assume docs are in https://github.com/getgrav/grav-learn, if so I can have a look tomorrow
yes specifically here: https://github.com/getgrav/grav-learn/blob/develop/pages/02.content/06.media/docs.md#responsive-images
|
2025-04-01T06:38:46.845622
| 2014-12-12T17:32:40
|
51831787
|
{
"authors": [
"getnamo",
"vimaxus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6228",
"repo": "getnamo/leap-ue4",
"url": "https://github.com/getnamo/leap-ue4/pull/1"
}
|
gharchive/pull-request
|
First tracking implementation
Intend to add the possibility to only track a list of blobs as I find it
too CPU intensive now.
re-up the request on the experimental branch and I'll merge it
|
2025-04-01T06:38:46.867540
| 2022-05-11T17:25:27
|
1232968299
|
{
"authors": [
"matthew-white",
"yanokwa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6229",
"repo": "getodk/central-backend",
"url": "https://github.com/getodk/central-backend/issues/483"
}
|
gharchive/issue
|
Sentry errors without server_name
We're seeing errors in Sentry that don't indicate a server_name, for example, this error from an ODK Cloud server. Because the error doesn't indicate things like the request URL, I'm thinking that it was thrown by the worker mechanism. I think our hope was that 9e410332ed4e65ab36c87b09c8ee8ce54937c3ad would resolve this, but it seems like these errors are still appearing in Sentry.
For the record, I'm not optimistic that #521 solves this problem.
The only tags I see on this type of error are environment, handled, level, and mechanism. I see a lot more tags on other errors (e.g., runtime, url, device).
I'm out of my depth here, but it would seem that perhaps workers don't have access to the full Sentry env? @matthew-white would it be worth it to escalate to their support team?
I also don't think that #521 solves this problem. For some of the tags you mentioned, I think it's expected that those are missing for workers, because there's no associated request (for example, url). But in other cases, it seems pretty surprising that the tag is missing (for example, runtime). I'm not sure, but it looks like this might be fixed in the latest version of the Sentry SDK: see getsentry/sentry-javascript#5190 (search for server_name).
For posterity, note that #626 effectively reverted the commit 9e410332ed4e65ab36c87b09c8ee8ce54937c3ad mentioned in the issue description.
|
2025-04-01T06:38:46.891689
| 2017-02-08T21:17:01
|
206327054
|
{
"authors": [
"darrennix",
"nateberkopec"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6230",
"repo": "getsentry/raven-ruby",
"url": "https://github.com/getsentry/raven-ruby/pull/633"
}
|
gharchive/pull-request
|
Increase truncation limit on handler to 255
I am trying to debug a Job error. The problem is that Sentry is truncating the "handler" under "Additional Data". What I see in Sentry is
--- !ruby/object:Delayed::PerformableMailer
object: !ruby/class 'GeneralMailer'
method_name: :generi
This requires me to go digging outside of Sentry to find the full handler to get the ID of the troublesome entry (in args)
--- !ruby/object:Delayed::PerformableMailer
object: !ruby/class 'GeneralMailer'
method_name: :generic_message
args:
- 305294"
Hmm, we could probably even go a little bigger. The main thing is we don't want to trip the limit of a TEXT field (~60k characters, depending on DB). 1k for both the handler and last_error is probably fine.
@darrennix Wanna change this PR to modify the truncation for last_error and handler to 1,000?
Done!
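The fix discussed above just raises a truncation cutoff; conceptually (a Python sketch of the idea, not the Ruby implementation):

```python
def truncate(value, limit=1000):
    """Trim a string to at most `limit` characters, marking the cut."""
    if len(value) <= limit:
        return value
    return value[: limit - 1] + "…"


handler = "--- !ruby/object:Delayed::PerformableMailer " + "x" * 5000
print(len(truncate(handler)))  # capped at the 1,000-character cutoff
```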
|
2025-04-01T06:38:46.911340
| 2022-12-20T08:22:45
|
1504163274
|
{
"authors": [
"marandaneto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6231",
"repo": "getsentry/sentry-dart",
"url": "https://github.com/getsentry/sentry-dart/issues/1199"
}
|
gharchive/issue
|
Support Dart 3
Description
https://medium.com/dartlang/the-road-to-dart-3-afdd580fbefa
I believe we just need to increase the versioning in the pubspec file, but it has to be tested.
Likely a 7.0.0 milestone.
Let's watch https://github.com/dart-lang/sdk/issues/49530 before going ahead.
Dart v3 is already part of the master channel so I am going to make it part of the v7 release.
|
2025-04-01T06:38:46.968349
| 2020-06-23T09:16:08
|
643672053
|
{
"authors": [
"Jean85",
"gjedeer",
"guilliamxavier"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6232",
"repo": "getsentry/sentry-php",
"url": "https://github.com/getsentry/sentry-php/issues/1029"
}
|
gharchive/issue
|
Documentation out of date
https://docs.sentry.io/error-reporting/configuration/?platform=php
request_bodies
This parameter controls if integrations should capture HTTP request bodies. It can be set to one of the following values:
never: request bodies are never sent.
small: only small request bodies will be captured where the cutoff for small depends on the SDK (typically 4KB)
medium: medium-sized requests and small requests will be captured. (typically 10KB)
always: the SDK will always capture the request body for as long as sentry can make sense of it
When trying to use the request_bodies option:
Fatal error: Uncaught Symfony\Component\OptionsResolver\Exception\UndefinedOptionsException: The option "request_bodies" does not exist. Defined options are: "attach_stacktrace", "before_breadcrumb", "before_send", "capture_silenced_errors", "class_serializers", "context_lines", "default_integrations", "dsn", "enable_compression", "environment", "error_types", "excluded_exceptions", "http_proxy", "in_app_exclude", "in_app_include", "integrations", "logger", "max_breadcrumbs", "max_request_body_size", "max_value_length", "prefixes", "project_root", "release", "sample_rate", "send_attempts", "send_default_pii", "server_name", "tags". in /home/***/public_html/vendor/symfony/options-resolver/OptionsResolver.php:798
Stack trace:
#0 /home/***/public_html/vendor/sentry/sentry/src/Options.php(56): Symfony\Component\OptionsResolver\OptionsResolver->resolve(Array)
#1 /home/***/public_html/vendor/sentry/sentry/src/ClientBuilder.php(115): Sentry\Options->__construct(Array)
#2 /home/***/public_html/ in /home/***/public_html/vendor/symfony/options-resolver/OptionsResolver.php on line 798
FTR, the right option is max_request_body_size.
And never should be none (#1019)
I found related getsentry/sentry-docs#1408: the problem is that the Python client indeed uses request_bodies (and never)
|
2025-04-01T06:38:46.976505
| 2018-10-11T08:19:09
|
369004185
|
{
"authors": [
"HazAT"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6233",
"repo": "getsentry/sentry-php",
"url": "https://github.com/getsentry/sentry-php/issues/670"
}
|
gharchive/issue
|
[2.0 branch] Tracking issue - Unified API
This is a non-exhaustive list containing todo's to refactor the current 2.0 branch to conform the unified SDK API see: https://docs.sentry.io/clientdev/unified-api/
In general, all points are up for discussion. This should be a rough guideline to hold on to while we progress through it.
[ ] Up minimum PHP version to 7.1 and use features (like: Return type declarations, nullable types, Throwable, …)
[ ] General wording changes to functions (e.g.: leaveBreadcrumb -> addBreadcrumb, Config -> Options)
[ ] Change Sentry protocol version to 7
[ ] Update SDK identifier + User Agent (sentry.php)
[ ] Context → Scope
[ ] Create Hub which takes some responsibility of the Client / ClientBuilder
[ ] Breakup Client
[ ] Move getLastEventID to Hub
[ ] Move breadcrumbs, context information into Scope
[ ] Refactor Middleware's to Integrations
[ ] Remove some public setter/getter
[ ] DSN → We no longer need path (projectRoot)
[ ] shouldCapture → beforeSend
[ ] Do not setTags in Configuration
[ ] Stacktrace should be stacktrace.values[].frames
[ ] Breadcrumbs should be breadcrumbs.values[].crumb
[ ] Remove base64 encoding
[ ] Remove sanitization
[ ] Remove docs from repo, they now live in https://github.com/getsentry/sentry-docs
[ ] Write + Update Docs https://github.com/getsentry/sentry-php/issues/650
Closing this in favor of https://github.com/getsentry/sentry-php/pull/677
|
2025-04-01T06:38:46.981316
| 2024-06-22T13:16:00
|
2367818492
|
{
"authors": [
"JakobBruening",
"Jean85",
"Shadow-Devil",
"cleptric"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6234",
"repo": "getsentry/sentry-symfony",
"url": "https://github.com/getsentry/sentry-symfony/issues/855"
}
|
gharchive/issue
|
TraceableResponseForV4::getInfo must be compatible with ResponseInterface::getInfo
How do you use Sentry?
Sentry SaaS (sentry.io)
SDK version
5.0
Steps to reproduce
Executed composer update with symfony 6.4 (https://github.com/pimcore/pimcore/blob/11.x/composer.json)
Expected result
No error
Actual result
Fatal error: Declaration of Sentry\SentryBundle\Tracing\HttpClient\TraceableResponseForV4::getInfo(?string $type = null) must be compatible with Symfony\Contracts\HttpClient\ResponseInterface::getInfo(?string $type = null): mixed in /var
/www/html/vendor/sentry/sentry-symfony/src/Tracing/HttpClient/TraceableResponseForV4.php on line 15
PHP Fatal error: Declaration of Sentry\SentryBundle\Tracing\HttpClient\TraceableResponseForV4::getInfo(?string $type = null) must be compatible with Symfony\Contracts\HttpClient\ResponseInterface::getInfo(?string $type = null): mixed in
/var/www/html/vendor/sentry/sentry-symfony/src/Tracing/HttpClient/TraceableResponseForV4.php on line 15
Something is loading TraceableResponseForV4, while you should be loading just TraceableResponseForV6. This is currently loaded here: https://github.com/getsentry/sentry-symfony/blob/5de2b84421489e20c23c7678c69c3210dc7a223e/src/aliases.php#L81-L83
Do you have any strange situation where all those ifs are triggered and not the correct one?
I had the same issue but fixed it via explicitly requiring symfony/http-client in my composer.json
Will be fixed by #858. Problem is that pimcore requires symfony/contracts but does not include symfony/http-client, hence you're seeing this error.
|
2025-04-01T06:38:46.985473
| 2020-11-17T13:27:54
|
744736090
|
{
"authors": [
"Jean85",
"ste93cry"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6235",
"repo": "getsentry/sentry-symfony",
"url": "https://github.com/getsentry/sentry-symfony/pull/387"
}
|
gharchive/pull-request
|
Use the hub injected in the constructor of the listeners rather than retrieving it from the SDK singleton
This PR refactors the event listeners to make them use the Hub injected into the constructor rather than getting it via the global functions. This eases the unit testing of those classes. Note that by default the injected hub is under the hood an instance of HubAdapter, and thus it works exactly as before by proxying each call to the current hub that is retrieved on-the-fly.
@Jean85 do we really need to make the priority of the listeners configurable? I know it's a really small feature and doesn't have a big impact on the maintenance of the code, but I don't think it's useful since people can simply adjust the priority of their own listeners.
Listeners need to run after certain other listeners, so making their priority configurable makes installation a lot easier. Requiring users to change their code to install ours is not fine IMHO.
What issues do you have with those?
No issue at all; I was simply questioning it because I never heard of anyone needing this, and when trying to look around GitHub with the search feature to see who does the same, I didn't find many results. I'm totally fine with leaving this as-is if you think it's worth keeping the feature, though; I have some doubts about its usefulness, but it's just my opinion.
The issue was requested in #49 the first time, so there's definitely a need.
Looking at that PR I think that the main issue was that the listeners at that time were all using the default priority, so people could not inject their own listeners between the Symfony ones (if there were any) and the Sentry ones. You decided to solve the issue by making the priority configurable, which is one way to solve it. The other way (it would have required a new major version of course, to avoid breaking BC) was to explicitly set the priority. Both ways are fine; as I said, I don't really have any issue with keeping things as they are now 😃
|
2025-04-01T06:38:47.152084
| 2024-08-30T11:12:19
|
2496945199
|
{
"authors": [
"gfxholo",
"jckoester"
],
"license": "MIT-0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6236",
"repo": "gfxholo/iconic",
"url": "https://github.com/gfxholo/iconic/issues/22"
}
|
gharchive/issue
|
Show icons also in dataview tables
When using iconize the icons get also displayed in front of the file link in any dataview table. Icons set by iconic don't. Is there a way to also show them?
Actually, this seems not to be related to Dataview; it applies to all links. Icons defined by Iconize get displayed in any link, while icons from Iconic do not.
I hadn't read the documentation; this behavior is expected.
Hi - it's true this feature isn't on the roadmap, but since I'm expecting a lot of "add this Iconize feature" requests over time, it'd be nice to keep this issue open so anyone can pitch in their comments.
You don't have to reply to this, but thanks for the suggestion! :)
|
2025-04-01T06:38:47.293512
| 2022-11-20T14:20:15
|
1456920976
|
{
"authors": [
"ghostdevv"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6239",
"repo": "ghostdevv/short",
"url": "https://github.com/ghostdevv/short/issues/7"
}
|
gharchive/issue
|
robots.txt
don't allow the path where keys are
Done - I tried something like the snippet below, but wasn't sure whether it would work as I intended, so I didn't go with it in the end and decided to just Allow: /
User-agent: *
Allow: /
Allow: /changelog
Allow: /account
Allow: /settings
Disallow: /*
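For what it's worth, under RFC 9309 the most specific (longest) matching rule wins, so a deny-by-default variant of the snippet above could look like this (a sketch using the same paths, not the deployed file):

```
User-agent: *
Disallow: /
Allow: /$
Allow: /changelog
Allow: /account
Allow: /settings
```

Allow: /$ keeps the homepage itself crawlable; crawlers that don't support the $ anchor may treat that line differently.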
|
2025-04-01T06:38:47.344726
| 2024-01-18T10:01:08
|
2087932016
|
{
"authors": [
"marians",
"mproffitt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6240",
"repo": "giantswarm/docs",
"url": "https://github.com/giantswarm/docs/pull/2052"
}
|
gharchive/pull-request
|
Update kubectl gs get releases
Change of review date only - no content changes
I'm thinking that by now a note regarding vintage only would make sense here.
Damn - that comment came in seconds before automerge - It would be correct. I'll do a new PR for that.
|
2025-04-01T06:38:47.348541
| 2020-10-01T08:05:52
|
712602830
|
{
"authors": [
"axbarsan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6241",
"repo": "giantswarm/kubectl-gs",
"url": "https://github.com/giantswarm/kubectl-gs/pull/167"
}
|
gharchive/pull-request
|
Use releases branch from flag when fetching release components
I missed this in https://github.com/giantswarm/kubectl-gs/pull/162
Going with a ping
|
2025-04-01T06:38:47.355254
| 2023-11-22T13:48:22
|
2006386128
|
{
"authors": [
"gawertm",
"glitchcrab"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6242",
"repo": "giantswarm/roadmap",
"url": "https://github.com/giantswarm/roadmap/issues/2990"
}
|
gharchive/issue
|
OS Image Distribution from S3 to Customer vCenters
We should implement a mechanism to distribute the images from the common place (AWS S3 now) to all vSphere environments automatically to be able to manage multiple customers/users easily
from: https://github.com/giantswarm/roadmap/issues/2757#issuecomment-1822303696
@giantswarm/team-rocket how could that work? This would rather need to be a pull mechanism than a push because of network restrictions, right?
Yes we would need to pull the image from the central location, most likely via an operator or maybe a cronjob in the cluster and then push it to the customer's catalog
I want to bump this back up in priority - because all providers+customers are now using releases, the number of images we need to upload has increased. Previously we used to set the image template in the cluster-provider chart and it wasn't often updated; however, the release-based charts see Kubernetes and/or Flatcar upgrades far more often, and so we have to upload images more often (especially for test installations). Additionally, the number of locations we need to upload them to is increasing with more on-prem customers coming onboard.
An operator shouldn't be too complex to write - I suggest we provide it a whitelist of images to upload (so as to avoid us uploading every single image which is built). This has the added benefit that we don't need to have the operator watch the S3 bucket - it just reconciles the existing images in the vcenter/vcd catalog against the whitelist and then just add any which are missing.
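The whitelist-reconcile idea above is simple in outline; a Python sketch (hypothetical function and image names, not an actual operator):

```python
def images_to_upload(whitelist, catalog_images):
    """Return whitelisted images that are missing from a vCenter/VCD catalog.

    Reconciling against an explicit whitelist avoids watching the S3
    bucket and avoids uploading every image that is ever built.
    """
    return sorted(set(whitelist) - set(catalog_images))


whitelist = ["flatcar-stable-3815.2.0", "flatcar-stable-3850.1.0"]
catalog = ["flatcar-stable-3815.2.0"]
print(images_to_upload(whitelist, catalog))
```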
|
2025-04-01T06:38:47.356902
| 2024-05-02T10:02:47
|
2275104648
|
{
"authors": [
"gawertm",
"vxav"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6243",
"repo": "giantswarm/roadmap",
"url": "https://github.com/giantswarm/roadmap/issues/3432"
}
|
gharchive/issue
|
Azure Arc summary
identify what Azure Arc can do and how a common architecture could look like
https://gigantic.slack.com/archives/C05U702MFS8/p1717010090855449
waiting for the call with microsoft to get a deep dive on those topics. scheduled for June 20th
|
2025-04-01T06:38:47.362846
| 2020-06-03T21:23:09
|
630341631
|
{
"authors": [
"Sydecadus"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6244",
"repo": "gibbed/Gibbed.Borderlands2",
"url": "https://github.com/gibbed/Gibbed.Borderlands2/issues/143"
}
|
gharchive/issue
|
Hello, I have a problem.
Error
An exception was thrown (press Ctrl+C to copy):
System.UnauthorizedAccessException: Access to the path is denied.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.FileStream.FlushWrite(Boolean calledFromFinalizer)
at System.IO.FileStream.Dispose(Boolean disposing)
at System.IO.Stream.Close()
at Gibbed.Borderlands2.SaveEdit.ShellViewModel.WriteSave(String savePath, SaveFile saveFile)
at Caliburn.Micro.Contrib.Results.DelegateResult.Execute(ActionExecutionContext context)
OK
Hello, I got it fixed, it was my ransomware protection going off again..
Closed
|
2025-04-01T06:38:47.366735
| 2024-11-25T14:01:37
|
2690857401
|
{
"authors": [
"DutraGames",
"gibbed"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6245",
"repo": "gibbed/SteamAchievementManager",
"url": "https://github.com/gibbed/SteamAchievementManager/issues/448"
}
|
gharchive/issue
|
Activated Achievements Not Displaying in 'Perfectionist' and 'Achievement Count'
I encountered an issue where activating achievements for a game does not update their display in the Perfectionist and Achievement Count sections. Additionally, the "perfect game" status is not shown in these sections. However, the game still appears correctly listed under Games > Perfect Games.
Below are images to illustrate the issue:
This inconsistency makes it difficult to track and manage achievements. Could this be reviewed to ensure activated achievements and perfect game status are displayed consistently across all sections?
If needed, I can provide more details or additional examples.
This sounds like a problem on Steam's end, not much SAM can do about that.
This sounds like a problem on Steam's end, not much SAM can do about that.
I'll run some tests by activating achievements with a longer time gap between them to observe the behavior. This might help determine if the issue is related to timing or how the achievements are being processed by the system. Once I have results, I'll share them here.
I’ll be closing this issue because, after testing with a game using a longer time gap, I observed mixed results: the system did not work perfectly when I used a game that was already platinum. However, when testing with a game that had no achievements unlocked, the system worked as expected.
If anyone encounters similar issues, feel free to reopen the issue with more details.
|
2025-04-01T06:38:47.374749
| 2022-08-24T07:19:14
|
1348983812
|
{
"authors": [
"iamMHZ",
"iamtekson",
"trodaway"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6246",
"repo": "gicait/geoserver-rest",
"url": "https://github.com/gicait/geoserver-rest/issues/87"
}
|
gharchive/issue
|
KeyColumn for Creating Query View
I found that your publish_featurestore_sqlview does not have the primary key ('<keyColumn>primary_key</keyColumn>') parameter. Is there a specific reason behind that? Are you expecting the primary key(s) to be recognized by GeoServer automatically?
Also, I want to mention that a complex query can have a primary key made by combining more than one attribute. Is there a solution for handling that?
@iamtekson I would be happy to contribute on this issue.
Please contribute if you like. Thank you @iamMHZ
I see this feature was removed in an earlier commit, and given the name it seems to have been a deliberate removal. What's the reason behind this / is there a way of reimplementing this feature? Without it I get errors later in my data pipeline, meaning I can't use this package to automate my work fully. If it remains removed, I would also suggest updating the documentation, which implies the argument still exists. Thanks
Hi @trodaway, thank you for raising this issue. I forgot exactly why we removed that feature, but I am going to add it again and see how it impacts the rest of the things. If you want to send a PR, I am also happy to accept it.
Just released the new version v2.5.2 solving this issue. I hope it should work now.
Thanks for taking a look at this so quickly. I think the keyColumn tag is in the wrong bit of the XML though - whilst GeoServer isn't throwing an error when you POST, if I try to GET the layer via WFS I still get an error regarding the key field. From some testing, it appears (from trial & error, as I don't believe GeoServer's REST API is properly documented) that the keyColumn tag should be within the virtualTable tag, as opposed to just within the featureType. I'll try to compile a PR to fix this.
Seems write permission is limited on this repo so can't push / do a PR. This is the code I've got:
# Method of the Geoserver class; needs `from typing import Optional`,
# `import requests`, and the GeoserverException class at module level.
def publish_featurestore_sqlview(
self,
name: str,
store_name: str,
sql: str,
key_column: Optional[str] = None,
geom_name: str = "geom",
geom_type: str = "Geometry",
srid: Optional[int] = 4326,
workspace: Optional[str] = None,
):
"""
Parameters
----------
name : str
store_name : str
sql : str
key_column : str, optional
geom_name : str, optional
geom_type : str, optional
workspace : str, optional
"""
if workspace is None:
workspace = "default"
# issue #87
if key_column is not None:
key_column_xml = """
<keyColumn>{}</keyColumn>""".format(key_column)
else:
key_column_xml = """"""
layer_xml = """<featureType>
<name>{0}</name>
<enabled>true</enabled>
<namespace>
<name>{4}</name>
</namespace>
<title>{0}</title>
<srs>EPSG:{5}</srs>
<metadata>
<entry key="JDBC_VIRTUAL_TABLE">
<virtualTable>
<name>{0}</name>
<sql>{1}</sql>
<escapeSql>true</escapeSql>
<geometry>
<name>{2}</name>
<type>{3}</type>
<srid>{5}</srid>
</geometry>{6}
</virtualTable>
</entry>
</metadata>
</featureType>""".format(
name, sql, geom_name, geom_type, workspace, srid, key_column_xml
)
# rest API url
url = "{}/rest/workspaces/{}/datastores/{}/featuretypes".format(
self.service_url, workspace, store_name
)
# headers
headers = {"content-type": "text/xml"}
# request
r = requests.post(
url,
data=layer_xml,
auth=(self.username, self.password),
headers=headers,
)
if r.status_code == 201:
return r.status_code
else:
raise GeoserverException(r.status_code, r.content)
Yes, you are right. I will make another PR to solve this issue. Thanks again!
Thanks for sorting! I assume a 2.5.3 will be released shortly with that update?
I already released the new version!
Thanks - it just didn't come through onto conda-forge for another couple of hours.
Yes, it takes a few hours to create the build file for Conda.
|
2025-04-01T06:38:47.389778
| 2018-10-10T04:30:30
|
368490047
|
{
"authors": [
"gifnksm",
"pbzweihander"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6247",
"repo": "gifnksm/oauth-client-rs",
"url": "https://github.com/gifnksm/oauth-client-rs/pull/36"
}
|
gharchive/pull-request
|
Replace reqwest with hyper and futures
Provide asynchronous Future interfaces with hyper.
Thank you for the pull request, and I'm sorry for being late.
This PR conflicts with the current master and has been inactive for a long time, so I'm closing this for now.
If you want an async version of APIs, please create a new Pull Request.
Comments:
The blocking versions of APIs are still useful for some use cases, so it would be better to add new async versions of APIs instead of replacing them.
Since reqwest now supports async APIs, we don't need to switch to hyper. For now, we don't need low-level features provided by hyper, and we can keep our implementation easy and simple by using reqwest's convenient APIs.
|
2025-04-01T06:38:47.399111
| 2019-05-23T04:08:27
|
447441632
|
{
"authors": [
"craigbpeterson",
"esteban-gs",
"thomaskise"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6248",
"repo": "gig-central/gig-repo-1",
"url": "https://github.com/gig-central/gig-repo-1/issues/173"
}
|
gharchive/issue
|
Adding a gig creates a new record in the db, but the new gig does not show up on the gig list.
It looks like the Foreign Key "CompanyContactID" is not being entered properly in the "Company" table.
Additionally, the "GigPosted" and "LastUpdated" columns of the Gig table are not getting real data.
I am unsure if these are the reasons why the added gig is not showing up on the gigs/index.php view. More research is needed.
fyi .... I've added gigs. I just added a couple, but I was not logged on. They all seem to have stuck. All fields are populated except GigPosted, and LastUpdated is all zeros.
IF nobody is working on this issue, I'll assign myself
I think a recent pull request may have resolved this issue. @esteban-gs are you able to add a gig and then see it on the view gigs page?
It is writing to the DB, but not showing up on the list:
For me it only shows up when I search for it:
I'll assign myself.
@esteban-gs .... You said above that Gigs are being added. Did you confirm that the Foreign Key "CompanyContactID" is being set properly in the "Company" table?
If so, I think we should close this one.
Yes. It worked on my end
Closing per PR #233 and above discussion
|
2025-04-01T06:38:47.444085
| 2015-08-04T18:03:28
|
99025092
|
{
"authors": [
"gilbarbara",
"rctneil"
],
"license": "cc0-1.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6249",
"repo": "gilbarbara/logos",
"url": "https://github.com/gilbarbara/logos/issues/80"
}
|
gharchive/issue
|
Suggestion: jQuery icon only
How about the jQuery logo without the text?
Oh and possibly add Microsoft Edge?
hey,
I thought about using only the jQuery logomark, but it isn't clearly recognizable.
About Microsoft Edge:
Hmm, I still think the logomark on its own could be useful in some cases. Edge has a different (yet suspiciously similar) logo, so I think it's eligible to be included.
Added MS Edge f209440ba434258ed6d944c649804c506ad9525f
|
2025-04-01T06:38:47.467231
| 2021-11-12T14:46:57
|
1052039089
|
{
"authors": [
"OmerBenHayun",
"simonecig"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6250",
"repo": "gillescastel/inkscape-shortcut-manager",
"url": "https://github.com/gillescastel/inkscape-shortcut-manager/pull/26"
}
|
gharchive/pull-request
|
ungrab Mod4+Shift and Mod1+Shift
Hi!
Many tiling window managers use by default keybindings with both the Super key (Mod4) and the shift key pressed (i.e. Mod4+Shift+q to close the focused window).
Currently, such key combinations are grabbed and therefore can't be used while the script is running.
I was able to solve this by ungrabbing the Shift_L key with the modifier Mod4Mask (and also Mod1Mask for the Alt key).
This should also resolve #6, since keybindings like Shift+t will still work.
Your fix works for me (I'm using i3-wm) Thanks!
|
2025-04-01T06:38:47.472471
| 2015-02-24T19:10:18
|
58788408
|
{
"authors": [
"9090899",
"mmajis"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6251",
"repo": "ginader/HTML5-placeholder-polyfill",
"url": "https://github.com/ginader/HTML5-placeholder-polyfill/pull/70"
}
|
gharchive/pull-request
|
Fix placeholder vertical positioning when text input element height changes
This fixes a positioning issue with responsive layouts which change the height of single line text input boxes dynamically based on window size.
Placeholder span is now positioned vertically in the middle of the input box based on outerHeights instead of basing the positioning on the input box top padding.
Resize event wasn't firing on the document object, so bound the handler to window instead.
How can I fix text inside the textbox the way Gmail's sign-up page does, with "@gmail.com" shown on the right side of the textbox?
Can you tell me?
|
2025-04-01T06:38:47.481230
| 2022-02-14T16:57:11
|
1137580270
|
{
"authors": [
"MarcelKoch"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6252",
"repo": "ginkgo-project/git-cmake-format",
"url": "https://github.com/ginkgo-project/git-cmake-format/pull/9"
}
|
gharchive/pull-request
|
Allows user to set clang-format executable
This PR allows users to set the clang-format executable. The find_program part is skipped in that case; instead it is only checked whether the provided executable exists. Since that is not enough to verify that the executable works, I've set the minimal version to 9.0.0 (which is used in Ubuntu 18.04) to trigger an error if the user did not provide a working clang-format.
I've also added more clang format names.
PS: This is necessary for me, since my local default clang-format (v13) sometimes disagrees with the one that our check-format action uses. So far, I couldn't figure out which clang-format option is giving these difficulties.
@upsj Yes, that works. You have to use the absolute path for that, which I hadn't checked before.
|
2025-04-01T06:38:47.482960
| 2016-02-04T12:37:07
|
131333578
|
{
"authors": [
"gionkunz",
"mcdonnelldean"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6253",
"repo": "gionkunz/chartist-js",
"url": "https://github.com/gionkunz/chartist-js/pull/605"
}
|
gharchive/pull-request
|
Updated License
Updated so that arsey legal teams can't stop these awesome charts being used in their projects.
related to #604
@gionkunz Do you want me to update this so it show's dual licenses, in the one file?
I switched to dual licensing WTFPL and MIT within Chartist 0.9.6 :smile: :+1:
|
2025-04-01T06:38:47.484523
| 2017-05-01T17:23:11
|
225477109
|
{
"authors": [
"gionkunz",
"radojesrb"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6254",
"repo": "gionkunz/chartist-plugin-pointlabels",
"url": "https://github.com/gionkunz/chartist-plugin-pointlabels/pull/15"
}
|
gharchive/pull-request
|
Added support for barChart - follow up to PR 11
Hi @gionkunz I modified package.json as you asked here:
https://github.com/gionkunz/chartist-plugin-pointlabels/pull/11
Can you please consider merging this? It would make me very happy 😍
Thanks!
Thanks again! :-)
|
2025-04-01T06:38:47.494320
| 2021-09-27T18:24:01
|
1008458592
|
{
"authors": [
"AGR-12JU"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6255",
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/issues/3218"
}
|
gharchive/issue
|
Kruskal's Algorithm for minimum spanning tree
Description
I will be explaining Kruskal's algorithm, with code, in order to find a minimum spanning tree.
Domain
Competitive Programming
Type of Contribution
Documentation
Code of Conduct
[X] I follow Contributing Guidelines & Code of conduct of this project.
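Since the contribution will cover Kruskal's algorithm, a minimal Python sketch of the standard approach (sort edges by weight, then merge components with a union-find structure) might look like the following — function and variable names here are illustrative, not part of the actual contribution:

```python
# Kruskal's MST sketch: union-find with path compression and union by rank.
# Edges are (weight, u, v) tuples; the graph is assumed connected.
def kruskal_mst(num_vertices, edges):
    parent = list(range(num_vertices))
    rank = [0] * num_vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):  # greedily try the lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:               # edge joins two components: keep it
            if rank[ru] < rank[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            if rank[ru] == rank[rv]:
                rank[ru] += 1
            mst.append((u, v, w))
            total += w
    return mst, total
```

Sorting dominates the running time, so the sketch runs in O(E log E) with near-constant union-find operations.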
/assign
|
2025-04-01T06:38:47.496570
| 2021-11-14T07:29:13
|
1052870761
|
{
"authors": [
"Harsh652-cpu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6256",
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/issues/7868"
}
|
gharchive/issue
|
Audio for Goto statement in C
Description
Here, I will be doing an audio contribution on the topic of goto statements in C.
Domain
C/CPP
Type of Contribution
Audio
Code of Conduct
[X] I follow Contributing Guidelines & Code of conduct of this project.
/assign
|
2025-04-01T06:38:47.500937
| 2018-02-11T22:46:51
|
296234037
|
{
"authors": [
"elouanKeryell-Even"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6257",
"repo": "gisaia/ARLAS-server",
"url": "https://github.com/gisaia/ARLAS-server/issues/138"
}
|
gharchive/issue
|
Support multi-index and wildcard in collection reference
In gitlab by @sfalquier on Dec 29, 2017, 10:43
Collection API should support https://www.elastic.co/guide/en/elasticsearch/reference/5.6/multi-index.html
indexPath should become indexPatternPath
On Collection creation, arlas-server has to check that the ES types of all indices have the same name and are consistent with the other path definitions (centroidPath, geometryPath, timestampPath, ...)
NB: explore.RawRESTService must throw an exception when indexPatternPath refers to more than one index
In gitlab by @sfalquier on Dec 29, 2017, 11:21
mentioned in issue #139
In gitlab by @sfalquier on Dec 29, 2017, 11:21
added ~2424198 label
In gitlab by @sylvaingaudan on Feb 7, 2018, 17:05
removed ~2424198 label
In gitlab by @sylvaingaudan on Feb 7, 2018, 17:06
changed milestone to %26
|
2025-04-01T06:38:47.504635
| 2018-02-02T12:57:41
|
293880087
|
{
"authors": [
"elouanKeryell-Even"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6258",
"repo": "gisaia/ARLAS-server",
"url": "https://github.com/gisaia/ARLAS-server/issues/21"
}
|
gharchive/issue
|
Automatic Release generation and tagging
In gitlab by @sylvaingaudan on Apr 26, 2017, 12:16
In gitlab by @sylvaingaudan on Apr 26, 2017, 14:01
added ~1918909 label
In gitlab by @sylvaingaudan on Apr 28, 2017, 13:46
changed milestone to %4
In gitlab by @sfalquier on May 9, 2017, 09:13
added ~1918367 and removed ~1918909 labels
In gitlab by @sfalquier on May 12, 2017, 11:22
added ~1918368 and removed ~1918367 labels
In gitlab by @sfalquier on May 12, 2017, 11:22
assigned to @sfalquier
In gitlab by @sfalquier on May 15, 2017, 13:44
removed ~1918368 label
In gitlab by @sfalquier on May 15, 2017, 13:44
closed
|
2025-04-01T06:38:47.614650
| 2016-04-11T21:54:12
|
147566022
|
{
"authors": [
"joseph-orbis",
"siprbaum",
"spraints"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6259",
"repo": "git-tfs/git-tfs",
"url": "https://github.com/git-tfs/git-tfs/issues/951"
}
|
gharchive/issue
|
checkintool not working (VS2015 Update 2)
git-tfs version <IP_ADDRESS> (TFS client library <IP_ADDRESS> (MS)) (64-bit)
Commits visited count:11
Commits visited count:11
...
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentNullException: Value cannot be null.
Parameter name: path1
   at System.IO.Path.Combine(String path1, String path2)
   at Sep.Git.Tfs.VsCommon.TfsHelperBase.GetDialogAssembly()
   at Sep.Git.Tfs.VsCommon.TfsHelperBase.GetCheckinDialogType()
   at Sep.Git.Tfs.VsCommon.TfsHelperBase.ShowCheckinDialog(Workspace workspace, PendingChange[] pendingChanges, WorkItemCheckedInfo[] checkedInfos, String checkinComment)
   at Sep.Git.Tfs.Core.TfsWorkspace.CheckinTool(Func`1 generateCheckinComment)
   at Sep.Git.Tfs.Core.GitTfsRemote.<>c__DisplayClass2a.<CheckinTool>b__29(ITfsWorkspace workspace)
   at Sep.Git.Tfs.VsCommon.TfsHelperBase.WithWorkspace(String localDirectory, IGitTfsRemote remote, TfsChangesetInfo versionToFetch, Action`1 action)
   at Sep.Git.Tfs.Core.GitTfsRemote.WithWorkspace(TfsChangesetInfo parentChangeset, Action`1 action)
   at Sep.Git.Tfs.Core.GitTfsRemote.CheckinTool(String head, TfsChangesetInfo parentChangeset)
   at Sep.Git.Tfs.Commands.CheckinBase.PerformCheckin(TfsChangesetInfo parentChangeset, String refToCheckin)
   --- End of inner exception stack trace ---
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Sep.Git.Tfs.Util.GitTfsCommandRunner.Run(GitTfsCommand command, IList`1 args)
   at Sep.Git.Tfs.GitTfs.Main(GitTfsCommand command, IList`1 unparsedArgs)
   at Sep.Git.Tfs.Program.Main(String[] args)
Value cannot be null.
Parameter name: path1
There's no "Microsoft.VisualStudio.TeamFoundation.TeamExplorer.Extensions" in my registry:
checkintool is one of the hackier parts of git-tfs, since it manually searches for assemblies etc. If you have a chance to fix this and submit a PR, it'd be super helpful.
Since I also did an uninstall of older Visual Studio 2013 which could have interfered with the registries) I'll have to get my hands on a clean 2015 Update 2 install to see what the registries look like.
Well, it'd be nice if it worked in your case, too. Feel free to build it for your case first.
Reporter had a workaround and hasn't provided further information.
Looks like the issue can be closed
This should be fixed by 1123ad84a8064540bf98faa01a8d1dfeae97d9cb
Closing the issue
This should be fixed by 1123ad84a8064540bf98faa01a8d1dfeae97d9cb
Closing the issue
|
2025-04-01T06:38:47.623746
| 2024-02-18T12:11:39
|
2140959252
|
{
"authors": [
"gitbls",
"pierreCAMANALYTICS"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6260",
"repo": "gitbls/sdm",
"url": "https://github.com/gitbls/sdm/issues/180"
}
|
gharchive/issue
|
[Systemctl service won't start]
Hello Benn,
When trying to start a custom service, the service won't start:
My service looks like this:
But the bash command is working on my terminal.
This behavior is happening for all my customs services.
This is my customize script:
sdm --customize \
  --extend --xmb 512 \
  --plugin network:"netman=dhcpcd|wifissid=Livebox-49F7|wifipassword=********|wificountry=FR" \
  --plugin copyfile:"filelist=/home/camanalytics/camanalytics/LampCam/images/base/files|mkdirif=yes" \
  --plugin user:"adduser=pi" \
  --plugin user:"setpassword=pi|password=pi" \
  --plugin runatboot:"script=/home/camanalytics/camanalytics/LampCam/images/base/ref_files_v2/runatboot.sh|output=/home/pi/firsboot.log" \
  --plugin raspiconfig:"spi=1" \
  --plugin apps:"apps=@myapps|name=myapps" \
  2023-05-03-raspios-bullseye-arm64-lite_base.img
I'm running v11.4
Am I missing something with the system plugin?
Thank you
Found the solution, by stopping userconfig.service
sudo systemctl stop userconfig.service
The service was stopping all other services that wanted to use multi-user.target
Try NOT stopping/disabling the userconfig service, but instead use --plugin disables:piwiz. This will disable the userconfig service and a few other things that you don't need.
Also, a couple of other things:
It doesn't hurt anything, but RasPiOS comes with the pi user already in /etc/passwd (but with no password set), so you don't need to adduser=pi. It doesn't hurt, of course, to leave it there
In your camanalytics.service file, since you have specified User=root you don't need the sudo on the ExecStart
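Following that advice, a minimal unit might look like this (the service description and script path are placeholders, since the reporter's actual unit file is not shown above):

```ini
[Unit]
Description=Example camera service
After=network.target

[Service]
Type=simple
User=root
# User=root already runs the command as root, so no sudo is needed:
ExecStart=/usr/bin/python3 /home/pi/app/main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```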
Closing as resolved.
|
2025-04-01T06:38:47.625287
| 2020-05-01T21:34:43
|
610980023
|
{
"authors": [
"AsadKhanOMS",
"aymannabil86",
"frameworker2019"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6261",
"repo": "gitbrent/bootstrap4-toggle",
"url": "https://github.com/gitbrent/bootstrap4-toggle/issues/40"
}
|
gharchive/issue
|
Toggle turns into normal checkbox
I'm using bootstrap-table. If I include toggles inside the table, the toggle appears on load, but loses all styling if I apply actions to the table: pagination or toggling columns on/off, for example.
Same issue here
Same here!!
|
2025-04-01T06:38:47.804458
| 2023-04-12T13:56:08
|
1664622092
|
{
"authors": [
"darakian",
"georg-jung"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6267",
"repo": "github/advisory-database",
"url": "https://github.com/github/advisory-database/pull/2058"
}
|
gharchive/pull-request
|
[GHSA-5pm2-9mr2-3frq] Vulnerability in the Oracle Data Provider for .NET...
Updates
Affected products
Description
References
Summary
Comments
Add more details, issue automatic dependabot notifications to vulnerable apps
I checked the readme in the nupkg and indeed it does look like that version claims to fix the issue, but is that really the only reference for this fix?
I verified it personally but I guess that isn't the kind of source you are looking for.
It is also listed here: https://www.oracle.com/security-alerts/cpujan2023.html
This is oracle's official document regarding this CVE. I'm not sure if it lists the affected & fixed versions. It was released close to the release date of the fixed package version though. This package isn't updated too regularly so that's a strong sign too.
I believe you. I was just hoping that oracle would have a better public announcement of it. I guess the source repo is likely behind some wall, so maybe I'm hoping for too much 🤷.
Either way, many thanks for the contribution 👍
|
2025-04-01T06:38:47.807669
| 2022-12-01T22:54:35
|
1472063662
|
{
"authors": [
"JamieMagee",
"github"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6268",
"repo": "github/advisory-database",
"url": "https://github.com/github/advisory-database/pull/968"
}
|
gharchive/pull-request
|
[GHSA-wcg3-cvx6-7396] Segmentation fault in time
Updates
Affected products
Comments
All versions of 0.1.x are listed as vulnerable in the advisory description.
Hi there @jhpratt! A community member has suggested an improvement to your security advisory. If approved, this change will affect the global advisory listed at github.com/advisories. It will not affect the version listed in your project repository.
This change will be reviewed by our highly-trained Security Curation Team. If you have thoughts or feedback, please share them in a comment here! If this PR has already been closed, you can start a new community contribution for this advisory
|
2025-04-01T06:38:47.812141
| 2023-04-20T10:31:42
|
1676456023
|
{
"authors": [
"keithamus",
"primer-css"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6269",
"repo": "github/auto-check-element",
"url": "https://github.com/github/auto-check-element/pull/62"
}
|
gharchive/pull-request
|
Upgrade packages tests exports
This upgrades this package to fall in line with the improvements made in our most recently updated Web Component, the relative-time-element.
The development changes are:
Uses web-test-runner over karma.
Uses a slightly improved eslint config
Minor changes to TSconfig
Uses esbuild over rollup
User faces changes are:
Emits JSX types, making it compatible with React
Reworks exports allowing for various patterns, including importing the web component without defining, or defining under different scopes or registries.
Outputs a custom elements manifest.
:wave: Hello and thanks for pinging us! This issue or PR has been added to our inbox and a Design Infrastructure first responder will review it soon.
:art: If this is a PR that includes a visual change, please make sure to add screenshots in the description or deploy this code to a lab machine with instructions for how to test.
:fast_forward: If this is a PR that includes changes to an interaction, please include a video recording in the description.
:warning: If this is urgent, please visit us in #primer on Slack and tag the first responders listed in the channel topic.
|
2025-04-01T06:38:47.882616
| 2024-05-20T04:50:58
|
2305076858
|
{
"authors": [
"nguyenalex836",
"njzjz",
"sunbrye"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6270",
"repo": "github/docs",
"url": "https://github.com/github/docs/issues/33056"
}
|
gharchive/issue
|
build needs to be installed in advance
Code of Conduct
[X] I have read and agree to the GitHub Docs project's Code of Conduct
What article on docs.github.com is affected?
https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-pypi#updating-your-github-actions-workflow
What part(s) of the article would you like to see updated?
This workflow uses python -m build to build packages, but build is not pre-installed in the runner image. One must install build using pip install build, or combine installation and running into pipx run build.
https://github.com/github/docs/blob/a107d854397cc1a346fa011fd1d5dc0a573a2d3f/content/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-pypi.md#L61-L77
Additional information
No response
@njzjz Thanks so much for opening an issue! I'll get this triaged for review ✨
Thank you for opening this issue @njzjz! ✨ This update to the docs looks good to me. You, or anyone else, are free to open a pull request to make these changes.
I would recommend adding the step python -m pip install build before python -m build, as seen in the section Publishing to PyPI.
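Concretely, the relevant workflow steps with that fix applied could read as follows (a sketch of the suggestion above; step names and action versions are illustrative):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-python@v5
    with:
      python-version: "3.x"
  - name: Install the build frontend
    run: python -m pip install build
  - name: Build package distributions
    run: python -m build
```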
@njzjz After looking into the issue, I've learned that changes to any files within content/actions/deployment/security-hardening-your-deployments/** are restricted due to security compliance. I'm going to transfer this open source issue into an internal issue and update the file with your changes.
Thank you again for your valued contribution 💛
|
2025-04-01T06:38:47.884455
| 2017-10-05T11:16:28
|
263087489
|
{
"authors": [
"aqnaruto",
"dgraham"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6271",
"repo": "github/fetch",
"url": "https://github.com/github/fetch/issues/567"
}
|
gharchive/issue
|
response status value error! when fetch a cors request
I mean, if you build a web service on localhost <IP_ADDRESS>
and fetch <IP_ADDRESS> cross-origin, the response's status is 0, not 404.
Stack Overflow is a better place for usage questions. Here's an introduction to CORS requests.
|
2025-04-01T06:38:47.893673
| 2016-07-26T15:54:23
|
167643680
|
{
"authors": [
"technoweenie"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6272",
"repo": "github/git-lfs",
"url": "https://github.com/github/git-lfs/issues/1395"
}
|
gharchive/issue
|
Debian build errors
I get this error while running ./docker/run_dockers.bsh debian_8. What's happening is that a package is being loaded in multiple spots:
github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm (correct)
github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm (bad)
src/github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/httputil/ntlm.go:22: cannot use c.NtlmSession (type "github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession) as type "github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession in return argument:
"github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession does not implement "github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession (wrong type for GenerateAuthenticateMessage method)
have GenerateAuthenticateMessage() (*"github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".AuthenticateMessage, error)
want GenerateAuthenticateMessage() (*"github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".AuthenticateMessage, error)
src/github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/httputil/ntlm.go:38: cannot use session (type "github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession) as type "github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession in assignment:
"github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession does not implement "github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession (wrong type for GenerateAuthenticateMessage method)
have GenerateAuthenticateMessage() (*"github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".AuthenticateMessage, error)
want GenerateAuthenticateMessage() (*"github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".AuthenticateMessage, error)
Just realized what the problem is: the move to support Go 1.6's vendoring didn't make it into v1.2.1, and I just never ran this after those changes. @ttaylorr and I are looking at the docker files to make these changes:
Update to Go 1.6
Download git lfs to $GOPATH/src/github.com/github/git-lfs. $GOPATH can be anything, Git LFS vendors all dependencies outside of stdlib.
Wish us luck!
Fixes started in https://github.com/andyneff/git-lfs_dockers/pull/3 and https://github.com/github/git-lfs/pull/1398
|
2025-04-01T06:38:47.903781
| 2015-06-07T08:56:18
|
85877940
|
{
"authors": [
"SuriyaaKudoIsc",
"michael-k",
"technoweenie"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6273",
"repo": "github/git-lfs",
"url": "https://github.com/github/git-lfs/issues/371"
}
|
gharchive/issue
|
Unable to find the .exe application
Hi guys, :smile:
I tried to use the Git extension for versioning large files v0.5.1 program.
I run install.bat but it keeps reporting "Unable to find git.exe, exiting...". :unamused:
The git-lfs.exe program is in the same folder. What's wrong :question: :question: :question:
Can somebody help me, please?
Yours,
Suriyaa Kudo :octocat:
git.exe must be in your path.
Here is the relevant snippet from install.bat:
:: Check if git.exe is in the user's path before continuing
where /q git.exe
if %errorlevel% neq 0 (ECHO Unable to find git.exe, exiting... & EXIT /b %errorlevel%)
So, I guess you don't have git installed. At least not in the way needed by install.bat.
I have Git installed.
What is the output of where /q git.exe when run in a shell?
My answers:
What is the output of where /q git.exe when run in a shell?
Git: "C:\Program Files (x86)\Git\bin\sh.exe" --login -i
Git Shell: C:\Users\Suriyaa\AppData\Local\GitHub\GitHub.appref-ms --open-shell
Which version do you have installed?
Git: 1.9.5.msysgit.0
Git Shell: 1.9.5.github.0
GitHub for Windows: "I Wear Goggles When You Are Not Here" (<IP_ADDRESS>) cc018b2
Do you have the maintained build installed or any third party software (like Github for Windows)?
Maintained build version 1.9.5 of Git for the Windows platform and GitHub for Windows (include also Git Shell)
Does it help if you adjust the path, i.e. add the directory where your git.exe is located?
No. I tried to run it from Git folder but it doesn't work. Then I tried to run it under Git\cmd and Git\lib. But it still does not work.
Since you have GitHub for Windows, it should have an option to install command line tools, which includes Git LFS. Did you try that?
No. How should I do it?
Hi guys,
I found the problem!
I tried to run install.bat, but it still didn't work.
Then I copied the git-lfs.exe file into the Git\bin folder, and now it works!
Ah, thanks for the update. The Git LFS windows installer could definitely use some work.
|
2025-04-01T06:38:47.925658
| 2023-03-09T18:49:11
|
1617818660
|
{
"authors": [
"baywet",
"mishmanners"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6274",
"repo": "github/release-radar",
"url": "https://github.com/github/release-radar/issues/161"
}
|
gharchive/issue
|
[Release Radar Request] Microsoft Kiota v1.0
Open Source Project name
Microsoft Kiota
What is your project?
Kiota is a modern client code generator for OpenAPI and REST. It supports multiple languages and provides model classes, a chained API surface to build requests and much more!
Kiota differentiates itself from other generators by its simplicity and its unopinionated choices about serialization formats/libraries, HTTP clients, and authentication schemes/libraries.
Additionally, Kiota enables selecting just the parts of the API that you need to call and generates out a client for your needs.
Version
1.0.0
Date
2023-03-09
Description of breaking changes
This first major version comes with the C#/dotnet language as a stable language and Go/Java/TypeScript/PHP/Python/Ruby/CLI as preview languages. Right now the generator is available as a CLI and we're working to enable additional experiences (GUI/CI/...).
We're also working to get the other languages to a stable maturity level as well as to add more languages.
GitHub Repo
https://github.com/microsoft/kiota
Website
https://microsoft.github.io/kiota
Link to changelog
https://github.com/microsoft/kiota/blob/main/CHANGELOG.md
Social media
upcoming
Anything else to add?
No response
For information the website now moved to here https://learn.microsoft.com/openapi/kiota/
Thank you for sharing this release in this month's edition! Closing this issue.
Our pleasure! Glad you enjoyed it. For reference, here's the link to the blog post.
|
2025-04-01T06:38:47.941083
| 2020-06-23T15:06:25
|
643920645
|
{
"authors": [
"Rebvos",
"githubteacher"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6275",
"repo": "githubschool/github-games-Rebvos",
"url": "https://github.com/githubschool/github-games-Rebvos/issues/2"
}
|
gharchive/issue
|
URL in description and README broken
The URL in the repository description and the one in the README are pointing to githubschool's copy of the game instead of yours.
Please fix both so they point to your copy of the game at https://githubschool.github.io/github-games-Rebvos/
updated
|
2025-04-01T06:38:47.960013
| 2020-07-13T20:04:33
|
656103740
|
{
"authors": [
"brianamarie",
"hectorsector"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6276",
"repo": "githubtraining/ci-circle",
"url": "https://github.com/githubtraining/ci-circle/issues/122"
}
|
gharchive/issue
|
CircleCI flow out of date
The sign on and authorization flow for CircleCI is no longer in sync with what we describe on the course. This is likely causing folks to drop off and should be resolved.
This seems to be blocking the course, as the instructions have diverged significantly from the learner experience.
Thank you for opening this issue, @hectorsector. Depending on the registration numbers and the maintenance that would be involved in keeping this active, do you think this would be a course we should consider sunsetting or deactivating?
I think this might be a good idea. @brianamarie should we reach out to our friends at CircleCI to see if they're interested in transitioning to their organization? I'd be glad to help them get set up with maintaining the course themselves.
I think that's a great idea, @hectorsector. I found this thread from 3 years ago when we worked with Emma, and it looks like she's the director of marketing and reachable at<EMAIL_ADDRESS>🎉
Reached out via Halp.
|
2025-04-01T06:38:47.972145
| 2017-11-02T02:48:27
|
270515423
|
{
"authors": [
"NicholasJohn16",
"alehaa"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6277",
"repo": "gitlist-php/gitlist",
"url": "https://github.com/gitlist-php/gitlist/issues/44"
}
|
gharchive/issue
|
Is this fork dead?
There's no new releases. Should I use this, the original source of the px3 fork?
Official development takes place in upstream. Eventually upstream will be moved here in the future.
|
2025-04-01T06:38:47.989496
| 2019-03-21T09:52:45
|
423645089
|
{
"authors": [
"gitpitch",
"kserin"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6278",
"repo": "gitpitch/gitpitch",
"url": "https://github.com/gitpitch/gitpitch/issues/245"
}
|
gharchive/issue
|
theme-override not working for private gitlab
I'm currently using gitpitch opensource with my own gitlab server:
git {
repo {
services = [
{
name = "GitLab2"
type = "gitlab"
site = "http://********.prive"
apibase = "http://*********.prive/api/v4/"
apitoken = "**********"
apitokenheader = "PRIVATE-TOKEN"
rawbase = "http://********.prive/"
branchdelim = "~"
default = "true"
}
]
}
}
When I tried to use theme-override : next/PITCHME.css in my PITCHME.yaml, it seems that the css is not correctly loaded.
After inspecting network, it seems that the PRIVATE-TOKEN header is not set for the PITCHME.css request (it is correctly set for PITCHME.md request):
GET /*****/*****/raw/master/next/PITCHME.css HTTP/1.1
User-Agent: Java/1.8.0_181
Host: ********.prive
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
[ ... ]
GET /*****/*****/raw/master/next/PITCHME.md?gp=90 HTTP/1.1
Cache-Control: no-cache
PRIVATE-TOKEN: ********
Host: *********.prive
Accept: */*
User-Agent: AHC/2.0
Hi Kévin, the GitPitch open-source server does not support private repos. So the token is not set on any assets fetches, including PITCHME.css. The token on the open-source server was only ever used to avoid API call limits when the open-source server was being used live on gitpitch.com.
For GitPitch private repository support you need either (1) a Pro subscription on gitpitch.com or (2) a license for GitPitch Enterprise that supports public and private repos within self-hosted GitHub, GitLab, and Bitbucket servers.
Oh ! So it's not a bug and it's not necessary that I do a pull request to fix that ?
Not a bug! No PR. I offer the Pro subscription service on gitpitch.com and the Enterprise server to deliver private repo support. The open-source server is public repo only.
|
2025-04-01T06:38:48.082948
| 2015-03-30T16:12:30
|
65246886
|
{
"authors": [
"gizak",
"hagna"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6280",
"repo": "gizak/termui",
"url": "https://github.com/gizak/termui/issues/19"
}
|
gharchive/issue
|
sparklines and line chart from example/theme.go don't look right on raspberry pi (should they?)
Using raspbian wheezy (2015-02-16). Cross compiled example/theme.go and ran it on the console of the pi.
Maybe the pi is missing a font?
I do not have any experience with Raspberry Pi, but I guess the problem may be the unicode displaying. The sparklines and line chart rely upon unicode, so make sure your terminal and font settings are correct.
Ok well it looks like there are ~24 characters that I don't have: the braille ones and the blocks.
|
2025-04-01T06:38:48.086467
| 2023-05-19T16:38:57
|
1717527078
|
{
"authors": [
"moodysalem",
"raphaelDkhn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6281",
"repo": "gizatechxyz/onnx-cairo",
"url": "https://github.com/gizatechxyz/onnx-cairo/pull/53"
}
|
gharchive/pull-request
|
remove unnecessary into that is breaking cairo-test
Pull Request type
Please check the type of change your PR introduces:
[x] Bugfix
[ ] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no API changes)
[ ] Build-related changes
[ ] Documentation content changes
[ ] Other (please describe):
What is the current behavior?
Issue Number: #52
Lgtm!
https://github.com/all-contributors please add @moodysalem for code, bug
@all-contributors please add @moodysalem for code, bug
|
2025-04-01T06:38:48.091403
| 2023-07-06T14:45:32
|
1791710141
|
{
"authors": [
"raphaelDkhn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6282",
"repo": "gizatechxyz/orion",
"url": "https://github.com/gizatechxyz/orion/pull/141"
}
|
gharchive/pull-request
|
Feat: QuantizeLinear
Pull Request type
Please check the type of change your PR introduces:
[ ] Bugfix
[X] Feature
[ ] Code style update (formatting, renaming)
[X] Refactoring (no functional changes, no API changes)
[ ] Build-related changes
[ ] Documentation content changes
[ ] Other (please describe):
What is the current behavior?
Current quantization functions do not comply with ONNX quantization operators.
Issue Number: N/A
What is the new behavior?
implement QuantizeLinear
adding tests
Does this introduce a breaking change?
[X] Yes
[ ] No
Other information
Wouldn't it be better to have a separate saturate module with the arithmetic operations instead of having saturate_ everywhere? What do you think @raphaelDkhn . I'm fine with this approach, just thinking for modularity purposes.
You mean a new saturation function in TensorTrait? We could do that, but it would cost more because we'd have to loop over the tensor elements
|
2025-04-01T06:38:48.095926
| 2023-12-05T08:43:46
|
2025627043
|
{
"authors": [
"HappyTomatoo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6283",
"repo": "gizatechxyz/orion",
"url": "https://github.com/gizatechxyz/orion/pull/491"
}
|
gharchive/pull-request
|
Feat: erf operator
Pull Request type
Please check the type of change your PR introduces:
[ ] Bugfix
[x] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no API changes)
[ ] Build-related changes
[ ] Documentation content changes
[ ] Other (please describe):
What is the current behavior?
Issue Number: https://github.com/gizatechxyz/orion/issues/349
What is the new behavior?
added Erf operator compatible with ONNX Erf.
Does this introduce a breaking change?
[ ] Yes
[x] No
Other information
@raphaelDkhn I modified the Erf implementation to be a conditional statement function
|
2025-04-01T06:38:48.101360
| 2019-09-09T11:38:43
|
491043326
|
{
"authors": [
"Sgoettschkes",
"sheharyarn"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6284",
"repo": "gjaldon/ecto_enum",
"url": "https://github.com/gjaldon/ecto_enum/issues/91"
}
|
gharchive/issue
|
EctoEnum macro should implement embed_as/1 and equal?/2
After updating Ecto to 3.2.0, I get the following warnings for each use of EctoEnum:
warning: function embed_as/1 required by behaviour Ecto.Type is not implemented (in module Module.Enum)
lib/namespace/module.ex:9: Module.Enum (module)
warning: function equal?/2 required by behaviour Ecto.Type is not implemented (in module Module.Enum)
lib/namespace/module.ex:9: Module.Enum (module)
I guess that to prevent these warnings from showing up, changing the macro would be sufficient.
Any update on this? I see that there already is a PR.
|
2025-04-01T06:38:48.121942
| 2022-12-06T22:22:16
|
1480331072
|
{
"authors": [
"EWisely",
"gjeunen",
"hughcross"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6285",
"repo": "gjeunen/reference_database_creator",
"url": "https://github.com/gjeunen/reference_database_creator/issues/11"
}
|
gharchive/issue
|
db_download error: 0it [00:00, ?it/s]
Hi! I love the functionality to download a list of taxa from NCBI! I'm trying it now, and for most species in my list, it works as expected. For some species in the same list, I get the "No such file or directory: 'esearch_output.txt'" error mentioned earlier (which I will try to narrow down and see if that fixes it) and for other species in my list, I get the following error:
downloading sequences from NCBI
looking up the number of sequences that match the query
found 394 number of sequences matching the query
starting the download
formatting the downloaded sequencing file to CRABS format
0it [00:00, ?it/s]
found 0 sequences with incorrect accession format
written 0 sequences to Stenella frontalis_ncbi.fasta
What could be the problem here?
Thanks for a great tool!
~Eldridge
I think I fixed that error by changing the --keep_original flag to yes.
Thanks for the input, Eldridge. And thanks for cross-posting on the other issue. We will have a look at the code and see where the issue is.
with the --keep_originals flag set to yes, it no longer throws the errors... however, it does append the fasta records for the previous species to the beginning of the file for the current species, such that the file for the final species in the list contains the fasta records for the whole list.
I hope this helps with the debugging!
I'm going to rename the last one from the list to reflect that it's from the whole list and delete the rest of the files. It's not as elegant as the original way is designed, but it appears to get the job done!
Hello @EWisely,
Thanks for posting the error message. The first "No such file or directory: 'esearch_output.txt'" is pointing towards a failure to connect to the NCBI servers. Similarly, the second error message is also pointing towards a failure to connect to the NCBI servers, but at a different point in the process. It might have been that there was unusual high traffic on their servers, making it unstable. The --keep_originals parameter should not have an impact on the server connection, as it is just an IF statement to remove a file. Glad you managed to download the files though and please let me know if you keep encountering this issue.
For the second part, could you please provide a bit more info on what is going on? I might be misunderstanding something, but it seems that you're running the db_download function multiple times (using different output filenames?), but CRABS still appends the sequences to the first output file? Could you please share the code you used and I'll look into this in the coming days.
Best,
Gert-Jan
|
2025-04-01T06:38:48.124713
| 2018-12-04T06:55:24
|
387145392
|
{
"authors": [
"gjtorikian",
"jregistr",
"thrieu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6286",
"repo": "gjtorikian/graphql-docs",
"url": "https://github.com/gjtorikian/graphql-docs/issues/64"
}
|
gharchive/issue
|
Support for quote-styled comment
When parsing content with a quote-styled comment such as """The ID of user""", it throws an error
Parse error on "id" (IDENTIFIER) at [9, 3] (GraphQL::ParseError)
Is it possible to support that?
Wild -- out of curiosity, is this coming from the IDL file? Could you post that snippet?
Hi @gjtorikian , The following snippet is generated by https://github.com/graphql-cli/graphql-cli
type Query {
profile: User
}
"""
A user
"""
type User {
"""
The id of the user
"""
id: String!
"""
The email of user
"""
email: String
}
Any news on this getting implemented?
|
2025-04-01T06:38:48.141181
| 2017-09-28T05:34:14
|
261198235
|
{
"authors": [
"gkjohnson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6287",
"repo": "gkjohnson/webgl-shader-editor",
"url": "https://github.com/gkjohnson/webgl-shader-editor/issues/3"
}
|
gharchive/issue
|
Use drawImage instead of toDataURL
This jsfiddle shows that using the toDataURL function takes significantly longer than the drawImage function, which typically takes less than 0.1ms.
With so many images and shaders to draw, rendering takes a significant amount of time, but that seems to be primarily because of the use of toDataURL, which takes ~3-5ms per draw (and it's called once for positive shaders and once for negated). Rendering only takes < 1ms, so switching to use canvases and drawImage could significantly improve the performance.
ToDataURL may still be needed for the zoomed images, though, but that's only required for one of the rendered shaders.
when drawing from a gl to a 2d context, this can apparently be a bit slower. In chrome it's still pretty fast, but firefox takes ~10ms...
This is more complicated than I initially thought because we ultimately need to be producing those data URLs anyway for sampling at the moment. We could just keep around extra canvases instead, though, to sample from. This will also mean extending the zoombable-image element to support a canvas.
With commit https://github.com/gkjohnson/webgl-shader-editor/commit/cf816158d1db37d66b5afb2b9fc830645a13c940 and the ones after, toDataURL is no longer being used
|
2025-04-01T06:38:48.156180
| 2016-02-16T09:28:23
|
133928408
|
{
"authors": [
"ThaDafinser",
"gkralik"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6288",
"repo": "gkralik/php7-sapnwrfc",
"url": "https://github.com/gkralik/php7-sapnwrfc/issues/6"
}
|
gharchive/issue
|
Windows shutdown error
Now I get this error on shutdown.
Calling RFC functions before works fine.
This error also comes with the simple command php -m
Which version did you use?
Can you tell what was the last version where you did not experience this
error?
On 02/16/2016 10:28 AM, Martin Keckeis wrote:
Now i get this error on shutdown.
Calling RFC functions before works fine.
This error also comes with the simple command |php -m|
image
https://cloud.githubusercontent.com/assets/533017/13072147/e6609ac0-d497-11e5-8f3c-717cd2cc010e.png
—
Reply to this email directly or view it on GitHub
https://github.com/gkralik/php7-sapnwrfc/issues/6.
Latest. Need to go through the commits, will inform you when I find the last working commit.
Needed to build a lot of times to get the commit :+1:
With this commit fd56ff26c12ef099e4c6a21aa39494432cba58af it stopped to work
https://github.com/gkralik/php7-sapnwrfc/commit/fd56ff26c12ef099e4c6a21aa39494432cba58af
;extension=php_sapnwrfc-c4ecab0276d2e8159e8138680c8571a3b50da935.dll ; nope
;extension=php_sapnwrfc-20f4f437a2d0f5fd7074afbcf5df883882bc18ff.dll ; WORK
;extension=php_sapnwrfc-afd90b6e667f89ad586caa193bda842c914aca98.dll ; nope
;extension=php_sapnwrfc-58251b123861eddf76406120a8a909dc7c20e19c.dll ; nope
;extension=php_sapnwrfc-8d1f9276b84842ec7d53ece3fd1d0e75c12d2d8c.dll ; WORK
;extension=php_sapnwrfc-88b3ba2319a94417f2cbf6184de427c1aa6a6a5d.dll ; WORK
extension=php_sapnwrfc-fd56ff26c12ef099e4c6a21aa39494432cba58af.dll ; nope
Ok, thanks. I'll have a look in the evening.. seems strange. Can you
show me a stripped down version of the PHP script you use that I can use
to try to reproduce the issue?
@ThaDafinser can you try the fix-shutdown-error-on-windows branch and
check if it makes a difference? Just a wild guess, but maybe it helps me
narrow down the problem...
On 02/16/2016 12:21 PM, Martin Keckeis wrote:
Needed to build a lot of times to get the commit :+1:
With this commit |fd56ff26c12ef099e4c6a21aa39494432cba58af| it stopped
to work
fd56ff2
https://github.com/gkralik/php7-sapnwrfc/commit/fd56ff26c12ef099e4c6a21aa39494432cba58af
|;extension=php_sapnwrfc-c4ecab0276d2e8159e8138680c8571a3b50da935.dll ;
nope
;extension=php_sapnwrfc-20f4f437a2d0f5fd7074afbcf5df883882bc18ff.dll ;
WORK
;extension=php_sapnwrfc-afd90b6e667f89ad586caa193bda842c914aca98.dll ;
nope
;extension=php_sapnwrfc-58251b123861eddf76406120a8a909dc7c20e19c.dll ;
nope
;extension=php_sapnwrfc-8d1f9276b84842ec7d53ece3fd1d0e75c12d2d8c.dll ;
WORK
;extension=php_sapnwrfc-88b3ba2319a94417f2cbf6184de427c1aa6a6a5d.dll ;
WORK extension=php_sapnwrfc-fd56ff26c12ef099e4c6a21aa39494432cba58af.dll
; nope |
—
Reply to this email directly or view it on GitHub
https://github.com/gkralik/php7-sapnwrfc/issues/6#issuecomment-184637482.
@gkralik that branch works.
Compiler shows me two warnings now
Very well. Then please update the fixed branch and try again... I changed
the handling of the errorInfo member. Curious if that works now.
On 02/16/2016 01:12 PM, Martin Keckeis wrote:
@gkralik https://github.com/gkralik that branch works.
Compiler shows me two warnings now
image
https://cloud.githubusercontent.com/assets/533017/13075912/e90404ee-d4ae-11e5-8e17-a2281a935c9d.png
—
Reply to this email directly or view it on GitHub
https://github.com/gkralik/php7-sapnwrfc/issues/6#issuecomment-184659617.
@gkralik still work with the 2nd commit in that branch.
Two warnings still exist
OK, I'll fix the unused error_info error, but the signed/unsigned
mismatch should already have been resolved in master...
On 02/16/2016 01:20 PM, Martin Keckeis wrote:
@gkralik https://github.com/gkralik still work with the 2nd commit in
that branch.
Two warnings still exist
—
Reply to this email directly or view it on GitHub
https://github.com/gkralik/php7-sapnwrfc/issues/6#issuecomment-184661476.
Merged and fixed on current master. Thanks for your help.
|
2025-04-01T06:38:48.167745
| 2021-07-30T08:11:55
|
956514521
|
{
"authors": [
"locona",
"shreyas-jadhav"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6289",
"repo": "gladly-team/next-firebase-auth",
"url": "https://github.com/gladly-team/next-firebase-auth/issues/244"
}
|
gharchive/issue
|
After changing claims, user's claims are not updated
Describe the bug
After changing claims, user's claims are not updated
Version
v0.13.1-alpha.3
To Reproduce
Create a server-side function to update claims.
Call the prepared function and call the following on the frontend
const { claims } = useAuthUser();
Claims will not be updated.
Expected behavior
I want the frontend useAuthUser claims to be updated after calling setCustomUserClaims on the server side (like firebase functions).
Additional context
Maybe there is a problem here.
https://github.com/gladly-team/next-firebase-auth/blob/main/src/withAuthUser.js#L58-L61
You need to call AuthUser.getIdToken(true)
This will refresh the instance and fetch latest claims.
@shreyas-jadhav
What I want to do is to change the claims of all members except myself.
```
AuthUser.getIdToken(true)
```
If I put the above in, won't I have to do it every time when rendering?
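The caching behavior discussed above can be illustrated with a small, self-contained sketch. This is not the Firebase SDK — `FakeTokenSource` and its fields are invented for illustration only — but it shows why claims changed server-side (via something like setCustomUserClaims) stay stale on the client until a forced token refresh:

```javascript
// Hypothetical stand-in for an auth backend; NOT the Firebase SDK.
class FakeTokenSource {
  constructor() {
    this.serverClaims = { admin: false };            // claims stored server-side
    this.cachedToken = { claims: { ...this.serverClaims } }; // client-side cache
  }
  // Analogous to setCustomUserClaims(): only the server-side copy changes.
  setCustomClaims(claims) {
    this.serverClaims = { ...claims };
  }
  // Analogous to getIdToken(forceRefresh): the cached token is only
  // replaced when forceRefresh is true.
  getIdToken(forceRefresh = false) {
    if (forceRefresh) {
      this.cachedToken = { claims: { ...this.serverClaims } };
    }
    return this.cachedToken;
  }
}

const source = new FakeTokenSource();
source.setCustomClaims({ admin: true });
console.log(source.getIdToken().claims.admin);     // false: stale cached claims
console.log(source.getIdToken(true).claims.admin); // true: refresh picked up the change
```

This also illustrates the trade-off raised in the thread: forcing a refresh on every render works, but it turns every read of the claims into a network round-trip in a real SDK.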
|
2025-04-01T06:38:48.219794
| 2023-05-02T00:28:56
|
1691608654
|
{
"authors": [
"valentinedwv"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6290",
"repo": "gleanerio/scheduler",
"url": "https://github.com/gleanerio/scheduler/issues/15"
}
|
gharchive/issue
|
Add earthcube summarize
https://github.com/earthcube/earthcube_utilities/blob/eeabc6c5d7a7c532011de70824e76a914b607d43/summarize/pyproject.toml#L40
summarize_from_repo
We are working on some changes for this, so let's wait til these tools are moved into the main package
Think this is best done with parsing a release.nq, rather than querying a full graph
|
2025-04-01T06:38:48.267488
| 2023-09-08T17:57:01
|
1888102940
|
{
"authors": [
"fharper"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6307",
"repo": "gleich/desktop",
"url": "https://github.com/gleich/desktop/pull/4"
}
|
gharchive/pull-request
|
add an option to also detect applications running in the menubar (close #3)
Description
Added an optional option to also list menubar application for macOS.
Steps
[X] My change requires a change to the documentation
[X] I have updated the accessible documentation according
[X] I have read the CONTRIBUTING.md file
[X] There is no duplicate open or closed pull request for this fix/addition/issue resolution.
Original Issue
This PR resolves #3
@gleich friendly ping :)
|
2025-04-01T06:38:48.273132
| 2015-10-24T16:44:42
|
113171176
|
{
"authors": [
"glenjamin",
"huoy",
"kesne"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6308",
"repo": "glenjamin/webpack-hot-middleware",
"url": "https://github.com/glenjamin/webpack-hot-middleware/issues/39"
}
|
gharchive/issue
|
EventSource vs Websockets?
Just wondering the reason for using EventSource over Websockets? I have found that chrome and firefox have differing implementations of EventSource. The spec seems to have been abandoned as well.
I've generally found them to be simple & stable, work on modern browsers, and be a very lightweight dependency.
If it's causing issues I'm not averse to switching to websockets.
I'm trying to implement this middleware without the need for a node server. Specifically in Go, which doesn't have an adequate implementation of EventSource. Presumably because it's hardly used and the docs are nonexistent. It would be nice if it used websockets.
This lib has its own hand rolled event-stream implementation - it's very straightforward. Skim over the source - feel free to ask any questions.
The spec itself I found quite clear - the main reason I think libs aren't very good is that it's very simple to do without.
Are you re-implementing webpack in go?
That's true, in fact it's not so much of a problem with Go itself and lack of libraries, as I've managed to get it working for the most part. The weird thing for me has been chrome acting differently than firefox. For example Chrome will detect onopen on connection/headers, however firefox does not, until I send a "building" event. It may be a problem with my implementation but I have been able to recreate the problem sporadically with the default node implementation.
And no, not webpack itself but I would like to serve reloads from my backend rather than through a middleware/webpack. Ie. Push compile events from webpack to my backend and to the client, thus not needing CORS or any fancy middleware stacks.
Going to close this, as there isn't really an action for me to take.
Feel free to ask more questions if you're still having trouble with the port.
Fwiw I ended up not implementing it in the way I initially planned. It's definitely better having the HMR decoupled from the backend. I was just being silly. Though I still think websockets would be a better choice :)
I'm having issues with SSE behind a proxy. I run my node app in a local VM behind a proxy, and for some reason it doesn't proxy the SSE requests.
Are you still not averse to switching to websockets? I realize that the issues are related to my use case and not really likely to be encountered by others.
I'm still open to the idea, but if SSE is getting chewed up by the proxy, I'm not sure why websockets wouldn't.
I'm not 100% that it wouldn't as well. This issue seems tangentially related to the issue that I've been having: https://github.com/nodejitsu/node-http-proxy/issues/921
I'll spend this weekend playing with a fork to see if websockets also get killed.
|
2025-04-01T06:38:48.275008
| 2015-08-13T22:13:34
|
100881567
|
{
"authors": [
"Peaker",
"elmindreda"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6309",
"repo": "glfw/glfw",
"url": "https://github.com/glfw/glfw/pull/580"
}
|
gharchive/pull-request
|
x11_window.c: select may return EINTR, needs to be retried
When using glfw in Haskell in Linux via the Haskell bindings, the Haskell runtime system uses plenty of signals.
These signals cause glfwWaitEvents to return immediately, because "select" in x11_window.c returns with EINTR. Then, XPending is called, finds nothing on the X file descriptor, returns 0, and glfw returns to the user without actually waiting for any event.
The fix is simply to loop while select returns EINTR.
Thank you!
|
2025-04-01T06:38:48.299462
| 2017-09-18T13:21:10
|
258477028
|
{
"authors": [
"rwjblue"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6310",
"repo": "glimmerjs/glimmer-application-pipeline",
"url": "https://github.com/glimmerjs/glimmer-application-pipeline/pull/128"
}
|
gharchive/pull-request
|
[WIP] Provide custom tests/index.html.
Currently, the test supported added to Glimmer.js apps uses QUnit 1.20. The reason for this is that we are relying on the default testem test page for QUnit from here (which uses //code.jquery.com/qunit/qunit-1.20.0.js).
This commit does the following:
creates support for a tests tree, that at the moment only includes tests/index.html
Ensures that when tests are built, we still build the "normal" app assets (and tests/index.js / tests/index.html are generated)
There are still a number of things that should probably be done here:
[ ] Ensure QUnit itself is included into the tests/index.js (QUnit does not have a module entry point so it can't just automatically be rolled up...)
[ ] Update existing tests for the asset location changes
[ ] Add more tests to ensure the new functionality works properly
[ ] Update to ensure that tests folder is not required (so that we are backwards compatible)
I'm not likely going to have time to finish this off anytime soon, but I'd happily help someone through the TODO items if they want to pick it up...
|
2025-04-01T06:38:48.325939
| 2015-09-10T18:20:59
|
105868060
|
{
"authors": [
"glittershark",
"growlsworth",
"thientran1707"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6311",
"repo": "glittershark/reactable",
"url": "https://github.com/glittershark/reactable/pull/179"
}
|
gharchive/pull-request
|
Keep intermediate built modules around
Rather than putting them in tmp/ and .gitignore-ing them, keep the
individually compiled ES6 modules (each of which implicitly supports
CommonJS and therefore Node and Webpack and friends due to UMD)
committed to the repo.
/cc @growlsworth, @thientran1707 can you verify that this fixes your Webpack problem?
Fixes #167
I got the "React is not defined" error at React.Component, line 51 in filterer.js @glittershark
I get the "React is not defined" error in filterer.js
@thientran1707 how about this?
(obviously this breaks browsers, but I just want to make sure that it does work in CommonJS before trying to fix it)
Packaged successfully for me in my project.
I pulled this branch to my machine and I experienced the same test failure as travis.
:+1:
Nice! I'll get on making the big-ol'-concatenated-blob working, then
interesting, the next button works but previous does not...haha @glittershark but it's working now :+1:
|
2025-04-01T06:38:48.342563
| 2024-10-08T11:21:24
|
2572901525
|
{
"authors": [
"jsbrittain"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6312",
"repo": "globaldothealth/InsightBoard",
"url": "https://github.com/globaldothealth/InsightBoard/pull/29"
}
|
gharchive/pull-request
|
Prevent zeros coercing to nulls
Prevent zeros coercing to nulls. This was caused when number types were passed directly into the coercion function.
This works for me - Not relevant for this PR but note that when the returned validation error is 'is not valid under any of the given schemas' block highlighting doesn't show the 2 age cells as being the issue (although the row correctly highlights in red shading). I suspect that might be a bit more work to implement though as the error message isn't all that helpful.
@pipliggins - yes, the validator can only tell us that the row isn't valid for a given schema, but determining which fields could be changed feels like an under-defined problem, especially if the schema were to become any more complex! Maybe the error reporting could be improved, but as you say, we'll deal with that separately (perhaps based on further user feedback).
|
2025-04-01T06:38:48.363369
| 2022-05-11T00:29:31
|
1231846104
|
{
"authors": [
"codecov-commenter",
"ricardosdias"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6313",
"repo": "globocom/database-as-a-service",
"url": "https://github.com/globocom/database-as-a-service/pull/728"
}
|
gharchive/pull-request
|
Adding MountDataVolume to DetachVolume function undo to fix rollback …
…of host_migrate
Codecov Report
Merging #728 (d908be3) into master (2459c7b) will decrease coverage by 0.00%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## master #728 +/- ##
==========================================
- Coverage 41.41% 41.41% -0.01%
==========================================
Files 341 341
Lines 25330 25331 +1
==========================================
Hits 10491 10491
- Misses 14839 14840 +1
| Flag | Coverage Δ |
|------|------------|
| unittests | 41.41% <0.00%> (-0.01%) :arrow_down: |

Flags with carried forward coverage won't be shown. Click here to find out more.

| Impacted Files | Coverage Δ |
|----------------|------------|
| dbaas/workflow/steps/util/volume_provider.py | 41.43% <0.00%> (-0.03%) :arrow_down: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2459c7b...d908be3. Read the comment docs.
|
2025-04-01T06:38:48.451604
| 2021-05-26T14:24:22
|
902472654
|
{
"authors": [
"Minnozz",
"glromeo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6314",
"repo": "glromeo/esbuild-sass-plugin",
"url": "https://github.com/glromeo/esbuild-sass-plugin/issues/11"
}
|
gharchive/issue
|
cache does not track dependencies on other files
Example:
Cache is enabled by passing a Map to the cache option
entry.scss imports bar.scss
After the initial build, bar.scss is changed
An incremental build is started
transform is called for entry.scss
entry.scss is in the cache and that file itself has not been changed, so the cached transform result with the old contents of bar.scss is returned
The cache is never queried for bar.scss
The old contents of bar.scss end up in the bundle
It seems like this case is described in the esbuild docs:
Cache invalidation only works if slowTransform() is a pure function (meaning that the output of the function only depends on the inputs to the function) and if all of the inputs to the function are somehow captured in the lookup to the cache map. For example if the transform function automatically reads the contents of some other files and the output depends on the contents of those files too, then the cache would fail to be invalidated when those files are changed because they are not included in the cache key.
Hi Bart, thank you for pointing this out...
sass gives me the files that I should track and as of now I pass those on to esbuild.
I was expecting it to magically deal with dependencies, but thinking about it twice, it's not its task!
I could check not just the main file but also the dependencies using fs.stat but I would like to find a way to make the cache better than it is right now rather than worse off in terms of performance.
So I'd rather asynchronously clear the entries when a change is detected in the fs.
It would be amazing if, given esbuild does the polling, I could hook into it to get the notification I want because otherwise I might end up with asynchronous issues...
I am looking into this and I hope I can come up with a fix soon!
Hi Bart
version v1.4.1 should have (I hope) fixed the cache issue with sass imports
let me know how it goes
At the end the fix I applied is to check for all the imported file mtimes so it should work regardless of watch mode
Seems to work with your fix, thanks for the quick update!
|
2025-04-01T06:38:48.458490
| 2016-03-31T20:36:24
|
145010745
|
{
"authors": [
"PennyQ",
"astrofrog",
"dborncamp"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6315",
"repo": "glue-viz/glue-3d-viewer",
"url": "https://github.com/glue-viz/glue-3d-viewer/issues/101"
}
|
gharchive/issue
|
Linking Pixel Axis From 1D Array to Pixel Axis of 2D Array
I am trying to link the Y axis of a 2 dimensional array (attached file: col100_sci.txt) to the index of a 1 dimensional array (attached file f100.txt). I was hoping that when I select something in the 1 dimensional file I could get the corresponding rows highlighted.
f100.txt
col100_sci.txt
@dborncamp Could you tell us which data_loading method should be chosen when opening these two datasets? Thanks!
(Just for some background this was an issue that happened at the tutorial today, will post more details to this issue)
(this issue really applies to glue core, not the 3d plugin repo)
@PennyQ This was read in using the qglue interface from ipython. The data originated from an open HDF5 file but it was cast to numpy arrays before opening.
@astrofrog Oops. Should I open another issue there?
|
2025-04-01T06:38:48.465867
| 2020-10-28T18:31:08
|
731691489
|
{
"authors": [
"panarch",
"yejihan-dev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6316",
"repo": "gluesql/gluesql",
"url": "https://github.com/gluesql/gluesql/pull/110"
}
|
gharchive/pull-request
|
Support Plus(+) and Minus(-) signs in WHERE
This pull request resolves issue #101 by supporting unary operations of Value. Specific modifications are the following.
value.rs & evaluated.rs: Functions to evaluate unary operations of the expression were added.
mod.rs: Expressions now handle unary operations.
test.rs: Two test cases to check if WHERE supports plus(+) and minus(-) signs were added.
@panarch Sure, I will merge all commits into one by this afternoon. If I finish combining all commits, should I make a new pull request?
@yejihan-dev It's just all up to you, making a new pr or using this thread, both are ok :)
@panarch Oh, then I will close this one and make a new request!
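The unary-operation handling this PR adds to expression evaluation can be illustrated with a small evaluator sketch. Note this is JavaScript for illustration only, not GlueSQL's Rust code, and the node shapes are invented: unary plus returns the operand unchanged, unary minus negates it, which is what lets a WHERE clause accept literals like `-30`:

```javascript
// Illustrative expression evaluator (not GlueSQL's implementation).
function evaluate(expr) {
  switch (expr.type) {
    case "Literal":
      return expr.value;
    case "UnaryOp": {
      const inner = evaluate(expr.operand);
      if (expr.op === "+") return inner;  // unary plus: identity
      if (expr.op === "-") return -inner; // unary minus: negation
      throw new Error(`unsupported unary operator: ${expr.op}`);
    }
    case "BinaryOp": {
      const l = evaluate(expr.left);
      const r = evaluate(expr.right);
      if (expr.op === "<") return l < r;
      throw new Error(`unsupported binary operator: ${expr.op}`);
    }
    default:
      throw new Error(`unsupported expression: ${expr.type}`);
  }
}

// WHERE temperature < -30, with temperature already resolved to -35:
const cond = {
  type: "BinaryOp",
  op: "<",
  left: { type: "Literal", value: -35 },
  right: { type: "UnaryOp", op: "-", operand: { type: "Literal", value: 30 } },
};
console.log(evaluate(cond)); // true: -35 < -30
```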
|
2025-04-01T06:38:48.530368
| 2022-07-27T14:58:02
|
1319692292
|
{
"authors": [
"atodorov",
"codecov-commenter"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6317",
"repo": "gluwa/creditcoin",
"url": "https://github.com/gluwa/creditcoin/pull/472"
}
|
gharchive/pull-request
|
Debug benchmark ci job
Description of proposed changes:
Practical tips for PR review & merge:
[ ] All GitHub Actions report PASS
[ ] Newly added code/functions have unit tests
[ ] Coverage tools report all newly added lines as covered
[ ] The positive scenario is exercised
[ ] Negative scenarios are exercised, e.g. assert on all possible errors
[ ] Assert on events triggered if applicable
[ ] Assert on changes made to storage if applicable
[ ] Modified behavior/functions - try to make sure above test items are covered
[ ] Integration tests are added if applicable/needed
Codecov Report
Merging #472 (a8f2203) into dev (c5418cb) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## dev #472 +/- ##
=======================================
Coverage 75.43% 75.43%
=======================================
Files 44 44
Lines 8378 8378
=======================================
Hits 6320 6320
Misses 2058 2058
:mega: Codecov can now indicate which changes are the most critical in Pull Requests. Learn more
|
2025-04-01T06:38:48.548811
| 2020-12-01T09:05:26
|
754196372
|
{
"authors": [
"RaffiCologne",
"gmag11"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6318",
"repo": "gmag11/ESPNtpClient",
"url": "https://github.com/gmag11/ESPNtpClient/issues/4"
}
|
gharchive/issue
|
NTP.getTime not public
Good morning. I'm using your lib as a replacement for NTPClientLib, and everything has been running fine so far.
Using NTP.getTime() results in
C:\Users\raffi\Documents\Arduino\libraries\ESPNtpClient-main\src/ESPNtpClient.h:153:10: error: 'void NTPClient::getTime()' is protected
void getTime ();
Looking in ESPNtpClient.h, you commented:
/**
* Starts a NTP time request to server. Returns a time in UNIX time format. Normally only called from library.
* Kept in public section to allow direct NTP request.
*/
void getTime ();
Is there another function to manually fire a request, or what am I doing wrong?
Regards,
Ralf
Just fixed in last release. Why do you need that? I'd like to know the use case.
Hi, thanks for your fast reply.
The picture shows a part of a web interface.
If the user clicks the button, a time sync should be triggered manually.
Thanks, then I'll close this issue.
|
2025-04-01T06:38:48.550636
| 2011-04-08T15:36:10
|
745661
|
{
"authors": [
"kevinjqiu",
"rainux"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6319",
"repo": "gmarik/vundle",
"url": "https://github.com/gmarik/vundle/issues/12"
}
|
gharchive/issue
|
Conditionally load bundles
Add a mechanism for loading bundles based on conditions, such as filetype detection?
For example, if I'm editing a Python file, I don't necessarily need to load all the Ruby plugins.
Feel free to close this issue if there's already a workaround I'm not aware of.
Loading bundles based on filetype is an interesting idea. Before reading this issue, I had already implemented an on-demand mechanism that uses tags and the new BundleBind! command manually. I'll try to figure out whether it's possible to implement this.
|
2025-04-01T06:38:48.552457
| 2023-02-15T14:54:07
|
1585999494
|
{
"authors": [
"gmberton",
"wpumain"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6320",
"repo": "gmberton/CosPlace",
"url": "https://github.com/gmberton/CosPlace/issues/22"
}
|
gharchive/issue
|
When the model makes predictions, what is the relationship between the number of images in the query dataset and the number of database images?
When the model makes predictions, what is the relationship between the number of query images and the number of database images? For example, if my query dataset contains only one image, the database should contain images of the region where that query image was taken; how many database images are needed, at minimum, to ensure correct results?
You only need one database image near your query to find the query's position. If your first prediction is correct and the others are wrong, your recall is still 100%.
From the paper: As a metric, we use the recall@N with a 25 meters threshold, i.e., the percentage of queries for which at least one of the first N predictions is within a 25 meters distance from the query, following standard procedure
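For intuition, the recall@N metric described above can be sketched in a few lines of Python (the function and variable names are mine, not from the CosPlace code; positions are assumed to be in meters):

```python
import math

def recall_at_n(queries, predictions, n, threshold_m=25.0):
    """Fraction of queries for which at least one of the first n
    predicted database positions lies within threshold_m meters."""
    hits = 0
    for (qx, qy), preds in zip(queries, predictions):
        if any(math.hypot(px - qx, py - qy) <= threshold_m
               for px, py in preds[:n]):
            hits += 1
    return hits / len(queries)

# Two queries; the second is only found by its 2nd-ranked prediction:
queries = [(0.0, 0.0), (100.0, 100.0)]
predictions = [[(10.0, 0.0), (500.0, 500.0)],
               [(500.0, 500.0), (90.0, 100.0)]]
r1 = recall_at_n(queries, predictions, n=1)  # 0.5
r2 = recall_at_n(queries, predictions, n=2)  # 1.0
```

This also illustrates the point above: one nearby database image per query is enough for a perfect recall, regardless of how many far-away database images exist.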
Thank you for your help
|
2025-04-01T06:38:48.562618
| 2022-05-13T07:14:14
|
1234833318
|
{
"authors": [
"gmenounos",
"kerekt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6321",
"repo": "gmenounos/kw1281test",
"url": "https://github.com/gmenounos/kw1281test/issues/17"
}
|
gharchive/issue
|
m73 cluster not supported
Dear Sir!
The New Beetle cluster is not supported. I read it, but the security area is empty (00) :-(
Can you add support for it? I can read the CPU if it helps.
regards
kerekt
KW1281Test: Yesterday's diagnostics...Today.
Version 0.74-beta (https://github.com/gmenounos/kw1281test/releases)
Args: COM1 9600 17 ReadIdent
OSVersion: Microsoft Windows NT 10.0.19044.0
.NET Version: 6.0.2
Culture: en-US
Opening serial port COM1
Sending wakeup message
Wakeup duration: 2 seconds
Reading sync byte
Keyword Lsb $01
Keyword Msb $8A
Protocol is KW 1281 (8N1)
ECU: 1C0920840C KOMBI+WEGFAHRS. M73 V05
Software Coding 01112, Workshop Code: 00000
Sending ReadIdent block
Ident: WVWZZZ1YZ4M323266 VWZ5Z0C7261289
Sending EndCommunication block
eeprom.zip
I don't have access to a cluster running that software to test, so I don't know what went wrong. Is this a diesel car? Can you try DumpEdc15Eeprom with baud rate 9600? If you have an EDC15 and it's Immobilizer version 3, then you can get the SKC from the EDC15.
It's an immo3 cluster with a 2.0i petrol engine, probably Motronic ME7.
I can dump the internal D and P flash from the HC12 CPU if it helps.
Sure, it might help, so if it's not a lot of trouble, please do. May I ask what method you're using to dump the flash? I ask because I haven't managed to dump the flash from these myself (or maybe I've only dumped the region that contains the SKC).
I read the internal flash and EEPROM from the MC912DG128A CPU and found the SKC.
Can you add OBD support for this type of dashboard?
I can test it today.
NB_1C0920840C_Dumps.zip
Thanks for the dumps! How did you get them (hardware device or software program via OBD connector)?
I've uploaded a test version. It's not ready for general use, but could please download it and run a command for me? It will help me figure out how the memory areas are arranged in your cluster.
Here is the release: https://github.com/gmenounos/kw1281test/releases/tag/v0.75-alpha
The command is:
.\kw1281test COM1 9600 17 DumpMarelliMem 8208 3
Then please upload the dump file that is created (marelli_mem_$2010.bin).
Thanks!
It was read with an Xprog-m hardware device (mine is original, but a China copy for around $60 does the same), with 4 wires soldered to the internal BDM connector.
Unfortunately the car left yesterday, so I cannot test it, but as soon as I have the same car again, a test will follow and I will report here.
Check pictures
pic.zip
That's the same memory layout as my 1C0920921G Immo3 cluster. You should be able to dump the entire EEPROM with this command:
.\kw1281test.exe COM1 9600 17 DumpMarelliMem 14336 2048
thank you!
Which version should I use? v0.75 or v0.74?
https://github.com/gmenounos/kw1281test/releases/tag/v0.75-alpha
You were able to dump the EEPROM and the SKC was there this time?
I don't think so; the car's owner moved to another country and isn't coming back.
|
2025-04-01T06:38:48.598828
| 2023-04-05T09:47:10
|
1655288765
|
{
"authors": [
"gngpp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6323",
"repo": "gngpp/luci-theme-design",
"url": "https://github.com/gngpp/luci-theme-design/pull/54"
}
|
gharchive/pull-request
|
fix: fixed DHCPv6 lease table host names incorrectly displayed
Automated changes by create-pull-request GitHub action
|
2025-04-01T06:38:48.642714
| 2018-05-04T10:58:44
|
320233205
|
{
"authors": [
"Velenir",
"W3stside"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6324",
"repo": "gnosis/dx-react",
"url": "https://github.com/gnosis/dx-react/pull/243"
}
|
gharchive/pull-request
|
Optim/dx 256 getSellerOngoingAuctions func
Simplified logic in getSellerOngoingAuctions
nice, tried:
add token pair
1a. post sell order
move time to 2 min after auction start
buy out all sell balance
run claim funds
= Pair shown in ongoing Auctions, pair shows claim button after sell balance bought out, claiming disappears row
Looks good, approving
|
2025-04-01T06:38:48.648792
| 2022-02-09T15:40:26
|
1128703079
|
{
"authors": [
"codecov-commenter",
"nlordell"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6325",
"repo": "gnosis/gp-v2-services",
"url": "https://github.com/gnosis/gp-v2-services/pull/1636"
}
|
gharchive/pull-request
|
Fix Balancer SOR settlement prices for buy orders
This PR fixes an issue I discovered when investigating Balancer SOR solver performance in staging.
Specifically, the SOR API uses swap_amount for the amount of tokens for computing the route (so in sell_token for sell orders or buy_token for buy orders) and return_amount for the computed counter amount (so in buy_token for sell orders and in sell_token for buy orders). However, the prices map was always being computed as if swap_amount was in sell_token and return_amount was in buy_token, making it only valid for sell orders.
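The fix can be illustrated with a small Python sketch (the function name and string order kinds are illustrative, not the actual Rust code): the prices map has to be keyed differently depending on whether swap_amount is denominated in the sell token or the buy token.

```python
def settlement_prices(order_kind, sell_token, buy_token,
                      swap_amount, return_amount):
    """Build the clearing-prices map for a single-order settlement.

    The SOR quotes swap_amount in the order's primary token -- sell_token
    for sell orders but buy_token for buy orders -- and return_amount in
    the counter token, so the mapping must flip for buy orders.
    """
    if order_kind == "sell":
        # swap_amount is in sell_token, return_amount is in buy_token
        return {sell_token: return_amount, buy_token: swap_amount}
    elif order_kind == "buy":
        # swap_amount is in buy_token, return_amount is in sell_token
        return {sell_token: swap_amount, buy_token: return_amount}
    raise ValueError(f"unknown order kind: {order_kind!r}")

# Selling 1500 DAI for 100 BAL, and buying 100 BAL for 1500 DAI, must
# both yield the same price ratio (1 BAL = 15 DAI):
sell_prices = settlement_prices("sell", "DAI", "BAL", 1500, 100)
buy_prices = settlement_prices("buy", "DAI", "BAL", 100, 1500)
```

The buggy version always took the "sell" branch, so for buy orders the two amounts ended up keyed to the wrong tokens and the price ratio was inverted.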
Test Plan
Added check for prices in unit test. Also, you can run the manual test and see:
% cargo test -p solver -- --nocapture --ignored balancer_sor_solver
Compiling solver v0.1.0 (/Users/nlordell/Developer/gp-v2-services/crates/solver)
Finished test [unoptimized + debuginfo] target(s) in 5.23s
Running unittests (target/debug/deps/solver-a7ee0ecc05ab3005)
running 1 test
Found settlement for sell order: Settlement {
encoder: SettlementEncoder {
tokens: [
0x6b175474e89094c44da98b954eedeac495271d0f,
0xba100000625a3754423978a60c9317c58a424e3d,
],
clearing_prices: {
0xba100000625a3754423978a60c9317c58a424e3d:<PHONE_NUMBER>5394634375,
0x6b175474e89094c44da98b954eedeac495271d0f:<PHONE_NUMBER>000000000,
},
...
}
Found settlement for buy order: Settlement {
encoder: SettlementEncoder {
tokens: [
0x6b175474e89094c44da98b954eedeac495271d0f,
0xba100000625a3754423978a60c9317c58a424e3d,
],
clearing_prices: {
0xba100000625a3754423978a60c9317c58a424e3d:<PHONE_NUMBER>00000000000,
0x6b175474e89094c44da98b954eedeac495271d0f:<PHONE_NUMBER>379216961,
},
Which makes sense given that 0x6b175474e89094c44da98b954eedeac495271d0f is DAI (so 1$) and
0xba100000625a3754423978a60c9317c58a424e3d is BAL which is roughly worth 15$ ATM:
For the sell order, the price of BAL is 14.98... and the price of DAI is 1.0 (so BAL is roughly worth 15$)
For the buy order, the price of BAL is 1.0 and the price of DAI is 0.0667.. (so 1 DAI roughly buys you 0.667 BAL - noting that 1/15 = 0.06666...)
Codecov Report
Merging #1636 (16ffa5c) into main (5269328) will increase coverage by 0.02%.
The diff coverage is 100.00%.
:exclamation: Current head 16ffa5c differs from pull request most recent head b36f478. Consider uploading reports for the commit b36f478 to get more accurate results
@@ Coverage Diff @@
## main #1636 +/- ##
==========================================
+ Coverage 66.47% 66.50% +0.02%
==========================================
Files 173 173
Lines 34039 34053 +14
==========================================
+ Hits 22629 22646 +17
+ Misses 11410 11407 -3
|
2025-04-01T06:38:48.661118
| 2021-11-17T14:13:16
|
1056180471
|
{
"authors": [
"mikheevm",
"odd-amphora"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6326",
"repo": "gnosis/safe-apps-list",
"url": "https://github.com/gnosis/safe-apps-list/issues/88"
}
|
gharchive/issue
|
Add Juicebox
Name/Description
Juicebox
Type
New addition
Compatible Networks
- Mainnet
- Rinkeby
Code for review
Web Interface
Contracts
You can find contract ABIs and Addresses under ./deployments.
The only Rinkeby ABIs that should be needed are the ones that have corresponding ABIs in the Mainnet folder.
IPFS hash/App URL
https://juicebox.money
Desired date
ASAP. We are getting tons of activity from the ConstitutionDAO project.
I'm sorry, but we have chosen not to add the project to the default apps list because it is not audited, and our users trust us when we list apps by default.
You can ask your users to add the app as a custom app in the meantime.
|