| Column | Kind | Stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 112 |
| repo_url | stringlengths | 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 744 |
| labels | stringlengths | 4 to 574 |
| body | stringlengths | 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 188k |
| binary_label | int64 | 0 to 1 |
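A minimal sketch of loading a split with this schema and spot-checking the statistics above (assumes pandas; the file name `issues.csv` is hypothetical, since this card does not name its data files). Sample rows from the split follow.

```python
import pandas as pd

# Hypothetical file name; substitute the actual export of this split.
df = pd.read_csv("issues.csv")

print(df.dtypes)                          # expect id -> float64, binary_label -> int64
print(df["type"].unique())                # 1 value: ["IssuesEvent"]
print(df["action"].value_counts())        # 3 values; the samples show "opened" and "closed"
print(df["label"].value_counts())         # 2 values: "process" / "non_process"
print(df["binary_label"].value_counts())  # 0 / 1
```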
Unnamed: 0: 72,500
id: 8,744,661,889
type: IssuesEvent
created_at: 2018-12-12 23:03:19
repo: openopps/OpenOppsTasks
repo_url: https://api.github.com/repos/openopps/OpenOppsTasks
action: closed
title: Design for Editing location
labels: Design
body:
Location will be edited on Open Opportunities and will not come from USAJOBS. Need design for allowing the Open Opps user to edit location in their Open Opps profile. Date Needed: This will go into sprint 4 of the current release. Sprint planning will occur on 11/1
index: 1.0
text_combine:
Design for Editing location - Location will be edited on Open Opportunities and will not come from USAJOBS. Need design for allowing the Open Opps user to edit location in their Open Opps profile. Date Needed: This will go into sprint 4 of the current release. Sprint planning will occur on 11/1
label: non_process
text:
design for editing location location will be edited on open opportunities and will not come from usajobs need design for allowing the open opps user to edit location in their open opps profile date needed this will go into sprint of the current release sprint planning will occur on
binary_label: 0
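From the row above and the rest of the samples, the derived columns appear mechanical: `text_combine` is the title and body joined by " - ", `text` is a lowercased copy with ASCII punctuation and digits stripped (non-ASCII characters such as the ⚠ and ➟ in the MicrosoftDocs row survive), and `binary_label` encodes `label` as process = 1, non_process = 0. A sketch of that apparent pipeline, inferred from the samples rather than documented anywhere in this card:

```python
import string

# ASCII punctuation and digits map to spaces; non-ASCII characters survive,
# matching the ⚠ and ➟ retained in the MicrosoftDocs row's text field.
_DROP = str.maketrans({c: " " for c in string.punctuation + string.digits})

def combine(title: str, body: str) -> str:
    # text_combine in every sample row is the title and body joined by " - ".
    return f"{title} - {body}"

def clean(s: str) -> str:
    # text is lowercased, punctuation/digits become spaces, whitespace collapses
    # (note "sprint 4" -> "sprint" and "11/1" vanishing in the row above).
    return " ".join(s.lower().translate(_DROP).split())

def to_binary(label: str) -> int:
    # binary_label pairs with label in every row: process -> 1, non_process -> 0.
    return 1 if label == "process" else 0
```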
Unnamed: 0: 20,101
id: 10,456,226,183
type: IssuesEvent
created_at: 2019-09-20 00:08:27
repo: MicrosoftDocs/azure-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/azure-docs
action: closed
title: Mislabeled section
labels: Pri1 cxp doc-bug security-fundamentals/subsvc security/svc triaged
body:
I believe the section labeled "Server-side encryption using service-managed keys in customer-controlled hardware" should read "Server-side encryption using customer-managed keys in customer-controlled hardware". In the data encryption models section, there is no mention of supporting service-managed keys in customer-controller hardware. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: cd0472f9-1ed9-0154-3bfd-b25e5da64cd1 * Version Independent ID: 846187d4-c24c-279f-e216-80b6507228b3 * Content: [Microsoft Azure Data Encryption-at-Rest](https://docs.microsoft.com/en-us/azure/security/fundamentals/encryption-atrest#feedback) * Content Source: [articles/security/fundamentals/encryption-atrest.md](https://github.com/Microsoft/azure-docs/blob/master/articles/security/fundamentals/encryption-atrest.md) * Service: **security** * Sub-service: **security-fundamentals** * GitHub Login: @barclayn * Microsoft Alias: **barclayn**
index: True
text_combine:
Mislabeled section - I believe the section labeled "Server-side encryption using service-managed keys in customer-controlled hardware" should read "Server-side encryption using customer-managed keys in customer-controlled hardware". In the data encryption models section, there is no mention of supporting service-managed keys in customer-controller hardware. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: cd0472f9-1ed9-0154-3bfd-b25e5da64cd1 * Version Independent ID: 846187d4-c24c-279f-e216-80b6507228b3 * Content: [Microsoft Azure Data Encryption-at-Rest](https://docs.microsoft.com/en-us/azure/security/fundamentals/encryption-atrest#feedback) * Content Source: [articles/security/fundamentals/encryption-atrest.md](https://github.com/Microsoft/azure-docs/blob/master/articles/security/fundamentals/encryption-atrest.md) * Service: **security** * Sub-service: **security-fundamentals** * GitHub Login: @barclayn * Microsoft Alias: **barclayn**
label: non_process
text:
mislabeled section i believe the section labeled server side encryption using service managed keys in customer controlled hardware should read server side encryption using customer managed keys in customer controlled hardware in the data encryption models section there is no mention of supporting service managed keys in customer controller hardware document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service security sub service security fundamentals github login barclayn microsoft alias barclayn
binary_label: 0
Unnamed: 0: 21,688
id: 30,184,711,334
type: IssuesEvent
created_at: 2023-07-04 11:16:00
repo: Seddryck/Tseesecake
repo_url: https://api.github.com/repos/Seddryck/Tseesecake
action: opened
title: Processor to transform projections to slicers for aggregations
labels: new-feature query-processor
body:
If you're defining an `AggregationProjection`, any other projection should be part of the slicers. We should allow people to not define them as slicers, just as projection and copy them to slicers with a processor.
index: 1.0
text_combine:
Processor to transform projections to slicers for aggregations - If you're defining an `AggregationProjection`, any other projection should be part of the slicers. We should allow people to not define them as slicers, just as projection and copy them to slicers with a processor.
label: process
text:
processor to transform projections to slicers for aggregations if you re defining an aggregationprojection any other projection should be part of the slicers we should allow people to not define them as slicers just as projection and copy them to slicers with a processor
binary_label: 1
Unnamed: 0: 14,907
id: 18,293,393,528
type: IssuesEvent
created_at: 2021-10-05 17:42:25
repo: microsoft/react-native-windows
repo_url: https://api.github.com/repos/microsoft/react-native-windows
action: closed
title: Update react-native-platform-override for new nightly version pattern
labels: must-have enhancement Area: Release Process Integration Follow-up
body:
RN nightlies now encode more than just the commit hash into the prerelease segment. E.g. 0.0.0-2d2de744b-20210929-2246. We need to update integration tooling to properly extract just the commit hash from this.
index: 1.0
text_combine:
Update react-native-platform-override for new nightly version pattern - RN nightlies now encode more than just the commit hash into the prerelease segment. E.g. 0.0.0-2d2de744b-20210929-2246. We need to update integration tooling to properly extract just the commit hash from this.
label: process
text:
update react native platform override for new nightly version pattern rn nightlies now encode more than just the commit hash into the prerelease segment e g we need to update integration tooling to properly extract just the commit hash from this
binary_label: 1
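The row above asks for tooling to pull the commit hash out of a nightly version such as `0.0.0-2d2de744b-20210929-2246`. As an illustration only (this is not the project's actual react-native-platform-override code), a regex that isolates the hash from that layout:

```python
import re

# Prerelease layout per the row above: 0.0.0-<commit-hash>-<yyyymmdd>-<hhmm>.
NIGHTLY = re.compile(r"^0\.0\.0-([0-9a-f]+)-(\d{8})-(\d{4})$")

m = NIGHTLY.match("0.0.0-2d2de744b-20210929-2246")
assert m is not None and m.group(1) == "2d2de744b"
```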
Unnamed: 0: 17,566
id: 23,378,245,169
type: IssuesEvent
created_at: 2022-08-11 06:50:33
repo: Battle-s/battle-school-backend
repo_url: https://api.github.com/repos/Battle-s/battle-school-backend
action: closed
title: [FEAT] JPA (jpa.show-sql setting)
labels: feature :computer: processing :hourglass_flowing_sand:
body:
## Description > Adds a setting so that the SQL generated by JPA can be seen. ## Related discussion > I will add the relevant setting and send a PR, please review it.
index: 1.0
text_combine:
[FEAT] JPA (jpa.show-sql setting) - ## Description > Adds a setting so that the SQL generated by JPA can be seen. ## Related discussion > I will add the relevant setting and send a PR, please review it.
label: process
text:
jpa jpa show sql setting description adds a setting so that the sql generated by jpa can be seen related discussion i will add the relevant setting and send a pr please review it
binary_label: 1
Unnamed: 0: 134,575
id: 12,619,445,259
type: IssuesEvent
created_at: 2020-06-13 00:39:08
repo: MYE2020-A2-GRUPO-04/Archivos-Boveda_de_seguridad
repo_url: https://api.github.com/repos/MYE2020-A2-GRUPO-04/Archivos-Boveda_de_seguridad
action: closed
title: Project demonstration
labels: documentation
body:
Create a video demonstration or animation of the project in operation
index: 1.0
text_combine:
Project demonstration - Create a video demonstration or animation of the project in operation
label: non_process
text:
project demonstration create a video demonstration or animation of the project in operation
binary_label: 0
Unnamed: 0: 19,174
id: 25,282,120,965
type: IssuesEvent
created_at: 2022-11-16 16:28:43
repo: ncbo/bioportal-project
repo_url: https://api.github.com/repos/ncbo/bioportal-project
action: closed
title: AIO: latest submission failed to process - status "Uploaded, Error Rdf"
labels: ontology processing problem
body:
Received report from an end user on the support list that the latest version of their [AIO ontology](https://bioportal.bioontology.org/ontologies/AIO) failed to process (submission ID 4). The summary page on BioPortal is showing statuses of "Uploaded, Error Rdf". Production log file indicates that the OWL API wasn't able to load the ontology: ``` Error: OWL_PARSE_EXCEPTION Message: Problem parsing file:/srv/ncbo/repository/AIO/4/aio-full.owl Could not parse ontology. Either a suitable parser could not be found, or parsing failed. See parser logs below for explanation. ``` Error is reproducible with the following snippet of test code: ```java @Test public void testLoad_AIO_Ontology_WithDocumentFormat() throws Exception { String path = "src/test/resources/aio-full.owl"; FileDocumentSource fileDocumentSource = new FileDocumentSource(new File(path), new RDFXMLDocumentFormat()); OWLOntologyManager manager = OWLManager.createOWLOntologyManager(); OWLOntology ontology = manager.loadOntologyFromOntologyDocument(fileDocumentSource); assertNotNull(ontology); } ``` Relevant stack trace from the OWL API shows that the ontology source file contains illegal characters: ``` Illegal character in path at index 50: https://w3id.org/aio/Unsupervised_Block_Clustering|Unsupervised_Co-clustering|Unsupervised_Unsupervised_Two-mode_Clustering|Unsupervised_Two-way_Clustering|Unsupervised_Joint_Clustering java.base/java.net.URI$Parser.fail(URI.java:2913) java.base/java.net.URI$Parser.checkChars(URI.java:3084) java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166) java.base/java.net.URI$Parser.parse(URI.java:3114) java.base/java.net.URI.<init>(URI.java:600) java.base/java.net.URI.create(URI.java:881) java.base/java.net.URI.resolve(URI.java:1066) org.semanticweb.owlapi.rdf.rdfxml.parser.RDFParser.resolveFromDelegate(RDFParser.java:277) org.semanticweb.owlapi.rdf.rdfxml.parser.RDFParser.resolveIRI(RDFParser.java:346) org.semanticweb.owlapi.rdf.rdfxml.parser.NodeElement.getIDNodeIDAboutResourceIRI(StartRDF.java:340) at uk.ac.manchester.cs.owl.owlapi.OWLOntologyFactoryImpl.loadOWLOntology(OWLOntologyFactoryImpl.java:257) at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.actualParse(OWLOntologyManagerImpl.java:1288) at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntology(OWLOntologyManagerImpl.java:1228) at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntologyFromOntologyDocument(OWLOntologyManagerImpl.java:1179) ```
index: 1.0
text_combine:
AIO: latest submission failed to process - status "Uploaded, Error Rdf" - Received report from an end user on the support list that the latest version of their [AIO ontology](https://bioportal.bioontology.org/ontologies/AIO) failed to process (submission ID 4). The summary page on BioPortal is showing statuses of "Uploaded, Error Rdf". Production log file indicates that the OWL API wasn't able to load the ontology: ``` Error: OWL_PARSE_EXCEPTION Message: Problem parsing file:/srv/ncbo/repository/AIO/4/aio-full.owl Could not parse ontology. Either a suitable parser could not be found, or parsing failed. See parser logs below for explanation. ``` Error is reproducible with the following snippet of test code: ```java @Test public void testLoad_AIO_Ontology_WithDocumentFormat() throws Exception { String path = "src/test/resources/aio-full.owl"; FileDocumentSource fileDocumentSource = new FileDocumentSource(new File(path), new RDFXMLDocumentFormat()); OWLOntologyManager manager = OWLManager.createOWLOntologyManager(); OWLOntology ontology = manager.loadOntologyFromOntologyDocument(fileDocumentSource); assertNotNull(ontology); } ``` Relevant stack trace from the OWL API shows that the ontology source file contains illegal characters: ``` Illegal character in path at index 50: https://w3id.org/aio/Unsupervised_Block_Clustering|Unsupervised_Co-clustering|Unsupervised_Unsupervised_Two-mode_Clustering|Unsupervised_Two-way_Clustering|Unsupervised_Joint_Clustering java.base/java.net.URI$Parser.fail(URI.java:2913) java.base/java.net.URI$Parser.checkChars(URI.java:3084) java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166) java.base/java.net.URI$Parser.parse(URI.java:3114) java.base/java.net.URI.<init>(URI.java:600) java.base/java.net.URI.create(URI.java:881) java.base/java.net.URI.resolve(URI.java:1066) org.semanticweb.owlapi.rdf.rdfxml.parser.RDFParser.resolveFromDelegate(RDFParser.java:277) org.semanticweb.owlapi.rdf.rdfxml.parser.RDFParser.resolveIRI(RDFParser.java:346) org.semanticweb.owlapi.rdf.rdfxml.parser.NodeElement.getIDNodeIDAboutResourceIRI(StartRDF.java:340) at uk.ac.manchester.cs.owl.owlapi.OWLOntologyFactoryImpl.loadOWLOntology(OWLOntologyFactoryImpl.java:257) at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.actualParse(OWLOntologyManagerImpl.java:1288) at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntology(OWLOntologyManagerImpl.java:1228) at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntologyFromOntologyDocument(OWLOntologyManagerImpl.java:1179) ```
label: process
text:
aio latest submission failed to process status uploaded error rdf received report from an end user on the support list that the latest version of their failed to process submission id the summary page on bioportal is showing statuses of uploaded error rdf production log file indicates that the owl api wasn t able to load the ontology error owl parse exception message problem parsing file srv ncbo repository aio aio full owl could not parse ontology either a suitable parser could not be found or parsing failed see parser logs below for explanation error is reproducible with the following snippet of test code java test public void testload aio ontology withdocumentformat throws exception string path src test resources aio full owl filedocumentsource filedocumentsource new filedocumentsource new file path new rdfxmldocumentformat owlontologymanager manager owlmanager createowlontologymanager owlontology ontology manager loadontologyfromontologydocument filedocumentsource assertnotnull ontology relevant stack trace from the owl api shows that the ontology source file contains illegal characters illegal character in path at index java base java net uri parser fail uri java java base java net uri parser checkchars uri java java base java net uri parser parsehierarchical uri java java base java net uri parser parse uri java java base java net uri uri java java base java net uri create uri java java base java net uri resolve uri java org semanticweb owlapi rdf rdfxml parser rdfparser resolvefromdelegate rdfparser java org semanticweb owlapi rdf rdfxml parser rdfparser resolveiri rdfparser java org semanticweb owlapi rdf rdfxml parser nodeelement getidnodeidaboutresourceiri startrdf java at uk ac manchester cs owl owlapi owlontologyfactoryimpl loadowlontology owlontologyfactoryimpl java at uk ac manchester cs owl owlapi owlontologymanagerimpl actualparse owlontologymanagerimpl java at uk ac manchester cs owl owlapi owlontologymanagerimpl loadontology owlontologymanagerimpl java at uk ac manchester cs owl owlapi owlontologymanagerimpl loadontologyfromontologydocument owlontologymanagerimpl java
binary_label: 1
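The root cause in the row above is an unescaped "|" in an IRI, which `java.net.URI` rejects. A small Python analogue of the failure and its usual fix (percent-encoding); this is illustrative, not part of the BioPortal pipeline:

```python
from urllib.parse import quote

bad = "https://w3id.org/aio/Unsupervised_Block_Clustering|Unsupervised_Co-clustering"

# "|" is outside RFC 3986's allowed character set, which is exactly what
# java.net.URI rejects in the stack trace above. Percent-encoding repairs it:
print(quote(bad, safe=":/"))
# -> https://w3id.org/aio/Unsupervised_Block_Clustering%7CUnsupervised_Co-clustering
```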
Unnamed: 0: 35,678
id: 4,695,799,363
type: IssuesEvent
created_at: 2016-10-12 00:33:50
repo: WordPress/twentyseventeen
repo_url: https://api.github.com/repos/WordPress/twentyseventeen
action: closed
title: Setting a baseline
labels: design
body:
As in Twenty Sixteen, I think it's helpful **setting a baseline**. Some elements are already aligned vertically, but there isn't consistence all over the theme.
index: 1.0
text_combine:
Setting a baseline - As in Twenty Sixteen, I think it's helpful **setting a baseline**. Some elements are already aligned vertically, but there isn't consistence all over the theme.
label: non_process
text:
setting a baseline as in twenty sixteen i think it s helpful setting a baseline some elements are already aligned vertically but there isn t consistence all over the theme
binary_label: 0
Unnamed: 0: 12,189
id: 4,385,788,468
type: IssuesEvent
created_at: 2016-08-08 10:14:10
repo: AquariaOSE/Aquaria
repo_url: https://api.github.com/repos/AquariaOSE/Aquaria
action: opened
title: Rework Cmake files
labels: enhancement non-code
body:
Current CMakeLists.txt is long and messy and not suitable for generating proper project files one can work with. Should be split into smaller files, into proper sub-projects, and reference header files properly.
index: 1.0
text_combine:
Rework Cmake files - Current CMakeLists.txt is long and messy and not suitable for generating proper project files one can work with. Should be split into smaller files, into proper sub-projects, and reference header files properly.
label: non_process
text:
rework cmake files current cmakelists txt is long and messy and not suitable for generating proper project files one can work with should be split into smaller files into proper sub projects and reference header files properly
binary_label: 0
Unnamed: 0: 66,533
id: 27,498,435,486
type: IssuesEvent
created_at: 2023-03-05 12:22:42
repo: APIs-guru/openapi-directory
repo_url: https://api.github.com/repos/APIs-guru/openapi-directory
action: closed
title: Add "Amadeus Airport On-Time Performance" API
labels: add API needs-servicename
body:
**Format**: OpenAPI 2.0 (fka Swagger) **Official**: YES **Url**: https://raw.githubusercontent.com/amadeus4dev/amadeus-open-api-specification/main/spec/json/AirportOnTimePerformance_v1_swagger_specification.json **Name**: Amadeus Airport On-Time Performance **Category**: transport **Logo**: https://developers.amadeus.com/PAS-EAS/api/v1/cms-gateway/sites/default/files/2019-09/logo-portal.png
index: 1.0
text_combine:
Add "Amadeus Airport On-Time Performance" API - **Format**: OpenAPI 2.0 (fka Swagger) **Official**: YES **Url**: https://raw.githubusercontent.com/amadeus4dev/amadeus-open-api-specification/main/spec/json/AirportOnTimePerformance_v1_swagger_specification.json **Name**: Amadeus Airport On-Time Performance **Category**: transport **Logo**: https://developers.amadeus.com/PAS-EAS/api/v1/cms-gateway/sites/default/files/2019-09/logo-portal.png
label: non_process
text:
add amadeus airport on time performance api format openapi fka swagger official yes url name amadeus airport on time performance category transport logo
binary_label: 0
Unnamed: 0: 777,963
id: 27,299,160,133
type: IssuesEvent
created_at: 2023-02-23 23:27:46
repo: brave/brave-browser
repo_url: https://api.github.com/repos/brave/brave-browser
action: closed
title: Should handle incorrect decimal values specified for custom networks
labels: priority/P3 QA/Yes release-notes/exclude closed/not-actionable feature/web3/wallet OS/Desktop front-end-change
body:
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description <!--Provide a brief description of the issue--> Should handle incorrect decimal values specified for custom networks ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. Enable wallet 2. Enter custom network via settings 3. Add BSC to custom network 4. Add an incorrect decimal value to the testnet 5. Open panel/widget shows wrong balance value ## Actual result: <!--Please add screenshots if needed--> ![image](https://user-images.githubusercontent.com/17010094/139230903-0c543297-42e6-4201-8b80-329e6c675592.png) ## Expected result: Some correct way of handling incorrect decimal values for custom networks ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> Easy ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.33.37 Chromium: 95.0.4638.54 (Official Build) nightly (64-bit) -- | -- Revision | `d31a821ec901f68d0d34ccdbaea45b4c86ce543e-refs/branch-heads/4638@{#871}` OS | All ## Version/Channel Information: <!--Does this issue happen on any other channels? Or is it specific to a certain channel?--> - Can you reproduce this issue with the current release? NA - Can you reproduce this issue with the beta channel? Yes - Can you reproduce this issue with the nightly channel? Yes ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? NA - Does the issue resolve itself when disabling Brave Rewards? NA - Is the issue reproducible on the latest version of Chrome? NA ## Miscellaneous Information: <!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue--> cc: @onyb @spylogsster
index: 1.0
text_combine:
Should handle incorrect decimal values specified for custom networks - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description <!--Provide a brief description of the issue--> Should handle incorrect decimal values specified for custom networks ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. Enable wallet 2. Enter custom network via settings 3. Add BSC to custom network 4. Add an incorrect decimal value to the testnet 5. Open panel/widget shows wrong balance value ## Actual result: <!--Please add screenshots if needed--> ![image](https://user-images.githubusercontent.com/17010094/139230903-0c543297-42e6-4201-8b80-329e6c675592.png) ## Expected result: Some correct way of handling incorrect decimal values for custom networks ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> Easy ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.33.37 Chromium: 95.0.4638.54 (Official Build) nightly (64-bit) -- | -- Revision | `d31a821ec901f68d0d34ccdbaea45b4c86ce543e-refs/branch-heads/4638@{#871}` OS | All ## Version/Channel Information: <!--Does this issue happen on any other channels? Or is it specific to a certain channel?--> - Can you reproduce this issue with the current release? NA - Can you reproduce this issue with the beta channel? Yes - Can you reproduce this issue with the nightly channel? Yes ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? NA - Does the issue resolve itself when disabling Brave Rewards? NA - Is the issue reproducible on the latest version of Chrome? NA ## Miscellaneous Information: <!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue--> cc: @onyb @spylogsster
label: non_process
text:
should handle incorrect decimal values specified for custom networks have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description should handle incorrect decimal values specified for custom networks steps to reproduce enable wallet enter custom network via settings add bsc to custom network add an incorrect decimal value to the testnet open panel widget shows wrong balance value actual result expected result some correct way of handling incorrect decimal values for custom networks reproduces how often easy brave version brave version info brave chromium   official build  nightly  bit revision refs branch heads os all version channel information can you reproduce this issue with the current release na can you reproduce this issue with the beta channel yes can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields na does the issue resolve itself when disabling brave rewards na is the issue reproducible on the latest version of chrome na miscellaneous information cc onyb spylogsster
binary_label: 0
Unnamed: 0: 406,542
id: 11,899,423,534
type: IssuesEvent
created_at: 2020-03-30 08:59:06
repo: osmontrouge/caresteouvert
repo_url: https://api.github.com/repos/osmontrouge/caresteouvert
action: closed
title: Display bug when no layer is selected
labels: bug priority: medium
body:
Nothing should be shown when no layer is selected, which is not the case: https://www.caresteouvert.fr/police=false&pharmacy=false&post_office=false&food=false&bakery=false&shop=false&bank=false&fuel=false&funeral_directors=false@-17.545042,-149.575043,14.90
index: 1.0
text_combine:
Display bug when no layer is selected - Nothing should be shown when no layer is selected, which is not the case: https://www.caresteouvert.fr/police=false&pharmacy=false&post_office=false&food=false&bakery=false&shop=false&bank=false&fuel=false&funeral_directors=false@-17.545042,-149.575043,14.90
label: non_process
text:
display bug when no layer is selected nothing should be shown when no layer is selected which is not the case
binary_label: 0
Unnamed: 0: 96,862
id: 16,168,287,934
type: IssuesEvent
created_at: 2021-05-01 23:51:33
repo: gabriel-milan/uptime-bot
repo_url: https://api.github.com/repos/gabriel-milan/uptime-bot
action: opened
title: CVE-2021-27290 (High) detected in ssri-7.1.0.tgz, ssri-6.0.1.tgz
labels: security vulnerability
body:
## CVE-2021-27290 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ssri-7.1.0.tgz</b>, <b>ssri-6.0.1.tgz</b></p></summary> <p> <details><summary><b>ssri-7.1.0.tgz</b></p></summary> <p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p> <p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz">https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz</a></p> <p>Path to dependency file: uptime-bot/client/package.json</p> <p>Path to vulnerable library: uptime-bot/client/node_modules/@vue/cli-service/node_modules/ssri/package.json</p> <p> Dependency Hierarchy: - cli-service-4.5.11.tgz (Root Library) - :x: **ssri-7.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>ssri-6.0.1.tgz</b></p></summary> <p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p> <p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p> <p>Path to dependency file: uptime-bot/client/package.json</p> <p>Path to vulnerable library: uptime-bot/client/node_modules/ssri/package.json</p> <p> Dependency Hierarchy: - cli-service-4.5.11.tgz (Root Library) - copy-webpack-plugin-5.1.2.tgz - cacache-12.0.4.tgz - :x: **ssri-6.0.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/gabriel-milan/uptime-bot/commit/216b2d1977764ebebd21770dadc4261dd1f6d51c">216b2d1977764ebebd21770dadc4261dd1f6d51c</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option. <p>Publish Date: 2021-03-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: ssri - 6.0.2,8.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2021-27290 (High) detected in ssri-7.1.0.tgz, ssri-6.0.1.tgz - ## CVE-2021-27290 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ssri-7.1.0.tgz</b>, <b>ssri-6.0.1.tgz</b></p></summary> <p> <details><summary><b>ssri-7.1.0.tgz</b></p></summary> <p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p> <p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz">https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz</a></p> <p>Path to dependency file: uptime-bot/client/package.json</p> <p>Path to vulnerable library: uptime-bot/client/node_modules/@vue/cli-service/node_modules/ssri/package.json</p> <p> Dependency Hierarchy: - cli-service-4.5.11.tgz (Root Library) - :x: **ssri-7.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>ssri-6.0.1.tgz</b></p></summary> <p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p> <p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p> <p>Path to dependency file: uptime-bot/client/package.json</p> <p>Path to vulnerable library: uptime-bot/client/node_modules/ssri/package.json</p> <p> Dependency Hierarchy: - cli-service-4.5.11.tgz (Root Library) - copy-webpack-plugin-5.1.2.tgz - cacache-12.0.4.tgz - :x: **ssri-6.0.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/gabriel-milan/uptime-bot/commit/216b2d1977764ebebd21770dadc4261dd1f6d51c">216b2d1977764ebebd21770dadc4261dd1f6d51c</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option. <p>Publish Date: 2021-03-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: ssri - 6.0.2,8.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve high detected in ssri tgz ssri tgz cve high severity vulnerability vulnerable libraries ssri tgz ssri tgz ssri tgz standard subresource integrity library parses serializes generates and verifies integrity metadata according to the sri spec library home page a href path to dependency file uptime bot client package json path to vulnerable library uptime bot client node modules vue cli service node modules ssri package json dependency hierarchy cli service tgz root library x ssri tgz vulnerable library ssri tgz standard subresource integrity library parses serializes generates and verifies integrity metadata according to the sri spec library home page a href path to dependency file uptime bot client package json path to vulnerable library uptime bot client node modules ssri package json dependency hierarchy cli service tgz root library copy webpack plugin tgz cacache tgz x ssri tgz vulnerable library found in head commit a href found in base branch master vulnerability details ssri fixed in processes sris using a regular expression which is vulnerable to a denial of service malicious sris could take an extremely long time to process leading to denial of service this issue only affects consumers using the strict option publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ssri step up your open source security game with whitesource
binary_label: 0
Unnamed: 0: 3,006
id: 6,007,772,524
type: IssuesEvent
created_at: 2017-06-06 05:08:39
repo: TEAMMATES/teammates
repo_url: https://api.github.com/repos/TEAMMATES/teammates
action: closed
title: Development Workflow: Explicitly state that contributors should not create multiple PRs for a single issue
labels: a-Docs a-Process c.DevOps d.FirstTimers
body:
Contributors should not be opening multiple PRs for the same issue (e.g. Closing a PR and creating a new one). Doing this makes it difficult to keep track of past reviews/comments on the PR. We should consider stating this explicitly in our development workflow document.
index: 1.0
text_combine:
Development Workflow: Explicitly state that contributors should not create multiple PRs for a single issue - Contributors should not be opening multiple PRs for the same issue (e.g. Closing a PR and creating a new one). Doing this makes it difficult to keep track of past reviews/comments on the PR. We should consider stating this explicitly in our development workflow document.
label: process
text:
development workflow explicitly state that contributors should not create multiple prs for a single issue contributors should not be opening multiple prs for the same issue e g closing a pr and creating a new one doing this makes it difficult to keep track of past reviews comments on the pr we should consider stating this explicitly in our development workflow document
binary_label: 1
Unnamed: 0: 16,641
id: 6,258,775,103
type: IssuesEvent
created_at: 2017-07-14 16:22:28
repo: gap-system/gap
repo_url: https://api.github.com/repos/gap-system/gap
action: closed
title: Unconditionally using compiler warning flags causes gcc 4.3.4 compile failure
labels: build system
body:
On a supercomputer I am allowed to use compilation stops: ```` DEBRECEN[service0] ~/bin/gap (0)$ make C src/ariths.c => obj/ariths.lo cc1: error: unrecognized command line option "-Wno-implicit-fallthrough" cc1: error: unrecognized command line option "-Wno-unknown-warning-option" make: *** [obj/ariths.lo] Error 1 DEBRECEN[service0] ~/bin/gap (2)$ cc -v Using built-in specs. Target: x86_64-suse-linux Configured with: ../configure --prefix=/usr --infodir=/usr/share/info --mandir=/usr/share/man --libdir=/usr/lib64 --libexecdir=/usr/lib64 --enable-languages=c,c++,objc,fortran,obj-c++,java,ada --enable-checking=release --with-gxx-include-dir=/usr/include/c++/4.3 --enable-ssp --disable-libssp --with-bugurl=http://bugs.opensuse.org/ --with-pkgversion='SUSE Linux' --disable-libgcj --disable-libmudflap --with-slibdir=/lib64 --with-system-zlib --enable-__cxa_atexit --enable-libstdcxx-allocator=new --disable-libstdcxx-pch --enable-version-specific-runtime-libs --program-suffix=-4.3 --enable-linux-futex --without-system-libunwind --with-cpu=generic --build=x86_64-suse-linux Thread model: posix gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) DEBRECEN[service0] ~/bin/gap (2)$ ```` The number in parentheses in the prompt is supposed to show the error code (2 in this case). Could this be because of #1339?
index: 1.0
text_combine:
Unconditionally using compiler warning flags causes gcc 4.3.4 compile failure - On a supercomputer I am allowed to use compilation stops: ```` DEBRECEN[service0] ~/bin/gap (0)$ make C src/ariths.c => obj/ariths.lo cc1: error: unrecognized command line option "-Wno-implicit-fallthrough" cc1: error: unrecognized command line option "-Wno-unknown-warning-option" make: *** [obj/ariths.lo] Error 1 DEBRECEN[service0] ~/bin/gap (2)$ cc -v Using built-in specs. Target: x86_64-suse-linux Configured with: ../configure --prefix=/usr --infodir=/usr/share/info --mandir=/usr/share/man --libdir=/usr/lib64 --libexecdir=/usr/lib64 --enable-languages=c,c++,objc,fortran,obj-c++,java,ada --enable-checking=release --with-gxx-include-dir=/usr/include/c++/4.3 --enable-ssp --disable-libssp --with-bugurl=http://bugs.opensuse.org/ --with-pkgversion='SUSE Linux' --disable-libgcj --disable-libmudflap --with-slibdir=/lib64 --with-system-zlib --enable-__cxa_atexit --enable-libstdcxx-allocator=new --disable-libstdcxx-pch --enable-version-specific-runtime-libs --program-suffix=-4.3 --enable-linux-futex --without-system-libunwind --with-cpu=generic --build=x86_64-suse-linux Thread model: posix gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) DEBRECEN[service0] ~/bin/gap (2)$ ```` The number in parentheses in the prompt is supposed to show the error code (2 in this case). Could this be because of #1339?
label: non_process
text:
unconditionally using compiler warning flags causes gcc compile failure on a supercomputer i am allowed to use compilation stops debrecen bin gap make c src ariths c obj ariths lo error unrecognized command line option wno implicit fallthrough error unrecognized command line option wno unknown warning option make error debrecen bin gap cc v using built in specs target suse linux configured with configure prefix usr infodir usr share info mandir usr share man libdir usr libexecdir usr enable languages c c objc fortran obj c java ada enable checking release with gxx include dir usr include c enable ssp disable libssp with bugurl with pkgversion suse linux disable libgcj disable libmudflap with slibdir with system zlib enable cxa atexit enable libstdcxx allocator new disable libstdcxx pch enable version specific runtime libs program suffix enable linux futex without system libunwind with cpu generic build suse linux thread model posix gcc version suse linux debrecen bin gap the number in parentheses in the prompt is supposed to show the error code in this case could this be because of
binary_label: 0
Unnamed: 0: 1,880
id: 4,711,655,398
type: IssuesEvent
created_at: 2016-10-14 14:28:53
repo: CERNDocumentServer/cds
repo_url: https://api.github.com/repos/CERNDocumentServer/cds
action: opened
title: webhooks: AVC tasks refactoring
labels: avc_processing in progress
body:
Update code structure based on newest Invenio-Webhooks approach.
index: 1.0
text_combine:
webhooks: AVC tasks refactoring - Update code structure based on newest Invenio-Webhooks approach.
label: process
text:
webhooks avc tasks refactoring update code structure based on newest invenio webhooks approach
binary_label: 1
Unnamed: 0: 98,696
id: 30,055,922,218
type: IssuesEvent
created_at: 2023-06-28 06:47:04
repo: apache/camel-k
repo_url: https://api.github.com/repos/apache/camel-k
action: closed
title: Use maven distribution available in the operator image
labels: kind/bug area/builder
body:
During the docker build of the operator image, the maven distribution is downloaded inside the folder `/usr/share/maven/wrapper/dists/` (using maven wrapper). When the builder is generated the content of the folder is never added to the builder image (see `https://github.com/apache/camel-k/blob/main/pkg/controller/catalog/initialize.go` code). In the end when the builder image runs it has to download it again. Wouldn't it be better to use the one we already downloaded and added in the operator image ?
index: 1.0
text_combine:
Use maven distribution available in the operator image - During the docker build of the operator image, the maven distribution is downloaded inside the folder `/usr/share/maven/wrapper/dists/` (using maven wrapper). When the builder is generated the content of the folder is never added to the builder image (see `https://github.com/apache/camel-k/blob/main/pkg/controller/catalog/initialize.go` code). In the end when the builder image runs it has to download it again. Wouldn't it be better to use the one we already downloaded and added in the operator image ?
label: non_process
text:
use maven distribution available in the operator image during the docker build of the operator image the maven distribution is downloaded inside the folder usr share maven wrapper dists using maven wrapper when the builder is generated the content of the folder is never added to the builder image see code in the end when the builder image runs it has to download it again wouldn t it be better to use the one we already downloaded and added in the operator image
binary_label: 0
Unnamed: 0: 110,224
id: 4,423,766,184
type: IssuesEvent
created_at: 2016-08-16 09:48:53
repo: Optiboot/optiboot
repo_url: https://api.github.com/repos/Optiboot/optiboot
action: opened
title: Better error handling for bad LED values?
labels: Maintainability Priority-Low Type-Enhancement
body:
It would be nice if the error message for a bad LED value distinguished between a bad setting and LED being "undefined" (as can happen when adding a new CPU type, if you forget to modify pin_defs.h) (also, it would be nice if there was more internal documentation on how the LED assignments actually work!)
index: 1.0
text_combine:
Better error handling for bad LED values? - It would be nice if the error message for a bad LED value distinguished between a bad setting and LED being "undefined" (as can happen when adding a new CPU type, if you forget to modify pin_defs.h) (also, it would be nice if there was more internal documentation on how the LED assignments actually work!)
label: non_process
text:
better error handling for bad led values it would be nice if the error message for a bad led value distinguished between a bad setting and led being undefined as can happen when adding a new cpu type if you forget to modify pin defs h also it would be nice if there was more internal documentation on how the led assignments actually work
binary_label: 0
Unnamed: 0: 213,254
id: 16,507,538,838
type: IssuesEvent
created_at: 2021-05-25 21:23:18
repo: ansible/awx
repo_url: https://api.github.com/repos/ansible/awx
action: closed
title: [ui_next] Favicon is missing
labels: component:ui priority:high state:needs_test type:bug
body:
##### ISSUE TYPE - Bug Report ##### SUMMARY No favicon! ##### ENVIRONMENT * AWX version: b338da40c534d1b2809b5426b61f13eff7770434 * AWX install method: `npm install` ##### ADDITIONAL INFORMATION ![Screen Shot 2020-11-18 at 11 14 14 AM](https://user-images.githubusercontent.com/12446869/99556189-361f1180-298f-11eb-8dcc-1df88dbd8f05.png)
index: 1.0
text_combine:
[ui_next] Favicon is missing - ##### ISSUE TYPE - Bug Report ##### SUMMARY No favicon! ##### ENVIRONMENT * AWX version: b338da40c534d1b2809b5426b61f13eff7770434 * AWX install method: `npm install` ##### ADDITIONAL INFORMATION ![Screen Shot 2020-11-18 at 11 14 14 AM](https://user-images.githubusercontent.com/12446869/99556189-361f1180-298f-11eb-8dcc-1df88dbd8f05.png)
label: non_process
text:
favicon is missing issue type bug report summary no favicon environment awx version awx install method npm install additional information
binary_label: 0
Unnamed: 0: 54,717
id: 30,324,912,474
type: IssuesEvent
created_at: 2023-07-10 22:42:58
repo: scylladb/scylladb
repo_url: https://api.github.com/repos/scylladb/scylladb
action: closed
title: performance regression after extending statement scope guard
labels: performance Regression P1
body:
Bad commit: c42a91ec72 perf-simple-query --smp 1 before: 216489.88 tps ( 61.1 allocs/op, 13.1 tasks/op, 43558 insns/op, 0 errors) 217708.69 tps ( 61.1 allocs/op, 13.1 tasks/op, 43542 insns/op, 0 errors) 219495.02 tps ( 61.1 allocs/op, 13.1 tasks/op, 43538 insns/op, 0 errors) 216863.84 tps ( 61.1 allocs/op, 13.1 tasks/op, 43567 insns/op, 0 errors) 218936.48 tps ( 61.1 allocs/op, 13.1 tasks/op, 43546 insns/op, 0 errors) after: 201773.52 tps ( 63.1 allocs/op, 15.1 tasks/op, 44600 insns/op, 0 errors) 210875.48 tps ( 63.1 allocs/op, 15.1 tasks/op, 44558 insns/op, 0 errors) 210186.55 tps ( 63.1 allocs/op, 15.1 tasks/op, 44588 insns/op, 0 errors) 211021.76 tps ( 63.1 allocs/op, 15.1 tasks/op, 44569 insns/op, 0 errors) 208597.52 tps ( 63.1 allocs/op, 15.1 tasks/op, 44587 insns/op, 0 errors) Two extra allocations, two extra tasks, 1k extra instructions, for something that is DDL only.
index: True
text_combine:
performance regression after extending statement scope guard - Bad commit: c42a91ec72 perf-simple-query --smp 1 before: 216489.88 tps ( 61.1 allocs/op, 13.1 tasks/op, 43558 insns/op, 0 errors) 217708.69 tps ( 61.1 allocs/op, 13.1 tasks/op, 43542 insns/op, 0 errors) 219495.02 tps ( 61.1 allocs/op, 13.1 tasks/op, 43538 insns/op, 0 errors) 216863.84 tps ( 61.1 allocs/op, 13.1 tasks/op, 43567 insns/op, 0 errors) 218936.48 tps ( 61.1 allocs/op, 13.1 tasks/op, 43546 insns/op, 0 errors) after: 201773.52 tps ( 63.1 allocs/op, 15.1 tasks/op, 44600 insns/op, 0 errors) 210875.48 tps ( 63.1 allocs/op, 15.1 tasks/op, 44558 insns/op, 0 errors) 210186.55 tps ( 63.1 allocs/op, 15.1 tasks/op, 44588 insns/op, 0 errors) 211021.76 tps ( 63.1 allocs/op, 15.1 tasks/op, 44569 insns/op, 0 errors) 208597.52 tps ( 63.1 allocs/op, 15.1 tasks/op, 44587 insns/op, 0 errors) Two extra allocations, two extra tasks, 1k extra instructions, for something that is DDL only.
label: non_process
text:
performance regression after extending statement scope guard bad commit perf simple query smp before tps allocs op tasks op insns op errors tps allocs op tasks op insns op errors tps allocs op tasks op insns op errors tps allocs op tasks op insns op errors tps allocs op tasks op insns op errors after tps allocs op tasks op insns op errors tps allocs op tasks op insns op errors tps allocs op tasks op insns op errors tps allocs op tasks op insns op errors tps allocs op tasks op insns op errors two extra allocations two extra tasks extra instructions for something that is ddl only
binary_label: 0
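For scale, averaging the five before/after runs quoted in the row above puts the regression at roughly 4.3% of throughput (my arithmetic on the listed numbers, not a figure from the report):

```python
before = [216489.88, 217708.69, 219495.02, 216863.84, 218936.48]
after  = [201773.52, 210875.48, 210186.55, 211021.76, 208597.52]

drop = 1 - (sum(after) / len(after)) / (sum(before) / len(before))
print(f"{drop:.1%}")  # ~4.3%: the cost of the two extra allocs/tasks per op
```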
Unnamed: 0: 11,451
id: 4,227,187,211
type: IssuesEvent
created_at: 2016-07-03 00:59:46
repo: btkelly/gnag
repo_url: https://api.github.com/repos/btkelly/gnag
action: opened
title: Improve log output when violation detector creation fails due to invalid config
labels: code difficulty-easy enhancement
body:
Example: ``` :app:gnagCheck FAILED FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:gnagCheck'. > Unable to create a Checker: configLocation {/Users/stkent/dev/detroit_labs/apps/herbie-android/app/config/checkstyle.xml}, classpath {null}. * Try: Run with --info or --debug option to get more log output. * Exception is: org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:gnagCheck'. at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:66) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53) at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:203) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:185) at org.gradle.execution.taskgraph.AbstractTaskPlanExecutor$TaskExecutorWorker.processTask(AbstractTaskPlanExecutor.java:66) at org.gradle.execution.taskgraph.AbstractTaskPlanExecutor$TaskExecutorWorker.run(AbstractTaskPlanExecutor.java:50) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.process(DefaultTaskPlanExecutor.java:25) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter.execute(DefaultTaskGraphExecuter.java:110) at org.gradle.execution.SelectedTaskExecutionAction.execute(SelectedTaskExecutionAction.java:37) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:37) at org.gradle.execution.DefaultBuildExecuter.access$000(DefaultBuildExecuter.java:23) at org.gradle.execution.DefaultBuildExecuter$1.proceed(DefaultBuildExecuter.java:43) at org.gradle.execution.DryRunBuildExecutionAction.execute(DryRunBuildExecutionAction.java:32) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:37) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:30) at org.gradle.initialization.DefaultGradleLauncher$4.run(DefaultGradleLauncher.java:153) at org.gradle.internal.Factories$1.create(Factories.java:22) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:53) at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:150) at org.gradle.initialization.DefaultGradleLauncher.access$200(DefaultGradleLauncher.java:32) at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:98) at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:92) at 
org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:63) at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:92) at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:83) at org.gradle.launcher.exec.InProcessBuildActionExecuter$DefaultBuildController.run(InProcessBuildActionExecuter.java:99) at org.gradle.tooling.internal.provider.ExecuteBuildActionRunner.run(ExecuteBuildActionRunner.java:28) at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:48) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:30) at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:81) at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:46) at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:51) at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:28) at org.gradle.launcher.cli.RunBuildAction.run(RunBuildAction.java:43) at org.gradle.internal.Actions$RunnableActionAdapter.execute(Actions.java:173) at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:239) at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:212) at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:35) at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:24) at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33) at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22) at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:205) at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:169) at org.gradle.launcher.Main.doAction(Main.java:33) at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45) at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:55) at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:36) at org.gradle.launcher.GradleMain.main(GradleMain.java:23) at org.gradle.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:30) at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:129) at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:61) Caused by: Unable to create a Checker: configLocation {/Users/stkent/dev/detroit_labs/apps/herbie-android/app/config/checkstyle.xml}, classpath {null}. 
at com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask.createChecker(CheckstyleAntTask.java:425) at com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask.realExecute(CheckstyleAntTask.java:320) at com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask.execute(CheckstyleAntTask.java:303) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Task$perform.call(Unknown Source) at com.btkelly.gnag.reporters.CheckstyleViolationDetector.executeReporter(CheckstyleViolationDetector.groovy:53) at com.btkelly.gnag.tasks.GnagCheck.lambda$executeGnagCheck$0(GnagCheck.java:84) at com.btkelly.gnag.tasks.GnagCheck.executeGnagCheck(GnagCheck.java:81) at com.btkelly.gnag.tasks.GnagCheck.taskAction(GnagCheck.java:71) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:75) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.doExecute(AnnotationProcessingTaskFactory.java:228) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:221) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:210) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:621) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:604) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:80) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61) ... 60 more Caused by: com.puppycrawl.tools.checkstyle.api.CheckstyleException: EmptyLineSeparator is not allowed as a child in Checker at com.puppycrawl.tools.checkstyle.Checker.setupChild(Checker.java:423) at com.puppycrawl.tools.checkstyle.api.AutomaticBean.configure(AutomaticBean.java:138) at com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask.createChecker(CheckstyleAntTask.java:422) ... 77 more ```
index: 1.0
text_combine:
Improve log output when violation detector creation fails due to invalid config - Example: ``` :app:gnagCheck FAILED FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:gnagCheck'. > Unable to create a Checker: configLocation {/Users/stkent/dev/detroit_labs/apps/herbie-android/app/config/checkstyle.xml}, classpath {null}. * Try: Run with --info or --debug option to get more log output. * Exception is: org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:gnagCheck'. at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:66) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53) at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:203) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:185) at org.gradle.execution.taskgraph.AbstractTaskPlanExecutor$TaskExecutorWorker.processTask(AbstractTaskPlanExecutor.java:66) at org.gradle.execution.taskgraph.AbstractTaskPlanExecutor$TaskExecutorWorker.run(AbstractTaskPlanExecutor.java:50) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.process(DefaultTaskPlanExecutor.java:25) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter.execute(DefaultTaskGraphExecuter.java:110) at org.gradle.execution.SelectedTaskExecutionAction.execute(SelectedTaskExecutionAction.java:37) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:37) at org.gradle.execution.DefaultBuildExecuter.access$000(DefaultBuildExecuter.java:23) at org.gradle.execution.DefaultBuildExecuter$1.proceed(DefaultBuildExecuter.java:43) at org.gradle.execution.DryRunBuildExecutionAction.execute(DryRunBuildExecutionAction.java:32) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:37) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:30) at org.gradle.initialization.DefaultGradleLauncher$4.run(DefaultGradleLauncher.java:153) at org.gradle.internal.Factories$1.create(Factories.java:22) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:53) at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:150) at org.gradle.initialization.DefaultGradleLauncher.access$200(DefaultGradleLauncher.java:32) at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:98) at 
org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:92) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:63) at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:92) at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:83) at org.gradle.launcher.exec.InProcessBuildActionExecuter$DefaultBuildController.run(InProcessBuildActionExecuter.java:99) at org.gradle.tooling.internal.provider.ExecuteBuildActionRunner.run(ExecuteBuildActionRunner.java:28) at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:48) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:30) at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:81) at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:46) at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:51) at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:28) at org.gradle.launcher.cli.RunBuildAction.run(RunBuildAction.java:43) at org.gradle.internal.Actions$RunnableActionAdapter.execute(Actions.java:173) at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:239) at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:212) at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:35) at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:24) at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33) at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22) at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:205) at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:169) at org.gradle.launcher.Main.doAction(Main.java:33) at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45) at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:55) at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:36) at org.gradle.launcher.GradleMain.main(GradleMain.java:23) at org.gradle.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:30) at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:129) at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:61) Caused by: Unable to create a Checker: configLocation {/Users/stkent/dev/detroit_labs/apps/herbie-android/app/config/checkstyle.xml}, classpath {null}. 
at com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask.createChecker(CheckstyleAntTask.java:425) at com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask.realExecute(CheckstyleAntTask.java:320) at com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask.execute(CheckstyleAntTask.java:303) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Task$perform.call(Unknown Source) at com.btkelly.gnag.reporters.CheckstyleViolationDetector.executeReporter(CheckstyleViolationDetector.groovy:53) at com.btkelly.gnag.tasks.GnagCheck.lambda$executeGnagCheck$0(GnagCheck.java:84) at com.btkelly.gnag.tasks.GnagCheck.executeGnagCheck(GnagCheck.java:81) at com.btkelly.gnag.tasks.GnagCheck.taskAction(GnagCheck.java:71) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:75) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.doExecute(AnnotationProcessingTaskFactory.java:228) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:221) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:210) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:621) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:604) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:80) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61) ... 60 more Caused by: com.puppycrawl.tools.checkstyle.api.CheckstyleException: EmptyLineSeparator is not allowed as a child in Checker at com.puppycrawl.tools.checkstyle.Checker.setupChild(Checker.java:423) at com.puppycrawl.tools.checkstyle.api.AutomaticBean.configure(AutomaticBean.java:138) at com.puppycrawl.tools.checkstyle.ant.CheckstyleAntTask.createChecker(CheckstyleAntTask.java:422) ... 77 more ```
non_process
improve log output when violation detector creation fails due to invalid config example app gnagcheck failed failure build failed with an exception what went wrong execution failed for task app gnagcheck unable to create a checker configlocation users stkent dev detroit labs apps herbie android app config checkstyle xml classpath null try run with info or debug option to get more log output exception is org gradle api tasks taskexecutionexception execution failed for task app gnagcheck at org gradle api internal tasks execution executeactionstaskexecuter executeactions executeactionstaskexecuter java at org gradle api internal tasks execution executeactionstaskexecuter execute executeactionstaskexecuter java at org gradle api internal tasks execution postexecutionanalysistaskexecuter execute postexecutionanalysistaskexecuter java at org gradle api internal tasks execution skipuptodatetaskexecuter execute skipuptodatetaskexecuter java at org gradle api internal tasks execution validatingtaskexecuter execute validatingtaskexecuter java at org gradle api internal tasks execution skipemptysourcefilestaskexecuter execute skipemptysourcefilestaskexecuter java at org gradle api internal tasks execution skiptaskwithnoactionsexecuter execute skiptaskwithnoactionsexecuter java at org gradle api internal tasks execution skiponlyiftaskexecuter execute skiponlyiftaskexecuter java at org gradle api internal tasks execution executeatmostoncetaskexecuter execute executeatmostoncetaskexecuter java at org gradle execution taskgraph defaulttaskgraphexecuter eventfiringtaskworker execute defaulttaskgraphexecuter java at org gradle execution taskgraph defaulttaskgraphexecuter eventfiringtaskworker execute defaulttaskgraphexecuter java at org gradle execution taskgraph abstracttaskplanexecutor taskexecutorworker processtask abstracttaskplanexecutor java at org gradle execution taskgraph abstracttaskplanexecutor taskexecutorworker run abstracttaskplanexecutor java at org gradle execution taskgraph defaulttaskplanexecutor process defaulttaskplanexecutor java at org gradle execution taskgraph defaulttaskgraphexecuter execute defaulttaskgraphexecuter java at org gradle execution selectedtaskexecutionaction execute selectedtaskexecutionaction java at org gradle execution defaultbuildexecuter execute defaultbuildexecuter java at org gradle execution defaultbuildexecuter access defaultbuildexecuter java at org gradle execution defaultbuildexecuter proceed defaultbuildexecuter java at org gradle execution dryrunbuildexecutionaction execute dryrunbuildexecutionaction java at org gradle execution defaultbuildexecuter execute defaultbuildexecuter java at org gradle execution defaultbuildexecuter execute defaultbuildexecuter java at org gradle initialization defaultgradlelauncher run defaultgradlelauncher java at org gradle internal factories create factories java at org gradle internal progress defaultbuildoperationexecutor run defaultbuildoperationexecutor java at org gradle internal progress defaultbuildoperationexecutor run defaultbuildoperationexecutor java at org gradle initialization defaultgradlelauncher dobuildstages defaultgradlelauncher java at org gradle initialization defaultgradlelauncher access defaultgradlelauncher java at org gradle initialization defaultgradlelauncher create defaultgradlelauncher java at org gradle initialization defaultgradlelauncher create defaultgradlelauncher java at org gradle internal progress defaultbuildoperationexecutor run defaultbuildoperationexecutor java at org gradle 
internal progress defaultbuildoperationexecutor run defaultbuildoperationexecutor java at org gradle initialization defaultgradlelauncher dobuild defaultgradlelauncher java at org gradle initialization defaultgradlelauncher run defaultgradlelauncher java at org gradle launcher exec inprocessbuildactionexecuter defaultbuildcontroller run inprocessbuildactionexecuter java at org gradle tooling internal provider executebuildactionrunner run executebuildactionrunner java at org gradle launcher exec chainingbuildactionrunner run chainingbuildactionrunner java at org gradle launcher exec inprocessbuildactionexecuter execute inprocessbuildactionexecuter java at org gradle launcher exec inprocessbuildactionexecuter execute inprocessbuildactionexecuter java at org gradle launcher exec continuousbuildactionexecuter execute continuousbuildactionexecuter java at org gradle launcher exec continuousbuildactionexecuter execute continuousbuildactionexecuter java at org gradle launcher exec daemonusagesuggestingbuildactionexecuter execute daemonusagesuggestingbuildactionexecuter java at org gradle launcher exec daemonusagesuggestingbuildactionexecuter execute daemonusagesuggestingbuildactionexecuter java at org gradle launcher cli runbuildaction run runbuildaction java at org gradle internal actions runnableactionadapter execute actions java at org gradle launcher cli commandlineactionfactory parseandbuildaction execute commandlineactionfactory java at org gradle launcher cli commandlineactionfactory parseandbuildaction execute commandlineactionfactory java at org gradle launcher cli javaruntimevalidationaction execute javaruntimevalidationaction java at org gradle launcher cli javaruntimevalidationaction execute javaruntimevalidationaction java at org gradle launcher cli exceptionreportingaction execute exceptionreportingaction java at org gradle launcher cli exceptionreportingaction execute exceptionreportingaction java at org gradle launcher cli commandlineactionfactory withlogging execute commandlineactionfactory java at org gradle launcher cli commandlineactionfactory withlogging execute commandlineactionfactory java at org gradle launcher main doaction main java at org gradle launcher bootstrap entrypoint run entrypoint java at org gradle launcher bootstrap processbootstrap runnoexit processbootstrap java at org gradle launcher bootstrap processbootstrap run processbootstrap java at org gradle launcher gradlemain main gradlemain java at org gradle wrapper bootstrapmainstarter start bootstrapmainstarter java at org gradle wrapper wrapperexecutor execute wrapperexecutor java at org gradle wrapper gradlewrappermain main gradlewrappermain java caused by unable to create a checker configlocation users stkent dev detroit labs apps herbie android app config checkstyle xml classpath null at com puppycrawl tools checkstyle ant checkstyleanttask createchecker checkstyleanttask java at com puppycrawl tools checkstyle ant checkstyleanttask realexecute checkstyleanttask java at com puppycrawl tools checkstyle ant checkstyleanttask execute checkstyleanttask java at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant task perform call unknown source at com btkelly gnag reporters checkstyleviolationdetector executereporter checkstyleviolationdetector groovy at com btkelly gnag tasks gnagcheck lambda executegnagcheck gnagcheck java at com btkelly gnag tasks gnagcheck executegnagcheck gnagcheck java at com btkelly gnag tasks 
gnagcheck taskaction gnagcheck java at org gradle internal reflect javamethod invoke javamethod java at org gradle api internal project taskfactory annotationprocessingtaskfactory standardtaskaction doexecute annotationprocessingtaskfactory java at org gradle api internal project taskfactory annotationprocessingtaskfactory standardtaskaction execute annotationprocessingtaskfactory java at org gradle api internal project taskfactory annotationprocessingtaskfactory standardtaskaction execute annotationprocessingtaskfactory java at org gradle api internal abstracttask taskactionwrapper execute abstracttask java at org gradle api internal abstracttask taskactionwrapper execute abstracttask java at org gradle api internal tasks execution executeactionstaskexecuter executeaction executeactionstaskexecuter java at org gradle api internal tasks execution executeactionstaskexecuter executeactions executeactionstaskexecuter java more caused by com puppycrawl tools checkstyle api checkstyleexception emptylineseparator is not allowed as a child in checker at com puppycrawl tools checkstyle checker setupchild checker java at com puppycrawl tools checkstyle api automaticbean configure automaticbean java at com puppycrawl tools checkstyle ant checkstyleanttask createchecker checkstyleanttask java more
0
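The exception captured in the record above ("EmptyLineSeparator is not allowed as a child in Checker") is a Checkstyle configuration error: `EmptyLineSeparator` is an AST-based check, so it belongs under the `TreeWalker` module rather than directly under `Checker`. A minimal sketch of a valid layout follows; the DOCTYPE and module names come from Checkstyle's standard distribution, while the file as a whole is a hypothetical example, not the project's actual checkstyle.xml.

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
    <!-- TreeWalker parses each source file into an AST; AST-based checks
         such as EmptyLineSeparator must be nested inside it. -->
    <module name="TreeWalker">
        <module name="EmptyLineSeparator"/>
    </module>
</module>
```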
278,163
30,702,219,930
IssuesEvent
2023-07-27 01:12:28
Nivaskumark/kernel_v4.1.15
https://api.github.com/repos/Nivaskumark/kernel_v4.1.15
closed
CVE-2016-3135 (High) detected in linuxlinux-4.6 - autoclosed
Mend: dependency security vulnerability
## CVE-2016-3135 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/x_tables.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/x_tables.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Integer overflow in the xt_alloc_table_info function in net/netfilter/x_tables.c in the Linux kernel through 4.5.2 on 32-bit platforms allows local users to gain privileges or cause a denial of service (heap memory corruption) via an IPT_SO_SET_REPLACE setsockopt call. <p>Publish Date: 2016-04-27 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-3135>CVE-2016-3135</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3135">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3135</a></p> <p>Release Date: 2016-04-27</p> <p>Fix Resolution: v4.6-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-3135 (High) detected in linuxlinux-4.6 - autoclosed - ## CVE-2016-3135 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/x_tables.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/x_tables.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Integer overflow in the xt_alloc_table_info function in net/netfilter/x_tables.c in the Linux kernel through 4.5.2 on 32-bit platforms allows local users to gain privileges or cause a denial of service (heap memory corruption) via an IPT_SO_SET_REPLACE setsockopt call. <p>Publish Date: 2016-04-27 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-3135>CVE-2016-3135</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3135">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3135</a></p> <p>Release Date: 2016-04-27</p> <p>Fix Resolution: v4.6-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files net netfilter x tables c net netfilter x tables c vulnerability details integer overflow in the xt alloc table info function in net netfilter x tables c in the linux kernel through on bit platforms allows local users to gain privileges or cause a denial of service heap memory corruption via an ipt so set replace setsockopt call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
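The bug class behind CVE-2016-3135 is a multiplication that wraps on 32-bit platforms, so the kernel allocates far less memory than the netfilter table it later fills. Below is a small Python sketch of the arithmetic and the usual guard; it illustrates the overflow pattern only and is not the kernel code or the actual patch.

```python
MAX_U32 = 0xFFFFFFFF

def alloc_size_32bit(count: int, entry_size: int) -> int:
    # Simulate C's unsigned 32-bit multiplication by masking the product.
    return (count * entry_size) & MAX_U32

# 0x40000000 entries of 8 bytes wrap to 0: the allocator returns a tiny
# (here, zero-byte) buffer while the caller writes gigabytes into it.
print(alloc_size_32bit(0x40000000, 8))  # -> 0

def checked_alloc_size(count: int, entry_size: int) -> int:
    # Reject any request whose true product cannot fit in 32 bits.
    if entry_size != 0 and count > MAX_U32 // entry_size:
        raise OverflowError("allocation size overflows 32 bits")
    return count * entry_size
```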
14,336
17,367,136,918
IssuesEvent
2021-07-30 08:53:41
didi/mpx
https://api.github.com/repos/didi/mpx
closed
[Bug report] Referencing a main-package component from a mini-program plugin reports "not found"
processing
**Problem description** 1. Trigger condition: referencing a component from the main package inside a WeChat mini-program plugin reports that the file cannot be found 2. Expected behavior: files from the main package can be referenced inside the mini-program plugin 3. Actual behavior: the page reports an error, as shown below: the path in app.json does not match the path after bundling ![image](https://user-images.githubusercontent.com/41629087/126605627-c23b294b-87e1-4b06-a62a-8face7d42a7b.png) **Minimal reproduction demo** GitHub repo: [click here](https://github.com/nianxiongdi/mpx-plugin)
1.0
[Bug report] Referencing a main-package component from a mini-program plugin reports "not found" - **Problem description** 1. Trigger condition: referencing a component from the main package inside a WeChat mini-program plugin reports that the file cannot be found 2. Expected behavior: files from the main package can be referenced inside the mini-program plugin 3. Actual behavior: the page reports an error, as shown below: the path in app.json does not match the path after bundling ![image](https://user-images.githubusercontent.com/41629087/126605627-c23b294b-87e1-4b06-a62a-8face7d42a7b.png) **Minimal reproduction demo** GitHub repo: [click here](https://github.com/nianxiongdi/mpx-plugin)
process
referencing a main package component from a mini program plugin reports not found problem description trigger condition referencing a component from the main package inside a wechat mini program plugin reports that the file cannot be found expected behavior files from the main package can be referenced inside the mini program plugin actual behavior the page reports an error as shown below the path in app json does not match the path after bundling minimal reproduction demo github repo
1
152,870
5,871,407,452
IssuesEvent
2017-05-15 08:38:11
salesagility/SuiteCRM
https://api.github.com/repos/salesagility/SuiteCRM
closed
Project Templates: Project from Project Title - language
bug Fix Proposed Low Priority Resolved: Next Release
#### Issue Punctuation change required in `modules/AM_ProjectTemplates/language/en_us.lang.php` See: https://github.com/salesagility/SuiteCRM/blob/hotfix/modules/AM_ProjectTemplates/language/en_us.lang.php#L146 It would make the sentence exactly the same as in `modules/AM_TaskTemplates/language/en_us.lang.php`, located here: https://github.com/salesagility/SuiteCRM/blob/hotfix/modules/AM_TaskTemplates/language/en_us.lang.php#L84 This will help translators, as it will only require translating one of them! #### Expected Behavior `'LBL_AM_PROJECTTEMPLATES_PROJECT_1_FROM_PROJECT_TITLE' => 'Project Templates: Project from Project Title',` #### Actual Behavior `'LBL_AM_PROJECTTEMPLATES_PROJECT_1_FROM_PROJECT_TITLE' => 'Project Templates Project from Project Title',` #### Your Environment * SuiteCRM Version used: Hotfix
1.0
Project Templates: Project from Project Title - language - #### Issue Punctuation change required in `modules/AM_ProjectTemplates/language/en_us.lang.php` See: https://github.com/salesagility/SuiteCRM/blob/hotfix/modules/AM_ProjectTemplates/language/en_us.lang.php#L146 It would make the sentence exactly the same as in `modules/AM_TaskTemplates/language/en_us.lang.php`, located here: https://github.com/salesagility/SuiteCRM/blob/hotfix/modules/AM_TaskTemplates/language/en_us.lang.php#L84 This will help translators, as it will only require translating one of them! #### Expected Behavior `'LBL_AM_PROJECTTEMPLATES_PROJECT_1_FROM_PROJECT_TITLE' => 'Project Templates: Project from Project Title',` #### Actual Behavior `'LBL_AM_PROJECTTEMPLATES_PROJECT_1_FROM_PROJECT_TITLE' => 'Project Templates Project from Project Title',` #### Your Environment * SuiteCRM Version used: Hotfix
non_process
project templates project from project title language issue punctuation change required in modules am projecttemplates language en us lang php see it would make the sentence exactly the same as in modules am tasktemplates language en us lang php located here this will help translators as it will only require translating one of them expected behavior lbl am projecttemplates project from project title project templates project from project title actual behavior lbl am projecttemplates project from project title project templates project from project title your environment suitecrm version used hotfix
0
6,676
9,792,662,942
IssuesEvent
2019-06-10 17:58:47
googleapis/google-cloud-cpp-spanner
https://api.github.com/repos/googleapis/google-cloud-cpp-spanner
closed
Add a CI build using MemorySanitizer.
type: process
A build with MemorySanitizer will discover problems that neither AddressSanitizer nor UndefinedBehaviorSanitizer cover. But it requires compiling `libc++` from source and I do not know how to do that.
1.0
Add a CI build using MemorySanitizer. - A build with MemorySanitizer will discover problems that neither AddressSanitizer nor UndefinedBehaviorSanitizer cover. But it requires compiling `libc++` from source and I do not know how to do that.
process
add a ci build using memorysanitizer a build with memorysanitizer will discover problems that neither addresssanitizer nor undefinedbehaviorsanitizer cover but it requires compiling libc from source and i do not know how to do that
1
1,908
4,734,069,829
IssuesEvent
2016-10-19 13:16:21
AllenFang/react-bootstrap-table
https://api.github.com/repos/AllenFang/react-bootstrap-table
reopened
Do you know why filters do not work with dynamically generated columns?
inprocess
Do you know why filters do not work with dynamically generated columns? I followed the steps in https://github.com/AllenFang/react-bootstrap-table/issues/450, and the dynamic columns work great. It's only the filters that do not work (for example, the numeric filter). It displays the filter controls, but does not perform the filter operation.
1.0
Do you know why filters do not work with dynamically generated columns? - Do you know why filters do not work with dynamically generated columns? I followed the steps in https://github.com/AllenFang/react-bootstrap-table/issues/450, and the dynamic columns work great. It's only the filters that do not work (for example, the numeric filter). It displays the filter controls, but does not perform the filter operation.
process
do you know why filters do not work with dynamically generated columns do you know why filters do not work with dynamically generated columns i followed the steps in and the dynamic columns work great it s only the filters that do not work for example the numeric filter it displays the filter controls but does not perform the filter operation
1
5,633
8,484,729,530
IssuesEvent
2018-10-26 04:19:25
nodejs/node
https://api.github.com/repos/nodejs/node
closed
spawnSync segfaults when given throwing toString
child_process
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: 6.4.0 - 8.0.0 * **Platform**: * **Subsystem**: <!-- Enter your issue details below this comment. --> spawnSync will segfault if called with an object that defines a throwing `toString`. Here is a snippet using the high-level `child_process` API: ```javascript const spawn = require('child_process').spawnSync; const args = []; const obj = {}; obj.toString = () => { throw 'yo'; // causes ToString on spawn_sync.cc:964 to return empty handle; Set segfaults }; args[0] = obj; spawn('ls', args); ``` It may be safer to call toString in JS land before calling into the binding code. + @mlfbrown for working on this with me.
1.0
spawnSync segfaults when given throwing toString - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: 6.4.0 - 8.0.0 * **Platform**: * **Subsystem**: <!-- Enter your issue details below this comment. --> spawnSync will segfault if called with an object that defines a throwing `toString`. Here is a snippet using the high-level `child_process` API: ```javascript const spawn = require('child_process').spawnSync; const args = []; const obj = {}; obj.toString = () => { throw 'yo'; // causes ToString on spawn_sync.cc:964 to return empty handle; Set segfaults }; args[0] = obj; spawn('ls', args); ``` It may be safer to call toString in JS land before calling into the binding code. + @mlfbrown for working on this with me.
process
spawnsync segfaults when given throwing tostring thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform subsystem spawnsync will segfault if called with an object that defines a throwing tostring here is a snippet using the high level child process api javascript const spawn require child process spawnsync const args const obj obj tostring throw yo causes tostring on spawn sync cc to return empty handle set segfaults args obj spawn ls args it may be safer to call tostring in js land before calling into the binding code mlfbrown for working on this with me
1
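The mitigation proposed in the record above, stringifying arguments in JS land before entering the binding, is a general FFI-boundary pattern: force user-defined coercions to run (and throw) in managed code, so native code never sees an empty handle. Here is a hedged Python sketch of the same idea; the `Evil` class and the `subprocess` stand-in are invented for illustration and are not Node's implementation.

```python
import subprocess

def spawn_sync(command, args):
    # Coerce every argument up front: an argument whose __str__ raises
    # produces an ordinary exception here, instead of failing inside
    # native code that assumes string conversion cannot throw.
    safe_args = [str(a) for a in args]
    return subprocess.run([command, *safe_args], capture_output=True)

class Evil:
    def __str__(self):
        raise RuntimeError("yo")

try:
    spawn_sync("ls", [Evil()])
except RuntimeError as exc:
    print("caught in managed code:", exc)  # no crash in the native layer
```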
2,588
5,345,496,274
IssuesEvent
2017-02-17 17:05:41
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
"use <schema>" query missing in stats_mysql_query_digest
bug CONNECTION POOL QUERY PROCESSOR
I wrote a web interface that creates rules from all queries listed in stats_mysql_query_digest. I've just discovered that all kinds of queries are listed, but not the "use table_name" query - therefore it was missing in my auto-created rules. Since I create a deny-all at the end, it ran into that rule and got denied. I created it manually, and it works fine now. Not sure if this is a bug or on purpose. ### Summary Summarizing what is discussed below: `USE schemaname` should be intercepted and handled by ProxySQL in the same way as an `INIT_DB` command
1.0
"use <schema>" query missing in stats_mysql_query_digest - I wrote a web interface that creates rules from all queries listed in stats_mysql_query_digest. I've just discovered that all kinds of queries are listed, but not the "use table_name" query - therefore it was missing in my auto-created rules. Since I create a deny-all at the end, it ran into that rule and got denied. I created it manually, and it works fine now. Not sure if this is a bug or on purpose. ### Summary Summarizing what is discussed below: `USE schemaname` should be intercepted and handled by ProxySQL in the same way as an `INIT_DB` command
process
use query missing in stats mysql query digest i wrote a web interface that creates rules from all queries listed in stats mysql query digest i ve just discovered that all kinds of queries are listed but not the use table name query therefore it was missing in my auto created rules since i create a deny all at the end it ran into that rule and got denied i created it manually and it works fine now not sure if this is a bug or on purpose summary summarizing what is discussed below use schemaname should be intercepted and handled by proxysql in the same way as an init db command
1
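The summary above calls for intercepting `USE <schema>` so it takes the same path as an `INIT_DB` command and becomes visible to digests and query rules. The following is an illustrative Python sketch of that interception; the regex, helper names, and session model are all invented for the example and are not ProxySQL's actual code.

```python
import re

USE_RE = re.compile(r"^\s*USE\s+`?(\w+)`?\s*;?\s*$", re.IGNORECASE)

def init_db(session, schema):             # stand-in for the INIT_DB path
    session["schema"] = schema
    return f"schema changed to {schema}"

def forward_to_backend(session, query):   # stand-in for normal routing
    return f"forwarded: {query}"

def handle_query(session, query):
    # Route USE statements through the schema-change path so they are
    # recorded in the digest just like a COM_INIT_DB packet would be.
    match = USE_RE.match(query)
    if match:
        return init_db(session, match.group(1))
    return forward_to_backend(session, query)

session = {}
print(handle_query(session, "use mydb"))   # schema changed to mydb
print(handle_query(session, "SELECT 1"))   # forwarded: SELECT 1
```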
1,484
2,514,743,684
IssuesEvent
2015-01-15 14:09:36
olga-jane/prizm
https://api.github.com/repos/olga-jane/prizm
closed
Can the spool length take a negative value?
bug bug - crash/performance/leak Coding Construction HIGH priority Incoming inspection
Scenario: 1. Open "Катушки" (Spools) 2. Click "Редактировать" (Edit) 3. Click "Ok" or "Close" in the error dialog 4. Set the value of spool length to -1 5. Click the "Save" button Result: An unhandled exception of type 'System.NullReferenceException' occurred in Prizm.Data.dll Additional information: Object reference not set to an instance of an object.
1.0
Can the spool length take a negative value? - Scenario: 1. Open "Катушки" (Spools) 2. Click "Редактировать" (Edit) 3. Click "Ok" or "Close" in the error dialog 4. Set the value of spool length to -1 5. Click the "Save" button Result: An unhandled exception of type 'System.NullReferenceException' occurred in Prizm.Data.dll Additional information: Object reference not set to an instance of an object.
non_process
can the spool length take a negative value scenario open катушки spools click редактировать edit click ok or close in the error dialog set the value of spool length to click the save button result an unhandled exception of type system nullreferenceexception occurred in prizm data dll additional information object reference not set to an instance of an object
0
154,603
24,326,961,671
IssuesEvent
2022-09-30 15:34:57
carbon-design-system/carbon-for-ibm-dotcom
https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom
closed
[Figma] Card section carousel
design design kit stretch project
Add the Card section carousel component to Figma kit. Remember we are only building variants for `xlg`, `md`, and `sm`. [Design spec](https://ibm.ent.box.com/folder/126468155543?s=wyj5uboppri7nc3gn4v1f7ft80qc5eir) [Website](https://www.ibm.com/standards/carbon/components/card-section-carousel) **Must haves:** - [ ] Use `Card - basic` in the component. - [ ] Use the carousel control from `Carousel`. - [ ] Use `Link with icon`. ![Screen Shot 2022-06-11 at 12.03.41 PM.png](https://images.zenhubusercontent.com/60b63c5c740eb13b3d18acb6/d0af4407-a4a8-417f-a856-acbc60722f11) **Resources** - C4IBM.com Component checklist is available in the Read Me 👀 page of the Figma library - [Bases demo](https://secure.video.ibm.com/channel/23570833/playlist/641336/video/131117161) from the Figma Fridays (Episode 23) - Reference the [best practices](https://www.figma.com/file/8n9IJtHnICSydODoYl4ueW/Figma-Best-Practices?node-id=52%3A6706) from Figma guild - Info on [auto layout](https://www.figma.com/proto/RQhu2NWQVr1JDaU4mpRuZh/Figma-Exercises?node-id=422%3A2366&scaling=min-zoom&page-id=52%3A6706&starting-point-node-id=422%3A2366)
2.0
[Figma] Card section carousel - Add the Card section carousel component to Figma kit. Remember we are only building variants for `xlg`, `md`, and `sm`. [Design spec](https://ibm.ent.box.com/folder/126468155543?s=wyj5uboppri7nc3gn4v1f7ft80qc5eir) [Website](https://www.ibm.com/standards/carbon/components/card-section-carousel) **Must haves:** - [ ] Use `Card - basic` in the component. - [ ] Use the carousel control from `Carousel`. - [ ] Use `Link with icon`. ![Screen Shot 2022-06-11 at 12.03.41 PM.png](https://images.zenhubusercontent.com/60b63c5c740eb13b3d18acb6/d0af4407-a4a8-417f-a856-acbc60722f11) **Resources** - C4IBM.com Component checklist is available in the Read Me 👀 page of the Figma library - [Bases demo](https://secure.video.ibm.com/channel/23570833/playlist/641336/video/131117161) from the Figma Fridays (Episode 23) - Reference the [best practices](https://www.figma.com/file/8n9IJtHnICSydODoYl4ueW/Figma-Best-Practices?node-id=52%3A6706) from Figma guild - Info on [auto layout](https://www.figma.com/proto/RQhu2NWQVr1JDaU4mpRuZh/Figma-Exercises?node-id=422%3A2366&scaling=min-zoom&page-id=52%3A6706&starting-point-node-id=422%3A2366)
non_process
card section carousel add the card section carousel component to figma kit remember we are only building variants for xlg md and sm must haves use card basic in the component use the carousel control from carousel use link with icon resources com component checklist is available in the read me 👀 page of the figma library from the figma fridays episode reference the from figma guild info on
0
22,506
31,558,942,349
IssuesEvent
2023-09-03 02:06:28
tdwg/hc
https://api.github.com/repos/tdwg/hc
opened
New Term - siteNestingDescription
Term - add normative Process - under public review Class - Event
## New term * Submitter: Humboldt Extension Task Group * Efficacy Justification (why is this term necessary?): Part of a package of terms in support of biological inventory data. * Demand Justification (name at least two organizations that independently need this term): The Humboldt Extension Task Group proposing this term consists of numerous organizations. * Stability Justification (what concerns are there that this might affect existing implementations?): None * Implications for dwciri: namespace (does this change affect a dwciri term version)?: None Proposed attributes of the new term: * Term name (in lowerCamelCase for properties, UpperCamelCase for classes): siteNestingDescription * Term label (English, not normative): Site Nesting Description * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Event * Definition of the term (normative): Textual description of the hierarchical sampling design. * Usage comments (recommendations regarding content, etc., not normative): Site refers to the location at which observations are made or samples/measurements are taken. The site can be at any level of hierarchy. * Examples (not normative): 5 sampling sites of 3-5 plots each * Refines (identifier of the broader term this term refines; normative): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
1.0
New Term - siteNestingDescription - ## New term * Submitter: Humboldt Extension Task Group * Efficacy Justification (why is this term necessary?): Part of a package of terms in support of biological inventory data. * Demand Justification (name at least two organizations that independently need this term): The Humboldt Extension Task Group proposing this term consists of numerous organizations. * Stability Justification (what concerns are there that this might affect existing implementations?): None * Implications for dwciri: namespace (does this change affect a dwciri term version)?: None Proposed attributes of the new term: * Term name (in lowerCamelCase for properties, UpperCamelCase for classes): siteNestingDescription * Term label (English, not normative): Site Nesting Description * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Event * Definition of the term (normative): Textual description of the hierarchical sampling design. * Usage comments (recommendations regarding content, etc., not normative): Site refers to the location at which observations are made or samples/measurements are taken. The site can be at any level of hierarchy. * Examples (not normative): 5 sampling sites of 3-5 plots each * Refines (identifier of the broader term this term refines; normative): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
process
new term sitenestingdescription new term submitter humboldt extension task group efficacy justification why is this term necessary part of a package of terms in support of biological inventory data demand justification name at least two organizations that independently need this term the humboldt extension task group proposing this term consists of numerous organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version none proposed attributes of the new term term name in lowercamelcase for properties uppercamelcase for classes sitenestingdescription term label english not normative site nesting description organized in class e g occurrence event location taxon event definition of the term normative textual description of the hierarchical sampling design usage comments recommendations regarding content etc not normative site refers to the location at which observations are made or samples measurements are taken the site can be at any level of hierarchy examples not normative sampling sites of plots each refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
1
397,831
27,179,153,965
IssuesEvent
2023-02-18 11:56:15
swarnarkadas/ichat_app
https://api.github.com/repos/swarnarkadas/ichat_app
closed
Make User interface attractive
documentation enhancement good first issue JWOC Easy
Upgrade the user interface & make it as attractive as possible. You can use any CSS framework **except Bootstrap**.
1.0
Make User interface attractive - Upgrade the user interface & make it as attractive as possible. You can use any CSS framework **except Bootstrap**.
non_process
make user interface attractive upgrade the user interface make it as attractive as possible you can use any css framework except bootstrap
0
424,561
29,144,656,677
IssuesEvent
2023-05-18 01:01:20
jrsteensen/OpenHornet
https://api.github.com/repos/jrsteensen/OpenHornet
opened
Generate MFG Files: OH4A7A1-1 - ASSY, COMM PANEL
Type: Documentation "Category: MCAD Priority: Normal"
Generate the manufacturing files for OH4A7A1-1 - ASSY, COMM PANEL. __Check off each item in the issue as you complete it.__ ### File generation - [OH Wiki HOWTO Link](https://github.com/jrsteensen/OpenHornet/wiki/HOWTO:-Generating-Fusion360-Manufacturing-Files) - [ ] Generate SVG files (if required.) - [ ] Generate 3MF files (if required.) - [ ] Generate STEP files (if required.) - [ ] Copy the relevant decal PDFs from the art folder to the relevant manufacturing folder (if required.) ### Review your files - [ ] Verify against drawing parts list that all the relevant manufacturing files have been created. - [ ] Open each SVG in your browser and compare against part to ensure it appears the same and its filename is correct. - [ ] Open each 3MF in a slicer of your choice and verify geometry matches F360 model and its filename is correct. - [ ] Open each STEP in a STEP file viewer of your choice and verify geometry matches F360 model and its filename is correct. ### Submit your files - [ ] Create a github PR against the beta 1 branch with the manufacturing files located in the correct location of the release folder. - [ ] Request a review of the PR. #### Why a PR? It gives you credit when I generate the changelog in the release, and (more importantly) adds traceability to the history of the issues.
1.0
Generate MFG Files: OH4A7A1-1 - ASSY, COMM PANEL - Generate the manufacturing files for OH4A7A1-1 - ASSY, COMM PANEL. __Check off each item in the issue as you complete it.__ ### File generation - [OH Wiki HOWTO Link](https://github.com/jrsteensen/OpenHornet/wiki/HOWTO:-Generating-Fusion360-Manufacturing-Files) - [ ] Generate SVG files (if required.) - [ ] Generate 3MF files (if required.) - [ ] Generate STEP files (if required.) - [ ] Copy the relevant decal PDFs from the art folder to the relevant manufacturing folder (if required.) ### Review your files - [ ] Verify against drawing parts list that all the relevant manufacturing files have been created. - [ ] Open each SVG in your browser and compare against part to ensure it appears the same and its filename is correct. - [ ] Open each 3MF in a slicer of your choice and verify geometry matches F360 model and its filename is correct. - [ ] Open each STEP in a STEP file viewer of your choice and verify geometry matches F360 model and its filename is correct. ### Submit your files - [ ] Create a github PR against the beta 1 branch with the manufacturing files located in the correct location of the release folder. - [ ] Request a review of the PR. #### Why a PR? It gives you credit when I generate the changelog in the release, and (more importantly) adds traceability to the history of the issues.
non_process
generate mfg files assy comm panel generate the manufacturing files for assy comm panel check off each item in the issue as you complete it file generation generate svg files if required generate files if required generate step files if required copy the relevant decal pdfs from the art folder to the relevant manufacturing folder if required review your files verify against drawing parts list that all the relevant manufacturing files have been created open each svg in your browser and compare against part to ensure it appears the same and its filename is correct open each in a slicer of your choice and verify geometry matches model and its filename is correct open each step in a step file viewer of your choice and verify geometry matches model and its filename is correct submit your files create a github pr against the beta branch with the manufacturing files located in the correct location of the release folder request a review of the pr why a pr it gives you credit when i generate the changelog in the release and more importantly adds traceability to the history of the issues
0
81,268
15,610,851,707
IssuesEvent
2021-03-19 13:40:50
AlexRogalskiy/stylegrams
https://api.github.com/repos/AlexRogalskiy/stylegrams
opened
CVE-2021-27290 (High) detected in ssri-6.0.1.tgz
security vulnerability
## CVE-2021-27290 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ssri-6.0.1.tgz</b></p></summary> <p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p> <p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p> <p>Path to dependency file: stylegrams/package.json</p> <p>Path to vulnerable library: stylegrams/node_modules/npm/node_modules/ssri/package.json</p> <p> Dependency Hierarchy: - npm-7.0.10.tgz (Root Library) - npm-6.14.11.tgz - :x: **ssri-6.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/stylegrams/commit/5b172fd0ddd41ea3261c48768452d3e7b5ceff9a">5b172fd0ddd41ea3261c48768452d3e7b5ceff9a</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option. <p>Publish Date: 2021-03-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: v8.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-27290 (High) detected in ssri-6.0.1.tgz - ## CVE-2021-27290 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ssri-6.0.1.tgz</b></p></summary> <p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p> <p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p> <p>Path to dependency file: stylegrams/package.json</p> <p>Path to vulnerable library: stylegrams/node_modules/npm/node_modules/ssri/package.json</p> <p> Dependency Hierarchy: - npm-7.0.10.tgz (Root Library) - npm-6.14.11.tgz - :x: **ssri-6.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/stylegrams/commit/5b172fd0ddd41ea3261c48768452d3e7b5ceff9a">5b172fd0ddd41ea3261c48768452d3e7b5ceff9a</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option. <p>Publish Date: 2021-03-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: v8.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in ssri tgz cve high severity vulnerability vulnerable library ssri tgz standard subresource integrity library parses serializes generates and verifies integrity metadata according to the sri spec library home page a href path to dependency file stylegrams package json path to vulnerable library stylegrams node modules npm node modules ssri package json dependency hierarchy npm tgz root library npm tgz x ssri tgz vulnerable library found in head commit a href vulnerability details ssri fixed in processes sris using a regular expression which is vulnerable to a denial of service malicious sris could take an extremely long time to process leading to denial of service this issue only affects consumers using the strict option publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
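CVE-2021-27290 is a regular-expression denial of service: a pattern with nested quantifiers backtracks exponentially on crafted non-matching input. The toy pattern below demonstrates the failure mode only; it is not ssri's actual regex.

```python
import re
import time

pattern = re.compile(r"^(a+)+$")  # nested quantifiers: the classic ReDoS shape

for n in (18, 20, 22, 24):
    s = "a" * n + "!"             # almost matches, forcing full backtracking
    start = time.perf_counter()
    pattern.match(s)
    # Runtime roughly doubles with each extra character.
    print(n, f"{time.perf_counter() - start:.3f}s")
```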
104,880
9,011,965,830
IssuesEvent
2019-02-05 15:51:10
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
opened
Failing test in master-blocking: Dynamic PV/Inline-volume subPath should support creating multiple subpath from same volumes
kind/failing-test
<!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs --> **Which jobs are failing**: - https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-slow **Which test(s) are failing**: `subPath should support creating multiple subpath from same volumes`, for both In-tree and CSI volumes, and multiple drivers. **Since when has it been failing**: 2019-02-04, [k/k:1e78fec9b](https://github.com/kubernetes/kubernetes/commit/1e78fec9b) **Testgrid link**: - [gce-cos-master-slow: Run 23818](https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-slow/23818) **Reason for failure**: It seems that pods are not initializing within the expected amount of time in these tests: ``` Expected error: <*errors.errorString | 0xc001b40fa0>: { s: "expected pod <...> success: pod <...> failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:False <...>} not to have occurred ``` **Anything else we need to know**: - diff since last successful test: [f0beaf46d...1e78fec9b](https://github.com/kubernetes/kubernetes/compare/f0beaf46d...1e78fec9b) /kind failing-test /sig storage /priority important-soon cc @smourapina @alejandrox1 @mortent @kacole2
1.0
Failing test in master-blocking: Dynamic PV/Inline-volume subPath should support creating multiple subpath from same volumes - <!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs --> **Which jobs are failing**: - https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-slow **Which test(s) are failing**: `subPath should support creating multiple subpath from same volumes`, for both In-tree and CSI volumes, and multiple drivers. **Since when has it been failing**: 2019-02-04, [k/k:1e78fec9b](https://github.com/kubernetes/kubernetes/commit/1e78fec9b) **Testgrid link**: - [gce-cos-master-slow: Run 23818](https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-slow/23818) **Reason for failure**: It seems that pods are not initializing within the expected amount of time in these tests: ``` Expected error: <*errors.errorString | 0xc001b40fa0>: { s: "expected pod <...> success: pod <...> failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:False <...>} not to have occurred ``` **Anything else we need to know**: - diff since last successful test: [f0beaf46d...1e78fec9b](https://github.com/kubernetes/kubernetes/compare/f0beaf46d...1e78fec9b) /kind failing-test /sig storage /priority important-soon cc @smourapina @alejandrox1 @mortent @kacole2
non_process
failing test in master blocking dynamic pv inline volume subpath should support creating multiple subpath from same volumes which jobs are failing which test s are failing subpath should support creating multiple subpath from same volumes for both in tree and csi volumes and multiple drivers since when has it been failing testgrid link reason for failure it seems that pods are not initializing within the expected amount of time in these tests expected error s expected pod success pod failed with status phase failed conditions type initialized status false not to have occurred anything else we need to know diff since last successful test kind failing test sig storage priority important soon cc smourapina mortent
0
19,168
25,268,342,022
IssuesEvent
2022-11-16 07:15:37
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Test CockroachDB 22.2 in prisma/prisma and prisma/prisma-engines
process/candidate topic: internal kind/tech team/schema team/client
Prompted by a notification about SCRAM authentication becoming the default (https://prisma-company.slack.com/files/USLACKBOT/F04AZJYMAHK/scram_password_authentication_is_now_the_default_in_cockroachdb).
1.0
Test CockroachDB 22.2 in prisma/prisma and prisma/prisma-engines - Prompted by a notification about SCRAM authentication becoming the default (https://prisma-company.slack.com/files/USLACKBOT/F04AZJYMAHK/scram_password_authentication_is_now_the_default_in_cockroachdb).
process
test cockroachdb in prisma prisma and prisma prisma engines prompted by a notification about scram authentication becoming the default
1
6,513
9,599,015,306
IssuesEvent
2019-05-10 04:29:23
furukawa-laboratory/workout_report_2019
https://api.github.com/repos/furukawa-laboratory/workout_report_2019
closed
[Beginner / Regular] Learn basic preprocessing with sklearn
preprocessing training
## Learn basic preprocessing with sklearn Here, we follow the Data Preprocessing, Analysis & Visualization – Python Machine Learning site to learn the basic usage of preprocessing. Specifically, we transcribe and implement basic rescaling, normalization, and similar operations on data using Python's sklearn library, and confirm that they work. We also visualize the datasets bundled with sklearn, such as iris, and data from the UCI repository using PCA. ## Period to work on this 04/23~04/25 (planned) ## Skills to acquire - Implementing basic Python code using the sklearn and numpy libraries - Implementations using sklearn's datasets - Data processing (scaling, normalization, etc.) - Data analysis ## Action list - [x] Read the Data Preprocessing, Analysis & Visualization – Python Machine Learning tutorial - [x] Transcribe the program and confirm it runs - [x] Implement the sklearn datasets and PCA - [x] Submit a mini report
1.0
[Beginner / Regular] Learn basic preprocessing with sklearn - ## Learn basic preprocessing with sklearn Here, we follow the Data Preprocessing, Analysis & Visualization – Python Machine Learning site to learn the basic usage of preprocessing. Specifically, we transcribe and implement basic rescaling, normalization, and similar operations on data using Python's sklearn library, and confirm that they work. We also visualize the datasets bundled with sklearn, such as iris, and data from the UCI repository using PCA. ## Period to work on this 04/23~04/25 (planned) ## Skills to acquire - Implementing basic Python code using the sklearn and numpy libraries - Implementations using sklearn's datasets - Data processing (scaling, normalization, etc.) - Data analysis ## Action list - [x] Read the Data Preprocessing, Analysis & Visualization – Python Machine Learning tutorial - [x] Transcribe the program and confirm it runs - [x] Implement the sklearn datasets and PCA - [x] Submit a mini report
process
learn basic preprocessing with sklearn learn basic preprocessing with sklearn here we follow the data preprocessing analysis visualization – python machine learning site to learn the basic usage of preprocessing specifically we transcribe and implement basic rescaling normalization and similar operations on data using python s sklearn library and confirm that they work we also visualize the datasets bundled with sklearn such as iris and data from the uci repository using pca period to work on this planned skills to acquire implementing basic python code using the sklearn and numpy libraries implementations using sklearn s datasets data processing scaling normalization etc data analysis action list read the data preprocessing analysis visualization – python machine learning tutorial transcribe the program and confirm it runs implement the sklearn datasets and pca submit a mini report
1
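As an illustration of the pipeline this entry describes, here is a minimal sklearn sketch: rescale and standardize the bundled iris data, then project it to two dimensions with PCA. It follows the tutorial's steps in outline only and is not the author's submitted code.

```python
# Minimal sketch of the preprocessing steps described above: rescale
# and standardize the iris data, then project it to 2D with PCA.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_iris(return_X_y=True)

rescaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)  # rescaling to [0, 1]
standardized = StandardScaler().fit_transform(X)                # zero mean, unit variance

coords = PCA(n_components=2).fit_transform(standardized)        # 2D projection for plotting
print(coords[:3])
```

Standardizing before PCA matters because PCA is driven by variance and is therefore sensitive to feature scale.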
4,314
7,203,329,737
IssuesEvent
2018-02-06 08:49:19
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
[FEATURE][processing] Extract by attribute can extract for null/notnull values
Automatic new feature Processing
Original commit: https://github.com/qgis/QGIS/commit/24ffa15ecf1aa20ef389fad1f0aaf6f235b712da by nyalldawson Adds support for filtering where an attribute value is null or not null
1.0
[FEATURE][processing] Extract by attribute can extract for null/notnull values - Original commit: https://github.com/qgis/QGIS/commit/24ffa15ecf1aa20ef389fad1f0aaf6f235b712da by nyalldawson Adds support for filtering where an attribute value is null or not null
process
extract by attribute can extract for null notnull values original commit by nyalldawson adds support for filtering where an attribute value is null or not null
1
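As a rough sketch of how the new null/not-null operators could be driven from the QGIS Python console, here is a hypothetical `processing.run` call. None of the identifiers below are taken from the commit: the algorithm id, the parameter names, and especially the OPERATOR enum index assumed for "is null" should be verified against your QGIS build.

```python
# Hypothetical PyQGIS call exercising the new null test. Verify the
# identifiers with processing.algorithmHelp('qgis:extractbyattribute').
import processing

result = processing.run('qgis:extractbyattribute', {
    'INPUT': '/path/to/layer.shp',   # hypothetical input layer
    'FIELD': 'population',           # hypothetical attribute
    'OPERATOR': 8,                   # assumed index of the "is null" operator
    'VALUE': '',                     # ignored for null/not-null tests
    'OUTPUT': 'memory:null_rows',
})
print(result['OUTPUT'])
```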
11,248
14,016,238,384
IssuesEvent
2020-10-29 14:17:49
googleapis/java-bigtable-hbase
https://api.github.com/repos/googleapis/java-bigtable-hbase
closed
chore: Tasks to be performed before next release
api: bigtable type: process
As per #2467, we have marked the snapshot operation as unsupported with this library. We need to make sure of the following before the next release cut: - [ ] add backup support - [ ] add a breaking change note in the release notes
1.0
chore: Tasks to be performed before next release - As per #2467, we have marked the snapshot operation as unsupported with this library. We need to make sure of the following before the next release cut: - [ ] add backup support - [ ] add a breaking change note in the release notes
process
chore tasks to be performed before next release as per we have marked the snapshot operation as unsupported with this library we need to make sure of the following before the next release cut add backup support add a breaking change note in the release notes
1
306,747
26,492,476,958
IssuesEvent
2023-01-18 00:35:35
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix jax_numpy_statistical.test_jax_numpy_var
JAX Frontend Sub Task Failing Test
| | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744286919" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744297694" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744279327" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744290907" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> <details> <summary>FAILED ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_statistical.py::test_jax_numpy_var[cpu-ivy.functional.backends.numpy-False-False]</summary> 2023-01-17T17:36:30.4160505Z E AssertionError: the return with a TensorFlow backend produced data type of float64, while the return with a numpy backend returned a data type of float32. 2023-01-17T17:36:30.4160989Z E Falsifying example: test_jax_numpy_var( 2023-01-17T17:36:30.4161462Z E dtype_x_axis=(['float16'], [array([-1., -1.], dtype=float16)], 0, 0), 2023-01-17T17:36:30.4161829Z E dtype=['float64'], 2023-01-17T17:36:30.4162114Z E where=[array([False, False])], 2023-01-17T17:36:30.4162384Z E keepdims=False, 2023-01-17T17:36:30.4162789Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. , 2023-01-17T17:36:30.4163311Z E fn_tree='ivy.functional.frontends.jax.numpy.var', 2023-01-17T17:36:30.4163730Z E on_device='cpu', 2023-01-17T17:36:30.4164009Z E frontend='jax', 2023-01-17T17:36:30.4164239Z E ) 2023-01-17T17:36:30.4164436Z E 2023-01-17T17:36:30.4165031Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkAAMoBaaR2QgeAwMAAMkABw==') as a decorator on your test case </details> (the identical FAILED block is reported for each of the four backend runs)
1.0
Fix jax_numpy_statistical.test_jax_numpy_var - | | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744286919" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744297694" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744279327" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744290907" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> <details> <summary>FAILED ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_statistical.py::test_jax_numpy_var[cpu-ivy.functional.backends.numpy-False-False]</summary> 2023-01-17T17:36:30.4160505Z E AssertionError: the return with a TensorFlow backend produced data type of float64, while the return with a numpy backend returned a data type of float32. 2023-01-17T17:36:30.4160989Z E Falsifying example: test_jax_numpy_var( 2023-01-17T17:36:30.4161462Z E dtype_x_axis=(['float16'], [array([-1., -1.], dtype=float16)], 0, 0), 2023-01-17T17:36:30.4161829Z E dtype=['float64'], 2023-01-17T17:36:30.4162114Z E where=[array([False, False])], 2023-01-17T17:36:30.4162384Z E keepdims=False, 2023-01-17T17:36:30.4162789Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. , 2023-01-17T17:36:30.4163311Z E fn_tree='ivy.functional.frontends.jax.numpy.var', 2023-01-17T17:36:30.4163730Z E on_device='cpu', 2023-01-17T17:36:30.4164009Z E frontend='jax', 2023-01-17T17:36:30.4164239Z E ) 2023-01-17T17:36:30.4164436Z E 2023-01-17T17:36:30.4165031Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkAAMoBaaR2QgeAwMAAMkABw==') as a decorator on your test case </details> (the identical FAILED block is reported for each of the four backend runs)
non_process
fix jax numpy statistical test jax numpy var tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test jax test jax numpy statistical py test jax numpy var e assertionerror the return with a tensorflow backend produced data type of while the return with a numpy backend returned a data type of e falsifying example test jax numpy var e dtype x axis dtype e dtype e where e keepdims false e test flags num positional args with out false inplace false native arrays as variable e fn tree ivy functional frontends jax numpy var e on device cpu e frontend jax e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case the identical failed block is reported for each of the four backend runs
0
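The Falsifying example can be restated with plain NumPy to isolate the dtype question behind the assertion; this sketch shows only the promotion behaviour and is not the ivy test itself. With an all-False `where` mask the value is nan, but NumPy still honours the requested float64 accumulator dtype, matching what the TensorFlow run returned and the numpy backend did not.

```python
# Plain-NumPy restatement of the failing inputs: float16 data, an
# explicit float64 dtype, and an all-False mask. The value is nan
# (everything is masked out, NumPy warns), but the dtype stays float64.
import numpy as np

x = np.array([-1.0, -1.0], dtype=np.float16)
mask = np.array([False, False])

v = np.var(x, axis=0, dtype=np.float64, where=mask)
print(v.dtype, v)  # float64 nan
```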
19,609
25,961,536,339
IssuesEvent
2022-12-18 23:49:59
streamnative/pulsar-spark
https://api.github.com/repos/streamnative/pulsar-spark
closed
[BUG] (Spark 2.4.5) Pulsar receiver requires DataFrame creation before readStream/writeStream methods can be used
type/bug compute/data-processing
**Describe the bug** The spark-submit of a spark job written with the connector fails if a DataFrame is not created prior to calling readStream and writeStream. **To Reproduce** Steps to reproduce the behavior: 1. Write a pyspark program that streams data from pulsar using `readStream` and writes somewhere using `writeStream`, see example code below. 2. Attempt to spark-submit using `./bin/spark-submit --master "local[*]" --packages io.streamnative.connectors:pulsar-spark-connector_${SCALA_VERSION}:${SPARK_VERSION} --repositories https://dl.bintray.com/streamnative/maven <your-script>` 3. If a `createDataFrame` using the SparkSession is not executed, the job will inevitably fail with error ``` py4j.protocol.Py4JJavaError: An error occurred while calling o34.load. : java.util.NoSuchElementException: None.get at scala.None$.get(Option.scala:347) at scala.None$.get(Option.scala:345) at org.apache.spark.sql.pulsar.PulsarProvider$.org$apache$spark$sql$pulsar$PulsarProvider$$jsonOptions(PulsarProvider.scala:603) at org.apache.spark.sql.pulsar.PulsarProvider.createMicroBatchReader(PulsarProvider.scala:154) at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:182) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748) ``` **Expected behavior** The spark job does NOT need a DataFrame or other to be instantiated in order to run. 
**Example failing python code** ``` import sys from pyspark.sql import SparkSession if __name__ == "__main__": if len(sys.argv) != 4: print(""" Usage: TestPyspark.py <service_url> <admin_url> <topics> """, file=sys.stderr) sys.exit(-1) serviceUrl = sys.argv[1] adminUrl = sys.argv[2] topics = sys.argv[3] spark = SparkSession\ .builder\ .appName("StructuredPulsarWordCount")\ .getOrCreate() # Create DataSet representing the stream of input lines from pulsar lines = spark\ .readStream\ .format("pulsar")\ .option("service.url", serviceUrl)\ .option("admin.url", adminUrl)\ .option("topics", topics)\ .load()\ .selectExpr("CAST(value AS STRING)") query = lines\ .writeStream\ .outputMode('append')\ .format('console')\ .start() query.awaitTermination() ``` **Example working python code** ``` import sys from pyspark.sql import SparkSession if __name__ == "__main__": if len(sys.argv) != 4: print(""" Usage: TestPyspark.py <service_url> <admin_url> <topics> """, file=sys.stderr) sys.exit(-1) serviceUrl = sys.argv[1] adminUrl = sys.argv[2] topics = sys.argv[3] spark = SparkSession\ .builder\ .appName("StructuredPulsarWordCount")\ .getOrCreate() dt = spark.createDataFrame([(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0), (5, 12.0)], ("id", "v")) # Create DataSet representing the stream of input lines from pulsar lines = spark\ .readStream\ .format("pulsar")\ .option("service.url", serviceUrl)\ .option("admin.url", adminUrl)\ .option("topics", topics)\ .load()\ .selectExpr("CAST(value AS STRING)") query = lines\ .writeStream\ .outputMode('append')\ .format('console')\ .start() query.awaitTermination() ``` **Additional context** Using Spark 2.4.5 Using Python 3.7.3
1.0
[BUG] (Spark 2.4.5) Pulsar receiver requires DataFrame creation before readStream/writeStream methods can be used - **Describe the bug** The spark-submit of a spark job written with the connector fails if a DataFrame is not created prior to calling readStream and writeStream. **To Reproduce** Steps to reproduce the behavior: 1. Write a pyspark program that streams data from pulsar using `readStream` and writes somewhere using `writeStream`, see example code below. 2. Attempt to spark-submit using `./bin/spark-submit --master "local[*]" --packages io.streamnative.connectors:pulsar-spark-connector_${SCALA_VERSION}:${SPARK_VERSION} --repositories https://dl.bintray.com/streamnative/maven <your-script>` 3. If a `createDataFrame` using the SparkSession is not executed, the job will inevitably fail with error ``` py4j.protocol.Py4JJavaError: An error occurred while calling o34.load. : java.util.NoSuchElementException: None.get at scala.None$.get(Option.scala:347) at scala.None$.get(Option.scala:345) at org.apache.spark.sql.pulsar.PulsarProvider$.org$apache$spark$sql$pulsar$PulsarProvider$$jsonOptions(PulsarProvider.scala:603) at org.apache.spark.sql.pulsar.PulsarProvider.createMicroBatchReader(PulsarProvider.scala:154) at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:182) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748) ``` **Expected behavior** The spark job does NOT need a DataFrame or other to be instantiated in order to run. 
**Example failing python code** ``` import sys from pyspark.sql import SparkSession if __name__ == "__main__": if len(sys.argv) != 4: print(""" Usage: TestPyspark.py <service_url> <admin_url> <topics> """, file=sys.stderr) sys.exit(-1) serviceUrl = sys.argv[1] adminUrl = sys.argv[2] topics = sys.argv[3] spark = SparkSession\ .builder\ .appName("StructuredPulsarWordCount")\ .getOrCreate() # Create DataSet representing the stream of input lines from pulsar lines = spark\ .readStream\ .format("pulsar")\ .option("service.url", serviceUrl)\ .option("admin.url", adminUrl)\ .option("topics", topics)\ .load()\ .selectExpr("CAST(value AS STRING)") query = lines\ .writeStream\ .outputMode('append')\ .format('console')\ .start() query.awaitTermination() ``` **Example working python code** ``` import sys from pyspark.sql import SparkSession if __name__ == "__main__": if len(sys.argv) != 4: print(""" Usage: TestPyspark.py <service_url> <admin_url> <topics> """, file=sys.stderr) sys.exit(-1) serviceUrl = sys.argv[1] adminUrl = sys.argv[2] topics = sys.argv[3] spark = SparkSession\ .builder\ .appName("StructuredPulsarWordCount")\ .getOrCreate() dt = spark.createDataFrame([(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0), (5, 12.0)], ("id", "v")) # Create DataSet representing the stream of input lines from pulsar lines = spark\ .readStream\ .format("pulsar")\ .option("service.url", serviceUrl)\ .option("admin.url", adminUrl)\ .option("topics", topics)\ .load()\ .selectExpr("CAST(value AS STRING)") query = lines\ .writeStream\ .outputMode('append')\ .format('console')\ .start() query.awaitTermination() ``` **Additional context** Using Spark 2.4.5 Using Python 3.7.3
process
spark pulsar receiver requires dataframe creation before readstream writestream methods can be used describe the bug the spark submit of a spark job written with the connector fails if a dataframe is not created prior to calling readstream and writestream to reproduce steps to reproduce the behavior write a pyspark program that streams data from pulsar using readstream and writes somewhere using writestream see example code below attempt to spark submit using bin spark submit master local packages io streamnative connectors pulsar spark connector scala version spark version repositories if a createdataframe using the sparksession is not executed the job will inevitably fail with error protocol an error occurred while calling load java util nosuchelementexception none get at scala none get option scala at scala none get option scala at org apache spark sql pulsar pulsarprovider org apache spark sql pulsar pulsarprovider jsonoptions pulsarprovider scala at org apache spark sql pulsar pulsarprovider createmicrobatchreader pulsarprovider scala at org apache spark sql streaming datastreamreader load datastreamreader scala at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at reflection methodinvoker invoke methodinvoker java at reflection reflectionengine invoke reflectionengine java at gateway invoke gateway java at commands abstractcommand invokemethod abstractcommand java at commands callcommand execute callcommand java at gatewayconnection run gatewayconnection java at java lang thread run thread java expected behavior the spark job does not need a dataframe or other to be instantiated in order to run example failing python code import sys from pyspark sql import sparksession if name main if len sys argv print usage testpyspark py file sys stderr sys exit serviceurl sys argv adminurl sys argv topics sys argv spark sparksession builder appname structuredpulsarwordcount getorcreate create dataset representing the stream of input lines from pulsar lines spark readstream format pulsar option service url serviceurl option admin url adminurl option topics topics load selectexpr cast value as string query lines writestream outputmode append format console start query awaittermination example working python code import sys from pyspark sql import sparksession if name main if len sys argv print usage testpyspark py file sys stderr sys exit serviceurl sys argv adminurl sys argv topics sys argv spark sparksession builder appname structuredpulsarwordcount getorcreate dt spark createdataframe id v create dataset representing the stream of input lines from pulsar lines spark readstream format pulsar option service url serviceurl option admin url adminurl option topics topics load selectexpr cast value as string query lines writestream outputmode append format console start query awaittermination additional context using spark using python
1
307,857
9,423,229,635
IssuesEvent
2019-04-11 11:19:52
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.google.com - see bug description
browser-firefox priority-critical
<!-- @browser: Firefox 67.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:67.0) Gecko/20100101 Firefox/67.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fclient%3Dfirefox-b-d%26q%3Dsacda%2Bmeaning&q=EgQpTky-GOirnOUFIhkA8aeDSwz-tkbpCUIfW5uZEOblYgQa0gthMgFy **Browser / Version**: Firefox 67.0 **Operating System**: Windows 7 **Tested Another Browser**: No **Problem type**: Something else **Description**: it keeps asking me if i am a robot **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/4/05827298-2bc7-4838-ad91-ab9a87c7f3db-thumb.jpeg)](https://webcompat.com/uploads/2019/4/05827298-2bc7-4838-ad91-ab9a87c7f3db.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190331141835</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "An iframe which has both allow-scripts and allow-same-origin for its sandbox attribute can remove its sandboxing." {file: "https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fclient%3Dfirefox-b-d%26q%3Dsacda%2Bmeaning&q=EgQpTky-GOirnOUFIhkA8aeDSwz-tkbpCUIfW5uZEOblYgQa0gthMgFy" line: 0}]', u'[JavaScript Warning: "An iframe which has both allow-scripts and allow-same-origin for its sandbox attribute can remove its sandboxing." {file: "https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fclient%3Dfirefox-b-d%26q%3Dsacda%2Bmeaning&q=EgQpTky-GOirnOUFIhkA8aeDSwz-tkbpCUIfW5uZEOblYgQa0gthMgFy" line: 0}]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.google.com - see bug description - <!-- @browser: Firefox 67.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:67.0) Gecko/20100101 Firefox/67.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fclient%3Dfirefox-b-d%26q%3Dsacda%2Bmeaning&q=EgQpTky-GOirnOUFIhkA8aeDSwz-tkbpCUIfW5uZEOblYgQa0gthMgFy **Browser / Version**: Firefox 67.0 **Operating System**: Windows 7 **Tested Another Browser**: No **Problem type**: Something else **Description**: it keeps asking me if i am a robot **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/4/05827298-2bc7-4838-ad91-ab9a87c7f3db-thumb.jpeg)](https://webcompat.com/uploads/2019/4/05827298-2bc7-4838-ad91-ab9a87c7f3db.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190331141835</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "An iframe which has both allow-scripts and allow-same-origin for its sandbox attribute can remove its sandboxing." {file: "https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fclient%3Dfirefox-b-d%26q%3Dsacda%2Bmeaning&q=EgQpTky-GOirnOUFIhkA8aeDSwz-tkbpCUIfW5uZEOblYgQa0gthMgFy" line: 0}]', u'[JavaScript Warning: "An iframe which has both allow-scripts and allow-same-origin for its sandbox attribute can remove its sandboxing." {file: "https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fclient%3Dfirefox-b-d%26q%3Dsacda%2Bmeaning&q=EgQpTky-GOirnOUFIhkA8aeDSwz-tkbpCUIfW5uZEOblYgQa0gthMgFy" line: 0}]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
see bug description url browser version firefox operating system windows tested another browser no problem type something else description it keeps asking me if i am a robot steps to reproduce browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages u from with ❤️
0
12,822
9,793,521,658
IssuesEvent
2019-06-10 20:10:27
terraform-providers/terraform-provider-aws
https://api.github.com/repos/terraform-providers/terraform-provider-aws
closed
Allow tags nested in aws_redshift_parameter_group resource
enhancement service/redshift
_This issue was originally opened by @ntkawasaki as hashicorp/terraform#21620. It was migrated here as a result of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._ <hr> ### Current Terraform Version ``` Terraform v0.12.0 ``` ### Use-case <!--- In order to properly evaluate a feature request, it is necessary to understand the use-cases for it. Please describe below the _end goal_ you are trying to achieve that has led you to request this feature. Please keep this section focused on the problem and not on the suggested solution. We'll get to that in a moment, below! --> Just being able to tag this resource in AWS like you can for other resources. Here's the doc: https://www.terraform.io/docs/providers/aws/r/redshift_parameter_group.html Here's what happens when you try to use tags inside `aws_redshift_parameter_group`: <img width="731" alt="Screen Shot 2019-06-05 at 2 18 50 PM" src="https://user-images.githubusercontent.com/28987971/58991507-43494000-879d-11e9-9699-e41aa7dcea13.png">
1.0
Allow tags nested in aws_redshift_parameter_group resource - _This issue was originally opened by @ntkawasaki as hashicorp/terraform#21620. It was migrated here as a result of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._ <hr> ### Current Terraform Version ``` Terraform v0.12.0 ``` ### Use-case <!--- In order to properly evaluate a feature request, it is necessary to understand the use-cases for it. Please describe below the _end goal_ you are trying to achieve that has led you to request this feature. Please keep this section focused on the problem and not on the suggested solution. We'll get to that in a moment, below! --> Just being able to tag this resource in AWS like you can for other resources. Here's the doc: https://www.terraform.io/docs/providers/aws/r/redshift_parameter_group.html Here's what happens when you try to use tags inside `aws_redshift_parameter_group`: <img width="731" alt="Screen Shot 2019-06-05 at 2 18 50 PM" src="https://user-images.githubusercontent.com/28987971/58991507-43494000-879d-11e9-9699-e41aa7dcea13.png">
non_process
allow tags nested in aws redshift parameter group resource this issue was originally opened by ntkawasaki as hashicorp terraform it was migrated here as a result of the the original body of the issue is below current terraform version terraform use case in order to properly evaluate a feature request it is necessary to understand the use cases for it please describe below the end goal you are trying to achieve that has led you to request this feature please keep this section focused on the problem and not on the suggested solution we ll get to that in a moment below just being able to tag this resource in aws like you can for other resources here s the doc here s what happens when you try to use tags inside aws redshift parameter group img width alt screen shot at pm src
0
622,263
19,619,257,922
IssuesEvent
2022-01-07 02:45:51
SystemsGenetics/GEMmaker
https://api.github.com/repos/SystemsGenetics/GEMmaker
opened
dev branch: Kallisto only Workflow that can process paired files
bug needs fix high priority
This issue pertains to the dev branch, which currently contains the new dsl2 branch. Only Kallisto can process paired-end reads. Demonstration: 1) Clone a copy of GEMmaker to get the demo directory. Switch to the dev branch ``` git clone git@github.com:SystemsGenetics/GEMmaker.git cd GEMmaker git checkout dev ``` 2) Demonstrate that Hisat2 works fine with non-paired SRAs. We will use the first arabidopsis example from [How to Launch GEMmaker](https://gemmaker.readthedocs.io/en/dev/execution.html#how-to-launch-gemmaker) ``` # Put non-paired SRA in SRA.txt echo SRR1058270 > SRAs.txt # Run Hisat2 nextflow run main.nf -profile singularity \ --pipeline hisat2 \ --hisat2_base_name CORG \ --hisat2_index_dir assets/demo/references/CORG.genome.Hisat2.indexed/ \ --hisat2_gtf_file assets/demo/references/CORG.transcripts.gtf \ --sras SRAs.txt ``` *Result is that it works fine and will complete* 3) Now we try the same command with a paired-end file. We will use the file from the [CORG example](https://github.com/SystemsGenetics/GEMmaker/blob/master/assets/demo/SRA_IDs.txt) ``` echo SRR649944 > SRAs.txt # Run Hisat2 nextflow run main.nf -profile singularity \ --pipeline hisat2 \ --hisat2_base_name CORG \ --hisat2_index_dir assets/demo/references/CORG.genome.Hisat2.indexed/ \ --hisat2_gtf_file assets/demo/references/CORG.transcripts.gtf \ --sras SRAs.txt ``` *This fails* 4) We can run the same file with Kallisto, and it will succeed: ``` # Remove Hisat2 results and work dirs rm work results -r nextflow run main.nf -profile singularity \ --pipeline kallisto \ --kallisto_index_path assets/demo/references/CORG.transcripts.Kallisto.indexed \ --sras SRAs.txt ``` *This succeeds* This can be repeated with other files, both local and remote. === Here is the error message for Hisat2: ``` Error executing process > 'GEMmaker:trimmomatic (SRX218012)' Caused by: Process `GEMmaker:trimmomatic (SRX218012)` terminated with an error exit status (1) Command executed: echo "#TRACE sample_id=SRX218012" echo "#TRACE n_cpus=4" echo "#TRACE minlen=0.7" echo "#TRACE leading=3" echo "#TRACE trailing=6" echo "#TRACE slidingwindow=4:15" echo "#TRACE fasta_lines=`cat fasta_adapter.txt | wc -l`" echo "#TRACE fastq_lines=`cat *.fastq | wc -l`" # convert the incoming FASTQ file list to an array read -a fastq_files <<< SRX218012_1.fastq SRX218012_2.fastq # This script calculates average length of fastq files. 
total=0 # This if statement checks if the data is single or paired data, and checks length accordingly # This script returns 1 number, which can be used for the minlen in trimmomatic if [ ${#fastq_files[@]} == 2 ]; then for fastq in "${fastq_files[@]}"; do cat="cat $fastq" if [[ $fastq =~ .gz$ ]]; then cat="zcat $fastq" fi a=`$cat | awk 'NR%4 == 2 {lengths[length($0)]++} END {for (l in lengths) {print l, lengths[l]}}' | sort | awk '{ print $0, $1*$2}' | awk -v var="0.7" '{ SUM += $3 } { SUM2 += $2 } END { printf("%.0f", SUM / SUM2 * var)} '` total=($a + $total) done total=( $total / 2 ) minlen=$total else cat="cat ${fastq_files[0]}" if [[ ${fastq_files[0]} =~ .gz$ ]]; then cat="zcat ${fastq_files[0]}" fi minlen=`$cat | awk 'NR%4 == 2 {lengths[length($0)]++} END {for (l in lengths) {print l, lengths[l]}}' | sort | awk '{ print $0, $1*$2}' | awk -v var="0.7" '{ SUM += $3 } { SUM2 += $2 } END { printf("%.0f", SUM / SUM2 * var)} '` fi if [ ${#fastq_files[@]} == 2 ]; then trimmomatic PE -threads 4 ${fastq_files[0]} ${fastq_files[1]} SRX218012_1p_trim.fastq SRX218012_1u_trim.fastq SRX218012_2p_trim.fastq SRX218012_2u_trim.fastq ILLUMINACLIP:fasta_adapter.txt:2:40:15 LEADING:3 TRAILING:6 SLIDINGWINDOW:4:15 MINLEN:"$minlen" > SRX218012.trim.log 2>&1 else trimmomatic SE -threads 4 ${fastq_files[0]} SRX218012_1u_trim.fastq ILLUMINACLIP:fasta_adapter.txt:2:40:15 LEADING:3 TRAILING:6 SLIDINGWINDOW:4:15 MINLEN:"$minlen" > SRX218012.trim.log 2>&1 fi Command exit status: 1 Command output: #TRACE sample_id=SRX218012 #TRACE n_cpus=4 #TRACE minlen=0.7 #TRACE leading=3 #TRACE trailing=6 #TRACE slidingwindow=4:15 #TRACE fasta_lines=130 #TRACE fastq_lines=5480 Command error: .command.sh: line 12: read: `SRX218012_2.fastq': not a valid identifier Work dir: /home/jah/Documents/test/GEMmaker/work/fe/b8cdd7186eb2af71f374da3b069a90 Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line WARN: To render the execution DAG in the required format it is required to install Graphviz -- See http://www.graphviz.org for more info. ```
1.0
dev branch: Kallisto only Workflow that can process paired files - This issue pertains to the dev branch, which currently contains the new dsl2 branch. Only Kallisto can process paired-end reads. Demonstration: 1) Clone a copy of GEMmaker to get the demo directory. Switch to the dev branch ``` git clone git@github.com:SystemsGenetics/GEMmaker.git cd GEMmaker git checkout dev ``` 2) Demonstrate that Hisat2 works fine with non-paired SRAs. We will use the first arabidopsis example from [How to Launch GEMmaker](https://gemmaker.readthedocs.io/en/dev/execution.html#how-to-launch-gemmaker) ``` # Put non-paired SRA in SRA.txt echo SRR1058270 > SRAs.txt # Run Hisat2 nextflow run main.nf -profile singularity \ --pipeline hisat2 \ --hisat2_base_name CORG \ --hisat2_index_dir assets/demo/references/CORG.genome.Hisat2.indexed/ \ --hisat2_gtf_file assets/demo/references/CORG.transcripts.gtf \ --sras SRAs.txt ``` *Result is that it works fine and will complete* 3) Now we try the same command with a paired-end file. We will use the file from the [CORG example](https://github.com/SystemsGenetics/GEMmaker/blob/master/assets/demo/SRA_IDs.txt) ``` echo SRR649944 > SRAs.txt # Run Hisat2 nextflow run main.nf -profile singularity \ --pipeline hisat2 \ --hisat2_base_name CORG \ --hisat2_index_dir assets/demo/references/CORG.genome.Hisat2.indexed/ \ --hisat2_gtf_file assets/demo/references/CORG.transcripts.gtf \ --sras SRAs.txt ``` *This fails* 4) We can run the same file with Kallisto, and it will succeed: ``` # Remove Hisat2 results and work dirs rm work results -r nextflow run main.nf -profile singularity \ --pipeline kallisto \ --kallisto_index_path assets/demo/references/CORG.transcripts.Kallisto.indexed \ --sras SRAs.txt ``` *This succeeds* This can be repeated with other files, both local and remote. === Here is the error message for Hisat2: ``` Error executing process > 'GEMmaker:trimmomatic (SRX218012)' Caused by: Process `GEMmaker:trimmomatic (SRX218012)` terminated with an error exit status (1) Command executed: echo "#TRACE sample_id=SRX218012" echo "#TRACE n_cpus=4" echo "#TRACE minlen=0.7" echo "#TRACE leading=3" echo "#TRACE trailing=6" echo "#TRACE slidingwindow=4:15" echo "#TRACE fasta_lines=`cat fasta_adapter.txt | wc -l`" echo "#TRACE fastq_lines=`cat *.fastq | wc -l`" # convert the incoming FASTQ file list to an array read -a fastq_files <<< SRX218012_1.fastq SRX218012_2.fastq # This script calculates average length of fastq files. 
total=0 # This if statement checks if the data is single or paired data, and checks length accordingly # This script returns 1 number, which can be used for the minlen in trimmomatic if [ ${#fastq_files[@]} == 2 ]; then for fastq in "${fastq_files[@]}"; do cat="cat $fastq" if [[ $fastq =~ .gz$ ]]; then cat="zcat $fastq" fi a=`$cat | awk 'NR%4 == 2 {lengths[length($0)]++} END {for (l in lengths) {print l, lengths[l]}}' | sort | awk '{ print $0, $1*$2}' | awk -v var="0.7" '{ SUM += $3 } { SUM2 += $2 } END { printf("%.0f", SUM / SUM2 * var)} '` total=($a + $total) done total=( $total / 2 ) minlen=$total else cat="cat ${fastq_files[0]}" if [[ ${fastq_files[0]} =~ .gz$ ]]; then cat="zcat ${fastq_files[0]}" fi minlen=`$cat | awk 'NR%4 == 2 {lengths[length($0)]++} END {for (l in lengths) {print l, lengths[l]}}' | sort | awk '{ print $0, $1*$2}' | awk -v var="0.7" '{ SUM += $3 } { SUM2 += $2 } END { printf("%.0f", SUM / SUM2 * var)} '` fi if [ ${#fastq_files[@]} == 2 ]; then trimmomatic PE -threads 4 ${fastq_files[0]} ${fastq_files[1]} SRX218012_1p_trim.fastq SRX218012_1u_trim.fastq SRX218012_2p_trim.fastq SRX218012_2u_trim.fastq ILLUMINACLIP:fasta_adapter.txt:2:40:15 LEADING:3 TRAILING:6 SLIDINGWINDOW:4:15 MINLEN:"$minlen" > SRX218012.trim.log 2>&1 else trimmomatic SE -threads 4 ${fastq_files[0]} SRX218012_1u_trim.fastq ILLUMINACLIP:fasta_adapter.txt:2:40:15 LEADING:3 TRAILING:6 SLIDINGWINDOW:4:15 MINLEN:"$minlen" > SRX218012.trim.log 2>&1 fi Command exit status: 1 Command output: #TRACE sample_id=SRX218012 #TRACE n_cpus=4 #TRACE minlen=0.7 #TRACE leading=3 #TRACE trailing=6 #TRACE slidingwindow=4:15 #TRACE fasta_lines=130 #TRACE fastq_lines=5480 Command error: .command.sh: line 12: read: `SRX218012_2.fastq': not a valid identifier Work dir: /home/jah/Documents/test/GEMmaker/work/fe/b8cdd7186eb2af71f374da3b069a90 Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line WARN: To render the execution DAG in the required format it is required to install Graphviz -- See http://www.graphviz.org for more info. ```
non_process
dev branch kallisto only workflow that can process paired files this issue pertains to the dev branch which currently contains the new branch only kallisto can process paired end reads demonstration clone a copy of gemmaker to get the demo directory switch to dev branch git clone git github com systemsgenetics gemmaker git cd gemmaker git checkout dev demonstrate that hisat works fine with non paired sras we will use the first arabidopsis example from put non paired sra in sra txt echo sras txt run nextflow run main nf profile singularity pipeline base name corg index dir assets demo references corg genome indexed gtf file assets demo references corg transcripts gtf sras sras txt result is that it works fine and will complete now we try the same command with a paired end file we will use the file from the echo sras txt run nextflow run main nf profile singularity pipeline base name corg index dir assets demo references corg genome indexed gtf file assets demo references corg transcripts gtf sras sras txt this fails we can run the same file with kallisto and it will succeed remove results and work dirs rm work results r nextflow run main nf profile singularity pipeline kallisto kallisto index path assets demo references corg transcripts kallisto indexed sras sras txt this succeeds this can be repeated with other files both local and remote here is the error message for error executing process gemmaker trimmomatic caused by process gemmaker trimmomatic terminated with an error exit status command executed echo trace sample id echo trace n cpus echo trace minlen echo trace leading echo trace trailing echo trace slidingwindow echo trace fasta lines cat fasta adapter txt wc l echo trace fastq lines cat fastq wc l convert the incoming fastq file list to an array read a fastq files fastq fastq this script calculates average length of fastq files total this if statement checks if the data is single or paired data and checks length accordingly this script returns number which can be used for the minlen in trimmomatic if then for fastq in fastq files do cat cat fastq if then cat zcat fastq fi a cat awk nr lengths end for l in lengths print l lengths sort awk print awk v var sum end printf sum var total a total done total total minlen total else cat cat fastq files if gz then cat zcat fastq files fi minlen cat awk nr lengths end for l in lengths print l lengths sort awk print awk v var sum end printf sum var fi if then trimmomatic pe threads fastq files fastq files trim fastq trim fastq trim fastq trim fastq illuminaclip fasta adapter txt leading trailing slidingwindow minlen minlen trim log else trimmomatic se threads fastq files trim fastq illuminaclip fasta adapter txt leading trailing slidingwindow minlen minlen trim log fi command exit status command output trace sample id trace n cpus trace minlen trace leading trace trailing trace slidingwindow trace fasta lines trace fastq lines command error command sh line read fastq not a valid identifier work dir home jah documents test gemmaker work fe tip when you have fixed the problem you can continue the execution adding the option resume to the run command line warn to render the execution dag in the required format it is required to install graphviz see for more info
0
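Two separate things show up in the log above. The immediate crash comes from the unquoted here-string: in `read -a fastq_files <<< SRX218012_1.fastq SRX218012_2.fastq` the redirection binds only to the first word, so bash passes `SRX218012_2.fastq` to `read` as a variable name, which is exactly the "not a valid identifier" error; quoting the word list in the here-string avoids it. Separately, the awk pipeline computes MINLEN as 70% of the length-weighted mean read length. A Python restatement of that calculation (file name hypothetical) can help when checking the value:

```python
# Python restatement of the awk pipeline in the failing script:
# MINLEN = fraction (0.7) of the length-weighted mean read length.
# FASTQ sequence lines are every 4th line starting at the 2nd
# (NR%4 == 2 in the awk, i.e. index 1 when counting from 0).
from collections import Counter

def minlen(fastq_path, fraction=0.7):
    lengths = Counter()
    with open(fastq_path) as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:                          # sequence line of each record
                lengths[len(line.rstrip("\n"))] += 1
    total_bases = sum(l * n for l, n in lengths.items())
    total_reads = sum(lengths.values())
    return round(total_bases / total_reads * fraction)

print(minlen("SRX218012_1.fastq"))                  # hypothetical local file
```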
10,985
13,783,538,827
IssuesEvent
2020-10-08 19:23:07
googleapis/proto-plus-python
https://api.github.com/repos/googleapis/proto-plus-python
closed
Include tests with PyPi packages?
type: process
The PyPi packages are missing the tests directory; unless this is intentional, could they be included?
1.0
Include tests with PyPi packages? - The PyPi packages are missing the tests directory; unless this is intentional, could they be included?
process
include tests with pypi packages the pypi packages are missing the tests directory unless this is intentional could they be included
1
17,041
22,420,243,764
IssuesEvent
2022-06-20 01:42:26
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add Astronaut Dolphin Detective from Mr. Pickles
suggested title in process
Please add as much of the following info as you can: Title: Astronaut Dolphin Detective Type (film/tv show): TV Show Film or show in which it appears: Mr. Pickles Is the parent film/show streaming anywhere? HBO Max About when in the parent film/show does it appear? Several Episodes (Pilot, The Cheeseman, A.D.D.) Actual footage of the film/show can be seen (yes/no)? Yes
1.0
Add Astronaut Dolphin Detective from Mr. Pickles - Please add as much of the following info as you can: Title: Astronaut Dolphin Detective Type (film/tv show): TV Show Film or show in which it appears: Mr. Pickles Is the parent film/show streaming anywhere? HBO Max About when in the parent film/show does it appear? Several Episodes (Pilot, The Cheeseman, A.D.D.) Actual footage of the film/show can be seen (yes/no)? Yes
process
add astronaut dolphin detective from mr pickles please add as much of the following info as you can title astronaut dolphin detective type film tv show tv show film or show in which it appears mr pickles is the parent film show streaming anywhere hbo max about when in the parent film show does it appear several episodes pilot the cheeseman a d d actual footage of the film show can be seen yes no yes
1
3,924
6,845,444,706
IssuesEvent
2017-11-13 08:16:47
symfony/symfony
https://api.github.com/repos/symfony/symfony
closed
setPty(true) sends input before the process is ready
Bug Process Status: Needs Review Status: Waiting feedback Unconfirmed
| Q | A | ---------------- | ----- | Bug report? | yes | Feature request? | no | BC Break report? | no | RFC? | no | Symfony version | 2.7.* Hi I'm using Lumen 5.1 in a Ubuntu 14.04.3 LTS, PHP 5.6 Laravel Homestead Virtual Box which comes with Symfony Process 2.7.*. Here is a minimal example showing that the PTY functionality sends the input before the process has asked for it. Just run: ``` $ php artisan tinker ``` Then paste in the following: ``` $process = new \Symfony\Component\Process\Process('read -p "Please enter some text and press enter: " result && echo "You entered: $result"', null, [], "Hello, world"); $process->setPty(true)->run(function ($type, $buffer) { fwrite(STDOUT, $buffer); }); ``` Full example: ``` $ php artisan tinker Psy Shell v0.6.1 (PHP 5.6.15-1+deb.sury.org~trusty+1 — cli) by Justin Hileman >>> $process = new \Symfony\Component\Process\Process('read -p "Please enter some text and press enter: " result && echo "You entered: $result"', null, [], "Hello, world"); => Symfony\Component\Process\Process {#648} >>> >>> $process->setPty(true)->run(function ($type, $buffer) { ... fwrite(STDOUT, $buffer); ... }); Hello, worldPlease enter some text and press enter: ^C ``` Expected: ``` Please enter some text and press enter: Hello, world ``` Actual: ``` Hello, worldPlease enter some text and press enter: ``` I'm trying to use this to enter the password for `mysqldump` securely. I could set up https://dev.mysql.com/doc/refman/5.7/en/mysql-config-editor.html but I feel like that is a workaround for processes not being able to communicate with each other properly (not to mention that it's difficult to maintain in dev ops). I've tried setting the input after the process is running but it returns that that's not allowed. If I had access to the process's STDIN pipe, I could perhaps send the password there. I've also tried using setTty(true) but it doesn't work either. OK I just stumbled onto `Symfony\Component\Process\InputStream` https://symfony.com/blog/new-in-symfony-3-1-input-and-output-stream-for-processes#input-streaming but I can't upgrade symfony/process to 3.1 without upgrading Lumen. Looks like Lumen 5.3 is the first version which has it https://github.com/laravel/lumen-framework/blob/5.3/composer.json so I will investigate upgrading. If there is a workaround in the meantime I'd greatly appreciate it. I guess I will leave this here in the hopes that it helps someone down the road. Thanks for any help you can provide.
1.0
setPty(true) sends input before the process is ready - | Q | A | ---------------- | ----- | Bug report? | yes | Feature request? | no | BC Break report? | no | RFC? | no | Symfony version | 2.7.* Hi I'm using Lumen 5.1 in a Ubuntu 14.04.3 LTS, PHP 5.6 Laravel Homestead Virtual Box which comes with Symfony Process 2.7.*. Here is a minimal example showing that the PTY functionality sends the input before the process has asked for it. Just run: ``` $ php artisan tinker ``` Then paste in the following: ``` $process = new \Symfony\Component\Process\Process('read -p "Please enter some text and press enter: " result && echo "You entered: $result"', null, [], "Hello, world"); $process->setPty(true)->run(function ($type, $buffer) { fwrite(STDOUT, $buffer); }); ``` Full example: ``` $ php artisan tinker Psy Shell v0.6.1 (PHP 5.6.15-1+deb.sury.org~trusty+1 — cli) by Justin Hileman >>> $process = new \Symfony\Component\Process\Process('read -p "Please enter some text and press enter: " result && echo "You entered: $result"', null, [], "Hello, world"); => Symfony\Component\Process\Process {#648} >>> >>> $process->setPty(true)->run(function ($type, $buffer) { ... fwrite(STDOUT, $buffer); ... }); Hello, worldPlease enter some text and press enter: ^C ``` Expected: ``` Please enter some text and press enter: Hello, world ``` Actual: ``` Hello, worldPlease enter some text and press enter: ``` I'm trying to use this to enter the password for `mysqldump` securely. I could set up https://dev.mysql.com/doc/refman/5.7/en/mysql-config-editor.html but I feel like that is a workaround for processes not being able to communicate with each other properly (not to mention that it's difficult to maintain in dev ops). I've tried setting the input after the process is running but it returns that that's not allowed. If I had access to the process's STDIN pipe, I could perhaps send the password there. I've also tried using setTty(true) but it doesn't work either. OK I just stumbled onto `Symfony\Component\Process\InputStream` https://symfony.com/blog/new-in-symfony-3-1-input-and-output-stream-for-processes#input-streaming but I can't upgrade symfony/process to 3.1 without upgrading Lumen. Looks like Lumen 5.3 is the first version which has it https://github.com/laravel/lumen-framework/blob/5.3/composer.json so I will investigate upgrading. If there is a workaround in the meantime I'd greatly appreciate it. I guess I will leave this here in the hopes that it helps someone down the road. Thanks for any help you can provide.
process
setpty true sends input before the process is ready q a bug report yes feature request no bc break report no rfc no symfony version hi i m using lumen in a ubuntu lts php laravel homestead virtual box which comes with symfony process here is a minimal example showing that the pty functionality sends the input before the process has asked for it just run php artisan tinker then paste in the following process new symfony component process process read p please enter some text and press enter result echo you entered result null hello world process setpty true run function type buffer fwrite stdout buffer full example php artisan tinker psy shell php deb sury org trusty — cli by justin hileman process new symfony component process process read p please enter some text and press enter result echo you entered result null hello world symfony component process process process setpty true run function type buffer fwrite stdout buffer hello worldplease enter some text and press enter c expected please enter some text and press enter hello world actual hello worldplease enter some text and press enter i m trying to use this to enter the password for mysqldump securely i could set up but i feel like that is a workaround for processes not being able to communicate with each other properly not to mention that it s difficult to maintain in dev ops i ve tried setting the input after the process is running but it returns that that s not allowed if i had access to the process s stdin pipe i could perhaps send the password there i ve also tried using settty true but it doesn t work either ok i just stumbled onto symfony component process inputstream but i can t upgrade symfony process to without upgrading lumen looks like lumen is the first version which has it so i will investigate upgrading if there is a workaround in the meantime i d greatly appreciate it i guess i will leave this here in the hopes that it helps someone down the road thanks for any help you can provide
1
7,768
10,889,617,253
IssuesEvent
2019-11-18 18:38:10
microsoft/ptvsd
https://api.github.com/repos/microsoft/ptvsd
closed
Fork/exec creates two connections from pydevd to adapter
Bug area:Multiprocessing platform: Linux platform: Mac
This is causing `test_subprocess` to fail on Python 2.7, and is also the root cause of Flask test failures. The problem is that on 2.7, `subprocess.Popen` uses fork+exec - and pydevd detours both. So first `fork()` creates a child process, and pydevd is spun up in that process and connects to the adapter. Then the child process does `exec()`, pydevd patches command line to ensure that it gets loaded first, and as it does so, it connects to the adapter again. Since both connections come from a process with the same PID, the adapter rejects the second one. There is a bug in the adapter here as well (#1908), but the fundamental problem is that it really is the _same_ process trying to connect twice. The adapter can handle it by forcibly closing any existing connection with the same PID when a new one comes in; but I wonder if it might be better to prevent this on pydevd side, e.g. by having the code detouring `exec()` close the socket before replacing the process. Per `man execve`, the OS itself will keep the socket fd open unless it's explicitly closed, so the adapter can't just wait for it to go away.
1.0
Fork/exec creates two connections from pydevd to adapter - This is causing `test_subprocess` to fail on Python 2.7, and is also the root cause of Flask test failures. The problem is that on 2.7, `subprocess.Popen` uses fork+exec - and pydevd detours both. So first `fork()` creates a child process, and pydevd is spun up in that process and connects to the adapter. Then the child process does `exec()`, pydevd patches command line to ensure that it gets loaded first, and as it does so, it connects to the adapter again. Since both connections come from a process with the same PID, the adapter rejects the second one. There is a bug in the adapter here as well (#1908), but the fundamental problem is that it really is the _same_ process trying to connect twice. The adapter can handle it by forcibly closing any existing connection with the same PID when a new one comes in; but I wonder if it might be better to prevent this on pydevd side, e.g. by having the code detouring `exec()` close the socket before replacing the process. Per `man execve`, the OS itself will keep the socket fd open unless it's explicitly closed, so the adapter can't just wait for it to go away.
process
fork exec creates two connections from pydevd to adapter this is causing test subprocess to fail on python and is also the root cause of flask test failures the problem is that on subprocess popen uses fork exec and pydevd detours both so first fork creates a child process and pydevd is spun up in that process and connects to the adapter then the child process does exec pydevd patches command line to ensure that it gets loaded first and as it does so it connects to the adapter again since both connections come from a process with the same pid the adapter rejects the second one there is a bug in the adapter here as well but the fundamental problem is that it really is the same process trying to connect twice the adapter can handle it by forcibly closing any existing connection with the same pid when a new one comes in but i wonder if it might be better to prevent this on pydevd side e g by having the code detouring exec close the socket before replacing the process per man execve the os itself will keep the socket fd open unless it s explicitly closed so the adapter can t just wait for it to go away
1
183,137
31,161,949,463
IssuesEvent
2023-08-16 16:38:24
multisig-labs/app.gogopool.com
https://api.github.com/repos/multisig-labs/app.gogopool.com
opened
FE(Navbar): Coming soon tag for GoGoPass
enhancement high priority design
For GoGoPass Navlink, we don't want it to be clickable and take you to a page. Right now, there isn't anything to show for that and it just goes to the 404 image. Instead, lets have it just chill in the navbar as a non clickable link item with the coming soon tag like in the figma comp here: https://www.figma.com/file/uFQfzJMUGngXEEUkMUuTJp/Design-System---GoGoPool?type=design&node-id=2202%3A11062&mode=design&t=KrMiTB9ZJ1oq8iQS-1
1.0
FE(Navbar): Coming soon tag for GoGoPass - For GoGoPass Navlink, we don't want it to be clickable and take you to a page. Right now, there isn't anything to show for that and it just goes to the 404 image. Instead, lets have it just chill in the navbar as a non clickable link item with the coming soon tag like in the figma comp here: https://www.figma.com/file/uFQfzJMUGngXEEUkMUuTJp/Design-System---GoGoPool?type=design&node-id=2202%3A11062&mode=design&t=KrMiTB9ZJ1oq8iQS-1
non_process
fe navbar coming soon tag for gogopass for gogopass navlink we don t want it to be clickable and take you to a page right now there isn t anything to show for that and it just goes to the image instead lets have it just chill in the navbar as a non clickable link item with the coming soon tag like in the figma comp here
0
3,459
6,544,192,346
IssuesEvent
2017-09-03 13:00:52
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
bingbot/2.0 under Windows OS
log-processing question
Could it be that bingbot/2.0 appears under Windows OS? `Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)`
1.0
bingbot/2.0 under Windows OS - Could it be that bingbot/2.0 appears under Windows OS? `Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)`
process
bingbot under windows os could it be that bingbot appears under windows os mozilla iphone cpu iphone os like mac os x applewebkit khtml like gecko version mobile safari compatible bingbot
1
37,359
6,601,392,143
IssuesEvent
2017-09-18 00:00:15
MilSpouseCoders/milspousecoders
https://api.github.com/repos/MilSpouseCoders/milspousecoders
closed
Instructions for adding a remote don't work
documentation
I followed the instructions to add the main repo as a remote, and it didn't work. It said I didn't have permissions. Using https://github.com/MilSpouseCoders/milspousecoders.git instead of the link listed worked (this comes from clone and download button).
1.0
Instructions for adding a remote don't work - I followed the instructions to add the main repo as a remote, and it didn't work. It said I didn't have permissions. Using https://github.com/MilSpouseCoders/milspousecoders.git instead of the link listed worked (this comes from clone and download button).
non_process
instructions for adding a remote don t work i followed the instructions to add the main repo as a remote and it didn t work it said i didn t have permissions using instead of the link listed worked this comes from clone and download button
0
9,805
3,321,099,199
IssuesEvent
2015-11-09 06:03:54
patacrep/patacrep
https://api.github.com/repos/patacrep/patacrep
closed
Requirements.txt seems deprecated
documentation enhancement
For development, the [`python3 setup.py develop`](https://github.com/patacrep/patacrep/blob/master/README.rst#for-developement) command takes care of installing the dependencies automatically. The README should therefore be updated and the `Requirements.txt` file potentially removed (unless there are dependencies that are only useful for dev, which is not the case at the moment). For fine-grained management of dependency versions, the setup script can be updated: http://python-packaging-user-guide.readthedocs.org/en/latest/requirements/#id5
1.0
Requirements.txt seems deprecated - For development, the [`python3 setup.py develop`](https://github.com/patacrep/patacrep/blob/master/README.rst#for-developement) command takes care of installing the dependencies automatically. The README should therefore be updated and the `Requirements.txt` file potentially removed (unless there are dependencies that are only useful for dev, which is not the case at the moment). For fine-grained management of dependency versions, the setup script can be updated: http://python-packaging-user-guide.readthedocs.org/en/latest/requirements/#id5
non_process
requirements txt seems deprecated for development the command takes care of installing the dependencies automatically the readme should therefore be updated and the requirements txt file potentially removed unless there are dependencies that are only useful for dev which is not the case at the moment for fine grained management of dependency versions the setup script can be updated
0
16,753
21,921,717,689
IssuesEvent
2022-05-22 17:18:11
huutho77/CNPMNC_ThayAi
https://api.github.com/repos/huutho77/CNPMNC_ThayAi
closed
[API] Coding Feature Login
dev/thnguyen processing
- Validate Input value - Validation Input Value - Generate AccessToken and RefreshToken
1.0
[API] Coding Feature Login - - Validate Input value - Validation Input Value - Generate AccessToken and RefreshToken
process
coding feature login validate input value validation input value generate accesstoken and refreshtoken
1
5,828
8,664,710,411
IssuesEvent
2018-11-28 21:00:27
lightningWhite/weatherLearning
https://api.github.com/repos/lightningWhite/weatherLearning
closed
Normalize the data
dataProcessing
We will need to normalize the data. Will we do this to the difference columns we created?
1.0
Normalize the data - We will need to normalize the data. Will we do this to the difference columns we created?
process
normalize the data we will need to normalize the data will we do this to the difference columns we created
1
10,275
13,128,635,833
IssuesEvent
2020-08-06 12:38:47
keep-network/keep-core
https://api.github.com/repos/keep-network/keep-core
closed
Stake delegation system tests
process & client team
Having top-ups implemented we need to execute system tests covering all possible stake delegation scenarios. Depending on the progress on #1898, tests should be performed using KEEP token dashboard or directly against smart contracts. - [x] Grantee can delegate a stake - [x] Managed grantee can delegate a stake - [x] Owner of liquid tokens can delegate a stake - [x] Stake delegation can be canceled within the initialization period by a grantee directly on the staking contract, tokens are deposited to the escrow - [x] Stake delegation can be canceled within the initialization period by a managed grantee directly on the staking contract, tokens are deposited to the escrow - [x] Stake delegation can be canceled within the initialization period by an operator directly on the staking contract, tokens are returned to the owner's account - [x] Stake can be undelegated by grantee directly on the staking contract - [x] Stake can be undelegated by a managed grantee directly on the staking contract - [x] Stake can be undelegated by liquid tokens owner directly on the staking contract - [x] Tokens staked from the grant are deposited in the escrow after recovering them from the delegation - [x] Staked, liquid tokens are sent back to the owner after recovering them from the delegation - [x] Tokens can be withdrawn from escrow according to grant's unlocking schedule: - [x] permissive staking policy - [x] adaptive staking policy - [x] It is possible to top-up existing delegation from tokens deposited in the escrow - [x] It is possible to top-up existing delegation from tokens from a grant - [x] It is possible to top-up existing delegation from tokens from a managed grant - [x] It is possible to top-up existing delegation from liquid tokens - [x] It is possible to top-up delegation initiated from escrow with tokens from a grant - [x] It is not possible to delegate from grant to already used operator whose stake has been canceled - [x] It is not possible to redelegate from the escrow to already used operator whose stake has been canceled - [x] Tokens deposited in the escrow can be redelegated to a new operator - [x] Stake redelegated from the escrow can be canceled - [x] Tokens deposited in the escrow from a redelegated stake can be withdrawn - [x] Stake from liquid tokens can be transferred to new staking contract with a bridge: - [x] with payback - [x] without payback - [x] Stake from a grant can be transferred to new staking contract with a bridge - [x] with payback - [x] without payback - [x] Stake from a managed grant can be transferred to new staking contract with a bridge - [x] with payback - [x] without payback (more...)
1.0
Stake delegation system tests - Having top-ups implemented we need to execute system tests covering all possible stake delegation scenarios. Depending on the progress on #1898, tests should be performed using KEEP token dashboard or directly against smart contracts. - [x] Grantee can delegate a stake - [x] Managed grantee can delegate a stake - [x] Owner of liquid tokens can delegate a stake - [x] Stake delegation can be canceled within the initialization period by a grantee directly on the staking contract, tokens are deposited to the escrow - [x] Stake delegation can be canceled within the initialization period by a managed grantee directly on the staking contract, tokens are deposited to the escrow - [x] Stake delegation can be canceled within the initialization period by an operator directly on the staking contract, tokens are returned to the owner's account - [x] Stake can be undelegated by grantee directly on the staking contract - [x] Stake can be undelegated by a managed grantee directly on the staking contract - [x] Stake can be undelegated by liquid tokens owner directly on the staking contract - [x] Tokens staked from the grant are deposited in the escrow after recovering them from the delegation - [x] Staked, liquid tokens are sent back to the owner after recovering them from the delegation - [x] Tokens can be withdrawn from escrow according to grant's unlocking schedule: - [x] permissive staking policy - [x] adaptive staking policy - [x] It is possible to top-up existing delegation from tokens deposited in the escrow - [x] It is possible to top-up existing delegation from tokens from a grant - [x] It is possible to top-up existing delegation from tokens from a managed grant - [x] It is possible to top-up existing delegation from liquid tokens - [x] It is possible to top-up delegation initiated from escrow with tokens from a grant - [x] It is not possible to delegate from grant to already used operator whose stake has been canceled - [x] It is not possible to redelegate from the escrow to already used operator whose stake has been canceled - [x] Tokens deposited in the escrow can be redelegated to a new operator - [x] Stake redelegated from the escrow can be canceled - [x] Tokens deposited in the escrow from a redelegated stake can be withdrawn - [x] Stake from liquid tokens can be transferred to new staking contract with a bridge: - [x] with payback - [x] without payback - [x] Stake from a grant can be transferred to new staking contract with a bridge - [x] with payback - [x] without payback - [x] Stake from a managed grant can be transferred to new staking contract with a bridge - [x] with payback - [x] without payback (more...)
process
stake delegation system tests having top ups implemented we need to execute system tests covering all possible stake delegation scenarios depending on the progress on tests should be performed using keep token dashboard or directly against smart contracts grantee can delegate a stake managed grantee can delegate a stake owner of liquid tokens can delegate a stake stake delegation can be canceled within the initialization period by a grantee directly on the staking contract tokens are deposited to the escrow stake delegation can be canceled within the initialization period by a managed grantee directly on the staking contract tokens are deposited to the escrow stake delegation can be canceled within the initialization period by an operator directly on the staking contract tokens are returned to the owner s account stake can be undelegated by grantee directly on the staking contract stake can be undelegated by a managed grantee directly on the staking contract stake can be undelegated by liquid tokens owner directly on the staking contract tokens staked from the grant are deposited in the escrow after recovering them from the delegation staked liquid tokens are sent back to the owner after recovering them from the delegation tokens can be withdrawn from escrow according to grant s unlocking schedule permissive staking policy adaptive staking policy it is possible to top up existing delegation from tokens deposited in the escrow it is possible to top up existing delegation from tokens from a grant it is possible to top up existing delegation from tokens from a managed grant it is possible to top up existing delegation from liquid tokens it is possible to top up delegation initiated from escrow with tokens from a grant it is not possible to delegate from grant to already used operator whose stake has been canceled it is not possible to redelegate from the escrow to already used operator whose stake has been canceled tokens deposited in the escrow can be redelegated to a new operator stake redelegated from the escrow can be canceled tokens deposited in the escrow from a redelegated stake can be withdrawn stake from liquid tokens can be transferred to new staking contract with a bridge with payback without payback stake from a grant can be transferred to new staking contract with a bridge with payback without payback stake from a managed grant can be transferred to new staking contract with a bridge with payback without payback more
1
311,009
26,761,271,215
IssuesEvent
2023-01-31 07:07:10
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
NPE when parsing Blaze XML output for exceptions without a message
type: bug topic: testing
It looks like when running tests, if you raise an exception without a message (i.e. `throw new NullPointerException()`) it results in test xml output like so: ```xml <error type='java.lang.NullPointerException'><![CDATA[java.lang.NullPointerException" at com.example.MyClass.myMethod(MyClass.java:55) at ... ]]></error> ``` What happens is there's a bug in the parser that always assumes there's a message in this case, and throws a NPE. I have a draft PR against my own fork here for an example -- https://github.com/cgthornt/intellij/pull/1. Could I go ahead and open a PR against this repo?
1.0
NPE when parsing Blaze XML output for exceptions without a message - It looks like when running tests, if you raise an exception without a message (i.e. `throw new NullPointerException()`) it results in test xml output like so: ```xml <error type='java.lang.NullPointerException'><![CDATA[java.lang.NullPointerException" at com.example.MyClass.myMethod(MyClass.java:55) at ... ]]></error> ``` What happens is there's a bug in the parser that always assumes there's a message in this case, and throws a NPE. I have a draft PR against my own fork here for an example -- https://github.com/cgthornt/intellij/pull/1. Could I go ahead and open a PR against this repo?
non_process
npe when parsing blaze xml output for exceptions without a message it looks like when running tests if you raise an exception without a message i e throw new nullpointerexception it results in test xml output like so xml cdata java lang nullpointerexception at com example myclass mymethod myclass java at what happens is there s a bug in the parser that always assumes there s a message in this case and throws a npe i have a draft pr against my own fork here for an example could i go ahead and open a pr against this repo
0
218,018
7,330,168,912
IssuesEvent
2018-03-05 08:59:29
ssu-411/project
https://api.github.com/repos/ssu-411/project
closed
Create Main Page UI
feature request priority/P2
This page shows the most-rated books and the list of books specified in the left sidebar.
1.0
Create Main Page UI - This page shows the most-rated books and the list of books specified in the left sidebar.
non_process
create main page ui this page shows the most rated books and the list of books specified in the left sidebar
0
31,006
8,638,759,756
IssuesEvent
2018-11-23 15:50:28
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Ubuntu package with GeoIP missing
build change debian-repo
Hi, I installed the goaccess package on Ubuntu 18.04, but this does not seem to have GeoIP enabled: goaccess: unrecognized option '--geoip-database' Is there a version with GeoIP available? Thx in advance Jan
1.0
Ubuntu package with GeoIP missing - Hi, I installed the goaccess package on Ubuntu 18.04, but this does not seem to have GeoIP enabled: goaccess: unrecognized option '--geoip-database' Is there a version with GeoIP available? Thx in advance Jan
non_process
ubuntu package with geoip missing hi i installed the goaccess package on ubuntu but this does not seem to have geoip enabled goaccess unrecognized option geoip database is there a version with geoip available thx in advance jan
0
340,798
24,671,399,189
IssuesEvent
2022-10-18 14:00:33
nightsailor/online-school
https://api.github.com/repos/nightsailor/online-school
closed
[DOCS] broken links
documentation good first issue hacktoberfest hacktoberfest2022
### Description Hi, Links to "code of conduct" and "contributing" are broken in the README file : need to change `main` into `master` 😉 example : - in the README : `https://github.com/nightsailor/online-school/blob/main/CODE_OF_CONDUCT.md ` - in the repo : ` https://github.com/nightsailor/online-school/blob/master/CODE_OF_CONDUCT.md` I can do it if you want to ;) ### Screenshots _No response_ ### Additional information _No response_
1.0
[DOCS] broken links - ### Description Hi, Links to "code of conduct" and "contributing" are broken in the README file : need to change `main` into `master` 😉 example : - in the README : `https://github.com/nightsailor/online-school/blob/main/CODE_OF_CONDUCT.md ` - in the repo : ` https://github.com/nightsailor/online-school/blob/master/CODE_OF_CONDUCT.md` I can do it if you want to ;) ### Screenshots _No response_ ### Additional information _No response_
non_process
broken links description hi links to code of conduct and contributing are broken in the readme file need to change main into master 😉 example in the readme in the repo i can do it if you want to screenshots no response additional information no response
0
89,526
25,827,144,739
IssuesEvent
2022-12-12 13:44:45
ChristinaKs/WebServicesTermProject
https://api.github.com/repos/ChristinaKs/WebServicesTermProject
closed
Filtering processes for games
Build03
references; #14 - [x] /games/ -- GameName, MPA Rating, Platform, GameStudioId - [x] /games/{game}/reviews -- posOrNeg, GameId
1.0
Filtering processes for games - references; #14 - [x] /games/ -- GameName, MPA Rating, Platform, GameStudioId - [x] /games/{game}/reviews -- posOrNeg, GameId
non_process
filtering processes for games references games gamename mpa rating platform gamestudioid games game reviews posorneg gameid
0
88,972
11,184,747,798
IssuesEvent
2019-12-31 19:58:52
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
closed
Facility Locator Usability and Accessibility Improvements
508/Accessibility Epic design frontend vsa-facilities
This epic is to house the identified usability and accessibility issues that we can address in the short term to deliver high-impact changes for our users
1.0
Facility Locator Usability and Accessibility Improvements - This epic is to house the identified usability and accessibility issues that we can address in the short term to deliver high-impact changes for our users
non_process
facility locator usability and accessibility improvements this epic is to house the identified usability and accessibility issues that we can address in the short term to deliver high impact changes for our users
0
4,263
7,189,091,785
IssuesEvent
2018-02-02 12:44:18
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
Store 'badTrans' flag for ultra-high weight trace transactions
apps-blockScrape status-inprocess type-enhancement
Store 'badTrans' flag in transactions and cache traces for bad transactions (all traces for both in error and not in error) Also -- pull out those ultra high weight traces into a separate file so we only ever have to endure that pain once.
1.0
Store 'badTrans' flag for ultra-high weight trace transactions - Store 'badTrans' flag in transactions and cache traces for bad transactions (all traces for both in error and not in error) Also -- pull out those ultra high weight traces into a separate file so we only ever have to endure that pain once.
process
store badtrans flag for ultra high weight trace transactions store badtrans flag in transactions and cache traces for bad transactions all traces for both in error and not in error also pull out those ultra high weight traces into a separate file so we only ever have to endure that pain once
1
15,047
18,762,675,522
IssuesEvent
2021-11-05 18:28:16
ORNL-AMO/AMO-Tools-Suite
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Suite
closed
Condensing Economizer
Needs Verification Process Heating Calculator
Issue overview -------------- Add condensing economizer math. Will send excel via teams
1.0
Condensing Economizer - Issue overview -------------- Add condensing economizer math. Will send excel via teams
process
condensing economizer issue overview add condensing economizer math will send excel via teams
1
38,550
6,676,793,611
IssuesEvent
2017-10-05 07:49:46
SavandBros/badge
https://api.github.com/repos/SavandBros/badge
closed
Github issues/pull-requests template files
documentation in progress
Should be in .github/ hidden dir. I don't like to have all of them in the root directory.
1.0
Github issues/pull-requests template files - Should be in .github/ hidden dir. I don't like to have all of them in the root directory.
non_process
github issues pull requests template files should be in github hidden dir i don t like to have all of them in the root directory
0
20,813
27,574,940,949
IssuesEvent
2023-03-08 12:17:54
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
closed
OneHotEncoder `drop_idx_` attribute description in presence of infrequent categories
Bug module:preprocessing
### Describe the issue linked to the documentation ### Issue summary In the OneHotEncoder documentation both for [v1.2](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder) and [v1.1](https://scikit-learn.org/1.1/modules/generated/sklearn.preprocessing.OneHotEncoder.html?highlight=one+hot+encoder#sklearn.preprocessing.OneHotEncoder), the description of attribute `drop_idx_` in presence of infrequent categories reads as follows: > If infrequent categories are enabled by setting `min_frequency` or `max_categories` to a non-default value and `drop_idx[i]` corresponds to a infrequent category, then the entire infrequent category is dropped.` ### User interpretation My understanding of this description is that when `drop_idx_[i]` corresponds to an infrequent category for column `i`, then the expected encoded column `i_infrequent_sklearn` is dropped. For example, suppose we have the following situation: ``` >>> X = np.array([['a'] * 2 + ['b'] * 4 + ['c'] * 4 ... + ['d'] * 4 + ['e'] * 4], dtype=object).T >>> enc = preprocessing.OneHotEncoder(min_frequency=4, sparse_output=False, drop='first') ``` Here `X` is a column with five categories where category `a` is considered infrequent. If the above interpretation is correct, then the expected output will consist of four columns, namely, `x0_b`, `x0_c`, `x0_d` and `x0_e`. This is because `a` is both the first category to get dropped due to `drop='first'` as well as an infrequent one. However, the transform output is as follows: ``` >>> Xt = enc.fit_transform(X) >>> pd.DataFrame(Xt, columns = enc.get_feature_names_out()) ent_categories_ x0_c x0_d x0_e x0_infrequent_sklearn 0 0.0 0.0 0.0 1.0 1 0.0 0.0 0.0 1.0 2 0.0 0.0 0.0 0.0 3 0.0 0.0 0.0 0.0 4 0.0 0.0 0.0 0.0 5 0.0 0.0 0.0 0.0 6 1.0 0.0 0.0 0.0 7 1.0 0.0 0.0 0.0 8 1.0 0.0 0.0 0.0 9 1.0 0.0 0.0 0.0 10 0.0 1.0 0.0 0.0 11 0.0 1.0 0.0 0.0 12 0.0 1.0 0.0 0.0 13 0.0 1.0 0.0 0.0 14 0.0 0.0 1.0 0.0 15 0.0 0.0 1.0 0.0 16 0.0 0.0 1.0 0.0 17 0.0 0.0 1.0 0.0 ``` This means that category `a` is part of the `x0_infrequent_sklearn` column, which takes the value of `1` when `X=='a'`. Category `b` is dropped, this is expected since the `drop='first'` functionality drops the column indexed `0` and after the `_encode` function is applied, categories are remapped based on their sorting order and infrequent ones are mapped last. Meaning that `'a'->4, 'b'->0, 'c'->1, 'd'->2, 'e'->3. This can be verified by the following objects: ``` >>> enc.categories_ [array(['a', 'b', 'c', 'd', 'e'], dtype=object)] >>> enc._default_to_infrequent_mappings [array([4, 0, 1, 2, 3])] ``` Notice how at transform the values of the encoded columns are `0` when `X=='b'`. Finally, columns `x0_c`, `x0_d` and `x0_e` are encoded as expected. ### Suggest a potential alternative/fix ### Correct suggestive description based on what is actually happening. > If infrequent categories are enabled by setting `min_frequency` or `max_categories` to a non-default value and `drop_idx_[i]` corresponds to a infrequent category, then the "first", i.e., indexed `0`, frequent category is dropped after `_encode` is applied during `_transform`.
1.0
OneHotEncoder `drop_idx_` attribute description in presence of infrequent categories - ### Describe the issue linked to the documentation ### Issue summary In the OneHotEncoder documentation both for [v1.2](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder) and [v1.1](https://scikit-learn.org/1.1/modules/generated/sklearn.preprocessing.OneHotEncoder.html?highlight=one+hot+encoder#sklearn.preprocessing.OneHotEncoder), the description of attribute `drop_idx_` in presence of infrequent categories reads as follows: > If infrequent categories are enabled by setting `min_frequency` or `max_categories` to a non-default value and `drop_idx[i]` corresponds to a infrequent category, then the entire infrequent category is dropped.` ### User interpretation My understanding of this description is that when `drop_idx_[i]` corresponds to an infrequent category for column `i`, then the expected encoded column `i_infrequent_sklearn` is dropped. For example, suppose we have the following situation: ``` >>> X = np.array([['a'] * 2 + ['b'] * 4 + ['c'] * 4 ... + ['d'] * 4 + ['e'] * 4], dtype=object).T >>> enc = preprocessing.OneHotEncoder(min_frequency=4, sparse_output=False, drop='first') ``` Here `X` is a column with five categories where category `a` is considered infrequent. If the above interpretation is correct, then the expected output will consist of four columns, namely, `x0_b`, `x0_c`, `x0_d` and `x0_e`. This is because `a` is both the first category to get dropped due to `drop='first'` as well as an infrequent one. However, the transform output is as follows: ``` >>> Xt = enc.fit_transform(X) >>> pd.DataFrame(Xt, columns = enc.get_feature_names_out()) ent_categories_ x0_c x0_d x0_e x0_infrequent_sklearn 0 0.0 0.0 0.0 1.0 1 0.0 0.0 0.0 1.0 2 0.0 0.0 0.0 0.0 3 0.0 0.0 0.0 0.0 4 0.0 0.0 0.0 0.0 5 0.0 0.0 0.0 0.0 6 1.0 0.0 0.0 0.0 7 1.0 0.0 0.0 0.0 8 1.0 0.0 0.0 0.0 9 1.0 0.0 0.0 0.0 10 0.0 1.0 0.0 0.0 11 0.0 1.0 0.0 0.0 12 0.0 1.0 0.0 0.0 13 0.0 1.0 0.0 0.0 14 0.0 0.0 1.0 0.0 15 0.0 0.0 1.0 0.0 16 0.0 0.0 1.0 0.0 17 0.0 0.0 1.0 0.0 ``` This means that category `a` is part of the `x0_infrequent_sklearn` column, which takes the value of `1` when `X=='a'`. Category `b` is dropped, this is expected since the `drop='first'` functionality drops the column indexed `0` and after the `_encode` function is applied, categories are remapped based on their sorting order and infrequent ones are mapped last. Meaning that `'a'->4, 'b'->0, 'c'->1, 'd'->2, 'e'->3. This can be verified by the following objects: ``` >>> enc.categories_ [array(['a', 'b', 'c', 'd', 'e'], dtype=object)] >>> enc._default_to_infrequent_mappings [array([4, 0, 1, 2, 3])] ``` Notice how at transform the values of the encoded columns are `0` when `X=='b'`. Finally, columns `x0_c`, `x0_d` and `x0_e` are encoded as expected. ### Suggest a potential alternative/fix ### Correct suggestive description based on what is actually happening. > If infrequent categories are enabled by setting `min_frequency` or `max_categories` to a non-default value and `drop_idx_[i]` corresponds to a infrequent category, then the "first", i.e., indexed `0`, frequent category is dropped after `_encode` is applied during `_transform`.
process
onehotencoder drop idx attribute description in presence of infrequent categories describe the issue linked to the documentation issue summary in the onehotencoder documentation both for and the description of attribute drop idx in presence of infrequent categories reads as follows if infrequent categories are enabled by setting min frequency or max categories to a non default value and drop idx corresponds to a infrequent category then the entire infrequent category is dropped user interpretation my understanding of this description is that when drop idx corresponds to an infrequent category for column i then the expected encoded column i infrequent sklearn is dropped for example suppose we have the following situation x np array dtype object t enc preprocessing onehotencoder min frequency sparse output false drop first here x is a column with five categories where category a is considered infrequent if the above interpretation is correct then the expected output will consist of four columns namely b c d and e this is because a is both the first category to get dropped due to drop first as well as an infrequent one however the transform output is as follows xt enc fit transform x pd dataframe xt columns enc get feature names out ent categories c d e infrequent sklearn this means that category a is part of the infrequent sklearn column which takes the value of when x a category b is dropped this is expected since the drop first functionality drops the column indexed and after the encode function is applied categories are remapped based on their sorting order and infrequent ones are mapped last meaning that a b c d e this can be verified by the following objects enc categories dtype object enc default to infrequent mappings notice how at transform the values of the encoded columns are when x b finally columns c d and e are encoded as expected suggest a potential alternative fix correct suggestive description based on what is actually happening if infrequent categories are enabled by setting min frequency or max categories to a non default value and drop idx corresponds to a infrequent category then the first i e indexed frequent category is dropped after encode is applied during transform
1
210,973
16,413,753,383
IssuesEvent
2021-05-19 01:54:12
django-cms/django-cms
https://api.github.com/repos/django-cms/django-cms
closed
TypeError: argument of type 'WindowsPath' is not iterable
component: documentation easy pickings
I get this error when I try to install Django-cms manually http://docs.django-cms.org/en/latest/how_to/install.html I use Django 3.0 and django-cms-3.7.4 ![Screenshot (298)](https://user-images.githubusercontent.com/35177448/91603802-eae7fa80-e96d-11ea-8907-12d2869adfa6.png)
1.0
TypeError: argument of type 'WindowsPath' is not iterable - I get this error when I try to install Django-cms manually http://docs.django-cms.org/en/latest/how_to/install.html I use Django 3.0 and django-cms-3.7.4 ![Screenshot (298)](https://user-images.githubusercontent.com/35177448/91603802-eae7fa80-e96d-11ea-8907-12d2869adfa6.png)
non_process
typeerror argument of type windowspath is not iterable i get this error when i try to install django cms manually i use django and django cms
0
135,455
12,684,751,491
IssuesEvent
2020-06-20 00:01:28
vtex-apps/io-documentation
https://api.github.com/repos/vtex-apps/io-documentation
opened
vtex-apps/store-theme-robots has no documentation yet
no-documentation
[vtex-apps/store-theme-robots](https://github.com/vtex-apps/store-theme-robots) hasn't created any README file yet or is not using Docs Builder
1.0
vtex-apps/store-theme-robots has no documentation yet - [vtex-apps/store-theme-robots](https://github.com/vtex-apps/store-theme-robots) hasn't created any README file yet or is not using Docs Builder
non_process
vtex apps store theme robots has no documentation yet hasn t created any readme file yet or is not using docs builder
0
23,828
12,128,906,463
IssuesEvent
2020-04-22 21:23:45
tidyverse/tidyr
https://api.github.com/repos/tidyverse/tidyr
closed
nest() / unnest() in 1.0.0 significantly slower
feature performance :racing_car: trees :evergreen_tree: vctrs ↗️
It appears the new implementation of `nest()` and `unnest()` has resulted in a dramatic slowdown, compared to the previous version of `tidyR` Perhaps the problem is related to size preallocation? In that case, `num_rows` needs to be large to observe the slowdown. See code snippet below. ```r num_rows <- 100000 x <- dplyr::tibble(first = 1:num_rows, second = 5:(num_rows+5-1), third = 7:(num_rows+7-1)) before <- Sys.time() y <- dplyr::tibble(first = 1:num_rows, second = 5:(num_rows+5-1), third = 7:(num_rows+7-1)) %>% tidyr::nest(second_and_third = c(second, third)) %>% tidyr::unnest(second_and_third) after <- Sys.time() if(length(which(x != y)) != 0){ stop("nest() and unnest() procedure results in corrupted data!") } cat(paste("Execution Time:",difftime(after,before,units="secs"),"seconds")) ``` On my system: ``` Execution Time: 61.2449209690094 seconds ```
True
nest() / unnest() in 1.0.0 significantly slower - It appears the new implementation of `nest()` and `unnest()` has resulted in a dramatic slowdown, compared to the previous version of `tidyR` Perhaps the problem is related to size preallocation? In that case, `num_rows` needs to be large to observe the slowdown. See code snippet below. ```r num_rows <- 100000 x <- dplyr::tibble(first = 1:num_rows, second = 5:(num_rows+5-1), third = 7:(num_rows+7-1)) before <- Sys.time() y <- dplyr::tibble(first = 1:num_rows, second = 5:(num_rows+5-1), third = 7:(num_rows+7-1)) %>% tidyr::nest(second_and_third = c(second, third)) %>% tidyr::unnest(second_and_third) after <- Sys.time() if(length(which(x != y)) != 0){ stop("nest() and unnest() procedure results in corrupted data!") } cat(paste("Execution Time:",difftime(after,before,units="secs"),"seconds")) ``` On my system: ``` Execution Time: 61.2449209690094 seconds ```
non_process
nest unnest in significantly slower it appears the new implementation of nest and unnest has resulted in a dramatic slowdown compared to the previous version of tidyr perhaps the problem is related to size preallocation in that case num rows needs to be large to observe the slowdown see code snippet below r num rows x dplyr tibble first num rows second num rows third num rows before sys time y dplyr tibble first num rows second num rows third num rows tidyr nest second and third c second third tidyr unnest second and third after sys time if length which x y stop nest and unnest procedure results in corrupted data cat paste execution time difftime after before units secs seconds on my system execution time seconds
0
6,553
9,646,071,090
IssuesEvent
2019-05-17 10:14:29
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
[move-meta] file in output\temp not found // refs to ditamaps with relative paths unstable
bug preprocess priority/medium stale
Hi all, we are using DITA to build documents with more than 1000 pages and about 8 levels of chapter depth. Changes to these documents are summarised in additional smaller documents which point to the corresponding positions, which are to be changed, in the main documents using maprefs. Since we changed the DITA-OT Version from 1.5.1 to 2.4.4 we encountered the following problem. In our case a reference might look like this: <pre><code> &lt;topichead navtitle="04_subSubChapter"&gt; &lt;mapref href="../../../../mainDocuments/mainDocument_1/01_chapter/04_subChapter/04_subWorkflows/08_subSubSubChapter/08_Description/_08_Description.ditamap" /&gt; &lt;/topichead&gt; </pre></code> such a reference causes the following warning and error messages <pre><code> [DITA-OT] [chunk] Processing file:/C:/DEV/Repo/testDoc/Output/temp/sideDocument_1.ditamap [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\fullditatopic.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\fullditamap.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\fullditamapandtopic.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\resourceonly.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\copytosource.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] move-meta-entries: [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/sideDocument_1.ditamap [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/07_subDescription/_07_subDescription-main.dita [DITA-OT] [move-meta] File C:\DEV\Repo\testDoc\Output\temp\08_subDescription\08.dita was not found. [DITA-OT] [move-meta] File C:\DEV\Repo\testDoc\Output\temp\07_subDescription\07.dita was not found. [DITA-OT] [move-meta] File C:\DEV\Repo\testDoc\Output\temp\08_subDescription\08.dita was not found. [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/01_subChapter/01_subWorkflows/01_subSubSubChapter/01_Description/02_subDescription/02.dita [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/04_subChapter/04_subWorkflows/08_subSubSubChapter/08_Description/16_subDescription/16.dita [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/08_subDescription/_08_subDescription-main.dita [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/03_subChapter/03_subWorkflows/06_subSubSubChapter/06_Description/12_subDescription/12.dita [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/01_subChapter/01_subWorkflows/01_subSubSubChapter/01_Description/01_subDescription/_01_subDescription-main.dita </pre></code> ... <pre><code> [DITA-OT] [move-meta] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\preprocess\mappull.xsl [DITA-OT] [move-meta] File C:\DEV\Repo\testDoc\Output\temp\07_subDescription\07.dita was not found. [DITA-OT] [move-meta] Recoverable error on line 62 of mappullImpl.xsl: [DITA-OT] [move-meta] FODC0002: java.io.FileNotFoundException: [DITA-OT] [move-meta] C:\DEV\Repo\testDoc\Output\temp\07_subDescription\07.dita (Das System kann den [DITA-OT] [move-meta] angegebenen Pfad nicht finden) [DITA-OT] [move-meta] file:/C:/DEV/Repo/testDoc/Content/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/07_subDescription/_07_subDescription.ditamap:5:56: [DOTX023W][WARN]: Unable to retrieve navtitle from target: ' 07_subDescription/07.dita '. [DITA-OT] [move-meta] file:/C:/DEV/Repo/testDoc/Content/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/07_subDescription/_07_subDescription.ditamap:5:56: [DOTX027W][WARN]: Unable to retrieve linktext from target: ' 07_subDescription/07.dita '. [DITA-OT] [move-meta] Recoverable error on line 62 of mappullImpl.xsl: [DITA-OT] [move-meta] FODC0002: java.io.FileNotFoundException: [DITA-OT] [move-meta] C:\DEV\Repo\testDoc\Output\temp\08_subDescription\08.dita (Das System kann den [DITA-OT] [move-meta] angegebenen Pfad nicht finden) [DITA-OT] [move-meta] file:/C:/DEV/Repo/testDoc/Content/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/08_subDescription/_08_subDescription.ditamap:5:56: [DOTX023W][WARN]: Unable to retrieve navtitle from target: ' 08_subDescription/08.dita '. [DITA-OT] [move-meta] file:/C:/DEV/Repo/testDoc/Content/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/08_subDescription/_08_subDescription.ditamap:5:56: [DOTX027W][WARN]: Unable to retrieve linktext from target: ' 08_subDescription/08.dita '. [DITA-OT] maplink-check: [DITA-OT] maplink: </pre></code> ... <pre><code> [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\04_subChapter\04_subWorkflows\08_subSubSubChapter\08_Description\_08_Description.ditamap [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\01_subChapter\01_subWorkflows\01_subSubSubChapter\01_Description\_01_Description.ditamap [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\03_subChapter\03_subWorkflows\06_subSubSubChapter\06_Description\11_subDescription\_11_subDescription.ditamap [DITA-OT] [topic-merge] [DOTX008E][ERROR] File 'file:/C:/DEV/Repo/testDoc/Output/temp/07_subDescription/07.dita' does not exist or cannot be loaded. [DITA-OT] [topic-merge] [DOTX008E][ERROR] File 'file:/C:/DEV/Repo/testDoc/Output/temp/08_subDescription/08.dita' does not exist or cannot be loaded. [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\03_subChapter\03_subWorkflows\06_subSubSubChapter\06_Description\_06_Description.ditamap [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\01_subChapter\01_subWorkflows\01_subSubSubChapter\01_Description\01_subDescription\_01_subDescription.ditamap </pre></code> These errors occur depending on the depth of the relative path. It seems to be related to the fact that the references are outside the ditamap folder [https://www.oxygenxml.com/doc/versions/18.1/ug-editor/topics/dita-ot-external-refs.html](url). A testcase reproducing the mentioned errors and resembling the hierarchy of our documents is attached. Is there a workaround to make references at this depth possible, or is it planned to make this work? Greetings Andreas [testCase.zip](https://github.com/dita-ot/dita-ot/files/862039/testCase.zip)
1.0
[move-meta] file in output\temp not found // refs to ditamaps with relative paths unstable - Hi all, we are using DITA to build documents with more than 1000 pages and about 8 levels of chapter depth. Changes to these documents are summarised in additional smaller documents which point to the corresponding positions, which are to be changed, in the main documents using maprefs. Since we changed the DITA-OT Version from 1.5.1 to 2.4.4 we encountered the following problem. In our case a reference might look like this: <pre><code> &lt;topichead navtitle="04_subSubChapter"&gt; &lt;mapref href="../../../../mainDocuments/mainDocument_1/01_chapter/04_subChapter/04_subWorkflows/08_subSubSubChapter/08_Description/_08_Description.ditamap" /&gt; &lt;/topichead&gt; </pre></code> such a reference causes the following warning and error messages <pre><code> [DITA-OT] [chunk] Processing file:/C:/DEV/Repo/testDoc/Output/temp/sideDocument_1.ditamap [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\fullditatopic.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\fullditamap.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\fullditamapandtopic.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\resourceonly.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] [job-helper] Processing C:\DEV\Repo\testDoc\Output\temp\.job.xml to C:\DEV\Repo\testDoc\Output\temp\copytosource.list [DITA-OT] [job-helper] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\job-helper.xsl [DITA-OT] move-meta-entries: [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/sideDocument_1.ditamap [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/07_subDescription/_07_subDescription-main.dita [DITA-OT] [move-meta] File C:\DEV\Repo\testDoc\Output\temp\08_subDescription\08.dita was not found. [DITA-OT] [move-meta] File C:\DEV\Repo\testDoc\Output\temp\07_subDescription\07.dita was not found. [DITA-OT] [move-meta] File C:\DEV\Repo\testDoc\Output\temp\08_subDescription\08.dita was not found. [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/01_subChapter/01_subWorkflows/01_subSubSubChapter/01_Description/02_subDescription/02.dita [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/04_subChapter/04_subWorkflows/08_subSubSubChapter/08_Description/16_subDescription/16.dita [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/08_subDescription/_08_subDescription-main.dita [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/03_subChapter/03_subWorkflows/06_subSubSubChapter/06_Description/12_subDescription/12.dita [DITA-OT] [move-meta] Processing file:/C:/DEV/Repo/testDoc/Output/temp/mainDocuments/mainDocument_1/01_chapter/01_subChapter/01_subWorkflows/01_subSubSubChapter/01_Description/01_subDescription/_01_subDescription-main.dita </pre></code> ... <pre><code> [DITA-OT] [move-meta] Loading stylesheet C:\DEV\Repo\testDoc\Tooling\DITA-OT\xsl\preprocess\mappull.xsl [DITA-OT] [move-meta] File C:\DEV\Repo\testDoc\Output\temp\07_subDescription\07.dita was not found. [DITA-OT] [move-meta] Recoverable error on line 62 of mappullImpl.xsl: [DITA-OT] [move-meta] FODC0002: java.io.FileNotFoundException: [DITA-OT] [move-meta] C:\DEV\Repo\testDoc\Output\temp\07_subDescription\07.dita (Das System kann den [DITA-OT] [move-meta] angegebenen Pfad nicht finden) [DITA-OT] [move-meta] file:/C:/DEV/Repo/testDoc/Content/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/07_subDescription/_07_subDescription.ditamap:5:56: [DOTX023W][WARN]: Unable to retrieve navtitle from target: ' 07_subDescription/07.dita '. [DITA-OT] [move-meta] file:/C:/DEV/Repo/testDoc/Content/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/07_subDescription/_07_subDescription.ditamap:5:56: [DOTX027W][WARN]: Unable to retrieve linktext from target: ' 07_subDescription/07.dita '. [DITA-OT] [move-meta] Recoverable error on line 62 of mappullImpl.xsl: [DITA-OT] [move-meta] FODC0002: java.io.FileNotFoundException: [DITA-OT] [move-meta] C:\DEV\Repo\testDoc\Output\temp\08_subDescription\08.dita (Das System kann den [DITA-OT] [move-meta] angegebenen Pfad nicht finden) [DITA-OT] [move-meta] file:/C:/DEV/Repo/testDoc/Content/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/08_subDescription/_08_subDescription.ditamap:5:56: [DOTX023W][WARN]: Unable to retrieve navtitle from target: ' 08_subDescription/08.dita '. [DITA-OT] [move-meta] file:/C:/DEV/Repo/testDoc/Content/mainDocuments/mainDocument_1/01_chapter/02_subChapter/02_subWorkflows/04_subSubSubChapter/04_Description/08_subDescription/_08_subDescription.ditamap:5:56: [DOTX027W][WARN]: Unable to retrieve linktext from target: ' 08_subDescription/08.dita '. [DITA-OT] maplink-check: [DITA-OT] maplink: </pre></code> ... <pre><code> [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\04_subChapter\04_subWorkflows\08_subSubSubChapter\08_Description\_08_Description.ditamap [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\01_subChapter\01_subWorkflows\01_subSubSubChapter\01_Description\_01_Description.ditamap [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\03_subChapter\03_subWorkflows\06_subSubSubChapter\06_Description\11_subDescription\_11_subDescription.ditamap [DITA-OT] [topic-merge] [DOTX008E][ERROR] File 'file:/C:/DEV/Repo/testDoc/Output/temp/07_subDescription/07.dita' does not exist or cannot be loaded. [DITA-OT] [topic-merge] [DOTX008E][ERROR] File 'file:/C:/DEV/Repo/testDoc/Output/temp/08_subDescription/08.dita' does not exist or cannot be loaded. [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\03_subChapter\03_subWorkflows\06_subSubSubChapter\06_Description\_06_Description.ditamap [DITA-OT] [clean-map] Processing C:\DEV\Repo\testDoc\Output\temp\mainDocuments\mainDocument_1\01_chapter\01_subChapter\01_subWorkflows\01_subSubSubChapter\01_Description\01_subDescription\_01_subDescription.ditamap </pre></code> These errors occur depending on the depth of the relative path. It seems to be related to the fact that the references are outside the ditamap folder [https://www.oxygenxml.com/doc/versions/18.1/ug-editor/topics/dita-ot-external-refs.html](url). A testcase reproducing the mentioned errors and resembling the hierarchy of our documents is attached. Is there a workaround to make references at this depth possible, or is it planned to make this work? Greetings Andreas [testCase.zip](https://github.com/dita-ot/dita-ot/files/862039/testCase.zip)
process
file in output temp not found refs to ditamaps with relative paths unstable hi all we are using dita to build documents with more than pages and about levels of chapter depth changes to these documents are summarised in additional smaller documents which points to the corresponding positions which are to be changed in the main documents using maprefs since we changed the dita ot version from to we encoutered the following problem in our case a reference might look like this lt topichead navtitle subsubchapter gt lt mapref href maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description description ditamap gt lt topichead gt such a reference causes the following warning and error messages processing file c dev repo testdoc output temp sidedocument ditamap processing c dev repo testdoc output temp job xml to c dev repo testdoc output temp fullditatopic list loading stylesheet c dev repo testdoc tooling dita ot xsl job helper xsl processing c dev repo testdoc output temp job xml to c dev repo testdoc output temp fullditamap list loading stylesheet c dev repo testdoc tooling dita ot xsl job helper xsl processing c dev repo testdoc output temp job xml to c dev repo testdoc output temp fullditamapandtopic list loading stylesheet c dev repo testdoc tooling dita ot xsl job helper xsl processing c dev repo testdoc output temp job xml to c dev repo testdoc output temp resourceonly list loading stylesheet c dev repo testdoc tooling dita ot xsl job helper xsl processing c dev repo testdoc output temp job xml to c dev repo testdoc output temp copytosource list loading stylesheet c dev repo testdoc tooling dita ot xsl job helper xsl move meta entries processing file c dev repo testdoc output temp sidedocument ditamap processing file c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription main dita file c dev repo testdoc output temp subdescription dita was not found file c dev repo testdoc output temp subdescription dita was not found file c dev repo testdoc output temp subdescription dita was not found processing file c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription dita processing file c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription dita processing file c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription main dita processing file c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription dita processing file c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription main dita loading stylesheet c dev repo testdoc tooling dita ot xsl preprocess mappull xsl file c dev repo testdoc output temp subdescription dita was not found recoverable error on line of mappullimpl xsl java io filenotfoundexception c dev repo testdoc output temp subdescription dita das system kann den angegebenen pfad nicht finden file c dev repo testdoc content maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription ditamap unable to retrieve navtitle from target subdescription dita file c dev repo testdoc content maindocuments 
maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription ditamap unable to retrieve linktext from target subdescription dita recoverable error on line of mappullimpl xsl java io filenotfoundexception c dev repo testdoc output temp subdescription dita das system kann den angegebenen pfad nicht finden file c dev repo testdoc content maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription ditamap unable to retrieve navtitle from target subdescription dita file c dev repo testdoc content maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription ditamap unable to retrieve linktext from target subdescription dita maplink check maplink processing c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description description ditamap processing c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description description ditamap processing c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription ditamap file file c dev repo testdoc output temp subdescription dita does not exist or cannot be loaded file file c dev repo testdoc output temp subdescription dita does not exist or cannot be loaded processing c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description description ditamap processing c dev repo testdoc output temp maindocuments maindocument chapter subchapter subworkflows subsubsubchapter description subdescription subdescription ditamap these errors occur depending on the deepness of the relative path it seems to be related to the fact that the references are outside the ditamap folder url a testcase reproducing the mentioned errors and resembling the hierarchy of our documents is attached is there a workaround to make references with this deepness possible or is it planned to make it working greetings andreas
1
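The DITA-OT record above boils down to path arithmetic: an href with many `../` segments resolves correctly against the deep source tree but escapes the shallow temp directory that preprocessing rebases it into. A minimal Python sketch of that arithmetic, assuming made-up directory names; this is not DITA-OT's actual resolution code:

```python
import posixpath

def resolve(map_dir, href):
    # An href in a ditamap is resolved relative to the map's own directory.
    return posixpath.normpath(posixpath.join(map_dir, href))

# Against the full source tree, the uplevel segments land on a real file:
print(resolve("Content/main/01_chapter/04_Description", "../07_subDescription/07.dita"))
# -> Content/main/01_chapter/07_subDescription/07.dita

# Rebased into a shallow temp root, the same uplevels climb out of it, which
# matches the "Output/temp/07_subDescription/07.dita was not found" errors:
print(resolve("Output/temp", "../../07_subDescription/07.dita"))
# -> 07_subDescription/07.dita (the intermediate directories are gone)
```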
51,222
12,691,250,281
IssuesEvent
2020-06-21 16:06:24
supercollider/supercollider
https://api.github.com/repos/supercollider/supercollider
closed
consider using vcpkg on Windows
comp: appveyor comp: build os: Windows waiting for consensus waiting for information waiting for testing
Per discussion below, let's use this thread to discuss possible switch to using vcpkg for dependencies on Windows. Relevant portion of the comment from @brianlheim to follow ------------ ### vcpkg vcpkg is definitely worth another look based on what i'm seeing and what you said here. i searched around for what other people in C++ think of vcpkg, and based on [this reddit thread](https://www.reddit.com/r/cpp/comments/gch3i5/your_thoughts_on_vcpkg/), it seems they are still working on some basic core features, and given that we already have a workflow with submodules that works OK, i am somewhat hesitant to say let's go ahead and spend time switching everything over. it would be really helpful to know if other people have used vcpkg and what their experience was. > They have Qt too, but last time I looked webengine was missing. it appears to be present now - https://repology.org/project/qt/versions - but at 5.12.8, not 5.14. perhaps they are sticking to LTS intentionally. in any case, Qt is the least difficult part of the Windows build to configure, in my experience. if we can use vcpkg for fftw3, readline, libsndfile, and portaudio, that would be a huge simplification to our build process! also, for reference i see that vcpkg is available on Appveyor build machines by default, but not in Travis's beta Windows support, where they do have chocolatey. _Originally posted by @brianlheim in https://github.com/supercollider/supercollider/pull/4925#issuecomment-628332780_
1.0
consider using vcpkg on Windows - Per discussion below, let's use this thread to discuss possible switch to using vcpkg for dependencies on Windows. Relevant portion of the comment from @brianlheim to follow ------------ ### vcpkg vcpkg is definitely worth another look based on what i'm seeing and what you said here. i searched around for what other people in C++ think of vcpkg, and based on [this reddit thread](https://www.reddit.com/r/cpp/comments/gch3i5/your_thoughts_on_vcpkg/), it seems they are still working on some basic core features, and given that we already have a workflow with submodules that works OK, i am somewhat hesitant to say let's go ahead and spend time switching everything over. it would be really helpful to know if other people have used vcpkg and what their experience was. > They have Qt too, but last time I looked webengine was missing. it appears to be present now - https://repology.org/project/qt/versions - but at 5.12.8, not 5.14. perhaps they are sticking to LTS intentionally. in any case, Qt is the least difficult part of the Windows build to configure, in my experience. if we can use vcpkg for fftw3, readline, libsndfile, and portaudio, that would be a huge simplification to our build process! also, for reference i see that vcpkg is available on Appveyor build machines by default, but not in Travis's beta Windows support, where they do have chocolatey. _Originally posted by @brianlheim in https://github.com/supercollider/supercollider/pull/4925#issuecomment-628332780_
non_process
consider using vcpkg on windows per discussion below let s use this thread to discuss possible switch to using vcpkg for dependencies on windows relevant portion of the comment from brianlheim to follow vcpkg vcpkg is definitely worth another look based on what i m seeing and what you said here i searched around for what other people in c think of vcpkg and based on it seems they are still working on some basic core features and given that we already have a workflow with submodules that works ok i am somewhat hesitant to say let s go ahead and spend time switching everything over it would be really helpful to know if other people have used vcpkg and what their experience was they have qt too but last time i looked webengine was missing it appears to be present now but at not perhaps they are sticking to lts intentionally in any case qt is the least difficult part of the windows build to configure in my experience if we can use vcpkg for readline libsndfile and portaudio that would be a huge simplification to our build process also for reference i see that vcpkg is available on appveyor build machines by default but not in travis s beta windows support where they do have chocolatey originally posted by brianlheim in
0
3,703
3,519,560,903
IssuesEvent
2016-01-12 17:17:00
goblint/analyzer
https://api.github.com/repos/goblint/analyzer
opened
use automated builds for docker
usability
The [voglerr/goblint](https://hub.docker.com/r/voglerr/goblint/) image was created manually - its latest commit is from June 2014... We should use [automated builds](https://docs.docker.com/docker-hub/builds/) to keep this up to date.
True
use automated builds for docker - The [voglerr/goblint](https://hub.docker.com/r/voglerr/goblint/) image was created manually - its latest commit is from June 2014... We should use [automated builds](https://docs.docker.com/docker-hub/builds/) to keep this up to date.
non_process
use automated builds for docker the image was created manually its latest commit is from june we should use to keep this up to date
0
451,657
13,039,791,609
IssuesEvent
2020-07-28 17:21:30
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
We need a difficulty setting that disables specialty point requirements for crafting.
Category: Balance Priority: Low
Without this, single player is either impossible or endless, and at the very least it would mean requiring people who play single player to always leave their server on overnight when not playing, which would obviously be ridiculous. Adding this to the difficulty options is also important for small groups or even large servers that should have the option to decide if they want (more or less) forced collaboration or not. It's high pri because for now it has a massive effect on single-player and we would likely get significant backlash if it's not in 9.0 on release.
1.0
We need a difficulty setting that disables specialty point requirements for crafting. - Without this, single player is either impossible or endless, and at the very least it would mean requiring people who play single player to always leave their server on overnight when not playing, which would obviously be ridiculous. Adding this to the difficulty options is also important for small groups or even large servers that should have the option to decide if they want (more or less) forced collaboration or not. It's high pri because for now it has a massive effect on single-player and we would likely get significant backlash if it's not in 9.0 on release.
non_process
we need a difficulty setting that disables specialty point requirements for crafting without this single player is either impossible or endless and at the very least it would mean requiring people who play single player to always leave their server on overnight when not playing which would obviously be ridiculous adding this to the difficulty options is also important for small groups or even large servers that should have the option to decide if they want more or less forced collaboration or not it s high pri because for now it has a massive effect on single player and we would likely get significant backlash if it s not in on release
0
50,212
6,336,400,008
IssuesEvent
2017-07-26 21:00:34
devtools-html/debugger.html
https://api.github.com/repos/devtools-html/debugger.html
opened
[SourceSearch] Improve not-found UI
design
Fixes #3450 Currently, no results looks like this: <img width="544" alt="image" src="https://user-images.githubusercontent.com/55994/28643402-949cb45a-720a-11e7-9afd-11e838e796ed.png"> I'm proposing we: - [ ] remove the error-red color since there's not necessarily anything wrong here - [ ] remove the sad smileys because the mood should be neutral - [ ] change message to the less redundant "No results found" - [ ] Vertically-center "No results found" text a little better Mockup: <img width="847" alt="image" src="https://user-images.githubusercontent.com/55994/28643444-bad764bc-720a-11e7-8ac3-a7138f58b38b.png">
1.0
[SourceSearch] Improve not-found UI - Fixes #3450 Currently, no results looks like this: <img width="544" alt="image" src="https://user-images.githubusercontent.com/55994/28643402-949cb45a-720a-11e7-9afd-11e838e796ed.png"> I'm proposing we: - [ ] remove the error-red color since there's not necessarily anything wrong here - [ ] remove the sad smileys because the mood should be neutral - [ ] change message to the less redundant "No results found" - [ ] Vertically-center "No results found" text a little better Mockup: <img width="847" alt="image" src="https://user-images.githubusercontent.com/55994/28643444-bad764bc-720a-11e7-8ac3-a7138f58b38b.png">
non_process
improve not found ui fixes currently no results looks like this img width alt image src i m proposing we remove the error red color since there s not necessarily anything wrong here remove the sad smileys because the mood should be neutral change message to the less redundant no results found vertically center no results found text a little better mockup img width alt image src
0
8,347
11,498,954,271
IssuesEvent
2020-02-12 13:04:04
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
opened
Create a small Buildkite Slack Bot to notify the author of a commit if the CI status is broken
process/candidate topic: internal
i.e: As a developer I want to get a notification in Slack when my commit breaks the CI. I would like it to give me the URL to the commit.
1.0
Create a small Buildkite Slack Bot to notify the author of a commit if the CI status is broken - i.e: As a developer I want to get a notification in Slack when my commit breaks the CI. I would like it to give me the URL to the commit.
process
create a small buildkite slack bot to notify the author of a commit if the ci status is broken i e as a developer i want to get a notification in slack when my commit breaks the ci i would like it to give me the url to the commit
1
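A notifier like the one requested in the record above can be a single small script run as a failing job's last step. The sketch below assumes Buildkite's standard build environment variables and a Slack incoming webhook whose URL arrives via a hypothetical `SLACK_WEBHOOK_URL` variable; it is a starting point, not the bot's actual implementation:

```python
import json
import os
import urllib.request

def notify_broken_build():
    """Post a Slack message with the commit URL when a build fails."""
    commit = os.environ["BUILDKITE_COMMIT"]
    build_url = os.environ["BUILDKITE_BUILD_URL"]
    author = os.environ.get("BUILDKITE_BUILD_AUTHOR", "unknown")
    text = (f":red_circle: CI broken by {author}\n"
            f"commit {commit}\n{build_url}")
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],           # hypothetical wiring
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    notify_broken_build()
```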
49,853
26,362,371,522
IssuesEvent
2023-01-11 14:17:34
treeverse/lakeFS
https://api.github.com/repos/treeverse/lakeFS
closed
UploadObject ifAbsent fails with "not found" / 500 ISE when it should succeed
bug area/cataloger area/API performance team/versioning-engine
# What lakeFS fails "upload if absent" when it should succeed. # How to reproduce Either look at the lakeFSFS tests in #4947 failing to mkdir, or apply [these patches](https://github.com/treeverse/lakeFS/files/10374445/lakectl.if-absent.patch.gz) and then try to upload to a non-existent path: ```sh ❯ echo foo | go run ./cmd/lakectl -c ~/.lakectl.prod-e2e.yaml fs upload --if-absent -s - lakefs://ariels-repo/main/upload/3 not found 500 Internal Server Error exit status 1 ``` If the path _does_ exist, it gives a nice 412 error as it should: ```sh ❯ echo foo | go run ./cmd/lakectl -c ~/.lakectl.prod-e2e.yaml fs upload --if-absent -s - lakefs://ariels-repo/main/upload/2 path already exists 412 Precondition Failed exit status 1 ``` # Context Discovered as part of work on lakeFSFS in #4947. It fails integration tests. I believe that this is due to a bug in the lakeFS server, and indeed the above patches reproduce it without lakeFSFS. Also "Internal Server Error" hints at an issue on the server.
True
UploadObject ifAbsent fails with "not found" / 500 ISE when it should succeed - # What lakeFS fails "upload if absent" when it should succeed. # How to reproduce Either look at the lakeFSFS tests in #4947 failing to mkdir, or apply [these patches](https://github.com/treeverse/lakeFS/files/10374445/lakectl.if-absent.patch.gz) and then try to upload to a non-existent path: ```sh ❯ echo foo | go run ./cmd/lakectl -c ~/.lakectl.prod-e2e.yaml fs upload --if-absent -s - lakefs://ariels-repo/main/upload/3 not found 500 Internal Server Error exit status 1 ``` If the path _does_ exist, it gives a nice 412 error as it should: ```sh ❯ echo foo | go run ./cmd/lakectl -c ~/.lakectl.prod-e2e.yaml fs upload --if-absent -s - lakefs://ariels-repo/main/upload/2 path already exists 412 Precondition Failed exit status 1 ``` # Context Discovered as part of work on lakeFSFS in #4947. It fails integration tests. I believe that this is due to a bug in the lakeFS server, and indeed the above patches reproduce it without lakeFSFS. Also "Internal Server Error" hints at an issue on the server.
non_process
uploadobject ifabsent fails with not found ise when it should succeed what lakefs fails upload if absent when it should succeed how to reproduce either look at lakefsfs in tests failing to mkdir or apply and then try to upload to a non existent path sh ❯ echo foo go run cmd lakectl c lakectl prod yaml fs upload if absent s lakefs ariels repo main upload not found internal server error exit status if the path does exist it gives a nice error as it should sh ❯ echo foo go run cmd lakectl c lakectl prod yaml fs upload if absent s lakefs ariels repo main upload path already exists precondition failed exit status context discovered as part of work on lakefsfs in it fails integration tests i believe that this is due to a bug in the lakefs server and indeed the above patches reproduce it without lakefsfs also internal server error hints at an issue on the server
0
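The contract the record above expects from "upload if absent" is the standard HTTP one: create on success, 412 when the object exists, never a 500. A client-side sketch of that contract in Python; the endpoint shape and the `If-None-Match: *` precondition header are assumptions for illustration, not a statement of the real lakeFS API:

```python
import requests

def upload_if_absent(base_url, repo, branch, path, data, token):
    """Upload an object only if the path does not already exist."""
    r = requests.post(
        f"{base_url}/repositories/{repo}/branches/{branch}/objects",
        params={"path": path},
        headers={"Authorization": f"Bearer {token}", "If-None-Match": "*"},
        data=data,
    )
    if r.status_code == 412:
        raise FileExistsError(path)  # precondition failed: path already there
    r.raise_for_status()             # a 500 here is the bug reported above
    return r.json()
```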
13,299
15,772,659,058
IssuesEvent
2021-03-31 22:06:11
CivicActions/accessibility
https://api.github.com/repos/CivicActions/accessibility
opened
Research how to encourage participation in async retrospective
process
Provide options/opportunities for more participants to engage in async retrospective.
1.0
Research how to encourage participation in async retrospective - Provide options/opportunities for more participants to engage in async retrospective.
process
research how to encourage participation in async retrospective provide options opportunities for more participants to engage in async retrospective
1
3,673
6,706,513,777
IssuesEvent
2017-10-12 07:24:45
nuclio/nuclio
https://api.github.com/repos/nuclio/nuclio
closed
Invocation duration should be supported in Python, as close to the function as possible
area/processor priority/medium
The Golang runtime measures function runtime here: https://github.com/nuclio/nuclio/blob/master/pkg/processor/runtime/golang/runtime.go#L115 The statistics are held in a struct common to both Python and Golang. Gathering this structure and publishing to Prometheus are all taken care of by a common mechanism (metricpusher). The Python runtime needs to simply populate the count/sum counters but should do the measurement as close to the function invocation as possible so as to not measure the communication / wrapper overhead.
1.0
Invocation duration should be supported in Python, as close to the function as possible - The Golang runtime measures function runtime here: https://github.com/nuclio/nuclio/blob/master/pkg/processor/runtime/golang/runtime.go#L115 The statistics are held in a struct common to both Python and Golang. Gathering this structure and publishing to Prometheus are all taken care of by a common mechanism (metricpusher). The Python runtime needs to simply populate the count/sum counters but should do the measurement as close to the function invocation as possible so as to not measure the communication / wrapper overhead.
process
invocation duration should be supported in python as close to the function as possible the golang runtime measures function runtime here the statistics are held in a struct common to both python and golang gathering this structure and publishing to prometheus are all taken care of by a common mechanism metricpusher the python runtime needs to simply populate the count sum counters but should do the measurement as close to the function invocation as possible so as to not measure the communication wrapper overhead
1
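What the nuclio record asks of the Python runtime is compact: read the clock immediately around the handler call and feed only that interval into the shared count/sum counters, so wrapper and transport overhead stay out of the metric. A hedged sketch with made-up names; this is not nuclio's actual wrapper code:

```python
import time

class InvocationStats:
    """The count/sum counter pair a metric pusher could scrape."""
    def __init__(self):
        self.count = 0
        self.duration_sum = 0.0  # seconds

def call_handler(handler, context, event, stats):
    # Measure only the user function, not the surrounding plumbing.
    start = time.perf_counter()
    try:
        return handler(context, event)
    finally:
        stats.duration_sum += time.perf_counter() - start
        stats.count += 1
```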
10,430
13,219,486,244
IssuesEvent
2020-08-17 10:34:13
RIOT-OS/RIOT
https://api.github.com/repos/RIOT-OS/RIOT
closed
RFC: Move PHY/MAC layer out of drivers
Area: network Discussion: RFC Process: API change State: stale Type: enhancement
#### Description Towards adding more features of IEEE802.15.4 and variants (TSCH, etc.), having a well-defined MAC layer, reducing code duplication, and increasing maintainability, I've been working on a series of changes like #11473 and the following proposal. Most of the PHY and MAC logic is currently implemented in the drivers (ACK management, sending packets, etc.). This leads to code duplication, radios that are not easy to maintain, and difficulty integrating full MAC features into existing radios (and adding new radios). We already have some common PHY code in `netdev_xxx_t` descriptors (e.g. `netdev_ieee802154_t`), but there is a lot of stuff that can be factored out of the drivers. In the case of IEEE802.15.4, the proposal is: - Implement PHY layer logic using `netdev_ieee802154_t` so common operations (TX, RX, CCA, Energy Detection) are transceiver agnostic. - Define a minimal `rf_ops` interface between `netdev_ieee802154_t` and the transceiver with low level operations (prepare packet, trigger send, maybe flush fifo, etc.) - Implement MAC layers on top using well known interfaces (for IEEE802.15.4, the MCPS-SAP and MLME-SAP as described in #11324 ) - A `netdev_ieee802154_ops_t` interface with specific PHY ops might be needed (cca, prepare frame, transmit, etc.) in order to accomplish most MAC layer features. For transceivers implementing PHY and MAC components, use radio caps (#11473) to decide whether a feature is handled by the layer or by the transceiver. Here is a picture of how it can look for IEEE802.15.4: ![](https://gist.githubusercontent.com/jia200x/daac639c022d5b085e0019b63eadbdb0/raw/bc9982381f1ceb524bff8aca25e73030da39b91c/PHY_MAC.png) I did a quick proof of concept for the PHY layer with the AT86RF231 radio; this is how the generic prepare, transmit and send functions could look: ```c /** * Implementation of netdev_ieee802154_ops_t members */ /* PSDU: PHY Service Data Unit. Equivalent to MPDU (Mac Protocol Data Unit), a.k.a full MAC header */ int netdev_ieee802154_prepare(netdev_ieee802154_t *dev, const iolist_t *psdu) { dev->rf_ops->prepare(dev); uint8_t psdu_len = iolist_size(psdu); /* an FCS appended by the transceiver still counts towards the PHR length */ if (dev->caps & NETDEV_IEEE802154_CAPS_TX_CHECKSUM) { psdu_len += IEEE802154_FCS_LEN; } if (psdu_len > IEEE802154_FRAME_LEN_MAX) { DEBUG("[at86rf2xx] error: packet too large (%u byte) to be sent\n", (unsigned) psdu_len); return -EOVERFLOW; } /* Write PHDR */ dev->rf_ops->tx_load(dev, &psdu_len, 1); /* load packet data into FIFO */ for (const iolist_t *iol = psdu; iol; iol = iol->iol_next) { if (iol->iol_len) { dev->rf_ops->tx_load(dev, iol->iol_base, iol->iol_len); } } /* return the number of bytes that were actually loaded into the frame * buffer/send out */ return (int)psdu_len; } void netdev_ieee802154_transmit(netdev_ieee802154_t *dev) { /* Transmit PHY Protocol Data Unit (PPDU) */ dev->rf_ops->tx_exec(dev); } int netdev_ieee802154_send(netdev_ieee802154_t *dev, const iolist_t *psdu) { int res = netdev_ieee802154_prepare(dev, psdu); if (res > 0) { netdev_ieee802154_transmit(dev); } return res; } ``` Any thoughts? ### Useful links - [IPP Hurray](http://www.open-zb.net/publications/HURRAY_TR_061106_An_IEEE_802.15.4_protocol_implementation%20_in_nesCTinyOS_%20Reference_Guide_v1.2.pdf): They used this abstraction for their IEEE802.15.4 stack using TinyOS - [Linux Kernel](https://www.kernel.org/doc/Documentation/networking/phy.txt): Check their reasons for moving the PHY layer out of the drivers
1.0
RFC: Move PHY/MAC layer out of drivers - #### Description Towards adding more features of IEEE802.15.4 and variants (TSCH, etc.), having a well-defined MAC layer, reducing code duplication, and increasing maintainability, I've been working on a series of changes like #11473 and the following proposal. Most of the PHY and MAC logic is currently implemented in the drivers (ACK management, sending packets, etc.). This leads to code duplication, radios that are not easy to maintain, and difficulty integrating full MAC features into existing radios (and adding new radios). We already have some common PHY code in `netdev_xxx_t` descriptors (e.g. `netdev_ieee802154_t`), but there is a lot of stuff that can be factored out of the drivers. In the case of IEEE802.15.4, the proposal is: - Implement PHY layer logic using `netdev_ieee802154_t` so common operations (TX, RX, CCA, Energy Detection) are transceiver agnostic. - Define a minimal `rf_ops` interface between `netdev_ieee802154_t` and the transceiver with low level operations (prepare packet, trigger send, maybe flush fifo, etc.) - Implement MAC layers on top using well known interfaces (for IEEE802.15.4, the MCPS-SAP and MLME-SAP as described in #11324 ) - A `netdev_ieee802154_ops_t` interface with specific PHY ops might be needed (cca, prepare frame, transmit, etc.) in order to accomplish most MAC layer features. For transceivers implementing PHY and MAC components, use radio caps (#11473) to decide whether a feature is handled by the layer or by the transceiver. Here is a picture of how it can look for IEEE802.15.4: ![](https://gist.githubusercontent.com/jia200x/daac639c022d5b085e0019b63eadbdb0/raw/bc9982381f1ceb524bff8aca25e73030da39b91c/PHY_MAC.png) I did a quick proof of concept for the PHY layer with the AT86RF231 radio; this is how the generic prepare, transmit and send functions could look: ```c /** * Implementation of netdev_ieee802154_ops_t members */ /* PSDU: PHY Service Data Unit. Equivalent to MPDU (Mac Protocol Data Unit), a.k.a full MAC header */ int netdev_ieee802154_prepare(netdev_ieee802154_t *dev, const iolist_t *psdu) { dev->rf_ops->prepare(dev); uint8_t psdu_len = iolist_size(psdu); /* an FCS appended by the transceiver still counts towards the PHR length */ if (dev->caps & NETDEV_IEEE802154_CAPS_TX_CHECKSUM) { psdu_len += IEEE802154_FCS_LEN; } if (psdu_len > IEEE802154_FRAME_LEN_MAX) { DEBUG("[at86rf2xx] error: packet too large (%u byte) to be sent\n", (unsigned) psdu_len); return -EOVERFLOW; } /* Write PHDR */ dev->rf_ops->tx_load(dev, &psdu_len, 1); /* load packet data into FIFO */ for (const iolist_t *iol = psdu; iol; iol = iol->iol_next) { if (iol->iol_len) { dev->rf_ops->tx_load(dev, iol->iol_base, iol->iol_len); } } /* return the number of bytes that were actually loaded into the frame * buffer/send out */ return (int)psdu_len; } void netdev_ieee802154_transmit(netdev_ieee802154_t *dev) { /* Transmit PHY Protocol Data Unit (PPDU) */ dev->rf_ops->tx_exec(dev); } int netdev_ieee802154_send(netdev_ieee802154_t *dev, const iolist_t *psdu) { int res = netdev_ieee802154_prepare(dev, psdu); if (res > 0) { netdev_ieee802154_transmit(dev); } return res; } ``` Any thoughts? ### Useful links - [IPP Hurray](http://www.open-zb.net/publications/HURRAY_TR_061106_An_IEEE_802.15.4_protocol_implementation%20_in_nesCTinyOS_%20Reference_Guide_v1.2.pdf): They used this abstraction for their IEEE802.15.4 stack using TinyOS - [Linux Kernel](https://www.kernel.org/doc/Documentation/networking/phy.txt): Check their reasons for moving the PHY layer out of the drivers
process
rfc move phy mac layer out of drivers description towards adding more features of and variants tsch etc having a well defined mac layer reduce code duplication and increase maintainability i ve been working in a series of changes like and the following proposal most of the phy and mac logic is currently implemented in the drivers ack management sending packets etc this leads to code duplication radios that are not easy to maintain and it s hard to integrate full mac features on existing radios and it s hard to add new radios we already have some common phy code in netdev xxx t descriptors e g netdev t but there are a lot of stuff that can be factorized out from the drivers in the case of the proposal is implement phy layer logic using netdev t so common operations tx rx cca energy detection are transceiver agnostic define a minimal rf ops interface between netdev t and the transceiver with low level operations prepare packet trigger send maybe flush fifo etc implement mac layers on top using well known interfaces for the mcps sap and mlme sap as described in a netdev ops t interface with specific phy ops might be needed cca prepare frame transmit etc in order to accomplish most mac layers features for transceiver implementing phy and mac components use radio caps to decide whether a feature is handled by the layer or the tranceiver here is a picture about how can it look like for i did a quick proof of concept for the phy layer with the radio and this is how generic prepare transmit and send functions look like c implementation of netdev ops t members psdu phy service data unit equivalent to mpdu mac protocol data unit a k a full mac header int netdev prepare netdev t dev const iolist t psdu dev rf ops prepare dev if dev caps netdev caps tx checksum t psdu len iolist size psdu fcs len if psdu len frame len max debug error packet too large u byte to be send n unsigned psdu len return eoverflow write phdr dev rf ops tx load dev psdu len load packet data into fifo for const iolist t iol psdu iol iol iol iol next current packet data fcs too long if iol iol len dev rx ops tx load dev iol iol base iol iol len return the number of bytes that were actually loaded into the frame buffer send out return int psdu len void netdev transmit netdev t dev transmit phy protocol data unit ppdu dev rf ops tx exec dev int netdev send netdev t dev const iolist t psdu if res netdev prepare dev psdu netdev transmit dev return res any thoughts please describe your use case why you need this feature and why this feature is important for riot useful links they used this abstraction for their stack using tinyos check their reasons on why they moved the phy layer out of the drivers
1
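The caps idea in the RIOT record splits each MAC/PHY duty by capability: if the transceiver advertises it, the generic layer stays out of the way; otherwise the layer emulates it in software. A toy Python model of that split; the flag names are invented for illustration:

```python
CAP_TX_CHECKSUM = 1 << 0  # transceiver appends the FCS itself
CAP_AUTO_ACK = 1 << 1     # transceiver performs the ACK exchange itself

def software_duties(caps):
    """List the duties the generic layer must emulate for a given radio."""
    duties = []
    if not caps & CAP_TX_CHECKSUM:
        duties.append("compute and load the FCS before triggering tx")
    if not caps & CAP_AUTO_ACK:
        duties.append("wait for the ACK frame and retransmit on timeout")
    return duties

print(software_duties(CAP_TX_CHECKSUM))
# -> ['wait for the ACK frame and retransmit on timeout']
```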
7,893
11,081,437,577
IssuesEvent
2019-12-13 09:48:05
SharryChoo/blog-gittalk
https://api.github.com/repos/SharryChoo/blog-gittalk
opened
Android System Architecture: Starting the Zygote Process - Sharry's blog
Gitalk android-source-zygote-process-start
https://sharrychoo.github.io/blog/android-source/zygote-process-start Preface: From the analysis of how the Init process starts, we learned that it reads the init.rc script file, and the configuration related to starting the zygote process is defined there. Here we take a systematic look at the role of the Zygote process. As the name suggests, Zygote means incubator: in the Android system, all application processes, as well as the System... used to run the system's key services
1.0
Android System Architecture: Starting the Zygote Process - Sharry's blog - https://sharrychoo.github.io/blog/android-source/zygote-process-start Preface: From the analysis of how the Init process starts, we learned that it reads the init.rc script file, and the configuration related to starting the zygote process is defined there. Here we take a systematic look at the role of the Zygote process. As the name suggests, Zygote means incubator: in the Android system, all application processes, as well as the System... used to run the system's key services
process
android system architecture starting the zygote process sharry s blog preface from the analysis of how the init process starts we learned that it reads the init rc script file and the configuration related to starting the zygote process is defined there here we take a systematic look at the role of the zygote process as the name suggests zygote means incubator in the android system all application processes as well as the system used to run the system s key services
1
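The zygote pattern the blog record describes is easy to model: load the expensive shared state exactly once, then fork one child per application so every child starts with that state already in memory. A toy POSIX-only Python sketch, not Android's actual implementation:

```python
import os
import sys

# Pretend this is the preloaded framework: classes, resources, and so on.
PRELOADED = {"framework": "loaded once, shared copy-on-write"}

def spawn(app_main):
    pid = os.fork()      # the zygote forks; POSIX only
    if pid == 0:         # child: inherits PRELOADED without reloading it
        app_main(PRELOADED)
        sys.exit(0)
    return pid

if __name__ == "__main__":
    child = spawn(lambda state: print("app starts with", sorted(state)))
    os.waitpid(child, 0)
```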
5,158
26,270,931,237
IssuesEvent
2023-01-06 16:55:15
aws/serverless-application-model
https://api.github.com/repos/aws/serverless-application-model
closed
SAM Api Gateway cache with queryStringParam and PathParam
type/bug maintainer/need-followup
**Description:** I would like to enable caching for the API Gateway that distinguishes requests based on QueryStringParameters and RequestParameters/PathParams. I was able to enable the cache for the ServerlessRestApi, but for some reason, no matter what I do, it just ignores the params defined. At this point I'm not even sure if this is an issue or a bug, but it would be nice to know/have a feature where I could just simply define my Methods in the global section of a CloudFormation template and have the params (both query and path params) included in caching. I also made a Stack Overflow question regarding this; for more details please check: https://stackoverflow.com/questions/57907320/aws-enable-caching-with-querrystringparameter-pathparameter-for-sam-api-gateway Example YAML template ``` AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Globals: Api: EndpointConfiguration: REGIONAL CacheClusterEnabled: true CacheClusterSize: "0.5" MethodSettings: - CachingEnabled: true CacheDataEncrypted: true CacheTtlInSeconds: 60 HttpMethod: "*" ResourcePath: "/*" - ResourcePath: "/~1item~1/~1{itemCode}" CachingEnabled: true CacheDataEncrypted: true CacheTtlInSeconds: 60 HttpMethod: "*" Resources: ...... GetItem: Type: 'AWS::Serverless::Function' Properties: Handler: GetItem.handler Runtime: nodejs8.10 Timeout: 20 CodeUri: "codes" Events: GetItem: Type: Api Properties: Path: /item/{itemCode} Method: get ...... ``` **Observed result:** Caching not enabled for params, thus returning an incorrect response. **Expected result:** Enable caching for params and distinguish requests based on the params.
True
SAM Api Gateway cache with queryStringParam and PathParam - **Description:** I would like to enable caching for the API Gateway that distinguishes requests based on QueryStringParameters and RequestParameters/PathParams. I was able to enable the cache for the ServerlessRestApi, but for some reason, no matter what I do, it just ignores the params defined. At this point I'm not even sure if this is an issue or a bug, but it would be nice to know/have a feature where I could just simply define my Methods in the global section of a CloudFormation template and have the params (both query and path params) included in caching. I also made a Stack Overflow question regarding this; for more details please check: https://stackoverflow.com/questions/57907320/aws-enable-caching-with-querrystringparameter-pathparameter-for-sam-api-gateway Example YAML template ``` AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Globals: Api: EndpointConfiguration: REGIONAL CacheClusterEnabled: true CacheClusterSize: "0.5" MethodSettings: - CachingEnabled: true CacheDataEncrypted: true CacheTtlInSeconds: 60 HttpMethod: "*" ResourcePath: "/*" - ResourcePath: "/~1item~1/~1{itemCode}" CachingEnabled: true CacheDataEncrypted: true CacheTtlInSeconds: 60 HttpMethod: "*" Resources: ...... GetItem: Type: 'AWS::Serverless::Function' Properties: Handler: GetItem.handler Runtime: nodejs8.10 Timeout: 20 CodeUri: "codes" Events: GetItem: Type: Api Properties: Path: /item/{itemCode} Method: get ...... ``` **Observed result:** Caching not enabled for params, thus returning an incorrect response. **Expected result:** Enable caching for params and distinguish requests based on the params.
non_process
sam api gateway cache with querystringparam and pathparam description i would like to enable caching for the api gateway which distinguish requests based on querystringparameters and requestparameters pathparams i was able to enable cache for the serverlessrestapi but for some reason doesn t matter what i do it just ignores the params defined at this point im not even sure if this is an issue or bug but this would be nice to know have a feature where i could just simply define my methods in the global section of a cloudformation template and would include params both query and path params in caching i also made a stack overflow question regarding this for more details please check example yaml template awstemplateformatversion transform aws serverless globals api endpointconfiguration regional cacheclusterenabled true cacheclustersize methodsettings cachingenabled true cachedataencrypted true cachettlinseconds httpmethod resourcepath resourcepath itemcode cachingenabled true cachedataencrypted true cachettlinseconds httpmethod resources getitem type aws serverless function properties handler getitem handler runtime timeout codeuri codes events getitem type api properties path item itemcode method get observed result caching not enabled for params thus returning incorrect response expected result enable caching for params and distinguish requests based on the params
0
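The incorrect responses in the record above are what any cache produces when the request parameters are left out of the cache key: two different item codes collapse onto one entry. A generic toy model of that failure mode, unrelated to API Gateway's internals:

```python
cache = {}

def get(resource, params, *, params_in_key):
    # resource is the route template, e.g. "/item/{itemCode}";
    # params holds the path and query string values of this request.
    key = (resource, tuple(sorted(params.items()))) if params_in_key else resource
    if key not in cache:
        cache[key] = f"fresh response for {params}"  # pretend backend call
    return cache[key]

print(get("/item/{itemCode}", {"itemCode": "A1"}, params_in_key=False))
print(get("/item/{itemCode}", {"itemCode": "B2"}, params_in_key=False))
# Both lines print the A1 response: B2 was served the wrong cached body.
```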
412,019
27,841,211,250
IssuesEvent
2023-03-20 12:54:17
yesvelte/yesvelte
https://api.github.com/repos/yesvelte/yesvelte
closed
Switch has `text` prop and Radio and Checkbox have `label` prop
documentation enhancement
@pournasserian should it be `label` or `text`?
1.0
Switch has `text` prop and Radio and Checkbox have `label` prop - @pournasserian should it be `label` or `text`?
non_process
switch has text prop and radio and checkbox have label prop pournasserian should it be label or text
0
128,318
18,046,651,236
IssuesEvent
2021-09-19 02:04:54
MidnightBSD/src
https://api.github.com/repos/MidnightBSD/src
reopened
CVE-2016-10012 (High) detected in openssh-portablehpn-PeakTput-7_9_P1, openssh-portablehpn-PeakTput-7_9_P1
security vulnerability
## CVE-2016-10012 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>openssh-portablehpn-PeakTput-7_9_P1</b>, <b>openssh-portablehpn-PeakTput-7_9_P1</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The shared memory manager (associated with pre-authentication compression) in sshd in OpenSSH before 7.4 does not ensure that a bounds check is enforced by all compilers, which might allows local users to gain privileges by leveraging access to a sandboxed privilege-separation process, related to the m_zback and m_zlib data structures. <p>Publish Date: 2017-01-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10012>CVE-2016-10012</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://gitlab.alpinelinux.org/alpine/aports/issues/6583">https://gitlab.alpinelinux.org/alpine/aports/issues/6583</a></p> <p>Release Date: 2017-01-05</p> <p>Fix Resolution: 7.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-10012 (High) detected in openssh-portablehpn-PeakTput-7_9_P1, openssh-portablehpn-PeakTput-7_9_P1 - ## CVE-2016-10012 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>openssh-portablehpn-PeakTput-7_9_P1</b>, <b>openssh-portablehpn-PeakTput-7_9_P1</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The shared memory manager (associated with pre-authentication compression) in sshd in OpenSSH before 7.4 does not ensure that a bounds check is enforced by all compilers, which might allows local users to gain privileges by leveraging access to a sandboxed privilege-separation process, related to the m_zback and m_zlib data structures. <p>Publish Date: 2017-01-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10012>CVE-2016-10012</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://gitlab.alpinelinux.org/alpine/aports/issues/6583">https://gitlab.alpinelinux.org/alpine/aports/issues/6583</a></p> <p>Release Date: 2017-01-05</p> <p>Fix Resolution: 7.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in openssh portablehpn peaktput openssh portablehpn peaktput cve high severity vulnerability vulnerable libraries openssh portablehpn peaktput openssh portablehpn peaktput vulnerability details the shared memory manager associated with pre authentication compression in sshd in openssh before does not ensure that a bounds check is enforced by all compilers which might allows local users to gain privileges by leveraging access to a sandboxed privilege separation process related to the m zback and m zlib data structures publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
35,519
9,605,605,534
IssuesEvent
2019-05-11 01:39:02
dealii/dealii
https://api.github.com/repos/dealii/dealii
closed
Refactor DEAL_II_WITH_CUDA_AWARE_MPI into a non DEAL_II_WITH variable
Bug Build system
Umh, I somehow missed this. The fact that `DEAL_II_WITH_CUDA_AWARE_MPI` is a "feature" variable is in my opinion a bit suboptimal. We have a number of features (such as CUDA) that have some subordinate configuration options. Would it be possible to refactor `DEAL_II_WITH_CUDA_AWARE_MPI` into an option `DEAL_II_CUDA_WITH_MPI` and configure it together with `DEAL_II_WITH_CUDA`? (In short I would like to see that a feature, i.e. something with `DEAL_II_WITH` should only refer to an external/internal feature - such as an external library). @Rombur @masterleinad *ping*
1.0
Refactor DEAL_II_WITH_CUDA_AWARE_MPI into a non DEAL_II_WITH variable - Umh, I somehow missed this. The fact that `DEAL_II_WITH_CUDA_AWARE_MPI` is a "feature" variable is in my opinion a bit suboptimal. We have a number of features (such as CUDA) that have some subordinate configuration options. Would it be possible to refactor `DEAL_II_WITH_CUDA_AWARE_MPI` into an option `DEAL_II_CUDA_WITH_MPI` and configure it together with `DEAL_II_WITH_CUDA`? (In short I would like to see that a feature, i.e. something with `DEAL_II_WITH` should only refer to an external/internal feature - such as an external library). @Rombur @masterleinad *ping*
non_process
refactor deal ii with cuda aware mpi into a non deal ii with variable umh i somehow missed this the fact that deal ii with cuda aware mpi is a feature variable is in my opinion a bit suboptimal we have a number of features such as cuda that have some subordinate configuration options would it be possible to refactor deal ii with cuda aware mpi into an option deal ii cuda with mpi and configure it together with deal ii with cuda in short i would like to see that a feature i e something with deal ii with should only refer to an external internal feature such as an external library rombur masterleinad ping
0
254,901
27,442,637,385
IssuesEvent
2023-03-02 12:09:22
loftwah/shop.grindmodecypher.com
https://api.github.com/repos/loftwah/shop.grindmodecypher.com
reopened
CVE-2022-31043 (High) detected in guzzlehttp/guzzle-6.3.3, guzzlehttp/guzzle-6.3.0
security vulnerability
## CVE-2022-31043 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>guzzlehttp/guzzle-6.3.3</b></p></summary> <p>Guzzle, an extensible PHP HTTP client</p> <p> Dependency Hierarchy: - :x: **guzzlehttp/guzzle-6.3.3** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/loftwah/shop.grindmodecypher.com/commit/ab76dd905220f63a5e50d7a6c36543f1d876d52a">ab76dd905220f63a5e50d7a6c36543f1d876d52a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Guzzle is an open source PHP HTTP client. In affected versions `Authorization` headers on requests are sensitive information. On making a request using the `https` scheme to a server which responds with a redirect to a URI with the `http` scheme, we should not forward the `Authorization` header on. This is much the same as to how we don't forward on the header if the host changes. Prior to this fix, `https` to `http` downgrades did not result in the `Authorization` header being removed, only changes to the host. Affected Guzzle 7 users should upgrade to Guzzle 7.4.4 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.7 or 7.4.4. Users unable to upgrade may consider an alternative approach which would be to use their own redirect middleware. Alternately users may simply disable redirects all together if redirects are not expected or required. <p>Publish Date: 2022-06-10 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-31043>CVE-2022-31043</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-w248-ffj2-4v5q">https://github.com/guzzle/guzzle/security/advisories/GHSA-w248-ffj2-4v5q</a></p> <p>Release Date: 2022-06-10</p> <p>Fix Resolution: 6.5.7,7.4.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-31043 (High) detected in guzzlehttp/guzzle-6.3.3, guzzlehttp/guzzle-6.3.0 - ## CVE-2022-31043 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>guzzlehttp/guzzle-6.3.3</b></p></summary> <p>Guzzle, an extensible PHP HTTP client</p> <p> Dependency Hierarchy: - :x: **guzzlehttp/guzzle-6.3.3** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/loftwah/shop.grindmodecypher.com/commit/ab76dd905220f63a5e50d7a6c36543f1d876d52a">ab76dd905220f63a5e50d7a6c36543f1d876d52a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Guzzle is an open source PHP HTTP client. In affected versions `Authorization` headers on requests are sensitive information. On making a request using the `https` scheme to a server which responds with a redirect to a URI with the `http` scheme, we should not forward the `Authorization` header on. This is much the same as to how we don't forward on the header if the host changes. Prior to this fix, `https` to `http` downgrades did not result in the `Authorization` header being removed, only changes to the host. Affected Guzzle 7 users should upgrade to Guzzle 7.4.4 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.7 or 7.4.4. Users unable to upgrade may consider an alternative approach which would be to use their own redirect middleware. Alternately users may simply disable redirects all together if redirects are not expected or required. <p>Publish Date: 2022-06-10 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-31043>CVE-2022-31043</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-w248-ffj2-4v5q">https://github.com/guzzle/guzzle/security/advisories/GHSA-w248-ffj2-4v5q</a></p> <p>Release Date: 2022-06-10</p> <p>Fix Resolution: 6.5.7,7.4.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in guzzlehttp guzzle guzzlehttp guzzle cve high severity vulnerability vulnerable library guzzlehttp guzzle guzzle an extensible php http client dependency hierarchy x guzzlehttp guzzle vulnerable library found in head commit a href found in base branch master vulnerability details guzzle is an open source php http client in affected versions authorization headers on requests are sensitive information on making a request using the https scheme to a server which responds with a redirect to a uri with the http scheme we should not forward the authorization header on this is much the same as to how we don t forward on the header if the host changes prior to this fix https to http downgrades did not result in the authorization header being removed only changes to the host affected guzzle users should upgrade to guzzle as soon as possible affected users using any earlier series of guzzle should upgrade to guzzle or users unable to upgrade may consider an alternative approach which would be to use their own redirect middleware alternately users may simply disable redirects all together if redirects are not expected or required publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
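The guzzle advisory above reduces to one redirect rule: a credential attached to an https request must not follow a redirect that changes the host or downgrades the scheme to plain http. A small Python statement of that rule, illustrative rather than Guzzle's actual patch:

```python
from urllib.parse import urlsplit

def keep_authorization(old_url, new_url):
    """May the Authorization header follow this redirect?"""
    old, new = urlsplit(old_url), urlsplit(new_url)
    if old.hostname != new.hostname:
        return False  # host changed: never forward credentials
    if old.scheme == "https" and new.scheme == "http":
        return False  # the downgrade Guzzle < 6.5.7 / 7.4.4 missed
    return True

assert keep_authorization("https://a.example/x", "https://a.example/y")
assert not keep_authorization("https://a.example/x", "http://a.example/y")
assert not keep_authorization("https://a.example/x", "https://b.example/y")
```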
62,566
14,656,539,587
IssuesEvent
2020-12-28 13:39:07
fu1771695yongxie/marked
https://api.github.com/repos/fu1771695yongxie/marked
opened
CVE-2020-11023 (Medium) detected in jquery-1.8.1.min.js
security vulnerability
## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: marked/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: marked/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/marked/commit/490bcd0a9c060e357156fca27c3e585accaa8491">490bcd0a9c060e357156fca27c3e585accaa8491</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-11023 (Medium) detected in jquery-1.8.1.min.js - ## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: marked/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: marked/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/marked/commit/490bcd0a9c060e357156fca27c3e585accaa8491">490bcd0a9c060e357156fca27c3e585accaa8491</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file marked node modules redeyed examples browser index html path to vulnerable library marked node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
0
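A quick way to act on the range quoted in this record (affected: greater than or equal to 1.0.3 and before 3.5.0) is a version predicate. This sketch assumes plain dotted numeric version strings like the 1.8.1 found here:

```python
def affected_by_cve_2020_11023(version: str) -> bool:
    """True when a jQuery version falls in the advisory's affected range."""
    parts = tuple(int(p) for p in version.split("."))
    return (1, 0, 3) <= parts < (3, 5, 0)

assert affected_by_cve_2020_11023("1.8.1")      # the bundled copy above
assert not affected_by_cve_2020_11023("3.5.0")  # the patched release
```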
2,961
5,957,443,022
IssuesEvent
2017-05-29 02:10:27
jacklam718/react-native-button-component
https://api.github.com/repos/jacklam718/react-native-button-component
closed
Error linking module
child_process
Hi, I tried to use this package but when I do the link it throws this error and it won't build ``` child_process.js:526 throw err; ^ Error: Command failed: node node_modules/react-native/local-cli/cli.js link react-native-spinkit && node node_modules/react-native/local-cli/cli.js link react-native-linear-gradient rnpm-install info Linking react-native-spinkit android dependency rnpm-install info Android module react-native-spinkit has been successfully linked rnpm-install ERR! Something went wrong while linking. Error: Expected "/*" or ";" but "\"" found. ``` I'm using RN Version 0.44 on Android 6.0
1.0
Error linking module - Hi, I tried to use this package but when I do the link it throws this error and it won't build ``` child_process.js:526 throw err; ^ Error: Command failed: node node_modules/react-native/local-cli/cli.js link react-native-spinkit && node node_modules/react-native/local-cli/cli.js link react-native-linear-gradient rnpm-install info Linking react-native-spinkit android dependency rnpm-install info Android module react-native-spinkit has been successfully linked rnpm-install ERR! Something went wrong while linking. Error: Expected "/*" or ";" but "\"" found. ``` I'm using RN Version 0.44 on Android 6.0
process
error linking module hi i tried to use this package but when i do the link it throws this error and it won t build child process js throw err error command failed node node modules react native local cli cli js link react native spinkit node node modules react native local cli cli js link react native linear gradient rnpm install info linking react native spinkit android dependency rnpm install info android module react native spinkit has been successfully linked rnpm install err something went wrong while linking error expected or but found i m using rn version on android
1
46,714
13,055,963,212
IssuesEvent
2020-07-30 03:15:00
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
ports install of lhapdf 5.8.7 fails, Ubuntu 16.04 (Trac #1752)
Incomplete Migration Migrated from Trac defect tools/ports
Migrated from https://code.icecube.wisc.edu/ticket/1752 ```json { "status": "closed", "changetime": "2016-06-21T15:22:02", "description": "This dependency failing causes Genie 2.8.6 to fail to install as well.", "reporter": "jlanfranchi", "cc": "", "resolution": "fixed", "_ts": "1466522522975275", "component": "tools/ports", "summary": "ports install of lhapdf 5.8.7 fails, Ubuntu 16.04", "priority": "normal", "keywords": "", "time": "2016-06-20T20:57:58", "milestone": "", "owner": "nega", "type": "defect" } ```
1.0
ports install of lhapdf 5.8.7 fails, Ubuntu 16.04 (Trac #1752) - Migrated from https://code.icecube.wisc.edu/ticket/1752 ```json { "status": "closed", "changetime": "2016-06-21T15:22:02", "description": "This dependency failing causes Genie 2.8.6 to fail to install as well.", "reporter": "jlanfranchi", "cc": "", "resolution": "fixed", "_ts": "1466522522975275", "component": "tools/ports", "summary": "ports install of lhapdf 5.8.7 fails, Ubuntu 16.04", "priority": "normal", "keywords": "", "time": "2016-06-20T20:57:58", "milestone": "", "owner": "nega", "type": "defect" } ```
non_process
ports install of lhapdf fails ubuntu trac migrated from json status closed changetime description this dependency failing causes genie to fail to install as well reporter jlanfranchi cc resolution fixed ts component tools ports summary ports install of lhapdf fails ubuntu priority normal keywords time milestone owner nega type defect
0
2,244
5,088,645,734
IssuesEvent
2016-12-31 23:56:06
sw4j-org/tool-jpa-processor
https://api.github.com/repos/sw4j-org/tool-jpa-processor
opened
Handle @MapKeyEnumerated Annotation
annotation processor task
Handle the `@MapKeyEnumerated` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.34 MapKeyEnumerated Annotation
1.0
Handle @MapKeyEnumerated Annotation - Handle the `@MapKeyEnumerated` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.34 MapKeyEnumerated Annotation
process
handle mapkeyenumerated annotation handle the mapkeyenumerated annotation for a property or field see mapkeyenumerated annotation
1
304,798
9,335,656,505
IssuesEvent
2019-03-28 19:08:08
department-of-veterans-affairs/caseflow
https://api.github.com/repos/department-of-veterans-affairs/caseflow
closed
Intake | Appeal not valid to reopen
In-Progress bug-medium-priority sierra
https://sentry.ds.va.gov/department-of-veterans-affairs/caseflow/issues/4143/events/281428/ AppealRepository::AppealNotValidToReopen: Appeal id 65075 is not valid to reopen
1.0
Intake | Appeal not valid to reopen - https://sentry.ds.va.gov/department-of-veterans-affairs/caseflow/issues/4143/events/281428/ AppealRepository::AppealNotValidToReopen: Appeal id 65075 is not valid to reopen
non_process
intake appeal not valid to reopen appealrepository appealnotvalidtoreopen appeal id is not valid to reopen
0
191,272
22,215,726,020
IssuesEvent
2022-06-08 01:17:16
Nivaskumark/kernel_v4.1.15
https://api.github.com/repos/Nivaskumark/kernel_v4.1.15
reopened
CVE-2017-17448 (High) detected in linuxlinux-4.6
security vulnerability
## CVE-2017-17448 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nfnetlink_cthelper.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nfnetlink_cthelper.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> net/netfilter/nfnetlink_cthelper.c in the Linux kernel through 4.14.4 does not require the CAP_NET_ADMIN capability for new, get, and del operations, which allows local users to bypass intended access restrictions because the nfnl_cthelper_list data structure is shared across all net namespaces. <p>Publish Date: 2017-12-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-17448>CVE-2017-17448</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17448">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17448</a></p> <p>Release Date: 2017-12-07</p> <p>Fix Resolution: v4.15-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-17448 (High) detected in linuxlinux-4.6 - ## CVE-2017-17448 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nfnetlink_cthelper.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nfnetlink_cthelper.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> net/netfilter/nfnetlink_cthelper.c in the Linux kernel through 4.14.4 does not require the CAP_NET_ADMIN capability for new, get, and del operations, which allows local users to bypass intended access restrictions because the nfnl_cthelper_list data structure is shared across all net namespaces. <p>Publish Date: 2017-12-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-17448>CVE-2017-17448</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17448">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17448</a></p> <p>Release Date: 2017-12-07</p> <p>Fix Resolution: v4.15-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files net netfilter nfnetlink cthelper c net netfilter nfnetlink cthelper c vulnerability details net netfilter nfnetlink cthelper c in the linux kernel through does not require the cap net admin capability for new get and del operations which allows local users to bypass intended access restrictions because the nfnl cthelper list data structure is shared across all net namespaces publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
10,738
13,534,921,935
IssuesEvent
2020-09-16 06:41:52
nion-software/nionswift
https://api.github.com/repos/nion-software/nionswift
closed
Line profile of a sequence/collection fails
f - line-profile f - processing priority - critical type - bug
The processing uses 'xdata' so that it can handle complex data (see #233 and f0665f2ac12c638bf8c5b0d52f8c2812a64ea9c3); however, this fails for sequences and collections. There needs to be a way to get the data to be displayed (with sequence, collection, and slices applied) before the conversion to complex. Perhaps `data_item.used_data`? It might also be time to rethink the different ways of accessing the raw data, the 1d/2d data, the display data, the cropped data, the filtered data, the filter mask, etc. Is there a more uniform way?
1.0
Line profile of a sequence/collection fails - The processing uses 'xdata' so that it can handle complex data (see #233 and f0665f2ac12c638bf8c5b0d52f8c2812a64ea9c3); however, this fails for sequences and collections. There needs to be a way to get the data to be displayed (with sequence, collection, and slices applied) before the conversion to complex. Perhaps `data_item.used_data`? It might also be time to rethink the different ways of accessing the raw data, the 1d/2d data, the display data, the cropped data, the filtered data, the filter mask, etc. Is there a more uniform way?
process
line profile of a sequence collection fails the processing uses xdata so that it can handle complex data see and however this fails for sequences and collections there needs to be a way to get the data to be displayed with sequence collection and slices applied before the conversion to complex perhaps data item used data it might also be time to rethink the different ways of accessing the raw data the data the display data the cropped data the filtered data the filter mask etc is there a more uniform way
1
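The record above hinges on ordering: the sequence/collection index and any slices must be applied before complex data is converted for display. A sketch of that ordering in plain numpy terms; `displayed_frame` and the modulus choice are assumptions for illustration, not Nion Swift's actual API.

```python
import numpy as np

def displayed_frame(data: np.ndarray, index: int = 0) -> np.ndarray:
    """Reduce a sequence/collection to the frame being displayed first,
    then make complex data displayable, in that order."""
    frame = data[index] if data.ndim > 2 else data
    if np.iscomplexobj(frame):
        frame = np.abs(frame)  # modulus; log-abs is another common display choice
    return frame
```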
19,583
25,915,853,595
IssuesEvent
2022-12-15 17:17:27
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Problem when running goaccess with cron
question log-processing
I have the following script `updateStats.sh` that works great when I run it manually but fails when run on Ubuntu via cron with `0 * * * * /var/www/html/updateStats.sh` `cat updateStats.sh`: ``` goaccess /var/log/nginx/access.log -o /var/www/html/nginxDaily.html --log-format=COMBINED --anonymize-ip zcat -f /var/log/nginx/access.log* | goaccess -o /var/www/html/nginxDailyAll.html --log-format=COMBINED --anonymize-ip ``` `cat cron.log` provides: ``` [PARSING /var/log/nginx/access.log] {4907} @ {0/s} Cleaning up resources... GoAccess - version 1.5.5 - Feb 5 2022 18:37:15 Config file: /etc/goaccess/goaccess.conf Fatal error has occurred Error occurred at: src/goaccess.c - initializer - 1457 No input data was provided nor there's data to restore. ``` The first line runs fine. The second line fails. Any ideas what I'm doing wrong?
1.0
Problem when running goaccess with cron - I have the following script `updateStats.sh` that works great when I run it manually but fails when run on Ubuntu via cron with `0 * * * * /var/www/html/updateStats.sh` `cat updateStats.sh`: ``` goaccess /var/log/nginx/access.log -o /var/www/html/nginxDaily.html --log-format=COMBINED --anonymize-ip zcat -f /var/log/nginx/access.log* | goaccess -o /var/www/html/nginxDailyAll.html --log-format=COMBINED --anonymize-ip ``` `cat cron.log` provides: ``` [PARSING /var/log/nginx/access.log] {4907} @ {0/s} Cleaning up resources... GoAccess - version 1.5.5 - Feb 5 2022 18:37:15 Config file: /etc/goaccess/goaccess.conf Fatal error has occurred Error occurred at: src/goaccess.c - initializer - 1457 No input data was provided nor there's data to restore. ``` The first line runs fine. The second line fails. Any ideas what I'm doing wrong?
process
problem when running goaccess with cron i have the following script updatestats sh that works great when i run it manually but fails when run on ubuntu via cron with var www html updatestats sh cat updatestats sh goaccess var log nginx access log o var www html nginxdaily html log format combined anonymize ip zcat f var log nginx access log goaccess o var www html nginxdailyall html log format combined anonymize ip cat cron log provides s cleaning up resources goaccess version feb config file etc goaccess goaccess conf fatal error has occurred error occurred at src goaccess c initializer no input data was provided nor there s data to restore the first line runs fine the second line fails any ideas what i m doing wrong
1
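Two common culprits when a pipeline like the one above works interactively but dies under cron are a glob that matches nothing and cron's minimal PATH. A hedged Python sketch that makes both failure modes explicit; the paths mirror the script above, but whether either culprit applies in this report is an assumption.

```python
import glob
import subprocess
import sys

# Expand the glob ourselves so an empty match fails loudly instead of
# handing goaccess an empty stdin ("No input data was provided").
logs = sorted(glob.glob("/var/log/nginx/access.log*"))
if not logs:
    sys.exit("no nginx logs matched the glob")

zcat = subprocess.Popen(["zcat", "-f", *logs], stdout=subprocess.PIPE)
# Cron's PATH is minimal; an absolute path to goaccess may be required.
subprocess.run(
    ["goaccess", "-o", "/var/www/html/nginxDailyAll.html",
     "--log-format=COMBINED", "--anonymize-ip"],
    stdin=zcat.stdout, check=True)
zcat.wait()
```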
115,376
11,872,954,861
IssuesEvent
2020-03-26 16:34:03
laminas/laminas.github.io
https://api.github.com/repos/laminas/laminas.github.io
closed
Add information to Migration guide / FAQ about failing cases
Documentation Enhancement
### Documentation needed As discussed in https://github.com/laminas/laminas-zendframework-bridge/pull/56#discussion_r398152665 we need to add information to the migration guide / FAQ about cases that fail during the migration: 1. ```php use Zend\Expressive\Authorization\Acl; Acl\ZendAcl::class; ``` it will be migrated to: ```php use Mezzio\Authorization\Acl; Acl\ZendAcl::class; // <-- wrong, should be Acl\LaminasAcl::class; ``` 2. ```php use Some\Vendor\ZendAcl; ZendAcl::class; ``` after migration it becomes: ```php use Some\Vendor\ZendAcl; LaminasAcl::class; // <-- wrong, as this class does not exist; it should be ZendAcl::class ``` We cannot really fix this, because it is more important that the config post processor replaces names correctly than that every edge case of the migration works. Users can fix it manually after migration, and we cannot currently provide any reliable tool to fix it automatically.
1.0
Add information to Migration guide / FAQ about failing cases - ### Documentation needed As discussed in https://github.com/laminas/laminas-zendframework-bridge/pull/56#discussion_r398152665 we need to add information to the migration guide / FAQ about cases that fail during the migration: 1. ```php use Zend\Expressive\Authorization\Acl; Acl\ZendAcl::class; ``` it will be migrated to: ```php use Mezzio\Authorization\Acl; Acl\ZendAcl::class; // <-- wrong, should be Acl\LaminasAcl::class; ``` 2. ```php use Some\Vendor\ZendAcl; ZendAcl::class; ``` after migration it becomes: ```php use Some\Vendor\ZendAcl; LaminasAcl::class; // <-- wrong, as this class does not exist; it should be ZendAcl::class ``` We cannot really fix this, because it is more important that the config post processor replaces names correctly than that every edge case of the migration works. Users can fix it manually after migration, and we cannot currently provide any reliable tool to fix it automatically.
non_process
add information to migration guide faq about failing cases documentation needed as discussed in we need add information ot migration guide faq about failing cases during the migration php use zend expressive authorization acl acl zendacl class it will be migrated to php use mezzio authorization acl acl zendacl class wrong should be acl laminasacl class php use some vendor zendacl zendacl class after migration it becomes php use some vendor zendacl laminasacl class wrong as this class does not exist it should be zendacl class we cannot really fix it because it is more important that config post processor replaces names correctly than all edge cases migration works fine this can be fixed manually by users after migration and we cannot provide atm any reliable tool to fix it automatically
0
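Since the record above says leftover cases must be fixed manually, a simple scan for identifiers still containing "Zend" after migration can at least surface them for review. A minimal sketch; the .php glob and the report format are assumptions.

```python
import re
import sys
from pathlib import Path

# Print every line that still mentions a Zend-named identifier, such as
# the Acl\ZendAcl::class and LaminasAcl cases described above.
pattern = re.compile(r"\w*Zend\w*")
root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
for path in root.rglob("*.php"):
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if pattern.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```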
609
2,589,976,103
IssuesEvent
2015-02-18 16:14:09
arrayfire/arrayfire
https://api.github.com/repos/arrayfire/arrayfire
closed
Can't build on MacOsX 10.9.5 - libafopencl.dylib
build OSX
Hi, I did all the steps outlined in the Readme for installing on Mac OS. Instead of `homebrew` I am using `macports`. On the step when I try to build arrayfire I get the following error: `Linking CXX shared library libafopencl.dylib clang: warning: argument unused during compilation: '-pthread' [ 52%] Built target afopencl make: *** [all] Error 2` I have no idea what is wrong.
1.0
Can't build on MacOsX 10.9.5 - libafopencl.dylib - Hi, I did all the steps outlined in the Readme for installing on Mac OS. Instead of `homebrew` I am using `macports`. On the step when I try to build arrayfire I get the following error: `Linking CXX shared library libafopencl.dylib clang: warning: argument unused during compilation: '-pthread' [ 52%] Built target afopencl make: *** [all] Error 2` I have no idea what is wrong.
non_process
can t build on macosx libafopencl dylib hi i did all the steps outlined in readme for installing for mac os instead of homebrew i am using macports on the step when i try to build arrayfire i get the following error code linking cxx shared library libafopencl dylib clang warning argument unused during compilation pthread built target afopencl make error i have no idea what is wrong
0
487,480
14,047,348,023
IssuesEvent
2020-11-02 06:58:17
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
news.google.com - see bug description
browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
<!-- @browser: Firefox Mobile 83.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/60921 --> <!-- @extra_labels: browser-fenix --> **URL**: https://news.google.com/topstories?hl=en-US&gl=US&ceid=US:en **Browser / Version**: Firefox Mobile 83.0 **Operating System**: Android **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: Menus don't open properly, freezes Firefox **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201025174155</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/11/608e0ce0-8c73-45e8-a808-6eae5a6c970e) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
news.google.com - see bug description - <!-- @browser: Firefox Mobile 83.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/60921 --> <!-- @extra_labels: browser-fenix --> **URL**: https://news.google.com/topstories?hl=en-US&gl=US&ceid=US:en **Browser / Version**: Firefox Mobile 83.0 **Operating System**: Android **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: Menus don't open properly, freezes Firefox **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201025174155</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/11/608e0ce0-8c73-45e8-a808-6eae5a6c970e) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
news google com see bug description url browser version firefox mobile operating system android tested another browser yes chrome problem type something else description menus don t open properly freezes firefox steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
3,949
6,889,993,320
IssuesEvent
2017-11-22 12:26:27
GoogleCloudPlatform/google-cloud-dotnet
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-dotnet
closed
runintegrationtests should allow APIs to be specified
type: process
build.sh only builds specified projects; ditto builddocs.sh. buildrelease.sh only builds the projects that are going to be released runintegrationtests.sh should do the same thing - and buildrelease.sh should either run them itself or dump the command line to run.
1.0
runintegrationtests should allow APIs to be specified - build.sh only builds specified projects; ditto builddocs.sh. buildrelease.sh only builds the projects that are going to be released runintegrationtests.sh should do the same thing - and buildrelease.sh should either run them itself or dump the command line to run.
process
runintegrationtests should allow apis to be specified build sh only builds specified projects ditto builddocs sh buildrelease sh only builds the projects that are going to be released runintegrationtests sh should do the same thing and buildrelease sh should either run them itself or dump the command line to run
1
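The request above is an interface question: accept an explicit project list, defaulting to everything, the way build.sh already does. A sketch of that shape in Python; the apis/ layout and the `dotnet test` command are assumptions about the repository, not its actual scripts.

```python
import argparse
import glob
import subprocess

parser = argparse.ArgumentParser(description="Run integration tests for selected projects.")
parser.add_argument("projects", nargs="*", help="projects to test; empty means all")
args = parser.parse_args()

# Hypothetical layout: one directory per API project under apis/.
all_projects = [p.rstrip("/") for p in sorted(glob.glob("apis/*/"))]
for project in args.projects or all_projects:
    subprocess.run(["dotnet", "test", project], check=True)
```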
20,417
27,075,836,441
IssuesEvent
2023-02-14 10:30:24
billingran/Newsletter
https://api.github.com/repos/billingran/Newsletter
closed
A message asking the user to choose at least one interest.
processing... Brief 2
- [ ] A message asking the user to choose at least one interest.
1.0
A message asking the user to choose at least one interest. - - [ ] A message asking the user to choose at least one interest.
process
a message asking the user to choose at least one interest a message asking the user to choose at least one interest
1
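The checklist item above asks for a message when no interest is chosen. A minimal validation sketch, assuming selections arrive as a list; the function name and message wording are invented for illustration.

```python
def interests_error(selected: list) -> str:
    """Return an error message when no interest is selected, else an empty string."""
    return "" if selected else "Please choose at least one interest."

assert interests_error(["sports"]) == ""
assert interests_error([]) == "Please choose at least one interest."
```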
730
3,214,309,847
IssuesEvent
2015-10-07 00:42:39
broadinstitute/hellbender-dataflow
https://api.github.com/repos/broadinstitute/hellbender-dataflow
opened
Add mechanism to generate unique ids for data types (reads, variants, etc.)
Dataflow DataflowPreprocessingPipeline
_From @droazen on May 28, 2015 18:28_ Needed for GroupByKey, since Java serialization is not deterministic. The initial idea is to create IDs based on the source of each record (e.g., URI + file offset or record number). _Copied from original issue: broadinstitute/hellbender#532_
1.0
Add mechanism to generate unique ids for data types (reads, variants, etc.) - _From @droazen on May 28, 2015 18:28_ Needed for GroupByKey, since Java serialization is not deterministic. The initial idea is to create IDs based on the source of each record (e.g., URI + file offset or record number). _Copied from original issue: broadinstitute/hellbender#532_
process
add mechanism to generate unique ids for data types reads variants etc from droazen on may needed for groupbykey since java serialization is not deterministic initial idea is to create ids based on the source of each record eg uri file offset or record number copied from original issue broadinstitute hellbender
1
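The idea in the record above, deriving the ID from the record's source so it is deterministic across runs (unlike Java serialization), fits in a few lines. This sketch assumes a URI plus a file offset or record number identifies a record:

```python
import hashlib

def record_id(source_uri: str, offset: int) -> str:
    """Deterministic ID from the record's provenance, safe to use as a
    GroupByKey key because repeated runs produce the same value."""
    return hashlib.sha1(f"{source_uri}:{offset}".encode()).hexdigest()

assert record_id("gs://bucket/reads.bam", 4096) == record_id("gs://bucket/reads.bam", 4096)
```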
14,725
25,518,363,132
IssuesEvent
2022-11-28 18:13:13
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
opened
Terraform manager doesn't properly use existing tag on Kubernetes resource
type:bug status:requirements priority-5-triage
### How are you running Renovate? Self-hosted ### If you're self-hosting Renovate, tell us what version of Renovate you run. latest ### If you're self-hosting Renovate, select which platform you are using. github.com ### If you're self-hosting Renovate, tell us what version of the platform you run. github.com ### Was this something which used to work for you, and then stopped? I never saw this working ### Describe the bug I am using the image: https://github.com/onedr0p/home-ops/blob/1879d378fd92914977701f8435a1f3cc6557671a/terraform/storage/app_vector_agent.tf#L37 Renovate seems to ignore the existing `-debian` suffix and wants to update it to a tag with a `-distroless-static` suffix The PR that renovate opened is here: https://github.com/onedr0p/home-ops/pull/4279 ### Relevant debug logs Full job run can be viewed at https://github.com/onedr0p/home-ops/actions/runs/3567394786/jobs/5995040016 <details><summary>Logs</summary> ``` { "packageFile": "terraform/storage/app_vector_agent.tf", "deps": [ {"skipReason": "invalid-name", "depIndex": 0, "updates": []}, { "depName": "docker.io/timberio/vector", "currentValue": "0.25.1-debian", "replaceString": "docker.io/timberio/vector:0.25.1-debian", "autoReplaceStringTemplate": "{{depName}}{{#if newValue}}:{{newValue}}{{/if}}{{#if newDigest}}@{{newDigest}}{{/if}}", "datasource": "docker", "depType": "kubernetes_daemonset", "depIndex": 1, "updates": [ { "bucket": "non-major", "newVersion": "0.25.1-distroless-static", "newValue": "0.25.1-distroless-static", "newMajor": 0, "newMinor": 25, "updateType": "patch", "branchName": "renovate/vector" } ], "warnings": [], "versioning": "hashicorp", "currentVersion": "0.25.1-debian", "isSingleVersion": true, "fixedVersion": "0.25.1-debian" } ] }, ``` </details> ### Have you created a minimal reproduction repository? No reproduction, but I have linked to a public repo where it occurs
1.0
Terraform manager doesn't properly use existing tag on Kubernetes resource - ### How are you running Renovate? Self-hosted ### If you're self-hosting Renovate, tell us what version of Renovate you run. latest ### If you're self-hosting Renovate, select which platform you are using. github.com ### If you're self-hosting Renovate, tell us what version of the platform you run. github.com ### Was this something which used to work for you, and then stopped? I never saw this working ### Describe the bug I am using the image: https://github.com/onedr0p/home-ops/blob/1879d378fd92914977701f8435a1f3cc6557671a/terraform/storage/app_vector_agent.tf#L37 Renovate seems to ignore the existing `-debian` suffix and wants to update it to a tag with a `-distroless-static` suffix The PR that renovate opened is here: https://github.com/onedr0p/home-ops/pull/4279 ### Relevant debug logs Full job run can be viewed at https://github.com/onedr0p/home-ops/actions/runs/3567394786/jobs/5995040016 <details><summary>Logs</summary> ``` { "packageFile": "terraform/storage/app_vector_agent.tf", "deps": [ {"skipReason": "invalid-name", "depIndex": 0, "updates": []}, { "depName": "docker.io/timberio/vector", "currentValue": "0.25.1-debian", "replaceString": "docker.io/timberio/vector:0.25.1-debian", "autoReplaceStringTemplate": "{{depName}}{{#if newValue}}:{{newValue}}{{/if}}{{#if newDigest}}@{{newDigest}}{{/if}}", "datasource": "docker", "depType": "kubernetes_daemonset", "depIndex": 1, "updates": [ { "bucket": "non-major", "newVersion": "0.25.1-distroless-static", "newValue": "0.25.1-distroless-static", "newMajor": 0, "newMinor": 25, "updateType": "patch", "branchName": "renovate/vector" } ], "warnings": [], "versioning": "hashicorp", "currentVersion": "0.25.1-debian", "isSingleVersion": true, "fixedVersion": "0.25.1-debian" } ] }, ``` </details> ### Have you created a minimal reproduction repository? No reproduction, but I have linked to a public repo where it occurs
non_process
terraform manager doesn t properly use existing tag on kubernetes resource how are you running renovate self hosted if you re self hosting renovate tell us what version of renovate you run latest if you re self hosting renovate select which platform you are using github com if you re self hosting renovate tell us what version of the platform you run github com was this something which used to work for you and then stopped i never saw this working describe the bug i am using the image renovate seems to ignore the existing debian suffix and wants to update it to a tag with a distroless static suffix the pr that renovate opened is here relevant debug logs full job run can be viewed at logs packagefile terraform storage app vector agent tf deps skipreason invalid name depindex updates depname docker io timberio vector currentvalue debian replacestring docker io timberio vector debian autoreplacestringtemplate depname if newvalue newvalue if if newdigest newdigest if datasource docker deptype kubernetes daemonset depindex updates bucket non major newversion distroless static newvalue distroless static newmajor newminor updatetype patch branchname renovate vector warnings versioning hashicorp currentversion debian issingleversion true fixedversion debian have you created a minimal reproduction repository no reproduction but i have linked to a public repo where it occurs
0
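The bug above is a flavor mismatch: a -debian tag was offered a -distroless-static replacement. The suffix-preserving rule the reporter expects can be stated as a predicate; this sketch assumes the flavor is everything after the first hyphen.

```python
def same_flavor(current: str, candidate: str) -> bool:
    """True when the candidate tag keeps the current tag's suffix."""
    cur = current.split("-", 1)[1] if "-" in current else ""
    cand = candidate.split("-", 1)[1] if "-" in candidate else ""
    return cur == cand

assert same_flavor("0.25.1-debian", "0.25.2-debian")
assert not same_flavor("0.25.1-debian", "0.25.1-distroless-static")  # the bad PR above
```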
6,335
9,378,124,700
IssuesEvent
2019-04-04 12:08:37
decidim/decidim
https://api.github.com/repos/decidim/decidim
opened
Call to action button in steps box in a participatory process is not working correctly
space: processes type: bug
The call to action button in the steps box is not working correctly: ![image](https://user-images.githubusercontent.com/3855859/55553554-c9370500-56e0-11e9-9c09-d68a82ec0233.png) The configuration of the URL of the call to action is done from [here](https://meta.decidim.org/admin/participatory_processes/bug-report/steps/1100/edit?locale=ca) in the configuration of the steps. It seems that there is a problem with the slug: https://meta.decidim.org/processes/bug-report**?locale=ca**/f/210/proposals/new
1.0
Call to action button in steps box in a participatory process is not working correctly - The call to action button in the steps box is not working correctly: ![image](https://user-images.githubusercontent.com/3855859/55553554-c9370500-56e0-11e9-9c09-d68a82ec0233.png) The configuration of the URL of the call to action is done from [here](https://meta.decidim.org/admin/participatory_processes/bug-report/steps/1100/edit?locale=ca) in the configuration of the steps. It seems that there is a problem with the slug: https://meta.decidim.org/processes/bug-report**?locale=ca**/f/210/proposals/new
process
call to action button in steps box in a participatory process is not working correctly the call to action button in steps box is not working correctly the configuration of the url of the call to action is done from in the configuration of the steps it seems that there is a problem with the slug
1
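The broken link in the record above comes from appending path segments after the query string. Building the URL path first and attaching the query last avoids it; a sketch using the URLs from the report:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_locale(base: str, extra_path: str, locale: str) -> str:
    """Join the path first, then append the query, so ?locale=ca can
    never land inside the slug."""
    scheme, netloc, path, _, _ = urlsplit(base)
    return urlunsplit((scheme, netloc, path.rstrip("/") + extra_path,
                       urlencode({"locale": locale}), ""))

print(with_locale("https://meta.decidim.org/processes/bug-report",
                  "/f/210/proposals/new", "ca"))
# https://meta.decidim.org/processes/bug-report/f/210/proposals/new?locale=ca
```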