Dataset schema (15 columns; string columns report min/max length, class columns report the number of distinct values):

| column | dtype | range |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |

Sample records follow, one field per line, in schema order.
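As a minimal sketch, the schema above can be exercised with pandas. The inline CSV is a stand-in built from two of the sample rows; the real dataset file is not named in this preview, so the file contents and the `process`/`non_process` mapping shown here are assumptions drawn from the sample records.

```python
# Sketch: loading a few columns of this schema with pandas.
# The inline CSV is illustrative; the actual dataset file is not
# identified in this preview.
import io

import pandas as pd

csv_text = """Unnamed: 0,id,type,action,label,binary_label
1373,3928152222,IssuesEvent,closed,process,1
78826,7674469761,IssuesEvent,closed,non_process,0
"""

df = pd.read_csv(io.StringIO(csv_text))

# "label" has only 2 classes (process / non_process), so a categorical
# dtype is cheaper than plain object strings.
df["label"] = df["label"].astype("category")

# In the sample rows, binary_label mirrors label: 1 for "process", 0 otherwise.
assert (df["binary_label"] == (df["label"] == "process").astype(int)).all()
print(df[["id", "label", "binary_label"]])
```

The same check could be run against the full file to confirm that `binary_label` is derived from `label` everywhere, not just in these two rows.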
---
row 1,373 · id 3,928,152,222 · type IssuesEvent · created_at 2016-04-24 02:20:30
repo: RicoVZ/cuckoo-unittest-docs
repo_url: https://api.github.com/repos/RicoVZ/cuckoo-unittest-docs
action: closed
title: Cuckoo Score 1.2 - VT: 37/56 - Cuckoo ID: 933
labels: AutoIssue NoProcess PASSED
body:
----
###### Basic information
----
- **Cuckoo analysis information**
- Cuckoo score: **1.2**
- VT score: **37/56**
- Cuckoo analysis ID: **933**
- Analysis checker ID: **15**
- VT md5: **9a4a171db069af2b15d6f88759b08db0**
- VT URL: https://www.virustotal.com/file/39fbce28b7369cd0e3003fb5469148dc549398ed8843acda8753fd3042de4d5d/analysis/1445496198/
- Cuckoo URL: http://cuckoo.rznet.nl:13337/analysis/933
- **Possible reason:**
- No process started
index: 1.0
text_combine:
Cuckoo Score 1.2 - VT: 37/56 - Cuckoo ID: 933 -
----
###### Basic information
----
- **Cuckoo analysis information**
- Cuckoo score: **1.2**
- VT score: **37/56**
- Cuckoo analysis ID: **933**
- Analysis checker ID: **15**
- VT md5: **9a4a171db069af2b15d6f88759b08db0**
- VT URL: https://www.virustotal.com/file/39fbce28b7369cd0e3003fb5469148dc549398ed8843acda8753fd3042de4d5d/analysis/1445496198/
- Cuckoo URL: http://cuckoo.rznet.nl:13337/analysis/933
- **Possible reason:**
- No process started
label: process
text:
cuckoo score vt cuckoo id basic information cuckoo analysis information cuckoo score vt score cuckoo analysis id analysis checker id vt vt url cuckoo url possible reason no process started
binary_label: 1
---
row 78,826 · id 7,674,469,761 · type IssuesEvent · created_at 2018-05-15 04:11:48
repo: kubernetes/kubernetes
repo_url: https://api.github.com/repos/kubernetes/kubernetes
action: closed
title: Failing Test : [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones
labels: kind/api-change kind/bug priority/failing-test priority/important-soon sig/apps status/approved-for-milestone
body:
## Failing Job
### [sig-release-master-upgrade#gce-1.10-master-upgrade-master](https://k8s-testgrid.appspot.com/sig-release-master-upgrade#gce-1.10-master-upgrade-master)
## Failing Test
### [[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-new-master-upgrade-master/1484)
## Triage results
https://storage.googleapis.com/k8s-gubernator/triage/index.html?sig=apps&test=%5C%5Bsig%5C-apps%5C%5D%5C%20Deployment%5C%20RollingUpdateDeployment%5C%20should%5C%20delete%5C%20old%5C%20pods%5C%20and%5C%20create%5C%20new%5C%20ones
This test is currently failing in sig-release master-upgrade suite, This could potentially become release blocking if not addressed sooner. @mattfarina can you please triage this failure?
/kind bug
/priority failing-test
/priority important-soon
/sig apps
@kubernetes/sig-apps-bugs
cc @jberkus @tpepper
/assign @mattfarina
index: 1.0
text_combine:
Failing Test : [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones - ## Failing Job
### [sig-release-master-upgrade#gce-1.10-master-upgrade-master](https://k8s-testgrid.appspot.com/sig-release-master-upgrade#gce-1.10-master-upgrade-master)
## Failing Test
### [[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-new-master-upgrade-master/1484)
## Triage results
https://storage.googleapis.com/k8s-gubernator/triage/index.html?sig=apps&test=%5C%5Bsig%5C-apps%5C%5D%5C%20Deployment%5C%20RollingUpdateDeployment%5C%20should%5C%20delete%5C%20old%5C%20pods%5C%20and%5C%20create%5C%20new%5C%20ones
This test is currently failing in sig-release master-upgrade suite, This could potentially become release blocking if not addressed sooner. @mattfarina can you please triage this failure?
/kind bug
/priority failing-test
/priority important-soon
/sig apps
@kubernetes/sig-apps-bugs
cc @jberkus @tpepper
/assign @mattfarina
label: non_process
text:
failing test deployment rollingupdatedeployment should delete old pods and create new ones failing job failing test deployment rollingupdatedeployment should delete old pods and create new ones triage results this test is currently failing in sig release master upgrade suite this could potentially become release blocking if not addressed sooner mattfarina can you please triage this failure kind bug priority failing test priority important soon sig apps kubernetes sig apps bugs cc jberkus tpepper assign mattfarina
binary_label: 0
---
row 275,482 · id 23,917,772,140 · type IssuesEvent · created_at 2022-09-09 14:08:47
repo: project-chip/connectedhomeip
repo_url: https://api.github.com/repos/project-chip/connectedhomeip
action: closed
title: [SVE2][TC-ACL-2.5] - It shows FabricIndex rather than AdminFabricIndex
labels: request cert blocker invalid test
body:
In TC-ACL-2.5,
In step 5: ./chip-tool accesscontrol read-event access-control-extension-changed 1 0
The log shows FabricIndex rather than AdminFabricIndex in Draft_SVE2_TestPlanVerificationSteps.
[1661409734.979807][27141:27146] CHIP:TOO: Endpoint: 0 Cluster: 0x0000_001F Event 0x0000_0001
[1661409734.979863][27141:27146] CHIP:TOO: Event number: 3
[1661409734.979905][27141:27146] CHIP:TOO: Priority: Info
[1661409734.979945][27141:27146] CHIP:TOO: Timestamp: 4426028
[1661409734.980101][27141:27146] CHIP:TOO: AccessControlExtensionChanged: {
[1661409734.980175][27141:27146] CHIP:TOO: AdminNodeID: 112233
[1661409734.980227][27141:27146] CHIP:TOO: AdminPasscodeID: null
[1661409734.980281][27141:27146] CHIP:TOO: ChangeType: 1
[1661409734.980333][27141:27146] CHIP:TOO: LatestValue: {
[1661409734.980388][27141:27146] CHIP:TOO: Data: 1718
[1661409734.980431][27141:27146] CHIP:TOO: FabricIndex: 1
[1661409734.980480][27141:27146] CHIP:TOO: }
[1661409734.980534][27141:27146] CHIP:TOO: **FabricIndex**: 1
[1661409734.980582][27141:27146] CHIP:TOO: }
Test Environment
App used - lighting app
Platform - Chip-tool - RPI-4, 8GB RAM
DUT - CYW30739
Logs
[[TC-ACL-2.5]_CHIPTool_log.txt](https://github.com/project-chip/connectedhomeip/files/9422193/TC-ACL-2.5._CHIPTool_log.txt)
index: 1.0
text_combine:
[SVE2][TC-ACL-2.5] - It shows FabricIndex rather than AdminFabricIndex - In TC-ACL-2.5,
In step 5: ./chip-tool accesscontrol read-event access-control-extension-changed 1 0
The log shows FabricIndex rather than AdminFabricIndex in Draft_SVE2_TestPlanVerificationSteps.
[1661409734.979807][27141:27146] CHIP:TOO: Endpoint: 0 Cluster: 0x0000_001F Event 0x0000_0001
[1661409734.979863][27141:27146] CHIP:TOO: Event number: 3
[1661409734.979905][27141:27146] CHIP:TOO: Priority: Info
[1661409734.979945][27141:27146] CHIP:TOO: Timestamp: 4426028
[1661409734.980101][27141:27146] CHIP:TOO: AccessControlExtensionChanged: {
[1661409734.980175][27141:27146] CHIP:TOO: AdminNodeID: 112233
[1661409734.980227][27141:27146] CHIP:TOO: AdminPasscodeID: null
[1661409734.980281][27141:27146] CHIP:TOO: ChangeType: 1
[1661409734.980333][27141:27146] CHIP:TOO: LatestValue: {
[1661409734.980388][27141:27146] CHIP:TOO: Data: 1718
[1661409734.980431][27141:27146] CHIP:TOO: FabricIndex: 1
[1661409734.980480][27141:27146] CHIP:TOO: }
[1661409734.980534][27141:27146] CHIP:TOO: **FabricIndex**: 1
[1661409734.980582][27141:27146] CHIP:TOO: }
Test Environment
App used - lighting app
Platform - Chip-tool - RPI-4, 8GB RAM
DUT - CYW30739
Logs
[[TC-ACL-2.5]_CHIPTool_log.txt](https://github.com/project-chip/connectedhomeip/files/9422193/TC-ACL-2.5._CHIPTool_log.txt)
label: non_process
text:
it shows fabricindex rather than adminfabricindex in tc acl in step chip tool accesscontrol read event access control extension changed the log shows fabricindex rather than adminfabricindex in draft testplanverificationsteps chip too endpoint cluster event chip too event number chip too priority info chip too timestamp chip too accesscontrolextensionchanged chip too adminnodeid chip too adminpasscodeid null chip too changetype chip too latestvalue chip too data chip too fabricindex chip too chip too fabricindex chip too test environment app used lighting app platform chip tool rpi ram dut logs chiptool log txt
binary_label: 0
---
row 16,889 · id 22,192,160,914 · type IssuesEvent · created_at 2022-06-07 00:57:38
repo: pytorch/pytorch
repo_url: https://api.github.com/repos/pytorch/pytorch
action: opened
title: DISABLED test_exception_single (__main__.SpawnTest)
labels: module: multiprocessing module: flaky-tests skipped
body:
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_exception_single%2C%20SpawnTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6764234515).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 red and 6 green.
index: 1.0
text_combine:
DISABLED test_exception_single (__main__.SpawnTest) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_exception_single%2C%20SpawnTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6764234515).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 red and 6 green.
label: process
text:
disabled test exception single main spawntest platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green
binary_label: 1
---
row 118,278 · id 11,965,466,768 · type IssuesEvent · created_at 2020-04-05 23:30:48
repo: physiopy/phys2bids
repo_url: https://api.github.com/repos/physiopy/phys2bids
action: opened
title: Add documentation page on "What's BIDS and why adopting it"
labels: BrainWeb Documentation
body:
## Detailed Description
<!--- Provide a detailed description of the change or addition you are proposing -->
Our documentation lacks a short, BIDS-oriented section in which we give the principles of BIDS and we explain what's the advantage in adopting it.
Writing it would be very useful - and finally close #12!
It doesn't have to be extended, something around half an A4 page could be more than enough.
## Possible Implementation
<!--- Not obligatory, but suggest an idea for implementing addition or change -->
<!--- If you already have worked on the idea, please share a link to the branch in your forked project -->
There are many resources we can use to write a couple of paragraphs, starting from [BIDS' website](https://bids.neuroimaging.io/), its [further information paragraph](https://bids.neuroimaging.io/index.html#further-information) and its [Nature Scientific Data paper](https://www.nature.com/articles/sdata201644).
There is also the first part of this [video](https://youtu.be/aRDK4Gj5qzE?t=13) that could be used.
index: 1.0
text_combine:
Add documentation page on "What's BIDS and why adopting it" - ## Detailed Description
<!--- Provide a detailed description of the change or addition you are proposing -->
Our documentation lacks a short, BIDS-oriented section in which we give the principles of BIDS and we explain what's the advantage in adopting it.
Writing it would be very useful - and finally close #12!
It doesn't have to be extended, something around half an A4 page could be more than enough.
## Possible Implementation
<!--- Not obligatory, but suggest an idea for implementing addition or change -->
<!--- If you already have worked on the idea, please share a link to the branch in your forked project -->
There are many resources we can use to write a couple of paragraphs, starting from [BIDS' website](https://bids.neuroimaging.io/), its [further information paragraph](https://bids.neuroimaging.io/index.html#further-information) and its [Nature Scientific Data paper](https://www.nature.com/articles/sdata201644).
There is also the first part of this [video](https://youtu.be/aRDK4Gj5qzE?t=13) that could be used.
label: non_process
text:
add documentation page on what s bids and why adopting it detailed description our documentation lacks a short bids oriented section in which we give the principles of bids and we explain what s the advantage in adopting it writing it would be very useful and finally close it doesn t have to be extended something around half an page could be more than enough possible implementation there are many resources we can use to write a couple of paragraphs starting from its and its there is also the first part of this that could be used
binary_label: 0
---
row 10,687 · id 13,466,521,247 · type IssuesEvent · created_at 2020-09-09 23:08:59
repo: googleapis/releasetool
repo_url: https://api.github.com/repos/googleapis/releasetool
action: closed
title: Migrate autorelease here (or somewhere else on GitHub)
labels: type: process
body:
We noticed that autorelease appears to exist entirely in the confines of git-on-borg, where we dare not tread. We should either move autorelease here, or create a new GitHub repository for it.
index: 1.0
text_combine:
Migrate autorelease here (or somewhere else on GitHub) - We noticed that autorelease appears to exist entirely in the confines of git-on-borg, where we dare not tread. We should either move autorelease here, or create a new GitHub repository for it.
label: process
text:
migrate autorelease here or somewhere else on github we noticed that autorelease appears to exist entirely in the confines of git on borg where we dare not tread we should either move autorelease here or create a new github repository for it
binary_label: 1
---
row 76,883 · id 9,967,590,934 · type IssuesEvent · created_at 2019-07-08 13:55:36
repo: spring-cloud/spring-cloud-dataflow
repo_url: https://api.github.com/repos/spring-cloud/spring-cloud-dataflow
action: closed
title: Cleanup refs to minikube in docs
labels: documentation 📝
body:
Review reference guide / web site for minikube references, ie:
`If you run on a Kubernetes cluster without a load balancer, such as in Minikube` would be generalized to `If you run on a Kubernetes cluster without a load balancer..`
`If you run on a Kubernetes cluster without RBAC, such as in Minikube,` would be generalized to `If you run on a Kubernetes cluster without RBAC,...` -> rbac has been enabled on minikube on versions earlier than we support
there are others, but in general review minikube callouts as there are other platforms that are coming along as well that may benefit from the same advice (resource allocation etc)
index: 1.0
text_combine:
Cleanup refs to minikube in docs - Review reference guide / web site for minikube references, ie:
`If you run on a Kubernetes cluster without a load balancer, such as in Minikube` would be generalized to `If you run on a Kubernetes cluster without a load balancer..`
`If you run on a Kubernetes cluster without RBAC, such as in Minikube,` would be generalized to `If you run on a Kubernetes cluster without RBAC,...` -> rbac has been enabled on minikube on versions earlier than we support
there are others, but in general review minikube callouts as there are other platforms that are coming along as well that may benefit from the same advice (resource allocation etc)
label: non_process
text:
cleanup refs to minikube in docs review reference guide web site for minikube references ie if you run on a kubernetes cluster without a load balancer such as in minikube would be generalized to if you run on a kubernetes cluster without a load balancer if you run on a kubernetes cluster without rbac such as in minikube would be generalized to if you run on a kubernetes cluster without rbac rbac has been enabled on minikube on versions earlier than we support there are others but in general review minikube callouts as there are other platforms that are coming along as well that may benefit from the same advice resource allocation etc
binary_label: 0
---
row 9,542 · id 12,509,770,644 · type IssuesEvent · created_at 2020-06-02 17:30:05
repo: pacificclimate/climate-explorer-data-prep
repo_url: https://api.github.com/repos/pacificclimate/climate-explorer-data-prep
action: closed
title: calculate seasonal and monthly frost free day data
labels: process new data
body:
The[ frost free day calculation script](https://github.com/pacificclimate/data-prep-actions/tree/master/actions/frost_free_days) is annual-only. It should be upgraded to handle seasonal and monthly data, and that data for the PCIC12 models should be calculated.
index: 1.0
text_combine:
calculate seasonal and monthly frost free day data - The[ frost free day calculation script](https://github.com/pacificclimate/data-prep-actions/tree/master/actions/frost_free_days) is annual-only. It should be upgraded to handle seasonal and monthly data, and that data for the PCIC12 models should be calculated.
label: process
text:
calculate seasonal and monthly frost free day data the is annual only it should be upgraded to handle seasonal and monthly data and that data for the models should be calculated
binary_label: 1
---
row 45,114 · id 12,552,307,317 · type IssuesEvent · created_at 2020-06-06 17:39:47
repo: hikaya-io/dots-backend
repo_url: https://api.github.com/repos/hikaya-io/dots-backend
action: opened
title: Change email name from 'support' to 'Hikaya team'
labels: BE General app defect
body:
**Current behavior**
The email name is showing as 'support'
**To Reproduce**
1. I am a new user
2. I go to the 'register' page
3. I fill the information and click on 'register' to create an account
4. I get an email for confirming my email address
**Expected behavior**
Change email name from 'support' to 'Hikaya team'
Hikaya team <support@hikaya.io>
**Screenshots**
<img width="1182" alt="Screen Shot 2020-06-06 at 7 35 04 PM" src="https://user-images.githubusercontent.com/13760198/83950684-f0311f80-a82c-11ea-8975-72d326b20a3c.png">
index: 1.0
text_combine:
Change email name from 'support' to 'Hikaya team' - **Current behavior**
The email name is showing as 'support'
**To Reproduce**
1. I am a new user
2. I go to the 'register' page
3. I fill the information and click on 'register' to create an account
4. I get an email for confirming my email address
**Expected behavior**
Change email name from 'support' to 'Hikaya team'
Hikaya team <support@hikaya.io>
**Screenshots**
<img width="1182" alt="Screen Shot 2020-06-06 at 7 35 04 PM" src="https://user-images.githubusercontent.com/13760198/83950684-f0311f80-a82c-11ea-8975-72d326b20a3c.png">
label: non_process
text:
change email name from support to hikaya team current behavior the email name is showing as support to reproduce i am a new user i go to the register page i fill the information and click on register to create an account i get an email for confirming my email address expected behavior change email name from support to hikaya team hikaya team screenshots img width alt screen shot at pm src
binary_label: 0
---
row 199,294 · id 6,987,902,655 · type IssuesEvent · created_at 2017-12-14 10:51:29
repo: fga-gpp-mds/Falko-2017.2-FrontEnd
repo_url: https://api.github.com/repos/fga-gpp-mds/Falko-2017.2-FrontEnd
action: closed
title: Fazer cobertura de testes ser de 70%
labels: Front End high-priority
body:
## Expected behavior
The front end must have tests covering 70% of the code.
## Checklist
- [x] The issue has labels.
- [x] The issue has a meaningful name.
index: 1.0
text_combine:
Fazer cobertura de testes ser de 70% - ## Expected behavior
The front end must have tests covering 70% of the code.
## Checklist
- [x] The issue has labels.
- [x] The issue has a meaningful name.
label: non_process
text:
fazer cobertura de testes ser de comportamento esperado o front end deve ter testes cobrindo do código checklist a issue possui labels a issue possui nome significativo
binary_label: 0
---
row 116,429 · id 11,912,090,085 · type IssuesEvent · created_at 2020-03-31 09:41:31
repo: scipy/scipy
repo_url: https://api.github.com/repos/scipy/scipy
action: closed
title: Small error in gamma continuous rv fit comments
labels: Documentation scipy.stats
body:
https://github.com/scipy/scipy/blob/adc4f4f7bab120ccfab9383aba272954a0a12fb0/scipy/stats/_continuous_distns.py#L2794
The comment and the line below it currently read:
```python
# shape and scale are both free.
# The MLE for the shape parameter `a` is the solution to:
# np.log(a) - sc.digamma(a) - np.log(xbar) +
# np.log(data.mean) = 0
s = np.log(xbar) - np.log(data).mean()
```
The comment should read:
```python
# shape and scale are both free.
# The MLE for the shape parameter `a` is the solution to:
# np.log(a) - sc.digamma(a) - np.log(xbar) +
# np.log(data).mean() = 0
```
index: 1.0
text_combine:
Small error in gamma continuous rv fit comments - https://github.com/scipy/scipy/blob/adc4f4f7bab120ccfab9383aba272954a0a12fb0/scipy/stats/_continuous_distns.py#L2794
The comment and the line below it currently read:
```python
# shape and scale are both free.
# The MLE for the shape parameter `a` is the solution to:
# np.log(a) - sc.digamma(a) - np.log(xbar) +
# np.log(data.mean) = 0
s = np.log(xbar) - np.log(data).mean()
```
The comment should read:
```python
# shape and scale are both free.
# The MLE for the shape parameter `a` is the solution to:
# np.log(a) - sc.digamma(a) - np.log(xbar) +
# np.log(data).mean() = 0
```
label: non_process
text:
small error in gamma continuous rv fit comments the comment and the line below it currently read python shape and scale are both free the mle for the shape parameter a is the solution to np log a sc digamma a np log xbar np log data mean s np log xbar np log data mean the comment should read python shape and scale are both free the mle for the shape parameter a is the solution to np log a sc digamma a np log xbar np log data mean
binary_label: 0
---
row 18,537 · id 24,553,378,894 · type IssuesEvent · created_at 2022-10-12 14:10:47
repo: GoogleCloudPlatform/fda-mystudies
repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action: closed
title: [Android] [Offline indicator] 'You are offline' error message is not getting displayed when participant clicks on the 'Reach out' menu
labels: Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
body:
**Steps:**
1. Sign in and complete the passcode process
2. Turn off the internet/mobile data after navigated to study list screen
3. Click on 'Hamburger icon
4. Click on 'Reach out' and Verify
**AR:** 'You are offline' error message is not getting displayed when participant clicks on the 'Reach out' menu
**ER:** 'You are offline' error message should get displayed when participant clicks on the 'Reach out' menu

index: 3.0
text_combine:
[Android] [Offline indicator] 'You are offline' error message is not getting displayed when participant clicks on the 'Reach out' menu - **Steps:**
1. Sign in and complete the passcode process
2. Turn off the internet/mobile data after navigated to study list screen
3. Click on 'Hamburger icon
4. Click on 'Reach out' and Verify
**AR:** 'You are offline' error message is not getting displayed when participant clicks on the 'Reach out' menu
**ER:** 'You are offline' error message should get displayed when participant clicks on the 'Reach out' menu

label: process
text:
you are offline error message is not getting displayed when participant clicks on the reach out menu steps sign in and complete the passcode process turn off the internet mobile data after navigated to study list screen click on hamburger icon click on reach out and verify ar you are offline error message is not getting displayed when participant clicks on the reach out menu er you are offline error message should get displayed when participant clicks on the reach out menu
binary_label: 1
---
row 42,898 · id 23,035,972,877 · type IssuesEvent · created_at 2022-07-22 18:47:08
repo: earthly/earthly
repo_url: https://api.github.com/repos/earthly/earthly
action: opened
title: Earthly has overhead in simple cases
labels: type:performance
body:
Example: https://gist.github.com/lynaghk/0f9b66cf889398a7c73b01f39beaffee
This takes 7 seconds when it's fully cached. (Toast, for example, takes 800ms in the same situation)
index: True
text_combine:
Earthly has overhead in simple cases - Example: https://gist.github.com/lynaghk/0f9b66cf889398a7c73b01f39beaffee
This takes 7 seconds when it's fully cached. (Toast, for example, takes 800ms in the same situation)
label: non_process
text:
earthly has overhead in simple cases example this takes seconds when it s fully cached toast for example takes in the same situation
binary_label: 0
---
row 13,099 · id 15,495,527,460 · type IssuesEvent · created_at 2021-03-11 01:01:15
repo: googleapis/google-cloud-ruby
repo_url: https://api.github.com/repos/googleapis/google-cloud-ruby
action: closed
title: Warning: a recent release failed
labels: type: process
body:
The following release PRs may have failed:
* #10778
* #10649
* #10650
* #10651
* #10652
* #10653
* #10654
index: 1.0
text_combine:
Warning: a recent release failed - The following release PRs may have failed:
* #10778
* #10649
* #10650
* #10651
* #10652
* #10653
* #10654
label: process
text:
warning a recent release failed the following release prs may have failed
binary_label: 1
---
row 17,507 · id 3,619,809,481 · type IssuesEvent · created_at 2016-02-08 17:22:52
repo: medic/medic-webapp
repo_url: https://api.github.com/repos/medic/medic-webapp
action: closed
title: `count-selected` not working
labels: 4 - Acceptance testing Bug
body:
Some of the forms used in the field test work in build 2508 (lg.dev) but do not work in build 2598 (alpha.dev).
For instance, the `assessment_follow_up` form fails with `The string 'count-selected("")' is not a valid XPath expression."`.
Also, the `assessment` form failed with `The string 'concat("")' is not a valid XPath expression."`. This was resolved by avoiding the use of concat with a single arg (https://github.com/medic/medic-projects/commit/f37311b5fbfc6e446b75d8f6d36aad538140566d), but then the count-selected() error is encountered.
index: 1.0
text_combine:
`count-selected` not working - Some of the forms used in the field test work in build 2508 (lg.dev) but do not work in build 2598 (alpha.dev).
For instance, the `assessment_follow_up` form fails with `The string 'count-selected("")' is not a valid XPath expression."`.
Also, the `assessment` form failed with `The string 'concat("")' is not a valid XPath expression."`. This was resolved by avoiding the use of concat with a single arg (https://github.com/medic/medic-projects/commit/f37311b5fbfc6e446b75d8f6d36aad538140566d), but then the count-selected() error is encountered.
label: non_process
text:
count selected not working some of the forms used in the field test work in build lg dev but do not work in build alpha dev for instance the assessment follow up form fails with the string count selected is not a valid xpath expression also the assessment form failed with the string concat is not a valid xpath expression this was resolved by avoiding the use of concat with a single arg but then the count selected error is encountered
binary_label: 0
---
row 10,586 · id 13,394,393,980 · type IssuesEvent · created_at 2020-09-03 06:39:26
repo: assimp/assimp
repo_url: https://api.github.com/repos/assimp/assimp
action: opened
title: SpatialSort::.ToBin works only when single precision is configured
labels: Bug Postprocessing
body:
**Describe the bug**
The line https://github.com/assimp/assimp/blob/master/code/Common/SpatialSort.cpp#L178 works only for ai_real == float
**To Reproduce**
Steps to reproduce the behavior:
1. Create a build for double precision
2. Use spatial sort
**Expected behavior**
Should work for double as well
**Desktop (please complete the following information):**
- OS: Windows, Linux, MacOS
index: 1.0
text_combine:
SpatialSort::.ToBin works only when single precision is configured - **Describe the bug**
The line https://github.com/assimp/assimp/blob/master/code/Common/SpatialSort.cpp#L178 works only for ai_real == float
**To Reproduce**
Steps to reproduce the behavior:
1. Create a build for double precision
2. Use spatial sort
**Expected behavior**
Should work for double as well
**Desktop (please complete the following information):**
- OS: Windows, Linux, MacOS
label: process
text:
spatialsort tobin works only when single precision is configured describe the bug the line works only for ai real float to reproduce steps to reproduce the behavior create buid for double precision use spatal sort expected behavior shallo work for double as well desktop please complete the following information os windows linux macos
binary_label: 1
---
row 284,102 · id 30,913,592,367 · type IssuesEvent · created_at 2023-08-05 02:20:15
repo: panasalap/linux-4.19.72_Fix
repo_url: https://api.github.com/repos/panasalap/linux-4.19.72_Fix
action: reopened
title: WS-2022-0019 (High) detected in multiple libraries
labels: Mend: dependency security vulnerability
body:
## WS-2022-0019 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
sctp: use call_rcu to free endpoint
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://github.com/gregkh/linux/commit/75799e71df1da11394740b43ae5686646179561d>WS-2022-0019</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000054">https://osv.dev/vulnerability/GSD-2022-1000054</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: v5.15.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
WS-2022-0019 (High) detected in multiple libraries - ## WS-2022-0019 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
sctp: use call_rcu to free endpoint
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://github.com/gregkh/linux/commit/75799e71df1da11394740b43ae5686646179561d>WS-2022-0019</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000054">https://osv.dev/vulnerability/GSD-2022-1000054</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: v5.15.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
ws high detected in multiple libraries ws high severity vulnerability vulnerable libraries linux linux linux vulnerability details sctp use call rcu to free endpoint publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
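The CVSS 3 score breakdowns embedded in the vulnerability-report records of this dump (base score plus Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, and C/I/A impacts) follow the published FIRST CVSS v3 base-score formula. As a hedged sketch — covering only the Scope:Unchanged case and only the base score, with metric weights taken from the CVSS v3.1 specification — the computation can be reproduced in Python:

```python
# Sketch: CVSS v3.0/v3.1 *base score* for the Scope:Unchanged case only.
# Metric weights are from the FIRST CVSS v3.1 specification tables.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}    # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}              # C/I/A impact weights

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest number, to 1 decimal place, >= x."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score_unchanged(av, ac, pr, ui, c, i, a):
    isc_base = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * isc_base
    exploitability = 8.22 * AV[av] * AC[ac] * PR_UNCHANGED[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:L / AC:L / PR:L / UI:N / S:U / C:H / I:H / A:H
print(base_score_unchanged("L", "L", "L", "N", "H", "H", "H"))
```

The metric listing in the record above (Local / Low / Low / None / Unchanged / High / High / High) reproduces its stated 7.8 base score with this formula.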
|
685,766
| 23,466,991,391
|
IssuesEvent
|
2022-08-16 17:45:08
|
larsiusprime/tdrpg-bugs
|
https://api.github.com/repos/larsiusprime/tdrpg-bugs
|
closed
|
Custom levels lack description text.
|
bug DQ CORE 1 Priority MID Mods
|
In level 1 my text is TEST. In level 2, my text is SUPER EPIC repeated 100 times.
Selecting either one in the mod list in-game presents the text 'Description goes here'
|
1.0
|
Custom levels lack description text. - In level 1 my text is TEST. In level 2, my text is SUPER EPIC repeated 100 times.
Selecting either one in the mod list in-game presents the text 'Description goes here'
|
non_process
|
custom levels lack description text in level my text is test in level my text is super epic repeated times selecting either one in the mod list in game presents the text description goes here
| 0
|
92,524
| 18,886,317,552
|
IssuesEvent
|
2021-11-15 08:19:56
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
opened
|
[Bug]: Mongo Plugin: get structure fails when `system.xyz` collections are present.
|
Bug Datasources community Needs Triaging Mongo BE Coders Pod
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
If `system.xyz` collection is present in database then `getStructure` call fails with the following error:
```
not authorized on <db name> to execute command { find: "system.views", limit: 1, singleBatch: true...
```
However, the user is able to run queries successfully on normal collections. The issues are:
(1) they are not able to see the list of available collections in the entity explorer
(2) generate page fails because `getStructure` call fails.
Ref: https://discord.com/channels/725602949748752515/909411482754752512/909480363900936264
Ref: https://jira.mongodb.org/browse/SERVER-27554
### Steps To Reproduce
1. Create a database on MongoAtlas such that it has a `system.views` or `system.xyz` collection (not sure how to do it).
2. Create a datasource on Appsmith to connect to the database in (1).
3. Create new query on the datasource in (2) and check out the entity explorer section that lists the collections.
4. Click on `generate page` for the datasource.
### Environment
Production
### Version
Cloud
|
1.0
|
[Bug]: Mongo Plugin: get structure fails when `system.xyz` collections are present. - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
If `system.xyz` collection is present in database then `getStructure` call fails with the following error:
```
not authorized on <db name> to execute command { find: "system.views", limit: 1, singleBatch: true...
```
However, the user is able to run queries successfully on normal collections. The issues are:
(1) they are not able to see the list of available collections in the entity explorer
(2) generate page fails because `getStructure` call fails.
Ref: https://discord.com/channels/725602949748752515/909411482754752512/909480363900936264
Ref: https://jira.mongodb.org/browse/SERVER-27554
### Steps To Reproduce
1. Create a database on MongoAtlas such that it has a `system.views` or `system.xyz` collection (not sure how to do it).
2. Create a datasource on Appsmith to connect to the database in (1).
3. Create new query on the datasource in (2) and check out the entity explorer section that lists the collections.
4. Click on `generate page` for the datasource.
### Environment
Production
### Version
Cloud
|
non_process
|
mongo plugin get structure fails when system xyz collections are present is there an existing issue for this i have searched the existing issues current behavior if system xyz collection is present in database then getstructure call fails with the following error not authorized on to execute command find system views limit singlebatch true however user is able to run queries successfully on normal collections the issues are they are not able to see the list of available collections in the entity explorer generate page fails because getstructure call fails ref ref steps to reproduce create a database on mongoatlas such that it has a system views or system xyz collection not sure how to do it create a datasource on appsmith to connect to the database in create new query on the datasource in and check out the entity explorer section that lists the collections click on generate page for the datasource environment production version cloud
| 0
|
496,802
| 14,355,329,105
|
IssuesEvent
|
2020-11-30 09:57:26
|
syrus-bot/syrus-bot
|
https://api.github.com/repos/syrus-bot/syrus-bot
|
opened
|
Add containerization
|
feat: core priority: urgent
|
**Is your feature request related to a problem? Please describe.**
Hosting is currently difficult to manage and integrate into the development cycle.
**Describe the solution you'd like**
The following must be implemented:
- [ ] Docker containerization
- [ ] Automatic deployment
**Describe alternatives you've considered**
N/A.
**Additional context**
N/A.
|
1.0
|
Add containerization - **Is your feature request related to a problem? Please describe.**
Hosting is currently difficult to manage and integrate into the development cycle.
**Describe the solution you'd like**
The following must be implemented:
- [ ] Docker containerization
- [ ] Automatic deployment
**Describe alternatives you've considered**
N/A.
**Additional context**
N/A.
|
non_process
|
add containerization is your feature request related to a problem please describe hosting is currently difficult to manage and integrate into the development cycle describe the solution you d like the following must be implemented docker containerization automatic deployment describe alternatives you ve considered n a additional context n a
| 0
|
2,249
| 5,088,649,578
|
IssuesEvent
|
2017-01-01 00:00:55
|
sw4j-org/tool-jpa-processor
|
https://api.github.com/repos/sw4j-org/tool-jpa-processor
|
opened
|
Handle @OneToMany Annotation
|
annotation processor task
|
Handle the `@OneToMany` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.40 OneToMany Annotation
|
1.0
|
Handle @OneToMany Annotation - Handle the `@OneToMany` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.40 OneToMany Annotation
|
process
|
handle onetomany annotation handle the onetomany annotation for a property or field see onetomany annotation
| 1
|
193,709
| 15,385,362,106
|
IssuesEvent
|
2021-03-03 06:26:58
|
uswds/uswds-site
|
https://api.github.com/repos/uswds/uswds-site
|
closed
|
Intro paragraph for Banner component
|
affects: banner documentation enhancement
|
New intro paragraph for Banner has been created. See preview [here](https://federalist-ead78f8d-8948-417c-a957-c21ec5617a57.app.cloud.gov/preview/uswds/uswds-site/jm-banner-intro/components/banner/#site-component-intro)
- [X] create [draft intro](https://docs.google.com/document/d/1I6bHDbbLz28Hbamr2mL7ZRSSVqe7Wy5vv_iVleKRtSc/edit#heading=h.cwqvrw8xqdb1)
- [x] PR created: https://github.com/uswds/uswds-site/pull/1105
- [x] align w comms plan
- [x] client review 2.17
- [x] update and finalize PR
**DOD**
- [x] all tasks are complete
- [ ] new paragraph is published to banner component page
|
1.0
|
Intro paragraph for Banner component - New intro paragraph for Banner has been created. See preview [here](https://federalist-ead78f8d-8948-417c-a957-c21ec5617a57.app.cloud.gov/preview/uswds/uswds-site/jm-banner-intro/components/banner/#site-component-intro)
- [X] create [draft intro](https://docs.google.com/document/d/1I6bHDbbLz28Hbamr2mL7ZRSSVqe7Wy5vv_iVleKRtSc/edit#heading=h.cwqvrw8xqdb1)
- [x] PR created: https://github.com/uswds/uswds-site/pull/1105
- [x] align w comms plan
- [x] client review 2.17
- [x] update and finalize PR
**DOD**
- [x] all tasks are complete
- [ ] new paragraph is published to banner component page
|
non_process
|
intro paragraph for banner component new intro paragraph for banner has been created see preview create pr created align w comms plan client review update and finalize pr dod all tasks are complete new paragraph is published to banner component page
| 0
|
4,783
| 7,656,202,730
|
IssuesEvent
|
2018-05-10 15:33:25
|
our-city-app/oca-backend
|
https://api.github.com/repos/our-city-app/oca-backend
|
closed
|
GDPR: Newsletters
|
priority_critical process_duplicate type_feature
|
Next to the GDPR related email that we'll send in the beginning of May, marketing emails will be sent to inform users of our new features: Regional news, Joyn, Payconiq, ...
In the emails, we'll ask the user's permission to keep him up to date with emails with news. There will be a button that should link to an unauthenticated request handler in OCA. OCA should save this setting somewhere.
The customer export should include the user's permission to send emails (such that we can hide the GDPR part if the user already granted us permission)
- [ ] Add 2 columns to the customer export: `confirmation URL` and `confirmed`
- [ ] Create a RequestHandler that stores the confirmation and show a success message. Let's use the style of the customer signup page.
- Store it in the datastore

|
1.0
|
GDPR: Newsletters - Next to the GDPR related email that we'll send in the beginning of May, marketing emails will be sent to inform users of our new features: Regional news, Joyn, Payconiq, ...
In the emails, we'll ask the user's permission to keep him up to date with emails with news. There will be a button that should link to an unauthenticated request handler in OCA. OCA should save this setting somewhere.
The customer export should include the user's permission to send emails (such that we can hide the GDPR part if the user already granted us permission)
- [ ] Add 2 columns to the customer export: `confirmation URL` and `confirmed`
- [ ] Create a RequestHandler that stores the confirmation and show a success message. Let's use the style of the customer signup page.
- Store it in the datastore

|
process
|
gdpr newsletters next to the gdpr related email that we ll send in the beginning of may marketing emails will be sent to inform users of our new features regional news joyn payconiq in the emails we ll ask the user s permission to keep him up to date with emails with news there will be a button that should link to an unauthenticated request handler in oca oca should save this setting somewhere the customer export should include the user s permission to send emails such that we can hide the gdpr part if the user already granted us permission add columns to the customer export confirmation url and confirmed create a requesthandler that stores the confirmation and show a success message let s use the style of the customer signup page store it in the datastore
| 1
|
171,118
| 20,922,480,566
|
IssuesEvent
|
2022-03-24 18:47:14
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
opened
|
Implement full validation for `UnionArrays` construction from ArrayData
|
arrow enhancement security
|
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
It is possible to construct invalid `UnionArrays` when building them using the `ArrayData` API
**Describe the solution you'd like**
Validate all the data is correct for union arrays
**Additional context**
This is a leftover from #921, which was implemented concurrently with proper `UnionArray` support
|
True
|
Implement full validation for `UnionArrays` construction from ArrayData - **Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
It is possible to construct invalid `UnionArrays` when building them using the `ArrayData` API
**Describe the solution you'd like**
Validate all the data is correct for union arrays
**Additional context**
This is a leftover from #921, which was implemented concurrently with proper `UnionArray` support
|
non_process
|
implement full validation for unionarrays construction from arraydata is your feature request related to a problem or challenge please describe what you are trying to do it is possible to construct invalid unionarrays when building them using the arraydata api describe the solution you d like validate all the data is correct for union arrays additional context this is a left over from when was implemented concurrently with proper unionarray support
| 0
|
18,554
| 24,555,436,627
|
IssuesEvent
|
2022-10-12 15:31:10
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Responses are not getting synced if user submits multiple study activities in offline (Refer scenarios)
|
Bug P0 iOS Process: Fixed Process: Tested dev
|
**Scenario 1:** Study activities which are submitted in offline are not synced
**Steps:**
1. Login and enroll into a study
2. Open any study activity 1
3. Switch off the internet connection
4. Submit the response
5. Switch on the internet connection
6. Again open any other study activity 2
7. Switch off the internet connection
8. Submit the response
9. Switch on the internet connection and wait > 10 mins
10. Logout and login and observe responses are not synced and status showing 'Resume'
**Scenario 2:** Study activities which are submitted in online are also not synced in below scenario
**Steps:**
1. Login and enroll into a study
2. Open any study activity 1
3. Switch off the internet connection
4. Submit the response
5. Switch on the internet connection
6. Again open any other study activity 2
7. Switch off the internet connection
8. Submit the response
9. Switch on the internet connection
10. Again open any other study activity 3 and submit the response in online
11. Logout and login and observe responses are not synced and status showing 'Resume' even for online activities
**A/R:** Responses are not getting synced if user submits multiple study activities in offline
**E/R:** Responses should be synced properly
Refer attached video:
https://user-images.githubusercontent.com/60386291/188820554-e0ca6ea9-9e2d-4b74-8473-9fc14691e7be.mp4
|
2.0
|
[iOS] Responses are not getting synced if user submits multiple study activities in offline (Refer scenarios) - **Scenario 1:** Study activities which are submitted in offline are not synced
**Steps:**
1. Login and enroll into a study
2. Open any study activity 1
3. Switch off the internet connection
4. Submit the response
5. Switch on the internet connection
6. Again open any other study activity 2
7. Switch off the internet connection
8. Submit the response
9. Switch on the internet connection and wait > 10 mins
10. Logout and login and observe responses are not synced and status showing 'Resume'
**Scenario 2:** Study activities which are submitted in online are also not synced in below scenario
**Steps:**
1. Login and enroll into a study
2. Open any study activity 1
3. Switch off the internet connection
4. Submit the response
5. Switch on the internet connection
6. Again open any other study activity 2
7. Switch off the internet connection
8. Submit the response
9. Switch on the internet connection
10. Again open any other study activity 3 and submit the response in online
11. Logout and login and observe responses are not synced and status showing 'Resume' even for online activities
**A/R:** Responses are not getting synced if user submits multiple study activities in offline
**E/R:** Responses should be synced properly
Refer attached video:
https://user-images.githubusercontent.com/60386291/188820554-e0ca6ea9-9e2d-4b74-8473-9fc14691e7be.mp4
|
process
|
responses are not getting synced if user submits multiple study activities in offline refer scenarios scenario study activities which are submitted in offline are not synced steps login and enroll into a study open any study activity switch off the internet connection submit the response switch on the internet connection again open any other study activity switch off the internet connection submit the response switch on the internet connection and wait mins logout and login and observe responses are not synced and status showing resume scenario study activities which are submitted in online are also not synced in below scenario steps login and enroll into a study open any study activity switch off the internet connection submit the response switch on the internet connection again open any other study activity switch off the internet connection submit the response switch on the internet connection again open any other study activity and submit the response in online logout and login and observe responses are not synced and status showing resume even for online activities a r responses are not getting synced if user submits multiple study activities in offline e r responses should be synced properly refer attached video
| 1
|
249,198
| 26,890,027,768
|
IssuesEvent
|
2023-02-06 08:10:14
|
valtech-ch/microservice-kubernetes-cluster
|
https://api.github.com/repos/valtech-ch/microservice-kubernetes-cluster
|
closed
|
CVE-2020-25649 (High) detected in jackson-databind-2.9.8.jar - autoclosed
|
security vulnerability
|
## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /functions/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-function-web-4.0.1.jar (Root Library)
- spring-boot-starter-web-2.7.8.jar
- spring-boot-starter-json-2.7.8.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-25649 (High) detected in jackson-databind-2.9.8.jar - autoclosed - ## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /functions/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-function-web-4.0.1.jar (Root Library)
- spring-boot-starter-web-2.7.8.jar
- spring-boot-starter-json-2.7.8.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file functions build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud starter function web jar root library spring boot starter web jar spring boot starter json jar x jackson databind jar vulnerable library found in head commit a href found in base branch develop vulnerability details a flaw was found in fasterxml jackson databind where it did not have entity expansion secured properly this flaw allows vulnerability to xml external entity xxe attacks the highest threat from this vulnerability is data integrity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with mend
| 0
|
2,735
| 5,622,839,527
|
IssuesEvent
|
2017-04-04 13:45:34
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Cannot read property 'headerText' of undefined
|
inprocess
|
I'm getting this error when I add search to my table.
`TypeError: Cannot read property 'headerText' of undefined
at http://localhost:3000/static/js/bundle.js:102201:27
at Array.map (native)
at BootstrapTable.renderToolBar (http://localhost:3000/static/js/bundle.js:102197:31)
at BootstrapTable.render (http://localhost:3000/static/js/bundle.js:101417:27)
at http://localhost:3000/static/js/bundle.js:26822:22
at measureLifeCyclePerf (http://localhost:3000/static/js/bundle.js:26101:13)
at ReactCompositeComponentWrapper._renderValidatedComponentWithoutOwnerOrContext (http://localhost:3000/static/js/bundle.js:26821:26)
at ReactCompositeComponentWrapper._renderValidatedComponent (http://localhost:3000/static/js/bundle.js:26848:33)`
|
1.0
|
Cannot read property 'headerText' of undefined - I'm getting this error when I add search to my table.
`TypeError: Cannot read property 'headerText' of undefined
at http://localhost:3000/static/js/bundle.js:102201:27
at Array.map (native)
at BootstrapTable.renderToolBar (http://localhost:3000/static/js/bundle.js:102197:31)
at BootstrapTable.render (http://localhost:3000/static/js/bundle.js:101417:27)
at http://localhost:3000/static/js/bundle.js:26822:22
at measureLifeCyclePerf (http://localhost:3000/static/js/bundle.js:26101:13)
at ReactCompositeComponentWrapper._renderValidatedComponentWithoutOwnerOrContext (http://localhost:3000/static/js/bundle.js:26821:26)
at ReactCompositeComponentWrapper._renderValidatedComponent (http://localhost:3000/static/js/bundle.js:26848:33)`
|
process
|
cannot read property headertext of undefined i m getting this error when i add search to my table typeerror cannot read property headertext of undefined at at array map native at bootstraptable rendertoolbar at bootstraptable render at at measurelifecycleperf at reactcompositecomponentwrapper rendervalidatedcomponentwithoutownerorcontext at reactcompositecomponentwrapper rendervalidatedcomponent
| 1
|
398,538
| 11,741,734,816
|
IssuesEvent
|
2020-03-11 22:31:07
|
thaliawww/concrexit
|
https://api.github.com/repos/thaliawww/concrexit
|
closed
|
Create/expand unit tests for display names
|
easy and fun members priority: medium
|
In GitLab by @joostrijneveld on Dec 6, 2016, 17:09
In particular, cover the issues raised in #167
|
1.0
|
Create/expand unit tests for display names - In GitLab by @joostrijneveld on Dec 6, 2016, 17:09
In particular, cover the issues raised in #167
|
non_process
|
create expand unit tests for display names in gitlab by joostrijneveld on dec in particular cover the issues raised in
| 0
|
10,107
| 13,044,162,148
|
IssuesEvent
|
2020-07-29 03:47:30
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `UTCDate` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `UTCDate` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `UTCDate` from TiDB -
## Description
Port the scalar function `UTCDate` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function utcdate from tidb description port the scalar function utcdate from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
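The `UTCDate` scalar function being ported in the record above mirrors MySQL's `UTC_DATE()`, which returns the current UTC date (rendered as `YYYY-MM-DD` in string context). As a rough behavioural reference only — not the TiKV/TiDB implementation — a Python sketch:

```python
from datetime import datetime, timezone

# Behavioural sketch of MySQL's UTC_DATE() in string context: the current
# date in UTC, formatted as 'YYYY-MM-DD'.
def utc_date() -> str:
    return datetime.now(timezone.utc).strftime("%Y-%m-%d")

print(utc_date())
```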
|
22,008
| 30,512,948,609
|
IssuesEvent
|
2023-07-18 22:48:06
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
roblox-pyc 1.16.45 has 2 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.16.45",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.16.45/src/robloxpy.py:115",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.16.45/src/robloxpy.py:122",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpl9e_gg75/roblox-pyc"
}
}```
|
1.0
|
roblox-pyc 1.16.45 has 2 GuardDog issues - https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.16.45",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.16.45/src/robloxpy.py:115",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.16.45/src/robloxpy.py:122",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpl9e_gg75/roblox-pyc"
}
}```
|
process
|
roblox pyc has guarddog issues dependency roblox pyc version result issues errors results silent process execution location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp roblox pyc
| 1
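GuardDog's `silent-process-execution` rule, as seen in the findings above, flags `subprocess` calls whose stdout, stderr, and stdin are redirected to `DEVNULL`. A minimal, hypothetical re-implementation of that check using Python's `ast` module (a sketch, not GuardDog's actual code) could look like:

```python
import ast

# Streams whose redirection to DEVNULL makes a subprocess call "silent".
SILENCERS = {"stdout", "stderr", "stdin"}

def find_silent_subprocess_calls(source: str):
    """Return (lineno, silenced streams) for suspicious subprocess calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        # Match subprocess.call / run / Popen / check_call attribute calls.
        if not (isinstance(func, ast.Attribute)
                and isinstance(func.value, ast.Name)
                and func.value.id == "subprocess"
                and func.attr in {"call", "run", "Popen", "check_call"}):
            continue
        silenced = {
            kw.arg for kw in node.keywords
            if kw.arg in SILENCERS
            and isinstance(kw.value, ast.Attribute)
            and kw.value.attr == "DEVNULL"
        }
        if silenced:
            findings.append((node.lineno, sorted(silenced)))
    return findings

# Minimal reproduction of the flagged pattern from the record above.
sample = (
    "import subprocess\n"
    "subprocess.call(['luarocks', '--version'],\n"
    "    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,\n"
    "    stdin=subprocess.DEVNULL)\n"
)
print(find_silent_subprocess_calls(sample))
```

A real scanner would also need to handle aliased imports (`from subprocess import call`, `import subprocess as sp`), which this sketch deliberately ignores.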
|
8,909
| 12,014,541,078
|
IssuesEvent
|
2020-04-10 11:45:11
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `Lpad` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `Lpad` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `Lpad` from TiDB -
## Description
Port the scalar function `Lpad` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function lpad from tidb description port the scalar function lpad from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
1,449
| 4,020,060,315
|
IssuesEvent
|
2016-05-16 17:03:28
|
emergence-lab/emergence-lab
|
https://api.github.com/repos/emergence-lab/emergence-lab
|
closed
|
SEM File Handling
|
backend bug frontend process
|
SEM file upload needs fixing for Python 3 as well as fixes to the UI for uploading SEM images
|
1.0
|
SEM File Handling - SEM file upload needs fixing for Python 3 as well as fixes to the UI for uploading SEM images
|
process
|
sem file handling sem file upload needs fixing for python as well as fixes to the ui for uploading sem images
| 1
|
15,068
| 18,765,024,604
|
IssuesEvent
|
2021-11-05 22:00:54
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Flue Gas hotfix
|
bug Process Heating
|
- Solid Mat calculations not updating until fuel changes
- For old assessments, moisture in combustion should be added as default 0
|
1.0
|
Flue Gas hotfix - - Solid Mat calculations not updating until fuel changes
- For old assessments, moisture in combustion should be added as default 0
|
process
|
flue gas hotfix solid mat calculations not updating until fuel changes for old assessments moisture in combustion should be added as default
| 1
|
19,384
| 25,520,757,666
|
IssuesEvent
|
2022-11-28 20:14:59
|
astricklandd/unimportant-zebra
|
https://api.github.com/repos/astricklandd/unimportant-zebra
|
closed
|
Working on Frontend product
|
Dev process
|
Create a quick and dirty layout of the frontend product on codepen.
|
1.0
|
Working on Frontend product - Create a quick and dirty layout of the frontend product on codepen.
|
process
|
working on frontend product create a quick and dirty layout of the frontend product on codepen
| 1
|
757,533
| 26,516,869,095
|
IssuesEvent
|
2023-01-18 21:37:56
|
woocommerce/woocommerce
|
https://api.github.com/repos/woocommerce/woocommerce
|
opened
|
[HPOS] Orders list table is sorted by ID when using orders tables, but by date when using posts table
|
type: bug priority: low focus: custom order tables
|
Ideally the list table would be sorted the same way in both cases. Under normal circumstances, I imagine the orders would end up sorted the same way regardless, but if you generate orders via smooth generator, the order date/time is randomized, so you can end up with non-sequential order IDs when sorting by date.
|
1.0
|
[HPOS] Orders list table is sorted by ID when using orders tables, but by date when using posts table - Ideally the list table would be sorted the same way in both cases. Under normal circumstances, I imagine the orders would end up sorted the same way regardless, but if you generate orders via smooth generator, the order date/time is randomized, so you can end up with non-sequential order IDs when sorting by date.
|
non_process
|
orders list table is sorted by id when using orders tables but by date when using posts table ideally the list table would be sorted the same way in both cases under normal circumstances i imagine the orders would end up sorted the same way regardless but if you generate orders via smooth generator the order date time is randomized so you can end up with non sequential order ids when sorting by date
| 0
|
52,997
| 13,099,353,634
|
IssuesEvent
|
2020-08-03 21:25:01
|
syncthing/syncthing
|
https://api.github.com/repos/syncthing/syncthing
|
closed
|
TestWatchRename test fails on FreeBSD.
|
bug build
|
Hi,
Whilst investigating that kqueue issue (#6596) I've found an unrelated (?) test failure:
```
=== RUN TestWatchRename
TestWatchRename: basicfs_watch_test.go:444: Timed out before receiving all expected events
--- FAIL: TestWatchRename (10.01s)
```
This is today's master (c20ed80dc473ae7476a9359a55f7ec2618eb62e2) on FreeBSD-12.1.
All I did was add `-v` and `-count=1` to the test target in `build.go`.
Happy to provide more info if you let me know how to get it :)
Please note that I'm not very familiar with FreeBSD.
|
1.0
|
TestWatchRename test fails on FreeBSD. - Hi,
Whilst investigating that kqueue issue (#6596) I've found an unrelated (?) test failure:
```
=== RUN TestWatchRename
TestWatchRename: basicfs_watch_test.go:444: Timed out before receiving all expected events
--- FAIL: TestWatchRename (10.01s)
```
This is today's master (c20ed80dc473ae7476a9359a55f7ec2618eb62e2) on FreeBSD-12.1.
All I did was add `-v` and `-count=1` to the test target in `build.go`.
Happy to provide more info if you let me know how to get it :)
Please note that I'm not very familiar with FreeBSD.
|
non_process
|
testwatchrename test fails on freebsd hi whilst investigating that kqueue issue i ve found an unrelated test failure run testwatchrename testwatchrename basicfs watch test go timed out before receiving all expected events fail testwatchrename this is today s master on freebsd all i did was add v and count to the test target in build go happy to provide more info if you let me know how to get it please note that i m not very familiar with freebsd
| 0
|
5,469
| 8,335,742,213
|
IssuesEvent
|
2018-09-28 04:12:39
|
mozilla-tw/ScreenshotGo
|
https://api.github.com/repos/mozilla-tw/ScreenshotGo
|
closed
|
On settings page, tapping give feedback and rate 5 stars will lead to empty google play page
|
epm wanted others process
|
STR
1. Launch Scryer
2. Tap Menu -> Settings -> Give feedback
3. Tap Yes, Rate 5 Stars
Expected result:
Leads to a google play page to rate the app
Actual result:
Leads to item not found page
ScreenshotGo version:
v0.8(445)
Device: Redmi 4X
Android 7.1.2
Screenshot:

|
1.0
|
On settings page, tapping give feedback and rate 5 stars will lead to empty google play page - STR
1. Launch Scryer
2. Tap Menu -> Settings -> Give feedback
3. Tap Yes, Rate 5 Stars
Expected result:
Leads to a google play page to rate the app
Actual result:
Leads to item not found page
ScreenshotGo version:
v0.8(445)
Device: Redmi 4X
Android 7.1.2
Screenshot:

|
process
|
on settings page tapping give feedback and rate stars will lead to empty google play page str launch scryer tap menu settings give feedback tap yes rate stars expected result leads to a google play page to rate the app actual result leads to item not found page screenshotgo version device redmi android screenshot
| 1
|
7,610
| 7,993,519,853
|
IssuesEvent
|
2018-07-20 08:00:21
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
FCM server key confusing
|
app-service-mobile/svc cxp in-progress product-question triaged
|
What key should i use from FCM ?
1) just Server key ?
2) LEgacy Server Key ?
in https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push#configure-the-mobile-apps-back-end-to-send-push-requests-by-using-fcm , it says legacy server key. But in Xamarin university Rene Ruppert video he copies Server key.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6f503e37-b16e-c92c-8668-313428451053
* Version Independent ID: edc66d3e-bf88-3c4d-a148-6288a7022f16
* Content: [Add push notifications to your Xamarin.Forms app](https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push#configure-the-mobile-apps-back-end-to-send-push-requests-by-using-fcm)
* Content Source: [articles/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push.md)
* Service: **app-service-mobile**
* GitHub Login: @conceptdev
* Microsoft Alias: **crdun**
|
1.0
|
FCM server key confusing - What key should i use from FCM ?
1) just Server key ?
2) LEgacy Server Key ?
in https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push#configure-the-mobile-apps-back-end-to-send-push-requests-by-using-fcm , it says legacy server key. But in Xamarin university Rene Ruppert video he copies Server key.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6f503e37-b16e-c92c-8668-313428451053
* Version Independent ID: edc66d3e-bf88-3c4d-a148-6288a7022f16
* Content: [Add push notifications to your Xamarin.Forms app](https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push#configure-the-mobile-apps-back-end-to-send-push-requests-by-using-fcm)
* Content Source: [articles/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push.md)
* Service: **app-service-mobile**
* GitHub Login: @conceptdev
* Microsoft Alias: **crdun**
|
non_process
|
fcm server key confusing what key should i use from fcm just server key legacy server key in it says legacy server key but in xamarin university rene ruppert video he copies server key document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service mobile github login conceptdev microsoft alias crdun
| 0
|
119,006
| 25,448,795,035
|
IssuesEvent
|
2022-11-24 08:51:03
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
opened
|
The show button text label feature should not rely on a global classname
|
[Priority] Low [Type] Code Quality
|
### Description
If you toggle the "show button text label" feature, a classname "show-icon-labels" is added as a top level element of the editors and components from the "block-editor" package check the presence of a global classname to trigger or not text labels on icon buttons.
I see two issues with this approach:
- The classname is not an actual API, it could be considered but I think it's not ideal.
- Performance could be impacted since changing a global classname re-renders all components that are children of the root component where the classname is applied.
Potential approaches:
- Adding a `showIconLabels` prop to UI components like `BlockToolbar`...
- or Adding a `showIconLabels` setting to the BlockEditorProvider settings prop.
I think I prefer the first approach if possible but the second is ok too.
### Step-by-step reproduction instructions
Trigger the "show button text labels" option in the preferences modal.
### Screenshots, screen recording, code snippet
_No response_
### Environment info
_No response_
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes
|
1.0
|
The show button text label feature should not rely on a global classname - ### Description
If you toggle the "show button text label" feature, a classname "show-icon-labels" is added as a top level element of the editors and components from the "block-editor" package check the presence of a global classname to trigger or not text labels on icon buttons.
I see two issues with this approach:
- The classname is not an actual API, it could be considered but I think it's not ideal.
- Performance could be impacted since changing a global classname re-renders all components that are children of the root component where the classname is applied.
Potential approaches:
- Adding a `showIconLabels` prop to UI components like `BlockToolbar`...
- or Adding a `showIconLabels` setting to the BlockEditorProvider settings prop.
I think I prefer the first approach if possible but the second is ok too.
### Step-by-step reproduction instructions
Trigger the "show button text labels" option in the preferences modal.
### Screenshots, screen recording, code snippet
_No response_
### Environment info
_No response_
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes
|
non_process
|
the show button text label feature should not rely on a global classname description if you toggle the show button text label feature a classname show icon labels is added as a top level element of the editors and components from the block editor package check the presence of a global classname to trigger or not text labels on icon buttons i see two issues with this approach the classname is not an actual api it could be considered but i think it s not ideal performance could be impacted since changing a global classname re renders all components that are children of the root component where the classname is applied potential approaches adding a showiconlabels prop to ui components like blocktoolbar or adding a showiconlabels setting to the blockeditorprovider settings prop i think i prefer the first approach if possible but the second is ok too step by step reproduction instructions trigger the show button text labels option in the preferences modal screenshots screen recording code snippet no response environment info no response please confirm that you have searched existing issues in the repo yes please confirm that you have tested with all plugins deactivated except gutenberg yes
| 0
|
138,999
| 20,755,812,082
|
IssuesEvent
|
2022-03-15 12:06:11
|
EscolaDeSaudePublica/DesignLab
|
https://api.github.com/repos/EscolaDeSaudePublica/DesignLab
|
closed
|
2. Apoiar a Construção da Política Estadual de Transplante (item 2.7)
|
Oficina Design Sem Projeto Definido Prioridade Design: Alta
|
## **Objetivo**
**Como** designer
**Quero** apoiar a construção da Política Estadual de Transplante
**Para** nortear da elaboração do Plano Estadual de Doação e Transplante do Ceará (2022-2023)
## **Contexto**
- A Secretaria Executiva de Políticas de Saúde - SEPOS/SESA, solicitou à Superintendência da ESP/CE, por meio do ofício 10/2022, apoio do DesignLab/Felicilab para facilitação na construção da Política Estadual de Transplante. O pedido partiu da experiência na I Oficina de Design de Serviços Públicos, realizada com a Sepos em Ago/2021. Foi definido que a Adins será a responsável pela condução do processo e o DesignLab entrará como apoio.
## **Escopo**
- [x] 2.7 Fornecer instruções e orientações para equipe de apoio da Sepos
## Observações
Épico: [Oficinas de Design para Políticas Públicas #225](https://github.com/EscolaDeSaudePublica/DesignLab/issues/225)
Stakeholder: Rosangela Cavalcante (Solicitar contato no privado) e Wilma (Adins)
Pessoa de Contato: Camila Mendes (Sepos)

|
2.0
|
2. Apoiar a Construção da Política Estadual de Transplante (item 2.7) - ## **Objetivo**
**Como** designer
**Quero** apoiar a construção da Política Estadual de Transplante
**Para** nortear da elaboração do Plano Estadual de Doação e Transplante do Ceará (2022-2023)
## **Contexto**
- A Secretaria Executiva de Políticas de Saúde - SEPOS/SESA, solicitou à Superintendência da ESP/CE, por meio do ofício 10/2022, apoio do DesignLab/Felicilab para facilitação na construção da Política Estadual de Transplante. O pedido partiu da experiência na I Oficina de Design de Serviços Públicos, realizada com a Sepos em Ago/2021. Foi definido que a Adins será a responsável pela condução do processo e o DesignLab entrará como apoio.
## **Escopo**
- [x] 2.7 Fornecer instruções e orientações para equipe de apoio da Sepos
## Observações
Épico: [Oficinas de Design para Políticas Públicas #225](https://github.com/EscolaDeSaudePublica/DesignLab/issues/225)
Stakeholder: Rosangela Cavalcante (Solicitar contato no privado) e Wilma (Adins)
Pessoa de Contato: Camila Mendes (Sepos)

|
non_process
|
apoiar a construção da política estadual de transplante item objetivo como designer quero apoiar a construção da política estadual de transplante para nortear da elaboração do plano estadual de doação e transplante do ceará contexto a secretaria executiva de políticas de saúde sepos sesa solicitou à superintendência da esp ce por meio do ofício apoio do designlab felicilab para facilitação na construção da política estadual de transplante o pedido partiu da experiência na i oficina de design de serviços públicos realizada com a sepos em ago foi definido que a adins será a responsável pela condução do processo e o designlab entrará como apoio escopo fornecer instruções e orientações para equipe de apoio da sepos observações épico stakeholder rosangela cavalcante solicitar contato no privado e wilma adins pessoa de contato camila mendes sepos
| 0
|
4,345
| 7,246,143,789
|
IssuesEvent
|
2018-02-14 20:34:22
|
hardvolk/foodie-journal
|
https://api.github.com/repos/hardvolk/foodie-journal
|
closed
|
Investigación para integrar animaciones utilizando angular
|
In process
|
Si es posible crear un Demo para ver como funcionan las animaciones en angular, esto servira para los porcentajes en el caso de la aplicación cuando el usuario cumpla una tarea.
|
1.0
|
Investigación para integrar animaciones utilizando angular - Si es posible crear un Demo para ver como funcionan las animaciones en angular, esto servira para los porcentajes en el caso de la aplicación cuando el usuario cumpla una tarea.
|
process
|
investigación para integrar animaciones utilizando angular si es posible crear un demo para ver como funcionan las animaciones en angular esto servira para los porcentajes en el caso de la aplicación cuando el usuario cumpla una tarea
| 1
|
17,842
| 23,779,986,945
|
IssuesEvent
|
2022-09-02 02:55:51
|
holmesconsulting/geekonfilm
|
https://api.github.com/repos/holmesconsulting/geekonfilm
|
closed
|
Create a shared calendar
|
process
|
Shared calendar should have:
- movie releases for May, June & July
- recording dates
- podcast release dates
|
1.0
|
Create a shared calendar - Shared calendar should have:
- movie releases for May, June & July
- recording dates
- podcast release dates
|
process
|
create a shared calendar shared calendar should have movie releases for may june july recording dates podcast release dates
| 1
|
15,354
| 19,526,401,933
|
IssuesEvent
|
2021-12-30 08:41:54
|
gradle/gradle
|
https://api.github.com/repos/gradle/gradle
|
closed
|
Gradle cache and incremental compiling is not working with Micronaut 3
|
a:bug closed:duplicate in:annotation-processing
|
Hi I am using Gradle 6.9.1 and Micronaut-inject-groovy 3.2.3, the project is fully recompiled when I changed just one file,
it complains for not finding the source of the generated class from Micronaut so it recompiles everything !
here the debug info :
```
> Task :compileGroovy
Caching disabled for task ':compileGroovy' because:
Gradle does not know how file 'build/classes/groovy/main/uberall/admin/AdminListingController$_listGoogleEnterprise_closure3$_closure43.class' was created (output property 'destinationDirectory'). Task output caching requires exclusive access to output paths to guarantee correctness (i.e. multiple tasks are not allowed to produce output in the same location).
Task ':compileGroovy' is not up-to-date because:
Input property 'stableSources' file /Users/nmoualek/IdeaProjects/company/grails-app/controllers/company/admin/AdminBusinessController.groovy has changed.
Groovy compilation avoidance is an incubating feature.
Created classpath snapshot for incremental compilation in 0.712 secs. 8018 duplicate classes found in classpath (see all with --debug).
Class dependency analysis for incremental compilation took 3.494 secs.
Full recompilation is required because unable to find source file of class uberall.admin.AdminBusinessController$__tt__show_closure24$_closure53. Analysis took 4.251 secs.
Starting process 'Gradle Worker Daemon 1'. Working directory: /Users/nmoualek/.gradle/workers Command: /Library/Java/JavaVirtualMachines/jdk-15.0.2.jdk/Contents/Home/bin/java --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.lang.invoke=ALL-UNNAMED --add-opens java.prefs/java.util.prefs=ALL-UNNAMED @/private/var/folders/8m/mzc27cy502j7fhgn_f7nmd9c0000gn/T/gradle-worker-classpath12571877760384959526txt -Xms512m -Xmx4096m -Dfile.encoding=UTF-8 -Duser.country=FR -Duser.language=en -Duser.variant worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Worker Daemon 1'
Successfully started process 'Gradle Worker Daemon 1'
Started Gradle worker daemon (0.483 secs) with fork options DaemonForkOptions{executable=/Library/Java/JavaVirtualMachines/jdk-15.0.2.jdk/Contents/Home/bin/java, minHeapSize=512m, maxHeapSize=4096m, jvmArgs=[--add-opens, java.base/java.lang=ALL-UNNAMED, --add-opens, java.base/java.lang.invoke=ALL-UNNAMED, --add-opens, java.prefs/java.util.prefs=ALL-UNNAMED], keepAliveMode=SESSION}.
This JVM does not support getting OS memory, so no OS memory status updates will be broadcast
> Task :compileGroovy
12:36:38.937 [/127.0.0.1:58739 to /127.0.0.1:58738 workers] DEBUG io.micronaut.core.optim.StaticOptimizations - No optimizations class io.micronaut.core.io.service.SoftServiceLoader$Optimizations found
Compiling with JDK Java compiler API.
12:38:45.066 [/127.0.0.1:58739 to /127.0.0.1:58738 workers] DEBUG io.micronaut.core.optim.StaticOptimizations - No optimizations class io.micronaut.core.reflect.ClassUtils$Optimizations found
```
|
1.0
|
Gradle cache and incremental compiling is not working with Micronaut 3 - Hi I am using Gradle 6.9.1 and Micronaut-inject-groovy 3.2.3, the project is fully recompiled when I changed just one file,
it complains for not finding the source of the generated class from Micronaut so it recompiles everything !
here the debug info :
```
> Task :compileGroovy
Caching disabled for task ':compileGroovy' because:
Gradle does not know how file 'build/classes/groovy/main/uberall/admin/AdminListingController$_listGoogleEnterprise_closure3$_closure43.class' was created (output property 'destinationDirectory'). Task output caching requires exclusive access to output paths to guarantee correctness (i.e. multiple tasks are not allowed to produce output in the same location).
Task ':compileGroovy' is not up-to-date because:
Input property 'stableSources' file /Users/nmoualek/IdeaProjects/company/grails-app/controllers/company/admin/AdminBusinessController.groovy has changed.
Groovy compilation avoidance is an incubating feature.
Created classpath snapshot for incremental compilation in 0.712 secs. 8018 duplicate classes found in classpath (see all with --debug).
Class dependency analysis for incremental compilation took 3.494 secs.
Full recompilation is required because unable to find source file of class uberall.admin.AdminBusinessController$__tt__show_closure24$_closure53. Analysis took 4.251 secs.
Starting process 'Gradle Worker Daemon 1'. Working directory: /Users/nmoualek/.gradle/workers Command: /Library/Java/JavaVirtualMachines/jdk-15.0.2.jdk/Contents/Home/bin/java --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.lang.invoke=ALL-UNNAMED --add-opens java.prefs/java.util.prefs=ALL-UNNAMED @/private/var/folders/8m/mzc27cy502j7fhgn_f7nmd9c0000gn/T/gradle-worker-classpath12571877760384959526txt -Xms512m -Xmx4096m -Dfile.encoding=UTF-8 -Duser.country=FR -Duser.language=en -Duser.variant worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Worker Daemon 1'
Successfully started process 'Gradle Worker Daemon 1'
Started Gradle worker daemon (0.483 secs) with fork options DaemonForkOptions{executable=/Library/Java/JavaVirtualMachines/jdk-15.0.2.jdk/Contents/Home/bin/java, minHeapSize=512m, maxHeapSize=4096m, jvmArgs=[--add-opens, java.base/java.lang=ALL-UNNAMED, --add-opens, java.base/java.lang.invoke=ALL-UNNAMED, --add-opens, java.prefs/java.util.prefs=ALL-UNNAMED], keepAliveMode=SESSION}.
This JVM does not support getting OS memory, so no OS memory status updates will be broadcast
> Task :compileGroovy
12:36:38.937 [/127.0.0.1:58739 to /127.0.0.1:58738 workers] DEBUG io.micronaut.core.optim.StaticOptimizations - No optimizations class io.micronaut.core.io.service.SoftServiceLoader$Optimizations found
Compiling with JDK Java compiler API.
12:38:45.066 [/127.0.0.1:58739 to /127.0.0.1:58738 workers] DEBUG io.micronaut.core.optim.StaticOptimizations - No optimizations class io.micronaut.core.reflect.ClassUtils$Optimizations found
```
|
process
|
gradle cache and incremental compiling is not working with micronaut hi i am using gradle and micronaut inject groovy the project is fully recompiled when i changed just one file it complains for not finding the source of the generated class from micronaut so it recompiles everything here the debug info task compilegroovy caching disabled for task compilegroovy because gradle does not know how file build classes groovy main uberall admin adminlistingcontroller listgoogleenterprise class was created output property destinationdirectory task output caching requires exclusive access to output paths to guarantee correctness i e multiple tasks are not allowed to produce output in the same location task compilegroovy is not up to date because input property stablesources file users nmoualek ideaprojects company grails app controllers company admin adminbusinesscontroller groovy has changed groovy compilation avoidance is an incubating feature created classpath snapshot for incremental compilation in secs duplicate classes found in classpath see all with debug class dependency analysis for incremental compilation took secs full recompilation is required because unable to find source file of class uberall admin adminbusinesscontroller tt show analysis took secs starting process gradle worker daemon working directory users nmoualek gradle workers command library java javavirtualmachines jdk jdk contents home bin java add opens java base java lang all unnamed add opens java base java lang invoke all unnamed add opens java prefs java util prefs all unnamed private var folders t gradle worker dfile encoding utf duser country fr duser language en duser variant worker org gradle process internal worker gradleworkermain gradle worker daemon successfully started process gradle worker daemon started gradle worker daemon secs with fork options daemonforkoptions executable library java javavirtualmachines jdk jdk contents home bin java minheapsize maxheapsize jvmargs keepalivemode session this jvm does not support getting os memory so no os memory status updates will be broadcast task compilegroovy debug io micronaut core optim staticoptimizations no optimizations class io micronaut core io service softserviceloader optimizations found compiling with jdk java compiler api debug io micronaut core optim staticoptimizations no optimizations class io micronaut core reflect classutils optimizations found
| 1
|
73,991
| 7,371,899,940
|
IssuesEvent
|
2018-03-13 13:19:52
|
Kassensystem/DatabaseSystem
|
https://api.github.com/repos/Kassensystem/DatabaseSystem
|
opened
|
Test des PrinterService über die Netzwerkschnittstelle
|
testing
|
Die beiden Tests müssen über die Netzwerkschnittstelle von der Android-App getestet werden:
- [ ] Ausdrucken einer Bestellung unter Angabe der Order-ID. Das wird durch einen Button/Tick-Box in der App aktiviert.
- [ ] Ausdrucken von neu hinzugefügten OrderedItems. Dafür über die App einer neuen oder bereits existierenden Bestellung neuen Items hinzufügen.
|
1.0
|
Test des PrinterService über die Netzwerkschnittstelle - Die beiden Tests müssen über die Netzwerkschnittstelle von der Android-App getestet werden:
- [ ] Ausdrucken einer Bestellung unter Angabe der Order-ID. Das wird durch einen Button/Tick-Box in der App aktiviert.
- [ ] Ausdrucken von neu hinzugefügten OrderedItems. Dafür über die App einer neuen oder bereits existierenden Bestellung neuen Items hinzufügen.
|
non_process
|
test des printerservice über die netzwerkschnittstelle die beiden tests müssen über die netzwerkschnittstelle von der android app getestet werden ausdrucken einer bestellung unter angabe der order id das wird durch einen button tick box in der app aktiviert ausdrucken von neu hinzugefügten ordereditems dafür über die app einer neuen oder bereits existierenden bestellung neuen items hinzufügen
| 0
|
6,171
| 9,082,197,299
|
IssuesEvent
|
2019-02-17 10:00:36
|
herczeg6179/mood-music-player
|
https://api.github.com/repos/herczeg6179/mood-music-player
|
closed
|
Auto deploy gh-page branch
|
deploy-process
|
either do a push/merge hook, or maybe a jenkins job, the point is, whenever changes go up on `master`, the gh-pages branch should build and commit the built files
|
1.0
|
Auto deploy gh-page branch - either do a push/merge hook, or maybe a jenkins job, the point is, whenever changes go up on `master`, the gh-pages branch should build and commit the built files
|
process
|
auto deploy gh page branch either do a push merge hook or maybe a jenkins job the point is whenever changes go up on master the gh pages branch should build and commit the built files
| 1
|
289,708
| 21,790,085,509
|
IssuesEvent
|
2022-05-14 19:04:08
|
open-mmlab/mmsegmentation
|
https://api.github.com/repos/open-mmlab/mmsegmentation
|
closed
|
Why does Resizing come before RandomCropping?
|
documentation awaiting response
|
Hi, all. First, thanks for this great project.
Is there a specific reason for Resizing to happen before RandomCropping? I mean, why do not crop the image first, and then resize it to the desired resolution?
https://github.com/open-mmlab/mmsegmentation/blob/740b54577064a4738e689d80b961c67d176bc319/configs/_base_/datasets/cityscapes.py#L10-L11
I especially asked it, because I have tried experiments with [RandomResizedCrop](RandomResizedCrop) on Pytorch, and based on the configuration of Cityscapes we could just set the area scale to (1/16, 1) and fix the aspect_ratio to (1, 1), and I also adapted my code to ensure the `cat_max_ratio` feature here, but I'm getting far worse results than using the sequence (resizing, random_cropping).
|
1.0
|
Why does Resizing come before RandomCropping? - Hi, all. First, thanks for this great project.
Is there a specific reason for Resizing to happen before RandomCropping? I mean, why do not crop the image first, and then resize it to the desired resolution?
https://github.com/open-mmlab/mmsegmentation/blob/740b54577064a4738e689d80b961c67d176bc319/configs/_base_/datasets/cityscapes.py#L10-L11
I especially asked it, because I have tried experiments with [RandomResizedCrop](RandomResizedCrop) on Pytorch, and based on the configuration of Cityscapes we could just set the area scale to (1/16, 1) and fix the aspect_ratio to (1, 1), and I also adapted my code to ensure the `cat_max_ratio` feature here, but I'm getting far worse results than using the sequence (resizing, random_cropping).
|
non_process
|
why does resizing come before randomcropping hi all first thanks for this great project is there a specific reason for resizing to happen before randomcropping i mean why do not crop the image first and then resize it to the desired resolution i especially asked it because i have tried experiments with randomresizedcrop on pytorch and based on the configuration of cityscapes we could just set the area scale to and fix the aspect ratio to and i also adapted my code to ensure the cat max ratio feature here but i m getting far worse results than using the sequence resizing random cropping
| 0
|
195,641
| 15,531,727,692
|
IssuesEvent
|
2021-03-14 01:16:17
|
buildtesters/buildtest
|
https://api.github.com/repos/buildtesters/buildtest
|
closed
|
reorganize contributing guide topics
|
documentation
|
# Link
https://buildtest.readthedocs.io/en/devel/contributing.html#contributing-topics
# What is the issue?
The contributing guide is not in any particular order, it may be bit confusing to readers if they are reading this docs. We should organize the topics into meaningful titles that would address the target audience.
The target audience for contributors can be any of the following
1. Contribute to user documentation
2. Contributors that want to report Issues, feature request with framework or schema
3. Contribute to buildtest source code or regression test
4. Maintainer guide (documentation for maintainers) for administrative (github, gitlab) access, access to third party software (readthedocs, google analytics, slack)
|
1.0
|
reorganize contributing guide topics - # Link
https://buildtest.readthedocs.io/en/devel/contributing.html#contributing-topics
# What is the issue?
The contributing guide is not in any particular order, which may be a bit confusing to readers. We should organize the topics under meaningful titles that address the target audience.
The target audience for contributors can be any of the following
1. Contribute to user documentation
2. Contributors that want to report Issues, feature request with framework or schema
3. Contribute to buildtest source code or regression test
4. Maintainer guide (documentation for maintainers) for administrative (github, gitlab) access, access to third party software (readthedocs, google analytics, slack)
|
non_process
|
reorganize contributing guide topics link what is the issue the contributing guide is not in any particular order it may be bit confusing to readers if they are reading this docs we should organize the topics into meaningful titles that would address the target audience the target audience for contributors can be any of the following contribute to user documentation contributors that want to report issues feature request with framework or schema contribute to buildtest source code or regression test maintainer guide documentation for maintainers for administrative github gitlab access access to third party software readthedocs google analytics slack
| 0
|
10,027
| 13,044,161,481
|
IssuesEvent
|
2020-07-29 03:47:23
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AddDateIntReal` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AddDateIntReal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `AddDateIntReal` from TiDB -
## Description
Port the scalar function `AddDateIntReal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function adddateintreal from tidb description port the scalar function adddateintreal from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
253,479
| 21,685,046,269
|
IssuesEvent
|
2022-05-09 10:25:28
|
fabnumdef/chatbot-front
|
https://api.github.com/repos/fabnumdef/chatbot-front
|
closed
|
[No answer found] Add suggestions when no knowledge entries are found
|
A tester ⚗️
|
**Current behavior:**
When the bot is not confident, or finds no close match to the question asked, the default answer is displayed, without suggesting other knowledge entries.
**Desired behavior:**
When a question finds no confident match but does have a match above 20%, the bot should display the 3 best suggestions (to be decided whether the message should be administrable?)
"Suggested message": This is the first time I've been asked this question this way. 😅 Let me offer a few suggestions! 😉
- suggestion 1
- suggestion 2
- suggestion 3
example below:
<img width="540" alt="Screenshot 2022-02-08 at 14 49 13" src="https://user-images.githubusercontent.com/63412351/153002921-2b3ed1a8-6f83-4244-a33f-9323495ce7b4.png">
When the bot finds no match at all (e.g. if the user types "H8D90KJJ"), the default answer is displayed, without suggestions.
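The threshold logic described above could be sketched as follows (the helper name and the confidence thresholds are illustrative, not the bot's actual implementation):

```python
def answer(query_scores, sure_threshold=0.8, suggest_threshold=0.2, top_k=3):
    """Pick a reply strategy from (knowledge, score) match results.

    Returns ("answer", best) for a confident match, ("suggest", suggestions)
    when the best score is between the two thresholds, and ("default", None)
    when nothing clears the 20% floor.
    """
    ranked = sorted(query_scores, key=lambda kv: kv[1], reverse=True)
    if ranked and ranked[0][1] >= sure_threshold:
        return "answer", ranked[0][0]
    suggestions = [k for k, s in ranked if s > suggest_threshold][:top_k]
    if suggestions:
        return "suggest", suggestions
    return "default", None

assert answer([("leave request", 0.9)]) == ("answer", "leave request")
assert answer([("a", 0.5), ("b", 0.4), ("c", 0.3), ("d", 0.25)])[1] == ["a", "b", "c"]
assert answer([("x", 0.1)]) == ("default", None)  # falls back to the default answer
```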
|
1.0
|
[Pas de réponse trouvée] Ajout de suggestions lorsque connaissances non trouvées - **Fonctionnement actuel :**
Lorsque le bot n'est pas sûr ou ne trouve pas d'association proche à la question posée, c'est la réponse par défaut qui s'affiche.
Sans suggestions d'autres connaissances.
**Fonctionnement souhaité :**
Lorsqu'une question posée ne trouve pas d'association sûre mais une association supérieure à 20% , alors il faudrait que le bot puisse afficher les 3 meilleures propositions (à voir si le message est rendu administrable?)
"Message suggéré " : C'est la première fois qu'on me pose cette question de cette manière. 😅 Je vais vous proposer quelques suggestions ! 😉
- suggestion 1
- suggestion 2
- suggestion 3
ex ci-dessous:
<img width="540" alt="Capture d’écran 2022-02-08 à 14 49 13" src="https://user-images.githubusercontent.com/63412351/153002921-2b3ed1a8-6f83-4244-a33f-9323495ce7b4.png">
Lorsque le bot ne trouve pas du tout de correspondance (ex; si l'utilisateur tape "H8D90KJJ", alors c'est la réponse par défaut qui s'affiche, sans suggestions.
|
non_process
|
ajout de suggestions lorsque connaissances non trouvées fonctionnement actuel lorsque le bot n est pas sûr ou ne trouve pas d association proche à la question posée c est la réponse par défaut qui s affiche sans suggestions d autres connaissances fonctionnement souhaité lorsqu une question posée ne trouve pas d association sûre mais une association supérieure à alors il faudrait que le bot puisse afficher les meilleures propositions à voir si le message est rendu administrable message suggéré c est la première fois qu on me pose cette question de cette manière 😅 je vais vous proposer quelques suggestions 😉 suggestion suggestion suggestion ex ci dessous img width alt capture d’écran à src lorsque le bot ne trouve pas du tout de correspondance ex si l utilisateur tape alors c est la réponse par défaut qui s affiche sans suggestions
| 0
|
753,161
| 26,340,262,732
|
IssuesEvent
|
2023-01-10 17:07:48
|
sugarlabs/musicblocks
|
https://api.github.com/repos/sugarlabs/musicblocks
|
closed
|
detune effect?
|
Issue-Enhancement Issue-Wontfix Component-Tonejs Priority-Minor
|
Another builtin to the synths in tone.js is detune. Might be worth exploring as well..
|
1.0
|
detune effect? - Another builtin to the synths in tone.js is detune. Might be worth exploring as well..
|
non_process
|
detune effect another builtin to the synths in tone js is detune might be worth exploring as well
| 0
|
20,634
| 27,314,668,185
|
IssuesEvent
|
2023-02-24 14:50:37
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
opened
|
Consider moving terminals out of the shared process
|
terminal shared-process sandbox
|
Follow-up from https://github.com/microsoft/vscode/issues/154050
We now have sufficient infrastructure in place to easily fork a `UtilityProcess` with support for message-port communication with other processes:
https://github.com/microsoft/vscode/blob/ed6bb0790b236b6732dba147021a2eefe7d94a13/src/vs/platform/utilityProcess/electron-main/utilityProcess.ts#L121
Currently the PTY agent is still running as a child process inside the shared process, which houses all terminals:
https://github.com/microsoft/vscode/blob/ed6bb0790b236b6732dba147021a2eefe7d94a13/src/vs/code/node/sharedProcess/sharedProcessMain.ts#L386-L399
We should consider moving the PTY agent out into a standalone utility process to reduce pressure on the shared process (and actually vice versa, reduce pressure on terminals).
|
1.0
|
Consider moving terminals out of the shared process - Follow-up from https://github.com/microsoft/vscode/issues/154050
We now have sufficient infrastructure in place to easily fork a `UtilityProcess` with support for message-port communication with other processes:
https://github.com/microsoft/vscode/blob/ed6bb0790b236b6732dba147021a2eefe7d94a13/src/vs/platform/utilityProcess/electron-main/utilityProcess.ts#L121
Currently the PTY agent is still running as a child process inside the shared process, which houses all terminals:
https://github.com/microsoft/vscode/blob/ed6bb0790b236b6732dba147021a2eefe7d94a13/src/vs/code/node/sharedProcess/sharedProcessMain.ts#L386-L399
We should consider moving the PTY agent out into a standalone utility process to reduce pressure on the shared process (and actually vice versa, reduce pressure on terminals).
|
process
|
consider to move terminals out of shared process follow up from we now have sufficient infrastructure in place to easily fork a utilityprocess with support of message port communication to other processes currently the pty agent is still running as child process in the shared process housing all terminals we should consider moving the pty agent out into a standalone utility process to reduce pressure on the shared process and actually vice versa reduce pressure on terminals
| 1
|
154,756
| 24,335,342,189
|
IssuesEvent
|
2022-10-01 02:23:16
|
TRAVELERSTEAM/TRAVELERS-FE-USER
|
https://api.github.com/repos/TRAVELERSTEAM/TRAVELERS-FE-USER
|
closed
|
NoticeFrom/Implement the notice form
|
✨ Feature 💄 Design
|
## 💻 Work items
feat
- [x] Add router functionality
- [x] Add notice page
- [x] Main image
- [x] Notice / archive navigation buttons
- [x] Notice list table
- [x] Notice paging in units of 10
design
- [x] Notice form design
|
1.0
|
NoticeFrom/Implement the notice form - ## 💻 Work items
feat
- [x] Add router functionality
- [x] Add notice page
- [x] Main image
- [x] Notice / archive navigation buttons
- [x] Notice list table
- [x] Notice paging in units of 10
design
- [x] Notice form design
|
non_process
|
noticefrom 공지사항 폼 구현 💻 작업 사항 feat 라우터 기능 추가 공지사항 페이지 추가 메인 이미지 공지사항 자료실 이동 버튼 공지사항 글 목록 테이블 공지사항 글 페이징 단위 design 공지사항 폼 디자인
| 0
|
17,297
| 23,114,103,533
|
IssuesEvent
|
2022-07-27 15:13:08
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Raster to vector tools malfunction in v 3.26.1
|
Raster Processing Bug
|
### What is the bug or the crash?
Using the raster tile index tool, or raster-to-vector with an ASCII file as input, fails to output a spatial file when selecting "save to temporary file". Output tables are created but lack a spatial definition.
When selecting an output file location it works fine. I checked version 3.16 and it works fine with a temporary file.
Similar issue when checked 'raster to vector' (rasterise)
[manning.zip](https://github.com/qgis/QGIS/files/9200132/manning.zip)
### Steps to reproduce the issue
Raster>>Misc>>Tile Index>>choose ascii grid and output to temporary file
Same approach for rasterise tool
### Versions
3.26.1
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Raster to vector tools malfunction in v 3.26.1 - ### What is the bug or the crash?
Using the raster tile index tool, or raster-to-vector with an ASCII file as input, fails to output a spatial file when selecting "save to temporary file". Output tables are created but lack a spatial definition.
When selecting an output file location it works fine. I checked version 3.16 and it works fine with a temporary file.
Similar issue when checked 'raster to vector' (rasterise)
[manning.zip](https://github.com/qgis/QGIS/files/9200132/manning.zip)
### Steps to reproduce the issue
Raster>>Misc>>Tile Index>>choose ascii grid and output to temporary file
Same approach for rasterise tool
### Versions
3.26.1
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
raster to vector tools malfunction in v what is the bug or the crash using raster tile index tool or raster to vector with ascii file as input fails to output a spatial file when selecting save to temporary file output tables are created but lacking spatial definition when select an output file location it works ok checked version and it works ok to temporary file similar issue when checked raster to vector rasterise steps to reproduce the issue raster misc tile index choose ascii grid and output to temporary file same approach for rasterise tool versions supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
638,300
| 20,720,960,249
|
IssuesEvent
|
2022-03-13 11:37:44
|
polygon-isecure/core
|
https://api.github.com/repos/polygon-isecure/core
|
closed
|
[improvement]: standardized response structure for API routes
|
enhancement help wanted api internal priority: high
|
Currently, the API responds only with status codes which do not necessarily include any helpful reasons for understanding why the request failed or succeeded. By implementing a standardized response structure for API routes, it will be easier to debug the API and use it in headless environments.
Here is an example of an interface for implementing this:
```ts
interface Response<T> {
// only allowed if `status` is in the range of 100-299
data?: T;
status: number;
// only allowed if `status` is in the range of 400-599
error?: string;
}
```
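The same invariants can be mirrored in a small Python helper (a hypothetical sketch of the TypeScript interface above, not the project's actual implementation):

```python
def make_response(status, data=None, error=None):
    """Build a response dict enforcing the invariants from the interface:
    `data` only with 1xx-2xx statuses, `error` only with 4xx-5xx statuses."""
    if data is not None and not (100 <= status <= 299):
        raise ValueError("data requires a 1xx/2xx status")
    if error is not None and not (400 <= status <= 599):
        raise ValueError("error requires a 4xx/5xx status")
    body = {"status": status}
    if data is not None:
        body["data"] = data
    if error is not None:
        body["error"] = error
    return body

assert make_response(200, data={"id": 1}) == {"status": 200, "data": {"id": 1}}
assert make_response(404, error="not found")["error"] == "not found"
```

Centralizing construction like this keeps every route's payload debuggable and consistent for headless clients.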
|
1.0
|
[improvement]: standardized response structure for API routes - Currently, the API responds only with status codes which do not necessarily include any helpful reasons for understanding why the request failed or succeeded. By implementing a standardized response structure for API routes, it will be easier to debug the API and use it in headless environments.
Here is an example of an interface for implementing this:
```ts
interface Response<T> {
// only allowed if `status` is in the range of 100-299
data?: T;
status: number;
// only allowed if `status` is in the range of 400-599
error?: string;
}
```
|
non_process
|
standardized response structure for api routes currently the api responds only with status codes which do not necessarily include any helpful reasons for understanding why the request failed or succeeded by implementing a standardized response structure for api routes it will be easier to debug the api and use it in headless environments here is an example of an interface for implementing this ts interface response only allowed if status is in the range of data t status number only allowed if status is in the range of error string
| 0
|
20,625
| 27,295,855,678
|
IssuesEvent
|
2023-02-23 20:16:24
|
winter-telescope/winterdrp
|
https://api.github.com/repos/winter-telescope/winterdrp
|
opened
|
[FEATURE] Load zipped files
|
enhancement processors
|
**Is your feature request related to a problem? Please describe.**
I'm always frustrated when the pipeline doesn't check for zipped files (extension `.fits.fz`) in a given directory.
**Describe the solution you'd like**
Let's create a function like `ImageLoader.load_from_dir()`, but for zipped files!
|
1.0
|
[FEATURE] Load zipped files - **Is your feature request related to a problem? Please describe.**
I'm always frustrated when the pipeline doesn't check for zipped files (extension `.fits.fz`) in a given directory.
**Describe the solution you'd like**
Let's create a function like `ImageLoader.load_from_dir()`, but for zipped files!
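A minimal sketch of what the zipped-file variant could look like (the `find_images` helper and its extension list are assumptions for illustration, not the real `ImageLoader` API):

```python
import tempfile
from pathlib import Path

def find_images(directory, exts=(".fits", ".fits.fz")):
    """Collect image paths in a directory, including compressed ones.

    The extension tuple includes the zipped variant so `.fits.fz` files
    are no longer skipped.
    """
    return sorted(p for p in Path(directory).iterdir()
                  if any(p.name.endswith(e) for e in exts))

# Demo on a throwaway directory with one plain, one zipped, one unrelated file.
with tempfile.TemporaryDirectory() as d:
    for name in ("a.fits", "b.fits.fz", "c.txt"):
        (Path(d) / name).touch()
    found = [p.name for p in find_images(d)]

assert found == ["a.fits", "b.fits.fz"]
```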
|
process
|
load zipped files is your feature request related to a problem please describe i m always frustrated when the pipeline doesn t check for zipped files extension fits fz in a given directory describe the solution you d like let s create a function like imageloader load from dir but for zipped files
| 1
|
101,451
| 31,153,727,461
|
IssuesEvent
|
2023-08-16 11:47:45
|
pandas-dev/pandas
|
https://api.github.com/repos/pandas-dev/pandas
|
closed
|
BUILD: "error: incomplete definition of type 'struct _frame'" while compiling the datetime library on macOS & python 3.11
|
Build Needs Info
|
### Installation check
- [X] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).
### Platform
macOS-13.4.1-arm64-arm-64bit
### Installation Method
Built from source
### pandas Version
main (https://github.com/pandas-dev/pandas/commit/1c5c4efbad2873d137089a1fd32267f40c966850)
### Python Version
3.11.3
### Installation Logs
The environment is a clean `pyenv virtualenv` with just `pip install -r requirements-dev.txt` applied,
(after `CFLAGS="<setup include and linker directories for homebrew snappy>" pip install python-snappy`).
```
(pandas-dev) ➜ pandas git:(main) python setup.py build_ext -j 4
/Users/yves/git-public/pandas/setup.py:19: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import parse_version
/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py:66: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
config = read_configuration(filepath, True, ignore_option_errors, dist)
running build_ext
building 'pandas._libs.parsers' extension
building 'pandas._libs.lib' extension
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/lib.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/lib.o -Wno-error=unreachable-code
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/parsers.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/parsers.o -Wno-error=unreachable-code
building 'pandas._libs.ops_dispatch' extension
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/ops_dispatch.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/ops_dispatch.o -Wno-error=unreachable-code
building 'pandas._libs.pandas_datetime' extension
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/src/datetime/date_conversions.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/datetime/date_conversions.o -Wno-error=unreachable-code
pandas/_libs/ops_dispatch.c:3995:43: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations]
hash1 = ((PyBytesObject*)s1)->ob_shash;
^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) Py_hash_t ob_shash;
^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
pandas/_libs/ops_dispatch.c:3996:43: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations]
hash2 = ((PyBytesObject*)s2)->ob_shash;
^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) Py_hash_t ob_shash;
^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/src/datetime/pd_datetime.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/datetime/pd_datetime.o -Wno-error=unreachable-code
pandas/_libs/ops_dispatch.c:4665:5: error: incomplete definition of type 'struct _frame'
__Pyx_PyFrame_SetLineNumber(py_frame, py_line);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/ops_dispatch.c:457:62: note: expanded from macro '__Pyx_PyFrame_SetLineNumber'
#define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
~~~~~~~^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/pytypedefs.h:22:16: note: forward declaration of 'struct _frame'
typedef struct _frame PyFrameObject;
^
2 warnings and 1 error generated.
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/src/vendored/numpy/datetime/np_datetime.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/vendored/numpy/datetime/np_datetime.o -Wno-error=unreachable-code
pandas/_libs/parsers.c:47389:21: warning: fallthrough annotation in unreachable code [-Wunreachable-code-fallthrough]
CYTHON_FALLTHROUGH;
^
pandas/_libs/parsers.c:382:34: note: expanded from macro 'CYTHON_FALLTHROUGH'
#define CYTHON_FALLTHROUGH __attribute__((fallthrough))
^
pandas/_libs/parsers.c:47400:21: warning: fallthrough annotation in unreachable code [-Wunreachable-code-fallthrough]
CYTHON_FALLTHROUGH;
^
pandas/_libs/parsers.c:382:34: note: expanded from macro 'CYTHON_FALLTHROUGH'
#define CYTHON_FALLTHROUGH __attribute__((fallthrough))
^
pandas/_libs/lib.c:68646:21: warning: fallthrough annotation in unreachable code [-Wunreachable-code-fallthrough]
CYTHON_FALLTHROUGH;
^
pandas/_libs/lib.c:378:34: note: expanded from macro 'CYTHON_FALLTHROUGH'
#define CYTHON_FALLTHROUGH __attribute__((fallthrough))
^
pandas/_libs/lib.c:68657:21: warning: fallthrough annotation in unreachable code [-Wunreachable-code-fallthrough]
CYTHON_FALLTHROUGH;
^
pandas/_libs/lib.c:378:34: note: expanded from macro 'CYTHON_FALLTHROUGH'
#define CYTHON_FALLTHROUGH __attribute__((fallthrough))
^
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/src/vendored/numpy/datetime/np_datetime_strings.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/vendored/numpy/datetime/np_datetime_strings.o -Wno-error=unreachable-code
clang -bundle -undefined dynamic_lookup -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/datetime/date_conversions.o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/datetime/pd_datetime.o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/vendored/numpy/datetime/np_datetime.o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/vendored/numpy/datetime/np_datetime_strings.o -o build/lib.macosx-13.3-arm64-cpython-311/pandas/_libs/pandas_datetime.cpython-311-darwin.so
2 warnings generated.
clang -bundle -undefined dynamic_lookup -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/parsers.o -o build/lib.macosx-13.3-arm64-cpython-311/pandas/_libs/parsers.cpython-311-darwin.so
2 warnings generated.
clang -bundle -undefined dynamic_lookup -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/lib.o -o build/lib.macosx-13.3-arm64-cpython-311/pandas/_libs/lib.cpython-311-darwin.so
error: command '/usr/bin/clang' failed with exit code 1
```
|
1.0
|
BUILD: "error: incomplete definition of type 'struct _frame'" while compiling the datetime library on macOS & python 3.11 - ### Installation check
- [X] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).
### Platform
macOS-13.4.1-arm64-arm-64bit
### Installation Method
Built from source
### pandas Version
main (https://github.com/pandas-dev/pandas/commit/1c5c4efbad2873d137089a1fd32267f40c966850)
### Python Version
3.11.3
### Installation Logs
The environment is a clean `pyenv virtualenv` with just `pip install -r requirements-dev.txt` applied,
(after `CFLAGS="<setup include and linker directories for homebrew snappy>" pip install python-snappy`).
```
(pandas-dev) ➜ pandas git:(main) python setup.py build_ext -j 4
/Users/yves/git-public/pandas/setup.py:19: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import parse_version
/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py:66: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
config = read_configuration(filepath, True, ignore_option_errors, dist)
running build_ext
building 'pandas._libs.parsers' extension
building 'pandas._libs.lib' extension
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/lib.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/lib.o -Wno-error=unreachable-code
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/parsers.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/parsers.o -Wno-error=unreachable-code
building 'pandas._libs.ops_dispatch' extension
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/ops_dispatch.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/ops_dispatch.o -Wno-error=unreachable-code
building 'pandas._libs.pandas_datetime' extension
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/src/datetime/date_conversions.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/datetime/date_conversions.o -Wno-error=unreachable-code
pandas/_libs/ops_dispatch.c:3995:43: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations]
hash1 = ((PyBytesObject*)s1)->ob_shash;
^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) Py_hash_t ob_shash;
^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
pandas/_libs/ops_dispatch.c:3996:43: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations]
hash2 = ((PyBytesObject*)s2)->ob_shash;
^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) Py_hash_t ob_shash;
^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/src/datetime/pd_datetime.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/datetime/pd_datetime.o -Wno-error=unreachable-code
pandas/_libs/ops_dispatch.c:4665:5: error: incomplete definition of type 'struct _frame'
__Pyx_PyFrame_SetLineNumber(py_frame, py_line);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/ops_dispatch.c:457:62: note: expanded from macro '__Pyx_PyFrame_SetLineNumber'
#define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
~~~~~~~^
/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/pytypedefs.h:22:16: note: forward declaration of 'struct _frame'
typedef struct _frame PyFrameObject;
^
2 warnings and 1 error generated.
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/src/vendored/numpy/datetime/np_datetime.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/vendored/numpy/datetime/np_datetime.o -Wno-error=unreachable-code
pandas/_libs/parsers.c:47389:21: warning: fallthrough annotation in unreachable code [-Wunreachable-code-fallthrough]
CYTHON_FALLTHROUGH;
^
pandas/_libs/parsers.c:382:34: note: expanded from macro 'CYTHON_FALLTHROUGH'
#define CYTHON_FALLTHROUGH __attribute__((fallthrough))
^
pandas/_libs/parsers.c:47400:21: warning: fallthrough annotation in unreachable code [-Wunreachable-code-fallthrough]
CYTHON_FALLTHROUGH;
^
pandas/_libs/parsers.c:382:34: note: expanded from macro 'CYTHON_FALLTHROUGH'
#define CYTHON_FALLTHROUGH __attribute__((fallthrough))
^
pandas/_libs/lib.c:68646:21: warning: fallthrough annotation in unreachable code [-Wunreachable-code-fallthrough]
CYTHON_FALLTHROUGH;
^
pandas/_libs/lib.c:378:34: note: expanded from macro 'CYTHON_FALLTHROUGH'
#define CYTHON_FALLTHROUGH __attribute__((fallthrough))
^
pandas/_libs/lib.c:68657:21: warning: fallthrough annotation in unreachable code [-Wunreachable-code-fallthrough]
CYTHON_FALLTHROUGH;
^
pandas/_libs/lib.c:378:34: note: expanded from macro 'CYTHON_FALLTHROUGH'
#define CYTHON_FALLTHROUGH __attribute__((fallthrough))
^
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -DNPY_NO_DEPRECATED_API=0 -Ipandas/_libs/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/lib/python3.11/site-packages/numpy/core/include -I/Users/yves/.pyenv/versions/3.11.3/envs/pandas-dev/include -I/Users/yves/.pyenv/versions/3.11.3/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c pandas/_libs/src/vendored/numpy/datetime/np_datetime_strings.c -o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/vendored/numpy/datetime/np_datetime_strings.o -Wno-error=unreachable-code
clang -bundle -undefined dynamic_lookup -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/datetime/date_conversions.o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/datetime/pd_datetime.o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/vendored/numpy/datetime/np_datetime.o build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/src/vendored/numpy/datetime/np_datetime_strings.o -o build/lib.macosx-13.3-arm64-cpython-311/pandas/_libs/pandas_datetime.cpython-311-darwin.so
2 warnings generated.
clang -bundle -undefined dynamic_lookup -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/parsers.o -o build/lib.macosx-13.3-arm64-cpython-311/pandas/_libs/parsers.cpython-311-darwin.so
2 warnings generated.
clang -bundle -undefined dynamic_lookup -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/yves/.pyenv/versions/3.11.3/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib build/temp.macosx-13.3-arm64-cpython-311/pandas/_libs/lib.o -o build/lib.macosx-13.3-arm64-cpython-311/pandas/_libs/lib.cpython-311-darwin.so
error: command '/usr/bin/clang' failed with exit code 1
```
|
non_process
|
build error incomplete definition of type struct frame while compiling the datetime library on macos python installation check i have read the platform macos arm installation method built from source pandas version main python version installation logs the environment is a clean pyenv virtualenv with just pip install r requirements dev txt applied after cflags pip install python snappy pandas dev ➜ pandas git main python setup py build ext j users yves git public pandas setup py deprecationwarning pkg resources is deprecated as an api see from pkg resources import parse version users yves pyenv versions envs pandas dev lib site packages setuptools config pyprojecttoml py betaconfiguration support for in pyproject toml is still beta config read configuration filepath true ignore option errors dist running build ext building pandas libs parsers extension building pandas libs lib extension clang wsign compare wunreachable code fno common dynamic dndebug g fwrapv wall i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include dnpy no deprecated api ipandas libs include i users yves pyenv versions envs pandas dev lib site packages numpy core include i users yves pyenv versions envs pandas dev include i users yves pyenv versions library frameworks python framework versions include c pandas libs lib c o build temp macosx cpython pandas libs lib o wno error unreachable code clang wsign compare wunreachable code fno common dynamic dndebug g fwrapv wall i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include dnpy no deprecated api ipandas libs include i users yves pyenv versions envs pandas dev lib site packages numpy core include i users yves pyenv versions envs 
pandas dev include i users yves pyenv versions library frameworks python framework versions include c pandas libs parsers c o build temp macosx cpython pandas libs parsers o wno error unreachable code building pandas libs ops dispatch extension clang wsign compare wunreachable code fno common dynamic dndebug g fwrapv wall i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include dnpy no deprecated api ipandas libs include i users yves pyenv versions envs pandas dev lib site packages numpy core include i users yves pyenv versions envs pandas dev include i users yves pyenv versions library frameworks python framework versions include c pandas libs ops dispatch c o build temp macosx cpython pandas libs ops dispatch o wno error unreachable code building pandas libs pandas datetime extension clang wsign compare wunreachable code fno common dynamic dndebug g fwrapv wall i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include dnpy no deprecated api ipandas libs include i users yves pyenv versions envs pandas dev lib site packages numpy core include i users yves pyenv versions envs pandas dev include i users yves pyenv versions library frameworks python framework versions include c pandas libs src datetime date conversions c o build temp macosx cpython pandas libs src datetime date conversions o wno error unreachable code pandas libs ops dispatch c warning ob shash is deprecated pybytesobject ob shash users yves pyenv versions library frameworks python framework versions include cpython bytesobject h note ob shash has been explicitly marked deprecated here py deprecated py hash t ob shash users yves pyenv versions library frameworks python framework versions 
include pyport h note expanded from macro py deprecated define py deprecated version unused attribute deprecated pandas libs ops dispatch c warning ob shash is deprecated pybytesobject ob shash users yves pyenv versions library frameworks python framework versions include cpython bytesobject h note ob shash has been explicitly marked deprecated here py deprecated py hash t ob shash users yves pyenv versions library frameworks python framework versions include pyport h note expanded from macro py deprecated define py deprecated version unused attribute deprecated clang wsign compare wunreachable code fno common dynamic dndebug g fwrapv wall i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include dnpy no deprecated api ipandas libs include i users yves pyenv versions envs pandas dev lib site packages numpy core include i users yves pyenv versions envs pandas dev include i users yves pyenv versions library frameworks python framework versions include c pandas libs src datetime pd datetime c o build temp macosx cpython pandas libs src datetime pd datetime o wno error unreachable code pandas libs ops dispatch c error incomplete definition of type struct frame pyx pyframe setlinenumber py frame py line pandas libs ops dispatch c note expanded from macro pyx pyframe setlinenumber define pyx pyframe setlinenumber frame lineno frame f lineno lineno users yves pyenv versions library frameworks python framework versions include pytypedefs h note forward declaration of struct frame typedef struct frame pyframeobject warnings and error generated clang wsign compare wunreachable code fno common dynamic dndebug g fwrapv wall i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include i applications xcode app contents developer platforms macosx platform developer sdks macosx 
sdk usr include dnpy no deprecated api ipandas libs include i users yves pyenv versions envs pandas dev lib site packages numpy core include i users yves pyenv versions envs pandas dev include i users yves pyenv versions library frameworks python framework versions include c pandas libs src vendored numpy datetime np datetime c o build temp macosx cpython pandas libs src vendored numpy datetime np datetime o wno error unreachable code pandas libs parsers c warning fallthrough annotation in unreachable code cython fallthrough pandas libs parsers c note expanded from macro cython fallthrough define cython fallthrough attribute fallthrough pandas libs parsers c warning fallthrough annotation in unreachable code cython fallthrough pandas libs parsers c note expanded from macro cython fallthrough define cython fallthrough attribute fallthrough pandas libs lib c warning fallthrough annotation in unreachable code cython fallthrough pandas libs lib c note expanded from macro cython fallthrough define cython fallthrough attribute fallthrough pandas libs lib c warning fallthrough annotation in unreachable code cython fallthrough pandas libs lib c note expanded from macro cython fallthrough define cython fallthrough attribute fallthrough clang wsign compare wunreachable code fno common dynamic dndebug g fwrapv wall i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include i applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr include dnpy no deprecated api ipandas libs include i users yves pyenv versions envs pandas dev lib site packages numpy core include i users yves pyenv versions envs pandas dev include i users yves pyenv versions library frameworks python framework versions include c pandas libs src vendored numpy datetime np datetime strings c o build temp macosx cpython pandas libs src vendored numpy datetime np datetime strings o wno error unreachable code clang bundle 
undefined dynamic lookup l opt homebrew opt readline lib l opt homebrew opt readline lib l users yves pyenv versions lib l opt homebrew lib wl rpath opt homebrew lib l applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr lib l opt homebrew opt readline lib l opt homebrew opt readline lib l users yves pyenv versions lib l opt homebrew lib wl rpath opt homebrew lib l applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr lib build temp macosx cpython pandas libs src datetime date conversions o build temp macosx cpython pandas libs src datetime pd datetime o build temp macosx cpython pandas libs src vendored numpy datetime np datetime o build temp macosx cpython pandas libs src vendored numpy datetime np datetime strings o o build lib macosx cpython pandas libs pandas datetime cpython darwin so warnings generated clang bundle undefined dynamic lookup l opt homebrew opt readline lib l opt homebrew opt readline lib l users yves pyenv versions lib l opt homebrew lib wl rpath opt homebrew lib l applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr lib l opt homebrew opt readline lib l opt homebrew opt readline lib l users yves pyenv versions lib l opt homebrew lib wl rpath opt homebrew lib l applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr lib build temp macosx cpython pandas libs parsers o o build lib macosx cpython pandas libs parsers cpython darwin so warnings generated clang bundle undefined dynamic lookup l opt homebrew opt readline lib l opt homebrew opt readline lib l users yves pyenv versions lib l opt homebrew lib wl rpath opt homebrew lib l applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr lib l opt homebrew opt readline lib l opt homebrew opt readline lib l users yves pyenv versions lib l opt homebrew lib wl rpath opt homebrew lib l 
applications xcode app contents developer platforms macosx platform developer sdks macosx sdk usr lib build temp macosx cpython pandas libs lib o o build lib macosx cpython pandas libs lib cpython darwin so error command usr bin clang failed with exit code
| 0
|
4,663
| 2,562,572,724
|
IssuesEvent
|
2015-02-06 03:15:39
|
HubTurbo/HubTurbo
|
https://api.github.com/repos/HubTurbo/HubTurbo
|
closed
|
Reset timer after an ad hoc sync
|
aspect-sync priority.medium status.accepted type.enhancement
|
For example, we do a refresh when HT gains focus. We can reset the timer at that point so that the next scheduled refresh happens only after 60s.
|
1.0
|
Reset timer after an ad hoc sync - For example, we do a refresh when HT gains focus. We can reset the timer at that point so that the next scheduled refresh happens only after 60s.
|
non_process
|
reset timer after an ad hoc sync for example we do a refresh when ht gains focus we can reset the timer at that point so that the next refresh happens after only
| 0
|
2,836
| 3,662,957,363
|
IssuesEvent
|
2016-02-19 02:08:42
|
coreos/etcd
|
https://api.github.com/repos/coreos/etcd
|
closed
|
etcd introduce significant throughput overhead(more than 10X) comparing to raw raft
|
area/performance
|
Test env: ec2 large machine
Proposal: etcd command ("/foo"="bar"); apply latency (10us)
raft 1 node perf with disk io `84966 ops/second`
raft 3 node perf with disk io `83127 ops/second`.
I did some **VERY SIMPLE PROFILING**, The bottleneck of raft seems to be CPU. A lot of marshaling and syscall.Write (not sync). The number of nodes does not matter a lot since our raft can do smart batching. The latency is much higher though. These are expected.
However, the best number I can get out of etcd with the same command is around `10000 ops/second` for a one-member cluster and `5000 ops/second` for a three-member cluster.
etcd introduces more than 10X throughput overhead compared to raw raft.
|
True
|
etcd introduce significant throughput overhead(more than 10X) comparing to raw raft - Test env: ec2 large machine
Proposal: etcd command ("/foo"="bar"); apply latency (10us)
raft 1 node perf with disk io `84966 ops/second`
raft 3 node perf with disk io `83127 ops/second`.
I did some **VERY SIMPLE PROFILING**, The bottleneck of raft seems to be CPU. A lot of marshaling and syscall.Write (not sync). The number of nodes does not matter a lot since our raft can do smart batching. The latency is much higher though. These are expected.
However, the best number I can get out of etcd with the same command is around `10000 ops/second` for a one-member cluster and `5000 ops/second` for a three-member cluster.
etcd introduces more than 10X throughput overhead compared to raw raft.
|
non_process
|
etcd introduce significant throughput overhead more than comparing to raw raft test env large machine proposal etcd command foo bar apply latency raft node perf with disk io ops second raft node perf with disk io ops second i did some very simple profiling the bottleneck of raft seems to be cpu a lot of marshaling and syscall write not sync the number of nodes does not matter a lot since our raft can do smart batching the latency is much higher though these are expected however the best number i can get out of etcd with same command is around ops second for one member cluster and ops second for a three member cluster etcd introduces more than throughput overhead than raw raft
| 0
|
70,154
| 23,021,505,076
|
IssuesEvent
|
2022-07-22 05:22:50
|
fsnotify/fsnotify
|
https://api.github.com/repos/fsnotify/fsnotify
|
closed
|
There is a race in inotify.go:watcher.Close()
|
defect
|
Hi!
Thanks a lot for this library.
Looking at the code I'm suspicious that there is a race in Close():
https://github.com/fsnotify/fsnotify/blob/7f4cf4dd2b522a984eaca51d1ccee54101d3414a/inotify.go#L72-L80
Is it possible that two concurrent goroutines running Close() could both pass the check function `w.isClosed()` and then both call `close(w.done)`? That would result in two closes of the channel and a panic.
One solution would be to lock `w.mu` at least around the `w.isClosed()` check and the `close(w.done)` call.
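The race and the proposed fix can be sketched as follows. This is a minimal stand-in, not fsnotify's actual code: the `Watcher` type, its fields, and the `Close` method here are simplified assumptions that only mirror the names used in the issue. The point is that checking "already closed" and calling `close(w.done)` under the same mutex makes concurrent `Close` calls safe, whereas doing the check and the close unsynchronized allows two goroutines to both reach `close` and panic.

```go
package main

import (
	"fmt"
	"sync"
)

// Watcher is a simplified stand-in for the watcher in the issue.
type Watcher struct {
	mu   sync.Mutex
	done chan struct{}
}

// Close performs the closed-check and the close(w.done) under one lock,
// so two concurrent Close calls cannot both close the channel.
func (w *Watcher) Close() {
	w.mu.Lock()
	defer w.mu.Unlock()
	select {
	case <-w.done: // already closed by an earlier Close
		return
	default:
	}
	close(w.done)
}

func main() {
	w := &Watcher{done: make(chan struct{})}
	var wg sync.WaitGroup
	// Many goroutines race to Close; without the mutex this could
	// double-close w.done and panic.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			w.Close()
		}()
	}
	wg.Wait()
	fmt.Println("closed once, no panic")
}
```

An equivalent alternative is wrapping the `close(w.done)` in a `sync.Once`, which gives the same "exactly one close" guarantee without an explicit check.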
|
1.0
|
There is a race in inotify.go:watcher.Close() - Hi!
Thanks a lot for this library.
Looking at the code I'm suspicious that there is a race in Close():
https://github.com/fsnotify/fsnotify/blob/7f4cf4dd2b522a984eaca51d1ccee54101d3414a/inotify.go#L72-L80
Is it possible that two concurrent goroutines running Close() could both pass the check function `w.isClosed()` and then both call `close(w.done)`? That would result in two closes of the channel and a panic.
One solution would be to lock `w.mu` at least around the `w.isClosed()` check and the `close(w.done)` call.
|
non_process
|
there is a race in inotify go watcher close hi thanks a lot for this library looking at the code i m suspicious that there is a race in close is it possible that two concurrent goroutines running close could both pass the check function w isclosed and then both call close w done that would result in two closes of the channel and a panic one solution would be to lock the w mu at least around the w isclosed and close w done calls
| 0
|
100,258
| 16,486,705,050
|
IssuesEvent
|
2021-05-24 19:07:35
|
CrazyKidJack/WebGoat_2.0_clone
|
https://api.github.com/repos/CrazyKidJack/WebGoat_2.0_clone
|
closed
|
CVE-2020-11022 (Medium) detected in multiple libraries - autoclosed
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-3.3.1.tgz</b>, <b>jquery-2.2.3.js</b>, <b>jquery-3.4.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: WebGoat_2.0_clone/docs/node_modules/jquery.easing/example/example.html</p>
<p>Path to vulnerable library: WebGoat_2.0_clone/webgoat-container/target/classes/static/js/libs/jquery-2.1.4.min.js,WebGoat_2.0_clone/docs/node_modules/jquery.easing/example/example.html,WebGoat_2.0_clone/webgoat-container/src/main/resources/static/js/libs/jquery-2.1.4.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: WebGoat_2.0_clone/docs/package.json</p>
<p>Path to vulnerable library: WebGoat_2.0_clone/docs/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.2.3.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js</a></p>
<p>Path to dependency file: WebGoat_2.0_clone/docs/node_modules/jquery.easing/example/verify.html</p>
<p>Path to vulnerable library: WebGoat_2.0_clone/docs/node_modules/jquery.easing/example/verify.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.3.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p>
<p>Path to vulnerable library: WebGoat_2.0_clone/webgoat-container/src/main/resources/static/js/libs/jquery.min.js,WebGoat_2.0_clone/webgoat-container/target/classes/static/js/libs/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/CrazyKidJack/WebGoat_2.0_clone/commits/bf2e3239dd01ebad5bdcf3161aa931ddd47755ff">bf2e3239dd01ebad5bdcf3161aa931ddd47755ff</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.4","packageFilePaths":["/docs/node_modules/jquery.easing/example/example.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:2.1.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"javascript/Node.js","packageName":"jquery","packageVersion":"3.3.1","packageFilePaths":["/docs/package.json"],"isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.2.3","packageFilePaths":["/docs/node_modules/jquery.easing/example/verify.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:2.2.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.4.1","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"jquery:3.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-11022","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-11022 (Medium) detected in multiple libraries - autoclosed - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-3.3.1.tgz</b>, <b>jquery-2.2.3.js</b>, <b>jquery-3.4.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: WebGoat_2.0_clone/docs/node_modules/jquery.easing/example/example.html</p>
<p>Path to vulnerable library: WebGoat_2.0_clone/webgoat-container/target/classes/static/js/libs/jquery-2.1.4.min.js,WebGoat_2.0_clone/docs/node_modules/jquery.easing/example/example.html,WebGoat_2.0_clone/webgoat-container/src/main/resources/static/js/libs/jquery-2.1.4.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: WebGoat_2.0_clone/docs/package.json</p>
<p>Path to vulnerable library: WebGoat_2.0_clone/docs/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.2.3.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js</a></p>
<p>Path to dependency file: WebGoat_2.0_clone/docs/node_modules/jquery.easing/example/verify.html</p>
<p>Path to vulnerable library: WebGoat_2.0_clone/docs/node_modules/jquery.easing/example/verify.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.3.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p>
<p>Path to vulnerable library: WebGoat_2.0_clone/webgoat-container/src/main/resources/static/js/libs/jquery.min.js,WebGoat_2.0_clone/webgoat-container/target/classes/static/js/libs/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/CrazyKidJack/WebGoat_2.0_clone/commits/bf2e3239dd01ebad5bdcf3161aa931ddd47755ff">bf2e3239dd01ebad5bdcf3161aa931ddd47755ff</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.4","packageFilePaths":["/docs/node_modules/jquery.easing/example/example.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:2.1.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"javascript/Node.js","packageName":"jquery","packageVersion":"3.3.1","packageFilePaths":["/docs/package.json"],"isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.2.3","packageFilePaths":["/docs/node_modules/jquery.easing/example/verify.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:2.2.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.4.1","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"jquery:3.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-11022","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in multiple libraries autoclosed cve medium severity vulnerability vulnerable libraries jquery min js jquery tgz jquery js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file webgoat clone docs node modules jquery easing example example html path to vulnerable library webgoat clone webgoat container target classes static js libs jquery min js webgoat clone docs node modules jquery easing example example html webgoat clone webgoat container src main resources static js libs jquery min js dependency hierarchy x jquery min js vulnerable library jquery tgz javascript library for dom operations library home page a href path to dependency file webgoat clone docs package json path to vulnerable library webgoat clone docs node modules jquery package json dependency hierarchy x jquery tgz vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file webgoat clone docs node modules jquery easing example verify html path to vulnerable library webgoat clone docs node modules jquery easing example verify html dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library webgoat clone webgoat container src main resources static js libs jquery min js webgoat clone webgoat container target classes static js libs jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required 
scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion jquery packagetype javascript node js packagename jquery packageversion packagefilepaths istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion jquery packagetype javascript packagename jquery packageversion packagefilepaths istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion jquery packagetype javascript packagename jquery packageversion packagefilepaths istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion jquery basebranches vulnerabilityidentifier cve vulnerabilitydetails in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery vulnerabilityurl
| 0
|
62,970
| 15,391,679,839
|
IssuesEvent
|
2021-03-03 14:49:40
|
root-project/root
|
https://api.github.com/repos/root-project/root
|
closed
|
Add an option to disable TBB warning of specific type
|
in:Build System in:Core Libraries
|
Dears,
we are having excessive number of warnings coming from ROOT of this type:
```
%MSG-i Root_Information: DTGeometryValidate:valid@ctor TThreadExecutor::ParallelFor() 22-Feb-2021 23:50:26 CET pre-events
tbb::global_control is limiting the number of parallel workers. Proceeding with 1 threads this time
```
see, https://github.com/cms-sw/cmssw/issues/32977 .
The warnings are as described but this is an intentional limitation in CMSSW (as from what I understood)
One possibility we discussed was to ask ROOT devs to put a cmake flag to silence it or anything similar you can propose (such that we turn it on and the warning disappears)
Please see the issue on cmssw for more info
|
1.0
|
Add an option to disable TBB warning of specific type - Dears,
we are having excessive number of warnings coming from ROOT of this type:
```
%MSG-i Root_Information: DTGeometryValidate:valid@ctor TThreadExecutor::ParallelFor() 22-Feb-2021 23:50:26 CET pre-events
tbb::global_control is limiting the number of parallel workers. Proceeding with 1 threads this time
```
see, https://github.com/cms-sw/cmssw/issues/32977 .
The warnings are as described but this is an intentional limitation in CMSSW (as from what I understood)
One possibility we discussed was to ask ROOT devs to put a cmake flag to silence it or anything similar you can propose (such that we turn it on and the warning disappears)
Please see the issue on cmssw for more info
|
non_process
|
add an option to disable tbb warning of specific type dears we are having excessive number of warnings coming from root of this type msg i root information dtgeometryvalidate valid ctor tthreadexecutor parallelfor feb cet pre events tbb global control is limiting the number of parallel workers proceeding with threads this time see the warnings are as described but this is an intentional limitation in cmssw as from what i understood one possibility we discussed was to ask root devs to put a cmake flag to silence it or anything similar you can propose such that we turn it on and the warning disappears please see the issue on cmssw for more info
| 0
|
18,624
| 24,579,648,911
|
IssuesEvent
|
2022-10-13 14:44:43
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Auth Server] Randomly getting session expired message and user is automatically logged off
|
Bug P2 Process: Fixed Process: Tested QA Process: Tested dev Auth server
|
**Steps:**
1. Login into app (iOS)
2. Enroll into the study
3. Keep the app idle for some time(>30 mins)
4. Open the app again and observe user got session expired message
**Actual:** Randomly getting session expired message and user is automatically logged off
**Expected:** User should not be automatically logged off
1. One of the users got the session expired message at 6:44 PM - Sep 02 2022
|
3.0
|
[Auth Server] Randomly getting session expired message and user is automatically logged off - **Steps:**
1. Login into app (iOS)
2. Enroll into the study
3. Keep the app idle for some time(>30 mins)
4. Open the app again and observe user got session expired message
**Actual:** Randomly getting session expired message and user is automatically logged off
**Expected:** User should not be automatically logged off
1. One of the users got the session expired message at 6:44 PM - Sep 02 2022
|
process
|
randomly getting session expired message and user is automatically logged off steps login into app ios enroll into the study keep the app idle for some time mins open the app again and observe user got session expired message actual randomly getting session expired message and user is automatically logged off expected user should not be automatically logged off one of the user got session expired message at sep
| 1
|
1,549
| 22,553,026,642
|
IssuesEvent
|
2022-06-27 07:47:56
|
lkrg-org/lkrg
|
https://api.github.com/repos/lkrg-org/lkrg
|
closed
|
_STEXT memory block hash is different sometime between kernel 5.15.38 and 5.15.42
|
portability
|
It looks like sometime after kernel 5.15.38 I am seeing LKRG alert on mismatched _STEXT memory block hashes. The latest kernel that I have access to where this does not occur is 5.15.38, and the earliest kernel where I started seeing this behavior is 5.15.41.
On this particular system, I see these logs:
```
Jun 21 08:55:05 fuuko kernel: [p_lkrg] Loading LKRG...
Jun 21 08:55:06 fuuko kernel: Freezing user space processes ... (elapsed 0.001 seconds) done.
Jun 21 08:55:06 fuuko kernel: OOM killer disabled.
Jun 21 08:55:06 fuuko kernel: [p_lkrg] LKRG can't enforce SELinux validation (CONFIG_GCC_PLUGIN_RANDSTRUCT detected)
Jun 21 08:55:06 fuuko kernel: [p_lkrg] [kretprobe] register_kretprobe() for <lookup_fast> failed! [err=-2]
Jun 21 08:55:06 fuuko kernel: [p_lkrg] LKRG won't enforce pCFI validation on 'lookup_fast'
Jun 21 08:55:06 fuuko kernel: [p_lkrg] LKRG initialized successfully!
Jun 21 08:55:06 fuuko kernel: OOM killer enabled.
Jun 21 08:55:06 fuuko kernel: Restarting tasks ... done.
[...]
Jun 21 08:55:06 fuuko kernel: [p_lkrg] Enabling "blocking modules" feature.
Jun 21 08:55:06 fuuko kernel: [p_lkrg] Changing "kint_enforce" logic. From Old[2 | PANIC] to new[1 | LOG ONLY (For SELinux and CR0.WP LOG & RESTORE)] one.
Jun 21 08:55:06 fuuko kernel: [p_lkrg] Changing "pint_validate" logic. From Old[1 | CURRENT] to new[2 | CURRENT] one.
Jun 21 08:55:06 fuuko kernel: [p_lkrg] Changing "smap_enforce" logic. From Old[2 | PANIC] to new[1 | LOG & RESTORE] one.
[...]
Jun 21 08:55:06 fuuko kernel: iwlwifi 0000:03:00.0 wlo1: renamed from wlan0
Jun 21 08:55:06 fuuko kernel: r8169 0000:02:00.0 enp2s0: renamed from eth0
Jun 21 08:55:06 fuuko kernel: BTRFS info (device dm-4): flagging fs with big metadata feature
Jun 21 08:55:06 fuuko kernel: BTRFS info (device dm-4): force zstd compression, level 3
Jun 21 08:55:06 fuuko kernel: BTRFS info (device dm-4): disk space caching is enabled
Jun 21 08:55:06 fuuko kernel: BTRFS info (device dm-4): has skinny extents
Jun 21 08:55:07 fuuko systemd-journald[809]: Received client request to flush runtime journal.
Jun 21 08:55:07 fuuko kernel: BTRFS info (device dm-4): enabling ssd optimizations
Jun 21 08:55:07 fuuko kernel: zram3: detected capacity change from 0 to 33554432
Jun 21 08:55:07 fuuko kernel: EXT4-fs (zram3): mounted filesystem without journal. Opts: discard. Quota mode: disabled.
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x5ff5f1ce81b5d993] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x1739de8631129e48] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x8fdc91091b91bc6c] and should be [0x266415ea2ac81e19] !!!
```
The audit log seems to suggest this may be BPF related?
```
Jun 21 08:55:07 fuuko rngd[1343]: [hwrng ]: Initialized
Jun 21 08:55:07 fuuko rngd[1343]: [rdrand]: Enabling RDSEED rng support
Jun 21 08:55:07 fuuko rngd[1343]: [rdrand]: Initialized
Jun 21 08:55:07 fuuko audit: BPF prog-id=29 op=LOAD
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x5ff5f1ce81b5d993] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko audit: BPF prog-id=30 op=LOAD
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x1739de8631129e48] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko audit: BPF prog-id=31 op=LOAD
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x8fdc91091b91bc6c] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko audit: BPF prog-id=32 op=LOAD
Jun 21 08:55:07 fuuko audit[1346]: NETFILTER_CFG table=filter:2 family=1 entries=44 op=nft_register_chain pid=1346 subj=system_u:system_r:iptables_t:s0 comm="nft"
Jun 21 08:55:07 fuuko audit[1346]: SYSCALL arch=c000003e syscall=46 success=yes exit=6044 a0=3 a1=7b57057c9438 a2=0 a3=67cccd748a04 items=0 ppid=1341 pid=1346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="nft" exe=>
Jun 21 08:55:07 fuuko audit: SOCKADDR saddr=100000000000000000000000
Jun 21 08:55:07 fuuko audit: PROCTITLE proctitle=6E6674002D66002D
Jun 21 08:55:07 fuuko systemd[1]: Starting User Login Management...
░░ Subject: A start job for unit systemd-logind.service has begun execution
░░ Defined-By: systemd
░░ Support: https://gentoo.org/support/
░░
░░ A start job for unit systemd-logind.service has begun execution.
░░
░░ The job identifier is 311.
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0xfff2f8b5967957e2] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0xfff2f8b5967957e2] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:08 fuuko kernel: Process accounting resumed
Jun 21 08:55:08 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0xfff2f8b5967957e2] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko audit: BPF prog-id=33 op=LOAD
Jun 21 08:55:08 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
```
This particular kernel is 5.15.47.
So far I am seeing this same behavior on 2 different machines.
|
True
|
_STEXT memory block hash is different sometime between kernel 5.15.38 and 5.15.42 - It looks like sometime after kernel 5.15.38 I am seeing LKRG alert on mismatched _STEXT memory block hashes. The latest kernel that I have access to where this does not occur is 5.15.38, and the earliest kernel where I started seeing this behavior is 5.15.41.
On this particular system, I see these logs:
```
Jun 21 08:55:05 fuuko kernel: [p_lkrg] Loading LKRG...
Jun 21 08:55:06 fuuko kernel: Freezing user space processes ... (elapsed 0.001 seconds) done.
Jun 21 08:55:06 fuuko kernel: OOM killer disabled.
Jun 21 08:55:06 fuuko kernel: [p_lkrg] LKRG can't enforce SELinux validation (CONFIG_GCC_PLUGIN_RANDSTRUCT detected)
Jun 21 08:55:06 fuuko kernel: [p_lkrg] [kretprobe] register_kretprobe() for <lookup_fast> failed! [err=-2]
Jun 21 08:55:06 fuuko kernel: [p_lkrg] LKRG won't enforce pCFI validation on 'lookup_fast'
Jun 21 08:55:06 fuuko kernel: [p_lkrg] LKRG initialized successfully!
Jun 21 08:55:06 fuuko kernel: OOM killer enabled.
Jun 21 08:55:06 fuuko kernel: Restarting tasks ... done.
[...]
Jun 21 08:55:06 fuuko kernel: [p_lkrg] Enabling "blocking modules" feature.
Jun 21 08:55:06 fuuko kernel: [p_lkrg] Changing "kint_enforce" logic. From Old[2 | PANIC] to new[1 | LOG ONLY (For SELinux and CR0.WP LOG & RESTORE)] one.
Jun 21 08:55:06 fuuko kernel: [p_lkrg] Changing "pint_validate" logic. From Old[1 | CURRENT] to new[2 | CURRENT] one.
Jun 21 08:55:06 fuuko kernel: [p_lkrg] Changing "smap_enforce" logic. From Old[2 | PANIC] to new[1 | LOG & RESTORE] one.
[...]
Jun 21 08:55:06 fuuko kernel: iwlwifi 0000:03:00.0 wlo1: renamed from wlan0
Jun 21 08:55:06 fuuko kernel: r8169 0000:02:00.0 enp2s0: renamed from eth0
Jun 21 08:55:06 fuuko kernel: BTRFS info (device dm-4): flagging fs with big metadata feature
Jun 21 08:55:06 fuuko kernel: BTRFS info (device dm-4): force zstd compression, level 3
Jun 21 08:55:06 fuuko kernel: BTRFS info (device dm-4): disk space caching is enabled
Jun 21 08:55:06 fuuko kernel: BTRFS info (device dm-4): has skinny extents
Jun 21 08:55:07 fuuko systemd-journald[809]: Received client request to flush runtime journal.
Jun 21 08:55:07 fuuko kernel: BTRFS info (device dm-4): enabling ssd optimizations
Jun 21 08:55:07 fuuko kernel: zram3: detected capacity change from 0 to 33554432
Jun 21 08:55:07 fuuko kernel: EXT4-fs (zram3): mounted filesystem without journal. Opts: discard. Quota mode: disabled.
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x5ff5f1ce81b5d993] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x1739de8631129e48] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x8fdc91091b91bc6c] and should be [0x266415ea2ac81e19] !!!
```
The audit log seems to suggest this may be BPF related?
```
Jun 21 08:55:07 fuuko rngd[1343]: [hwrng ]: Initialized
Jun 21 08:55:07 fuuko rngd[1343]: [rdrand]: Enabling RDSEED rng support
Jun 21 08:55:07 fuuko rngd[1343]: [rdrand]: Initialized
Jun 21 08:55:07 fuuko audit: BPF prog-id=29 op=LOAD
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x5ff5f1ce81b5d993] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko audit: BPF prog-id=30 op=LOAD
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x1739de8631129e48] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko audit: BPF prog-id=31 op=LOAD
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0x8fdc91091b91bc6c] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko audit: BPF prog-id=32 op=LOAD
Jun 21 08:55:07 fuuko audit[1346]: NETFILTER_CFG table=filter:2 family=1 entries=44 op=nft_register_chain pid=1346 subj=system_u:system_r:iptables_t:s0 comm="nft"
Jun 21 08:55:07 fuuko audit[1346]: SYSCALL arch=c000003e syscall=46 success=yes exit=6044 a0=3 a1=7b57057c9438 a2=0 a3=67cccd748a04 items=0 ppid=1341 pid=1346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="nft" exe=>
Jun 21 08:55:07 fuuko audit: SOCKADDR saddr=100000000000000000000000
Jun 21 08:55:07 fuuko audit: PROCTITLE proctitle=6E6674002D66002D
Jun 21 08:55:07 fuuko systemd[1]: Starting User Login Management...
░░ Subject: A start job for unit systemd-logind.service has begun execution
░░ Defined-By: systemd
░░ Support: https://gentoo.org/support/
░░
░░ A start job for unit systemd-logind.service has begun execution.
░░
░░ The job identifier is 311.
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0xfff2f8b5967957e2] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0xfff2f8b5967957e2] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
Jun 21 08:55:08 fuuko kernel: Process accounting resumed
Jun 21 08:55:08 fuuko kernel: [p_lkrg] ALERT !!! _STEXT MEMORY BLOCK HASH IS DIFFERENT - it is [0xfff2f8b5967957e2] and should be [0x266415ea2ac81e19] !!!
Jun 21 08:55:07 fuuko audit: BPF prog-id=33 op=LOAD
Jun 21 08:55:08 fuuko kernel: [p_lkrg] ALERT !!! SYSTEM HAS BEEN COMPROMISED - DETECTED DIFFERENT 1 CHECKSUMS !!!
```
This particular kernel is 5.15.47.
So far I am seeing this same behavior on 2 different machines.
|
non_process
|
stext memory block hash is different sometime between kernel and it looks like sometime after kernel i am seeing lkrg alert on mismatched stext memory block hashes the latest kernel that i have access to where this does not occur is and the earliest kernel where i started seeing this behavior is on this particular system i see these logs jun fuuko kernel loading lkrg jun fuuko kernel freezing user space processes elapsed seconds done jun fuuko kernel oom killer disabled jun fuuko kernel lkrg can t enforce selinux validation config gcc plugin randstruct detected jun fuuko kernel register kretprobe for failed jun fuuko kernel lkrg won t enforce pcfi validation on lookup fast jun fuuko kernel lkrg initialized successfully jun fuuko kernel oom killer enabled jun fuuko kernel restarting tasks done jun fuuko kernel enabling blocking modules feature jun fuuko kernel changing kint enforce logic from old to new one jun fuuko kernel changing pint validate logic from old to new one jun fuuko kernel changing smap enforce logic from old to new one jun fuuko kernel iwlwifi renamed from jun fuuko kernel renamed from jun fuuko kernel btrfs info device dm flagging fs with big metadata feature jun fuuko kernel btrfs info device dm force zstd compression level jun fuuko kernel btrfs info device dm disk space caching is enabled jun fuuko kernel btrfs info device dm has skinny extents jun fuuko systemd journald received client request to flush runtime journal jun fuuko kernel btrfs info device dm enabling ssd optimizations jun fuuko kernel detected capacity change from to jun fuuko kernel fs mounted filesystem without journal opts discard quota mode disabled jun fuuko kernel alert stext memory block hash is different it is and should be jun fuuko kernel alert system has been compromised detected different checksums jun fuuko kernel alert stext memory block hash is different it is and should be jun fuuko kernel alert system has been compromised detected different checksums jun fuuko 
kernel alert stext memory block hash is different it is and should be the audit log seems to suggest this may be bpf related jun fuuko rngd initialized jun fuuko rngd enabling rdseed rng support jun fuuko rngd initialized jun fuuko audit bpf prog id op load jun fuuko kernel alert stext memory block hash is different it is and should be jun fuuko kernel alert system has been compromised detected different checksums jun fuuko audit bpf prog id op load jun fuuko kernel alert stext memory block hash is different it is and should be jun fuuko kernel alert system has been compromised detected different checksums jun fuuko audit bpf prog id op load jun fuuko kernel alert stext memory block hash is different it is and should be jun fuuko kernel alert system has been compromised detected different checksums jun fuuko audit bpf prog id op load jun fuuko audit netfilter cfg table filter family entries op nft register chain pid subj system u system r iptables t comm nft jun fuuko audit syscall arch syscall success yes exit items ppid pid auid uid gid euid suid fsuid egid sgid fsgid tty none ses comm nft exe jun fuuko audit sockaddr saddr jun fuuko audit proctitle proctitle jun fuuko systemd starting user login management ░░ subject a start job for unit systemd logind service has begun execution ░░ defined by systemd ░░ support ░░ ░░ a start job for unit systemd logind service has begun execution ░░ ░░ the job identifier is jun fuuko kernel alert stext memory block hash is different it is and should be jun fuuko kernel alert system has been compromised detected different checksums jun fuuko kernel alert stext memory block hash is different it is and should be jun fuuko kernel alert system has been compromised detected different checksums jun fuuko kernel process accounting resumed jun fuuko kernel alert stext memory block hash is different it is and should be jun fuuko audit bpf prog id op load jun fuuko kernel alert system has been compromised detected different checksums this 
particular kernel is so far i am seeing this same behavior on different machines
| 0
|
136,559
| 30,548,208,325
|
IssuesEvent
|
2023-07-20 06:31:30
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
jina 3.19.2.dev4 has 2 GuardDog issues
|
guarddog code-execution typosquatting
|
https://pypi.org/project/jina
https://inspector.pypi.io/project/jina
```{
"dependency": "jina",
"version": "3.19.2.dev4",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: jira",
"code-execution": [
{
"location": "jina-3.19.2.dev4/setup.py:169",
"code": " ret_code = subprocess.run(['go', 'version']).returncode",
"message": "This package is executing OS commands in the setup.py file"
}
]
},
"path": "/tmp/tmp2_h2l1jq/jina"
}
}```
|
1.0
|
jina 3.19.2.dev4 has 2 GuardDog issues - https://pypi.org/project/jina
https://inspector.pypi.io/project/jina
```{
"dependency": "jina",
"version": "3.19.2.dev4",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: jira",
"code-execution": [
{
"location": "jina-3.19.2.dev4/setup.py:169",
"code": " ret_code = subprocess.run(['go', 'version']).returncode",
"message": "This package is executing OS commands in the setup.py file"
}
]
},
"path": "/tmp/tmp2_h2l1jq/jina"
}
}```
|
non_process
|
jina has guarddog issues dependency jina version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt jira code execution location jina setup py code ret code subprocess run returncode message this package is executing os commands in the setup py file path tmp jina
| 0
|
818,037
| 30,668,402,093
|
IssuesEvent
|
2023-07-25 20:11:28
|
dag-hammarskjold-library/dlx-rest
|
https://api.github.com/repos/dag-hammarskjold-library/dlx-rest
|
closed
|
File Upload: only "_" should be used to replace "/" in document symbol detection.
|
priority: medium function: file
|
At the time, "-" in the file name is replaced with "/" to suggest a document symbol. This is not optimal, as "-" can be a legitimate character in document symbols. Remove "-" and leave only "_" to convert to "/".
|
1.0
|
File Upload: only "_" should be used to replace "/" in document symbol detection. - At the time, "-" in the file name is replaced with "/" to suggest a document symbol. This is not optimal, as "-" can be a legitimate character in document symbols. Remove "-" and leave only "_" to convert to "/".
|
non_process
|
file upload only should be used to replace in document symbol detection at the time in the file name is replaced with to suggest document symbol this is not optimal as can be a legitimate characters in document symbols remove and leave only to convert to
| 0
|
8,167
| 11,386,003,349
|
IssuesEvent
|
2020-01-29 12:21:36
|
prisma/specs
|
https://api.github.com/repos/prisma/specs
|
opened
|
Prisma Client JS: Introduce clear differentiation between error types
|
area/errors area/photonjs kind/spec process/candidate spec/change
|
You can distinguish Prisma Client errors into the following categories:
1. Rust Panic
2. Known Error originating from Engine
3. Validation Error
4. Unknown Errors that did not lead to a panic.
We should make them clearly identifiable.
This can be added to the existing Prisma Client JS spec instead of the error spec, as this is about Prisma Client JS specific error handling.
|
1.0
|
Prisma Client JS: Introduce clear differentiation between error types - You can distinguish Prisma Client errors into the following categories:
1. Rust Panic
2. Known Error originating from Engine
3. Validation Error
4. Unknown Errors that did not lead to a panic.
We should make them clearly identifiable.
This can be added to the existing Prisma Client JS spec instead of the error spec, as this is about Prisma Client JS specific error handling.
|
process
|
prisma client js introduce clear differentiation between error types you can distinguish prisma client errors into the following categories rust panic known error originating from engine validation error unknown errors that did not lead to a panic we should make them clearly identifiable this can be added to the existing prisma client js spec instead of the error spec as this is about prisma client js specific error handling
| 1
|
668,763
| 22,597,257,799
|
IssuesEvent
|
2022-06-29 05:19:01
|
nakhll-company/nakhll_frontend
|
https://api.github.com/repos/nakhll-company/nakhll_frontend
|
closed
|
اضافه نکردن محصولات به سبد خرید بدون ورود به سایت
|
Priority 1
|
اگر به حساب کاربری وارد نشده باشیم با کلیک روی دکمه خرید ارور "لطفا ابتدا وارد شوید" نمایش داده می شود.
|
1.0
|
اضافه نکردن محصولات به سبد خرید بدون ورود به سایت - اگر به حساب کاربری وارد نشده باشیم با کلیک روی دکمه خرید ارور "لطفا ابتدا وارد شوید" نمایش داده می شود.
|
non_process
|
اضافه نکردن محصولات به سبد خرید بدون ورود به سایت اگر به حساب کاربری وارد نشده باشیم با کلیک روی دکمه خرید ارور لطفا ابتدا وارد شوید نمایش داده می شود
| 0
|
128,201
| 10,519,700,546
|
IssuesEvent
|
2019-09-29 19:48:58
|
MyIntervals/emogrifier
|
https://api.github.com/repos/MyIntervals/emogrifier
|
opened
|
Support all pseudo-classes that Symfony CssSelector does
|
cleanup enhancement unit tests needed
|
The following are (presumably) supported by Symfony CssSelector, but the regex in `PSEUDO_CLASS_MATCHER` does not allow them:
- `:empty`
- `:first-of-type` (with a type)
- `:last-of-type` (with a type)
- `:only-of-type` (with a type)
- `:optional`
- `:required`
We should add tests to confirm they are supported, and, if indeed they are, modify the regex to allow them. If any are in fact not supported, we should ensure the regex does not allow them (so that rules using them are copied to the `<style>` element).
When supported, we can list them as supported in the README - see #723.
|
1.0
|
Support all pseudo-classes that Symfony CssSelector does - The following are (presumably) supported by Symfony CssSelector, but the regex in `PSEUDO_CLASS_MATCHER` does not allow them:
- `:empty`
- `:first-of-type` (with a type)
- `:last-of-type` (with a type)
- `:only-of-type` (with a type)
- `:optional`
- `:required`
We should add tests to confirm they are supported, and, if indeed they are, modify the regex to allow them. If any are in fact not supported, we should ensure the regex does not allow them (so that rules using them are copied to the `<style>` element).
When supported, we can list them as supported in the README - see #723.
|
non_process
|
support all pseudo classes that symfony cssselector does the following are presumably supported by symfony cssselector but the regex in pseudo class matcher does not allow them empty first of type with a type last of type with a type only of type with a type optional required we should add tests to confirm they are supported and if indeed they are modify the regex to allow them if any are in fact not supported we should ensure the regex does not allow them so that rules using them are copied to the element when supported we can list them as supported in the readme see
| 0
|
216,944
| 24,312,668,586
|
IssuesEvent
|
2022-09-30 01:06:52
|
vlaship/graphql-resolver
|
https://api.github.com/repos/vlaship/graphql-resolver
|
opened
|
CVE-2021-43980 (High) detected in tomcat-embed-core-9.0.26.jar
|
security vulnerability
|
## CVE-2021-43980 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.26.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.26/6312ba542bc58fa9ee789a43516ce4d862548a6b/tomcat-embed-core-9.0.26.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.26/6312ba542bc58fa9ee789a43516ce4d862548a6b/tomcat-embed-core-9.0.26.jar</p>
<p>
Dependency Hierarchy:
- graphql-spring-boot-starter-5.0.2.jar (Root Library)
- graphql-spring-boot-autoconfigure-5.0.2.jar
- spring-boot-starter-websocket-2.1.9.RELEASE.jar
- spring-boot-starter-web-2.1.9.RELEASE.jar
- spring-boot-starter-tomcat-2.1.9.RELEASE.jar
- :x: **tomcat-embed-core-9.0.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/graphql-resolver/commit/69b2d878133579d23ef9f8bc407028c32bfc4a47">69b2d878133579d23ef9f8bc407028c32bfc4a47</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The simplified implementation of blocking reads and writes introduced in Tomcat 10 and back-ported to Tomcat 9.0.47 onwards exposed a long standing (but extremely hard to trigger) concurrency bug in Apache Tomcat 10.1.0 to 10.1.0-M12, 10.0.0-M1 to 10.0.18, 9.0.0-M1 to 9.0.60 and 8.5.0 to 8.5.77 that could cause client connections to share an Http11Processor instance resulting in responses, or part responses, to be received by the wrong client.
<p>Publish Date: 2022-09-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43980>CVE-2021-43980</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread/3jjqbsp6j88b198x5rmg99b1qr8ht3g3">https://lists.apache.org/thread/3jjqbsp6j88b198x5rmg99b1qr8ht3g3</a></p>
<p>Release Date: 2022-09-28</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.78,9.0.62,10.0.20,10.1.0-M14</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-43980 (High) detected in tomcat-embed-core-9.0.26.jar - ## CVE-2021-43980 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.26.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.26/6312ba542bc58fa9ee789a43516ce4d862548a6b/tomcat-embed-core-9.0.26.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.26/6312ba542bc58fa9ee789a43516ce4d862548a6b/tomcat-embed-core-9.0.26.jar</p>
<p>
Dependency Hierarchy:
- graphql-spring-boot-starter-5.0.2.jar (Root Library)
- graphql-spring-boot-autoconfigure-5.0.2.jar
- spring-boot-starter-websocket-2.1.9.RELEASE.jar
- spring-boot-starter-web-2.1.9.RELEASE.jar
- spring-boot-starter-tomcat-2.1.9.RELEASE.jar
- :x: **tomcat-embed-core-9.0.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/graphql-resolver/commit/69b2d878133579d23ef9f8bc407028c32bfc4a47">69b2d878133579d23ef9f8bc407028c32bfc4a47</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The simplified implementation of blocking reads and writes introduced in Tomcat 10 and back-ported to Tomcat 9.0.47 onwards exposed a long standing (but extremely hard to trigger) concurrency bug in Apache Tomcat 10.1.0 to 10.1.0-M12, 10.0.0-M1 to 10.0.18, 9.0.0-M1 to 9.0.60 and 8.5.0 to 8.5.77 that could cause client connections to share an Http11Processor instance resulting in responses, or part responses, to be received by the wrong client.
<p>Publish Date: 2022-09-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43980>CVE-2021-43980</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread/3jjqbsp6j88b198x5rmg99b1qr8ht3g3">https://lists.apache.org/thread/3jjqbsp6j88b198x5rmg99b1qr8ht3g3</a></p>
<p>Release Date: 2022-09-28</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.78,9.0.62,10.0.20,10.1.0-M14</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file build gradle path to vulnerable library root gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar root gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy graphql spring boot starter jar root library graphql spring boot autoconfigure jar spring boot starter websocket release jar spring boot starter web release jar spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href vulnerability details the simplified implementation of blocking reads and writes introduced in tomcat and back ported to tomcat onwards exposed a long standing but extremely hard to trigger concurrency bug in apache tomcat to to to and to that could cause client connections to share an instance resulting in responses or part responses to be received by the wrong client publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat coyote step up your open source security game with mend
| 0
|
74,798
| 20,373,319,049
|
IssuesEvent
|
2022-02-21 13:22:54
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
build: Bazel rebuilding unchanged targets often
|
C-bug A-build-system T-dev-inf
|
Over the past few weeks of using exclusively bazel, I've noticed it regularly rebuilding things that it doesn't seem like it should be rebuilding. For example, I've noticed it rebuilding protobuf c and java files, or external go dependencies like bcrypt that haven't changed. Most recently this morning I'm seeing it rebuild jemalloc, which hasn't been updated since 2020.
At first I thought this was `dev` messing with PATH or something that busted cache fingerprints, but for the past two weeks I've switched to exclusively using `bazel` directly and controlling exactly what flags I'm passing to it to eliminate that variable. Then I thought it might be bazel 4 vs 5 upgrade as I switch branches, but I've now seen it recurring even when all my active branches are rebased to post 5 upgrade. Even then I'd have expected the "remote" cache to eventually have the 4 and 5 fingerprint but now, I'm still seeing rebuilds of e.g. protoc's C files regularly.
I haven't setup an isolated reproduction yet; this is just based on happening to notice a target name as it flashes by while working on other things. This might be a red herring, but it _feels_ like I notice it most when switching from `build //pkg/cmd/cockroach-short` to `run //pkg/gen` and back.
Epic: CRDB-8036
|
1.0
|
build: Bazel rebuilding unchanged targets often - Over the past few weeks of using exclusively bazel, I've noticed it regularly rebuilding things that it doesn't seem like it should be rebuilding. For example, I've noticed it rebuilding protobuf c and java files, or external go dependencies like bcrypt that haven't changed. Most recently this morning I'm seeing it rebuild jemalloc, which hasn't been updated since 2020.
At first I thought this was `dev` messing with PATH or something that busted cache fingerprints, but for the past two weeks I've switched to exclusively using `bazel` directly and controlling exactly what flags I'm passing to it to eliminate that variable. Then I thought it might be bazel 4 vs 5 upgrade as I switch branches, but I've now seen it recurring even when all my active branches are rebased to post 5 upgrade. Even then I'd have expected the "remote" cache to eventually have the 4 and 5 fingerprint but now, I'm still seeing rebuilds of e.g. protoc's C files regularly.
I haven't setup an isolated reproduction yet; this is just based on happening to notice a target name as it flashes by while working on other things. This might be a red herring, but it _feels_ like I notice it most when switching from `build //pkg/cmd/cockroach-short` to `run //pkg/gen` and back.
Epic: CRDB-8036
|
non_process
|
build bazel rebuilding unchanged targets often over the past few weeks of using exclusively bazel i ve noticed it regularly rebuilding things that it doesn t seem like it should be rebuilding for example i ve noticed it rebuilding protobuf c and java files or external go dependencies like bcrypt that haven t changed most recently this morning i m seeing it rebuild jemalloc which hasn t been updated since at first i thought this was dev messing with path or something that busted cache fingerprints but for the past two weeks i ve switched to exclusively using bazel directly and controlling exactly what flags i m passing to it to eliminate that variable then i thought it might be bazel vs upgrade as i switch branches but i ve now seen it recurring even when all my active branches are rebased to post upgrade even then i d have expected the remote cache to eventually have the and fingerprint but now i m still seeing rebuilds of e g protoc s c files regularly i haven t setup an isolated reproduction yet this is just based on happening to notice a target name as it flashes by while working on other things this might be a red herring but it feels like i notice it most when switching from build pkg cmd cockroach short to run pkg gen and back epic crdb
| 0
|
13,041
| 15,384,974,994
|
IssuesEvent
|
2021-03-03 05:42:15
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Sign up > Keypad for password fields does not appear immediately
|
Bug P1 Process: under observation UX iOS
|
During sign up, tap on the password or confirm password fields. Notice that the keypad does not come up immediately and the screen shows a blank space in the keypad area instead. The keypad shows up after some delay .
EB - keypad should show up immediately once you tap on the field
|
1.0
|
[iOS] Sign up > Keypad for password fields does not appear immediately - During sign up, tap on the password or confirm password fields. Notice that the keypad does not come up immediately and the screen shows a blank space in the keypad area instead. The keypad shows up after some delay .
EB - keypad should show up immediately once you tap on the field
|
process
|
sign up keypad for password fields does not appear immediately during sign up tap on the password or confirm password fields notice that the keypad does not come up immediately and the screen shows a blank space in the keypad area instead the keypad shows up after some delay eb keypad should show up immediately once you tap on the field
| 1
|
113,408
| 9,639,756,203
|
IssuesEvent
|
2019-05-16 14:10:46
|
wordpress-mobile/WordPress-iOS
|
https://api.github.com/repos/wordpress-mobile/WordPress-iOS
|
opened
|
Testing: Ensure tests pass with any device language
|
Testing [Type] Enhancement
|
We should be able to run our tests in any device language. Unit tests are passing in any language, but we need to make some improvements to the UI tests.
- [ ] LoginTests https://github.com/wordpress-mobile/WordPress-iOS/pull/11665
- [ ] EditorAztecTests
- [ ] EditorGutenbergTests
We should also add documentation about our tests that include a note about ensuring that they pass regardless of the device language.
|
1.0
|
Testing: Ensure tests pass with any device language - We should be able to run our tests in any device language. Unit tests are passing in any language, but we need to make some improvements to the UI tests.
- [ ] LoginTests https://github.com/wordpress-mobile/WordPress-iOS/pull/11665
- [ ] EditorAztecTests
- [ ] EditorGutenbergTests
We should also add documentation about our tests that include a note about ensuring that they pass regardless of the device language.
|
non_process
|
testing ensure tests pass with any device language we should be able to run our tests in any device language unit tests are passing in any language but we need to make some improvements to the ui tests logintests editoraztectests editorgutenbergtests we should also add documentation about our tests that include a note about ensuring that they pass regardless of the device language
| 0
|
2,771
| 5,704,974,192
|
IssuesEvent
|
2017-04-18 07:04:28
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
opened
|
Disable multiplexing for a short period of time
|
ADMIN CONNECTION POOL development enhancement QUERY PROCESSOR ROUTING
|
See https://github.com/sysown/proxysql/issues/594#issuecomment-294703577 .
Configuration could either:
* [ ] at global level with a new global variable
* [ ] at hostgroup level , see #955
* [ ] as a new modifier in `mysql_query_rules`
|
1.0
|
Disable multiplexing for a short period of time - See https://github.com/sysown/proxysql/issues/594#issuecomment-294703577 .
Configuration could either:
* [ ] at global level with a new global variable
* [ ] at hostgroup level , see #955
* [ ] as a new modifier in `mysql_query_rules`
|
process
|
disable multiplexing for a short period of time see configuration could either at global level with a new global variable at hostgroup level see as a new modifier in mysql query rules
| 1
|
89,343
| 10,596,023,470
|
IssuesEvent
|
2019-10-09 20:18:03
|
trekview/tourer
|
https://api.github.com/repos/trekview/tourer
|
opened
|
Updated database field documentation
|
documentation
|
Some of the fields in the DB are not immediately clear as to what they represent.
I have added some data to the docs here:
https://github.com/trekview/tourer/wiki/7-Database#photos-table
Can you add / update this to reflect the current status of app.
This will help map fields to other integrations in future.
|
1.0
|
Updated database field documentation - Some of the fields in the DB are not immediately clear as to what they represent.
I have added some data to the docs here:
https://github.com/trekview/tourer/wiki/7-Database#photos-table
Can you add / update this to reflect the current status of app.
This will help map fields to other integrations in future.
|
non_process
|
updated database field documentation some of the fields in the db are not immediately clear as to what they represent i have added some data to the docs here can you add update this to reflect the current status of app this will help map fields to other integrations in future
| 0
|
92,767
| 11,708,454,983
|
IssuesEvent
|
2020-03-08 13:25:11
|
icondelta/icondelta-front
|
https://api.github.com/repos/icondelta/icondelta-front
|
closed
|
IOS style
|
UI bug design
|
IOS(safari)에서 TokenSearchBar, History Component 스타일 깨짐 수정필요
prefix문제는 아닌것같고.. safari에서 padding 또는 margin이 다르게 동작하는지 알아보고 수정할것
|
1.0
|
IOS style - IOS(safari)에서 TokenSearchBar, History Component 스타일 깨짐 수정필요
prefix문제는 아닌것같고.. safari에서 padding 또는 margin이 다르게 동작하는지 알아보고 수정할것
|
non_process
|
ios style ios safari 에서 tokensearchbar history component 스타일 깨짐 수정필요 prefix문제는 아닌것같고 safari에서 padding 또는 margin이 다르게 동작하는지 알아보고 수정할것
| 0
|
767,107
| 26,910,847,960
|
IssuesEvent
|
2023-02-06 23:32:32
|
Roche/rtables
|
https://api.github.com/repos/Roche/rtables
|
closed
|
Add new format for `add_colcounts(-, format = "N=xx (100%)")`
|
enhancement good first issue priority fast fix
|
Hi there,
I like rtables a lot and am always amazed by how lean the code for complicated tables can be using your package. One thing, my organization is overly focused on, is column counts being displayed as `N=134 (100%)`. The display of 100%, to indicate the reference for all following percentages, can currently not be achieved with rtables.
Is this something you would consider addressing in a future update?
Thank a lot and best regards,
Alex
|
1.0
|
Add new format for `add_colcounts(-, format = "N=xx (100%)")` - Hi there,
I like rtables a lot and am always amazed by how lean the code for complicated tables can be using your package. One thing, my organization is overly focused on, is column counts being displayed as `N=134 (100%)`. The display of 100%, to indicate the reference for all following percentages, can currently not be achieved with rtables.
Is this something you would consider addressing in a future update?
Thank a lot and best regards,
Alex
|
non_process
|
add new format for add colcounts format n xx hi there i like rtables a lot and am always amazed by how lean the code for complicated tables can be using your package one thing my organization is overly focused on is column counts being displayed as n the display of to indicate the reference for all following percentages can currently not be achieved with rtables is this something you would consider addressing in a future update thank a lot and best regards alex
| 0
|
109,544
| 11,645,136,645
|
IssuesEvent
|
2020-02-29 23:03:58
|
topblossom/icgbooks
|
https://api.github.com/repos/topblossom/icgbooks
|
closed
|
Wdrożenie UMLa do issues/wiki
|
documentation enhancement
|
Shit just got real:
https://plantuml.com/running
Przyda mi się do przelewania projektowych myśli na papier
|
1.0
|
Wdrożenie UMLa do issues/wiki - Shit just got real:
https://plantuml.com/running
Przyda mi się do przelewania projektowych myśli na papier
|
non_process
|
wdrożenie umla do issues wiki shit just got real przyda mi się do przelewania projektowych myśli na papier
| 0
|
18,403
| 24,543,468,443
|
IssuesEvent
|
2022-10-12 06:51:15
|
home-climate-control/dz
|
https://api.github.com/repos/home-climate-control/dz
|
closed
|
Economizer: implement the feature
|
enhancement process control reactive-only
|
### Task Definition
Data structures created in #237 are turned into code supporting the feature.
|
1.0
|
Economizer: implement the feature - ### Task Definition
Data structures created in #237 are turned into code supporting the feature.
|
process
|
economizer implement the feature task definition data structures created in are turned into code supporting the feature
| 1
|
9,854
| 12,854,291,344
|
IssuesEvent
|
2020-07-09 01:28:29
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
Processors performance downgrade with UNION & LIMIT
|
comp-processors performance
|
```sql
SELECT count()
FROM
(
SELECT number
FROM numbers_mt(100000000)
ORDER BY number DESC
LIMIT 100
UNION ALL
SELECT number
FROM numbers_mt(100000000)
ORDER BY number DESC
LIMIT 100
UNION ALL
SELECT number
FROM numbers_mt(100000000)
ORDER BY number DESC
LIMIT 100
)
SET experimental_use_processors = 0;
-- 1 rows in set. Elapsed: 1.581 sec. Processed 300.00 million rows, 2.40 GB (189.78 million rows/s., 1.52 GB/s.)
SET experimental_use_processors = 1;
-- 1 rows in set. Elapsed: 3.569 sec. Processed 300.00 million rows, 2.40 GB (84.06 million rows/s., 672.50 MB/s.)
```
Similar for MergeTree:
```sql
create table test engine=MergeTree order by tuple() as select * from numbers(30000000);
select count() from (select number from test group by number order by number limit 10 union all select number from test group by number order by number limit 10);
set experimental_use_processors=0;
-- Elapsed: 2.664 sec
set experimental_use_processors=1;
-- Elapsed: 3.656 sec
```
|
1.0
|
Processors performance downgrade with UNION & LIMIT - ```sql
SELECT count()
FROM
(
SELECT number
FROM numbers_mt(100000000)
ORDER BY number DESC
LIMIT 100
UNION ALL
SELECT number
FROM numbers_mt(100000000)
ORDER BY number DESC
LIMIT 100
UNION ALL
SELECT number
FROM numbers_mt(100000000)
ORDER BY number DESC
LIMIT 100
)
SET experimental_use_processors = 0;
-- 1 rows in set. Elapsed: 1.581 sec. Processed 300.00 million rows, 2.40 GB (189.78 million rows/s., 1.52 GB/s.)
SET experimental_use_processors = 1;
-- 1 rows in set. Elapsed: 3.569 sec. Processed 300.00 million rows, 2.40 GB (84.06 million rows/s., 672.50 MB/s.)
```
Similar for MergeTree:
```sql
create table test engine=MergeTree order by tuple() as select * from numbers(30000000);
select count() from (select number from test group by number order by number limit 10 union all select number from test group by number order by number limit 10);
set experimental_use_processors=0;
-- Elapsed: 2.664 sec
set experimental_use_processors=1;
-- Elapsed: 3.656 sec
```
|
process
|
processors performance downgrade with union limit sql select count from select number from numbers mt order by number desc limit union all select number from numbers mt order by number desc limit union all select number from numbers mt order by number desc limit set experimental use processors rows in set elapsed sec processed million rows gb million rows s gb s set experimental use processors rows in set elapsed sec processed million rows gb million rows s mb s similar for mergetree sql create table test engine mergetree order by tuple as select from numbers select count from select number from test group by number order by number limit union all select number from test group by number order by number limit set experimental use processors elapsed sec set experimental use processors elapsed sec
| 1
|
316,858
| 27,189,701,966
|
IssuesEvent
|
2023-02-19 16:51:54
|
apache/beam
|
https://api.github.com/repos/apache/beam
|
closed
|
[Failing Test]: Tensorflow Integration tests is failing Python 37 and Python 39 postcommits
|
tests bug P2 failing test flake awaiting triage
|
### What happened?

Model path is misconfigured.
### Issue Failure
Failure: Test is flaky
### Issue Priority
Priority: 2 (backlog / disabled test but we think the product is healthy)
### Issue Components
- [ ] Component: Python SDK
- [ ] Component: Java SDK
- [ ] Component: Go SDK
- [ ] Component: Typescript SDK
- [ ] Component: IO connector
- [ ] Component: Beam examples
- [ ] Component: Beam playground
- [ ] Component: Beam katas
- [ ] Component: Website
- [ ] Component: Spark Runner
- [ ] Component: Flink Runner
- [ ] Component: Samza Runner
- [ ] Component: Twister2 Runner
- [ ] Component: Hazelcast Jet Runner
- [ ] Component: Google Cloud Dataflow Runner
|
2.0
|
[Failing Test]: Tensorflow Integration tests is failing Python 37 and Python 39 postcommits - ### What happened?

Model path is misconfigured.
### Issue Failure
Failure: Test is flaky
### Issue Priority
Priority: 2 (backlog / disabled test but we think the product is healthy)
### Issue Components
- [ ] Component: Python SDK
- [ ] Component: Java SDK
- [ ] Component: Go SDK
- [ ] Component: Typescript SDK
- [ ] Component: IO connector
- [ ] Component: Beam examples
- [ ] Component: Beam playground
- [ ] Component: Beam katas
- [ ] Component: Website
- [ ] Component: Spark Runner
- [ ] Component: Flink Runner
- [ ] Component: Samza Runner
- [ ] Component: Twister2 Runner
- [ ] Component: Hazelcast Jet Runner
- [ ] Component: Google Cloud Dataflow Runner
|
non_process
|
tensorflow integration tests is failing python and python postcommits what happened model path is misconfigured issue failure failure test is flaky issue priority priority backlog disabled test but we think the product is healthy issue components component python sdk component java sdk component go sdk component typescript sdk component io connector component beam examples component beam playground component beam katas component website component spark runner component flink runner component samza runner component runner component hazelcast jet runner component google cloud dataflow runner
| 0
|
95,003
| 10,863,592,940
|
IssuesEvent
|
2019-11-14 15:23:29
|
SAP/fundamental-ngx
|
https://api.github.com/repos/SAP/fundamental-ngx
|
closed
|
Documentation Overhaul
|
documentation enhancement grooming
|
#### Is this a bug, enhancement, or feature request?
Enhancement
#### Briefly describe your proposal.
- [x] Style like fd-react's next version
- [x] Display version number
- [x] Table-like layout for inputs/outputs etc
- [x] Maybe automatically scrape the files for the properties/methods and display the descriptions using comments in the file? [TSDoc](https://github.com/Microsoft/tsdoc) has a parser. We could use it in a per-file basis. Ideally, we'd do it once when we compile. This needs to be investigated further.
- [x] Tabs for Examples/API at the top. API page should contain tables representing all related child component/service/model etc.
- [x] Tabs to switch between the files of an example, also display file name.
- [x] Use StackBlitz as a playground.
- [x] Have a way to hide/display the code.
- [x] Anchors on all <h> tags, with a link icon to easily share a section of the docs.
- [ ] Table of contents on the right side (as with fd-react). They use [TocBot](https://tscanlin.github.io/tocbot/).
More may be added later
|
1.0
|
Documentation Overhaul - #### Is this a bug, enhancement, or feature request?
Enhancement
#### Briefly describe your proposal.
- [x] Style like fd-react's next version
- [x] Display version number
- [x] Table-like layout for inputs/outputs etc
- [x] Maybe automatically scrape the files for the properties/methods and display the descriptions using comments in the file? [TSDoc](https://github.com/Microsoft/tsdoc) has a parser. We could use it in a per-file basis. Ideally, we'd do it once when we compile. This needs to be investigated further.
- [x] Tabs for Examples/API at the top. API page should contain tables representing all related child component/service/model etc.
- [x] Tabs to switch between the files of an example, also display file name.
- [x] Use StackBlitz as a playground.
- [x] Have a way to hide/display the code.
- [x] Anchors on all <h> tags, with a link icon to easily share a section of the docs.
- [ ] Table of contents on the right side (as with fd-react). They use [TocBot](https://tscanlin.github.io/tocbot/).
More may be added later
|
non_process
|
documentation overhaul is this a bug enhancement or feature request enhancement briefly describe your proposal style like fd react s next version display version number table like layout for inputs outputs etc maybe automatically scrape the files for the properties methods and display the descriptions using comments in the file has a parser we could use it in a per file basis ideally we d do it once when we compile this needs to be investigated further tabs for examples api at the top api page should contain tables representing all related child component service model etc tabs to switch between the files of an example also display file name use stackblitz as a playground have a way to hide display the code anchors on all tags with a link icon to easily share a section of the docs table of contents on the right side as with fd react they use more may be added later
| 0
|
3,848
| 6,808,540,143
|
IssuesEvent
|
2017-11-04 04:18:21
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
ethName: The searching is confusing
|
status-inprocess tools-ethName type-bug
|
As per issue #428 comment 6, searching with this tool is confusing. It should be much more clear how things work.
|
1.0
|
ethName: The searching is confusing - As per issue #428 comment 6, searching with this tool is confusing. It should be much more clear how things work.
|
process
|
ethname the searching is confusing as per issue comment searching with this tool is confusing it should be much more clear how things work
| 1
|
13,832
| 16,597,482,079
|
IssuesEvent
|
2021-06-01 15:00:51
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
it's useless
|
devops-cicd-process/tech devops/prod needs-more-info
|
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3339a2e0-be29-1363-f588-b231d4472c02
* Version Independent ID: 72dd11a3-704d-d0fd-6dfa-cf49f3352de3
* Content: [Container Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops)
* Content Source: [docs/pipelines/process/container-phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/container-phases.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
it's useless -
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3339a2e0-be29-1363-f588-b231d4472c02
* Version Independent ID: 72dd11a3-704d-d0fd-6dfa-cf49f3352de3
* Content: [Container Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops)
* Content Source: [docs/pipelines/process/container-phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/container-phases.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
it s useless document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
18,944
| 24,904,757,506
|
IssuesEvent
|
2022-10-29 05:01:50
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
ARM: LDRHT instruction has incomplete/incorrect constructor
|
Feature: Processor/ARM Status: Internal
|
**Describe the bug**
The ``ldrht`` instruction is specified in the sleigh as follows:


However, addrmode3 does not support the combination of ``P24=0`` and either of ``c2122=1`` or ``c2122 = 3``

**To Reproduce**
The (little endian) bytes ``b2 50 f4 e0`` should disassemble to ``ldrht r5, [r4] #0x2`` but fail to disassemble
|
1.0
|
ARM: LDRHT instruction has incomplete/incorrect constructor - **Describe the bug**
The ``ldrht`` instruction is specified in the sleigh as follows:


However, addrmode3 does not support the combination of ``P24=0`` and either of ``c2122=1`` or ``c2122 = 3``

**To Reproduce**
The (little endian) bytes ``b2 50 f4 e0`` should disassemble to ``ldrht r5, [r4] #0x2`` but fail to disassemble
|
process
|
arm ldrht instruction has incomplete incorrect constructor describe the bug the ldrht instruction is specified in the sleigh as follows however does not support the combination of and either of or to reproduce the little endian bytes should disassemble to ldrht but fail to disassemble
| 1
|
51,467
| 13,207,495,184
|
IssuesEvent
|
2020-08-14 23:19:31
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
I3ParticleVector pybindings missing bases (Trac #479)
|
Incomplete Migration Migrated from Trac dataclasses defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/479">https://code.icecube.wisc.edu/projects/icecube/ticket/479</a>, reported by david.schultzand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-12T06:52:34",
"_ts": "1423723954189338",
"description": "I3ParticleVector pybindings are missing the I3FrameObject base, and maybe other things.\n\nWe also have a similar pybinding by the name of I3VectorI3Particle that does work. Let's resolve this naming duplication.\n\nAlso examine other I3Vector classes for similar problems. I3RecoPulseSeries was noted.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"time": "2014-01-22T03:15:57",
"component": "dataclasses",
"summary": "I3ParticleVector pybindings missing bases",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
I3ParticleVector pybindings missing bases (Trac #479) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/479">https://code.icecube.wisc.edu/projects/icecube/ticket/479</a>, reported by david.schultzand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-12T06:52:34",
"_ts": "1423723954189338",
"description": "I3ParticleVector pybindings are missing the I3FrameObject base, and maybe other things.\n\nWe also have a similar pybinding by the name of I3VectorI3Particle that does work. Let's resolve this naming duplication.\n\nAlso examine other I3Vector classes for similar problems. I3RecoPulseSeries was noted.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"time": "2014-01-22T03:15:57",
"component": "dataclasses",
"summary": "I3ParticleVector pybindings missing bases",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
pybindings missing bases trac migrated from json status closed changetime ts description pybindings are missing the base and maybe other things n nwe also have a similar pybinding by the name of that does work let s resolve this naming duplication n nalso examine other classes for similar problems was noted reporter david schultz cc resolution fixed time component dataclasses summary pybindings missing bases priority normal keywords milestone owner olivas type defect
| 0
|
136,787
| 30,591,139,420
|
IssuesEvent
|
2023-07-21 17:09:52
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
outliertree 1.8.1.post9 has 3 GuardDog issues
|
guarddog code-execution
|
https://pypi.org/project/outliertree
https://inspector.pypi.io/project/outliertree
```{
"dependency": "outliertree",
"version": "1.8.1.post9",
"result": {
"issues": 3,
"errors": {},
"results": {
"code-execution": [
{
"location": "outliertree-1.8.1.post9/setup.py:231",
"code": " val = subprocess.run(cmd + comm + [fname], capture_output=silent_tests).returncode",
"message": "This package is executing OS commands in the setup.py file"
},
{
"location": "outliertree-1.8.1.post9/setup.py:234",
"code": " val = subprocess.run(cmd0 + comm + [fname], capture_output=silent_tests).returncode",
"message": "This package is executing OS commands in the setup.py file"
},
{
"location": "outliertree-1.8.1.post9/setup.py:265",
"code": " val = subprocess.run(cmd + comm + [fname], capture_output=silent_tests).returncode",
"message": "This package is executing OS commands in the setup.py file"
}
]
},
"path": "/tmp/tmpm48cc8o_/outliertree"
}
}```
|
1.0
|
outliertree 1.8.1.post9 has 3 GuardDog issues - https://pypi.org/project/outliertree
https://inspector.pypi.io/project/outliertree
```{
"dependency": "outliertree",
"version": "1.8.1.post9",
"result": {
"issues": 3,
"errors": {},
"results": {
"code-execution": [
{
"location": "outliertree-1.8.1.post9/setup.py:231",
"code": " val = subprocess.run(cmd + comm + [fname], capture_output=silent_tests).returncode",
"message": "This package is executing OS commands in the setup.py file"
},
{
"location": "outliertree-1.8.1.post9/setup.py:234",
"code": " val = subprocess.run(cmd0 + comm + [fname], capture_output=silent_tests).returncode",
"message": "This package is executing OS commands in the setup.py file"
},
{
"location": "outliertree-1.8.1.post9/setup.py:265",
"code": " val = subprocess.run(cmd + comm + [fname], capture_output=silent_tests).returncode",
"message": "This package is executing OS commands in the setup.py file"
}
]
},
"path": "/tmp/tmpm48cc8o_/outliertree"
}
}```
|
non_process
|
outliertree has guarddog issues dependency outliertree version result issues errors results code execution location outliertree setup py code val subprocess run cmd comm capture output silent tests returncode message this package is executing os commands in the setup py file location outliertree setup py code val subprocess run comm capture output silent tests returncode message this package is executing os commands in the setup py file location outliertree setup py code val subprocess run cmd comm capture output silent tests returncode message this package is executing os commands in the setup py file path tmp outliertree
| 0
|
92,947
| 26,819,039,449
|
IssuesEvent
|
2023-02-02 08:03:19
|
envoyproxy/envoy
|
https://api.github.com/repos/envoyproxy/envoy
|
closed
|
Newer release available `com_github_intel_ipp_crypto_crypto_mb`: ippcp_2021.7 (current: ippcp_2021.6)
|
area/build no stalebot dependencies
|
Package Name: com_github_intel_ipp_crypto_crypto_mb@2021.6
Current Version: ippcp_2021.6@2022-08-09
Available Version: ippcp_2021.7@2022-12-15
Upstream releases: https://github.com/intel/ipp-crypto/releases
|
1.0
|
Newer release available `com_github_intel_ipp_crypto_crypto_mb`: ippcp_2021.7 (current: ippcp_2021.6) -
Package Name: com_github_intel_ipp_crypto_crypto_mb@2021.6
Current Version: ippcp_2021.6@2022-08-09
Available Version: ippcp_2021.7@2022-12-15
Upstream releases: https://github.com/intel/ipp-crypto/releases
|
non_process
|
newer release available com github intel ipp crypto crypto mb ippcp current ippcp package name com github intel ipp crypto crypto mb current version ippcp available version ippcp upstream releases
| 0
|
250,142
| 7,969,195,102
|
IssuesEvent
|
2018-07-16 08:09:50
|
nanoframework/Home
|
https://api.github.com/repos/nanoframework/Home
|
opened
|
Improve referenced projects processing to have a correct assemblies list when deploying
|
Area: Visual Studio extension Priority: High Status: Investigating Type: Bug
|
There are currently issues when there is a solution with multiple projects, ones referencing the others.
- [ ] the deployment occurs multiple times (need to investigate if it's possible to control the deploy option for each individual project)
- [ ] the references aren't computed correctly (need to improve this, possibly using the ResolveRuntimeDependenciesTask)
|
1.0
|
Improve referenced projects processing to have a correct assemblies list when deploying - There are currently issues when there is a solution with multiple projects, ones referencing the others.
- [ ] the deployment occurs multiple times (need to investigate if it's possible to control the deploy option for each individual project)
- [ ] the references aren't computed correctly (need to improve this, possibly using the ResolveRuntimeDependenciesTask)
|
non_process
|
improve referenced projects processing to have a correct assemblies list when deploying there are currently issues when there is a solution with multiple projects ones referencing the others the deployment occurs multiple times need to investigate if it s possible to control the deploy option for each individual project the references aren t computed correctly need to improve this possibly using the resolveruntimedependenciestask
| 0
|
137,407
| 12,752,061,246
|
IssuesEvent
|
2020-06-27 14:31:04
|
badging/meta
|
https://api.github.com/repos/badging/meta
|
closed
|
Documentation: Creating a centralized documentation repository.
|
documentation todo
|
> **Important Note**: This issue is being closed for now and will be revisited after closing #29.
This issue tracks the process of moving all non-contributor-facing documentation into https://github.com/badging/documentation.
<!--
## TODO
- [ ] Identify all respective documentation assets across existing repositories
> This should exclude `README.md` and other contributor-facing assets
- [ ] Define the contribution process for documentation
> This should outline the naming conventions used for transferred and new assets
- [ ] Transfer existing documentation assets and histories into https://github.com/badging/documentation
-->
|
1.0
|
Documentation: Creating a centralized documentation repository. - > **Important Note**: This issue is being closed for now and will be revisited after closing #29.
This issue tracks the process of moving all non-contributor-facing documentation into https://github.com/badging/documentation.
<!--
## TODO
- [ ] Identify all respective documentation assets across existing repositories
> This should exclude `README.md` and other contributor-facing assets
- [ ] Define the contribution process for documentation
> This should outline the naming conventions used for transferred and new assets
- [ ] Transfer existing documentation assets and histories into https://github.com/badging/documentation
-->
|
non_process
|
documentation creating a centralized documentation repository important note this issue is being closed for now and will be revisited after closing this issue tracks the process of moving all non contributor facing documentation into todo identify all respective documentation assets across existing repositories this should exclude readme md and other contributor facing assets define the contribution process for documentation this should outline the naming conventions used for transferred and new assets transfer existing documentation assets and histories into
| 0
|
20,305
| 29,667,624,042
|
IssuesEvent
|
2023-06-11 01:44:49
|
ValveSoftware/Proton
|
https://api.github.com/repos/ValveSoftware/Proton
|
closed
|
Raiden III Digital Edition (315670)
|
Game compatibility - Unofficial DXVK/D3D9 🐸
|
# Compatibility Report
- Name of the game with compatibility issues: Raiden III Digital Edition
- Steam AppID of the game: 315670
## System Information
- Steam Deck 512 gb
- Proton version: Experimental
## I confirm:
- [x] that I haven't found an existing compatibility report for this game.
- [x] that I have checked whether there are updates for my system available.
<!-- Please add `PROTON_LOG=1 %command%` to the game's launch options and
attach the generated $HOME/steam-$APPID.log to this issue report as a file.
(Proton logs compress well if needed.)-->
## Symptoms <!-- What's the problem? -->
Game seems to boot to the title fine, but silently exits to the game page when trying to start a game. According to protondb with the deck, it only runs on version 4.11-13, but some users on Linux computers have different results with other versions like 4.26 or 4.29. Deck rating is unsupported until this gets fixed.
[steam-315670.log](https://github.com/ValveSoftware/Proton/files/9639870/steam-315670.log)
## Reproduction
Start the game and let it go into the game demo or start a new game. It should crash right away.
<!--
1. You can find the Steam AppID in the URL of the shop page of the game.
e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`.
2. You can find your driver and Linux version, as well as your graphics
processor's name in the system information report of Steam.
3. You can retrieve a full system information report by clicking
`Help` > `System Information` in the Steam client on your machine.
4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`.
Then paste it in a [Gist](https://gist.github.com/) and post the link in
this issue.
5. Please search for open issues and pull requests by the name of the game and
find out whether they are relevant and should be referenced above.
-->
|
True
|
Raiden III Digital Edition (315670) - # Compatibility Report
- Name of the game with compatibility issues: Raiden III Digital Edition
- Steam AppID of the game: 315670
## System Information
- Steam Deck 512 gb
- Proton version: Experimental
## I confirm:
- [x] that I haven't found an existing compatibility report for this game.
- [x] that I have checked whether there are updates for my system available.
<!-- Please add `PROTON_LOG=1 %command%` to the game's launch options and
attach the generated $HOME/steam-$APPID.log to this issue report as a file.
(Proton logs compress well if needed.)-->
## Symptoms <!-- What's the problem? -->
Game seems to boot to the title fine, but silently exits to the game page when trying to start a game. According to protondb with the deck, it only runs on version 4.11-13, but some users on Linux computers have different results with other versions like 4.26 or 4.29. Deck rating is unsupported until this gets fixed.
[steam-315670.log](https://github.com/ValveSoftware/Proton/files/9639870/steam-315670.log)
## Reproduction
Start the game and let it go into the game demo or start a new game. It should crash right away.
<!--
1. You can find the Steam AppID in the URL of the shop page of the game.
e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`.
2. You can find your driver and Linux version, as well as your graphics
processor's name in the system information report of Steam.
3. You can retrieve a full system information report by clicking
`Help` > `System Information` in the Steam client on your machine.
4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`.
Then paste it in a [Gist](https://gist.github.com/) and post the link in
this issue.
5. Please search for open issues and pull requests by the name of the game and
find out whether they are relevant and should be referenced above.
-->
|
non_process
|
raiden iii digital edition compatibility report name of the game with compatibility issues raiden iii digital edition steam appid of the game system information steam deck gb proton version experimental i confirm that i haven t found an existing compatibility report for this game that i have checked whether there are updates for my system available please add proton log command to the game s launch options and attach the generated home steam appid log to this issue report as a file proton logs compress well if needed symptoms game seems to boot to the title fine but silently exits to the game page when trying to start a game according to protondb with the deck it only runs on version but some users on linux computers have different results with other versions like or deck rating is unsupported until this gets fixed reproduction start the game and let it go into the game demo or start a new game it should crash right away you can find the steam appid in the url of the shop page of the game e g for the witcher wild hunt the appid is you can find your driver and linux version as well as your graphics processor s name in the system information report of steam you can retrieve a full system information report by clicking help system information in the steam client on your machine please copy it to your clipboard by pressing ctrl a and then ctrl c then paste it in a and post the link in this issue please search for open issues and pull requests by the name of the game and find out whether they are relevant and should be referenced above
| 0
|
671,762
| 22,775,108,048
|
IssuesEvent
|
2022-07-08 13:47:12
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.twitch.tv - see bug description
|
browser-firefox priority-important engine-gecko
|
<!-- @browser: Firefox Browser beta 103.0b6 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:103.0) Gecko/20100101 Firefox/103.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/107101 -->
**URL**: https://www.twitch.tv/hitch
**Browser / Version**: Firefox Browser beta 103.0b6
**Operating System**: Windows 11 beta build 22622.290
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Video livestream becomes pixelated. Laptop specs: Firefox Browser beta 103.0b6, Edge Browser version 103.0.1264.49, Windows 11 Home edition version 22H2 OS build 22622.290, Intel i7-9750H processor 2.60 GHz with 12 MB of cache, Intel UHD graphics 31.0.101.1999, 8GB DDR4 memory, Nvidia GeForce GTX 1650 4 GB GDDR5 graphics card driver version 516.40 Windows Feature Experience Pack 1000.22632.1000.0 ASUS TUF Gaming laptop model TUF505FT-AH73
**Steps to Reproduce**:
Video livestream became pixelated after playing for over 11 minutes.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.twitch.tv - see bug description - <!-- @browser: Firefox Browser beta 103.0b6 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:103.0) Gecko/20100101 Firefox/103.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/107101 -->
**URL**: https://www.twitch.tv/hitch
**Browser / Version**: Firefox Browser beta 103.0b6
**Operating System**: Windows 11 beta build 22622.290
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Video livestream becomes pixelated. Laptop specs: Firefox Browser beta 103.0b6, Edge Browser version 103.0.1264.49, Windows 11 Home edition version 22H2 OS build 22622.290, Intel i7-9750H processor 2.60 GHz with 12 MB of cache, Intel UHD graphics 31.0.101.1999, 8GB DDR4 memory, Nvidia GeForce GTX 1650 4 GB GDDR5 graphics card driver version 516.40 Windows Feature Experience Pack 1000.22632.1000.0 ASUS TUF Gaming laptop model TUF505FT-AH73
**Steps to Reproduce**:
Video livestream became pixelated after playing for over 11 minutes.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox browser beta operating system windows beta build tested another browser yes chrome problem type something else description video livestream becomes pixelated laptop specs firefox browser beta edge browser version windows home edition version os build intel processor ghz with mb of cache intel uhd graphics memory nvidia geforce gtx gb graphics card driver version windows feature experience pack asus tuf gaming laptop model steps to reproduce video livestream became pixelated after playing for over minutes browser configuration none from with ❤️
| 0
|
25,774
| 12,301,102,646
|
IssuesEvent
|
2020-05-11 14:56:08
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
Nested grouping-over
|
Feature:Alerting Team:Alerting Services
|
As a user I want to be able to generate separate alert instances by defining a group-over a field within a group-over (nested group-overs). As a solutions-specific example, metrics would like to be able to determine an alert per disk, per host (see _'Separate alerts are sent for each combination of host and disk'_ [section](https://github.com/elastic/infra/issues/19949).
|
1.0
|
Nested grouping-over - As a user I want to be able to generate separate alert instances by defining a group-over a field within a group-over (nested group-overs). As a solutions-specific example, metrics would like to be able to determine an alert per disk, per host (see _'Separate alerts are sent for each combination of host and disk'_ [section](https://github.com/elastic/infra/issues/19949).
|
non_process
|
nested grouping over as a user i want to be able to generate separate alert instances by defining a group over a field within a group over nested group overs as a solutions specific example metrics would like to be able to determine an alert per disk per host see separate alerts are sent for each combination of host and disk
| 0
|
340,825
| 30,546,624,918
|
IssuesEvent
|
2023-07-20 04:51:17
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix losses.test_sparse_cross_entropy
|
Sub Task Ivy Functional API Failing Test
|
| | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-failure-red></a>
|
1.0
|
Fix losses.test_sparse_cross_entropy - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5606912812/jobs/10257542597"><img src=https://img.shields.io/badge/-failure-red></a>
|
non_process
|
fix losses test sparse cross entropy numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src
| 0
|
9,546
| 12,511,856,779
|
IssuesEvent
|
2020-06-02 21:24:02
|
googleapis/synthtool
|
https://api.github.com/repos/googleapis/synthtool
|
closed
|
Run synthtool for each language as part of CI
|
type: process
|
We should run a generation for each language synthtool supports as part of CI to make sure synthesis succeeds.
|
1.0
|
Run synthtool for each language as part of CI - We should run a generation for each language synthtool supports as part of CI to make sure synthesis succeeds.
|
process
|
run synthtool for each language as part of ci we should run a generation for each language synthtool supports as part of ci to make sure synthesis succeeds
| 1
|
22,288
| 30,839,635,182
|
IssuesEvent
|
2023-08-02 09:46:06
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/resourcedetection, receiver/hostmetrics] Report total memory and CPU capacity numbers as resource attributes
|
enhancement priority:p2 processor/resourcedetection receiver/hostmetrics on hold
|
### Component(s)
processor/resourcedetection, receiver/hostmetrics
### Describe the issue you're reporting
As part of improving the infrastructure monitoring capabilities of the OpenTelemetry Collector, I want to report total memory and filesystem capacity as well as CPU cores.
Part of this information can already be retrieved by combining information from the hostmetrics receiver; for example if you count the number of `cpu` values on `system.cpu.time` you can get the total number of cores. However, if you want to produce this information at the exporter, you then depend on all metrics reaching the same exporter and therefore you would make your deployment stateful.
I want therefore to add this information as resource attributes to avoid stateful deployments.
My remaining open question is where to add this. I see two possibilities:
1. Make this part of the resource attributes on metrics generated by the host metrics receiver.
2. Make this part of the resource attributes added by the resource detection processor system detector.
I am leaning towards (2), since that way this information can be leveraged by users that do not use the host metrics receiver but still want to have that kind of information, but I want some other opinions.
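The core-counting approach described above (counting distinct `cpu` values on `system.cpu.time`) can be sketched as follows. The dict-based datapoint shape is a simplified assumption for illustration — the collector's real pdata structures are richer than this — but it shows why the computation needs all datapoints in one place, which is the statefulness concern.

```python
def count_cpu_cores(datapoints):
    """Count distinct `cpu` attribute values across system.cpu.time points.

    Each datapoint is assumed (for this sketch) to be a dict carrying the
    metric name and its attributes.
    """
    cores = {
        dp["attributes"]["cpu"]
        for dp in datapoints
        if dp["metric"] == "system.cpu.time"
    }
    return len(cores)


points = [
    {"metric": "system.cpu.time", "attributes": {"cpu": "cpu0", "state": "idle"}},
    {"metric": "system.cpu.time", "attributes": {"cpu": "cpu0", "state": "user"}},
    {"metric": "system.cpu.time", "attributes": {"cpu": "cpu1", "state": "idle"}},
]
print(count_cpu_cores(points))  # → 2
```

If some of these points go to one exporter and some to another, each sees a partial set of `cpu` values, which is why reporting the total as a resource attribute avoids the stateful deployment.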
|
1.0
|
[processor/resourcedetection, receiver/hostmetrics] Report total memory and CPU capacity numbers as resource attributes - ### Component(s)
processor/resourcedetection, receiver/hostmetrics
### Describe the issue you're reporting
As part of improving the infrastructure monitoring capabilities of the OpenTelemetry Collector, I want to report total memory and filesystem capacity as well as CPU cores.
Part of this information can already be retrieved by combining information from the hostmetrics receiver; for example if you count the number of `cpu` values on `system.cpu.time` you can get the total number of cores. However, if you want to produce this information at the exporter, you then depend on all metrics reaching the same exporter and therefore you would make your deployment stateful.
I want therefore to add this information as resource attributes to avoid stateful deployments.
My remaining open question is where to add this. I see two possibilities:
1. Make this part of the resource attributes on metrics generated by the host metrics receiver.
2. Make this part of the resource attributes added by the resource detection processor system detector.
I am leaning towards (2), since that way this information can be leveraged by users that do not use the host metrics receiver but still want to have that kind of information, but I want some other opinions.
|
process
|
report total memory and cpu capacity numbers as resource attributes component s processor resourcedetection receiver hostmetrics describe the issue you re reporting as part of improving the infrastructure monitoring capabilities of the opentelemetry collector i want to report total memory and filesystem capacity as well as cpu cores part of this information can already be retrieved by combining information from the hostmetrics receiver for example if you count the number of cpu values on system cpu time you can get the total number of cores however if you want to produce this information at the exporter you then depend on all metrics reaching the same exporter and therefore you would make your deployment stateful i want therefore to add this information as resource attributes to avoid stateful deployments my remaining open question is where to add this i see two possibilities make this part of the resource attributes on metrics generated by the host metrics receiver make this part of the resource attributes added by the resource detection processor system detector i am leaning towards since that way this information can be leveraged by users that do not use the host metrics receiver but still want to have that kind of information but i want some other opinions
| 1
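The workaround described in the row above — counting distinct `cpu` attribute values on `system.cpu.time` at the exporter to recover the core count — can be sketched as follows. The data-point representation here is a hypothetical simplification, not the collector's actual internal model:

```python
# Recover total core count by counting distinct `cpu` attribute values
# across `system.cpu.time` data points (hypothetical data-point shape).

def count_cores(data_points):
    """Count distinct `cpu` attribute values across data points."""
    return len({dp["attributes"]["cpu"] for dp in data_points})

points = [
    {"metric": "system.cpu.time", "attributes": {"cpu": "cpu0", "state": "user"}},
    {"metric": "system.cpu.time", "attributes": {"cpu": "cpu0", "state": "idle"}},
    {"metric": "system.cpu.time", "attributes": {"cpu": "cpu1", "state": "user"}},
]
print(count_cores(points))  # 2
```

This only yields the right answer when every data point reaches the same exporter, which is exactly the statefulness problem the issue wants to avoid by reporting total capacity as resource attributes instead.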
|
111,775
| 17,033,497,468
|
IssuesEvent
|
2021-07-05 01:26:12
|
attesch/hackazon
|
https://api.github.com/repos/attesch/hackazon
|
opened
|
CVE-2015-2309 (High) detected in symfonyv2.6.6
|
security vulnerability
|
## CVE-2015-2309 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>symfonyv2.6.6</b></p></summary>
<p>
<p>The Symfony PHP framework</p>
<p>Library home page: <a href=https://github.com/symfony/symfony.git>https://github.com/symfony/symfony.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>hackazon/vendor/symfony/http-foundation/Symfony/Component/HttpFoundation/Request.php</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Symfony before 2.3.27, 2.5.11 and 2.6.6 is vulnerable to a man-in-the-middle attack. The Symfony\Component\HttpFoundation\Request class provides a mechanism that ensures it does not trust HTTP header values coming from a "non-trusted" client. Unfortunately, it assumes that the remote address is always a trusted client if at least one trusted proxy is involved in the request. This allows a man-in-the-middle attack between the latest trusted proxy and the web server.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-2309>CVE-2015-2309</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://symfony.com/blog/cve-2015-2309-unsafe-methods-in-the-request-class">https://symfony.com/blog/cve-2015-2309-unsafe-methods-in-the-request-class</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: v2.3.27,v2.5.11,v2.6.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-2309 (High) detected in symfonyv2.6.6 - ## CVE-2015-2309 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>symfonyv2.6.6</b></p></summary>
<p>
<p>The Symfony PHP framework</p>
<p>Library home page: <a href=https://github.com/symfony/symfony.git>https://github.com/symfony/symfony.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>hackazon/vendor/symfony/http-foundation/Symfony/Component/HttpFoundation/Request.php</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Symfony before 2.3.27, 2.5.11 and 2.6.6 is vulnerable to a man-in-the-middle attack. The Symfony\Component\HttpFoundation\Request class provides a mechanism that ensures it does not trust HTTP header values coming from a "non-trusted" client. Unfortunately, it assumes that the remote address is always a trusted client if at least one trusted proxy is involved in the request. This allows a man-in-the-middle attack between the latest trusted proxy and the web server.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-2309>CVE-2015-2309</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://symfony.com/blog/cve-2015-2309-unsafe-methods-in-the-request-class">https://symfony.com/blog/cve-2015-2309-unsafe-methods-in-the-request-class</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: v2.3.27,v2.5.11,v2.6.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in cve high severity vulnerability vulnerable library the symfony php framework library home page a href vulnerable source files hackazon vendor symfony http foundation symfony component httpfoundation request php vulnerability details symfony before and is vulnerable to man in the middle attack the symfony component httpfoundation request class provides a mechanism that ensures it does not trust http header values coming from a non trusted client unfortunately it assumes that the remote address is always a trusted client if at least one trusted proxy is involved in the request this allows a man in the middle attack between the latest trusted proxy and the web server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
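The flawed trust decision described in the CVE row above can be illustrated with a minimal sketch. This is not Symfony's actual code; it only contrasts the vulnerable pattern ("believe forwarded headers whenever any trusted proxy is configured") with a stricter check that requires the immediate peer itself to be a trusted proxy:

```python
import ipaddress

# Assumed example configuration: proxies live in a private range.
TRUSTED_PROXIES = [ipaddress.ip_network("10.0.0.0/8")]

def flawed_client_ip(remote_addr, x_forwarded_for):
    # Vulnerable pattern: forwarded headers are trusted as soon as *any*
    # trusted proxy is configured, regardless of who sent the request.
    if TRUSTED_PROXIES and x_forwarded_for:
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

def safer_client_ip(remote_addr, x_forwarded_for):
    # Stricter pattern: only honour forwarded headers when the immediate
    # peer is itself inside a trusted-proxy network.
    peer = ipaddress.ip_address(remote_addr)
    if x_forwarded_for and any(peer in net for net in TRUSTED_PROXIES):
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

# An attacker connecting directly (bypassing the proxy) spoofs the header:
print(flawed_client_ip("203.0.113.7", "127.0.0.1"))  # 127.0.0.1  (spoofed)
print(safer_client_ip("203.0.113.7", "127.0.0.1"))   # 203.0.113.7
```

The fix shipped in v2.3.27/v2.5.11/v2.6.6 addresses this class of problem in the `Request` class; the sketch above only names the general pattern.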
|
8,356
| 11,503,674,727
|
IssuesEvent
|
2020-02-12 21:34:41
|
googleapis/nodejs-dataproc
|
https://api.github.com/repos/googleapis/nodejs-dataproc
|
closed
|
Beta release
|
api: dataproc type: process
|
Package name: `@google-cloud/dataproc`
Current release: Alpha
Proposed release: Beta
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] Server API is beta or GA
- [x] Service API is public
- [ ] Client surface is mostly stable (no known issues that could significantly change the surface)
- [ ] All manual types and methods have comment documentation
- [ ] Package name is idiomatic for the platform
- [ ] At least one integration/smoke test is defined and passing
- [ ] Central GitHub README lists and points to the per-API README
- [ ] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
Beta release - Package name: `@google-cloud/dataproc`
Current release: Alpha
Proposed release: Beta
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] Server API is beta or GA
- [x] Service API is public
- [ ] Client surface is mostly stable (no known issues that could significantly change the surface)
- [ ] All manual types and methods have comment documentation
- [ ] Package name is idiomatic for the platform
- [ ] At least one integration/smoke test is defined and passing
- [ ] Central GitHub README lists and points to the per-API README
- [ ] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
beta release package name google cloud dataproc current release alpha proposed release beta instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required server api is beta or ga service api is public client surface is mostly stable no known issues that could significantly change the surface all manual types and methods have comment documentation package name is idiomatic for the platform at least one integration smoke test is defined and passing central github readme lists and points to the per api readme per api readme links to product page on cloud google com manual code has been reviewed for api stability by repo owner optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
12,161
| 14,741,499,509
|
IssuesEvent
|
2021-01-07 10:42:54
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
SA Billing - Triad - Invalid Late Fees
|
anc-process anp-1 ant-bug has attachment
|
In GitLab by @kdjstudios on Jan 23, 2019, 08:42
**Submitted by:** "Amecia Snelling" <amecia.snelling@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-15-93958
**Server:** Internal
**Client/Site:** Triad
**Account:** NA
**Issue:**
We have gone through the list and highlighted some things.
The accts that has valid that are highlighted are invalid and the accounts that are highlighted with invalid are actually valid and then all the rest is correct. Please use billing code 7882 - The Late Fee Adjustment code.
[Triad+1-15-19-final.xlsx](/uploads/29c8b65222003509a565019ae9983642/Triad+1-15-19-final.xlsx)
|
1.0
|
SA Billing - Triad - Invalid Late Fees - In GitLab by @kdjstudios on Jan 23, 2019, 08:42
**Submitted by:** "Amecia Snelling" <amecia.snelling@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-15-93958
**Server:** Internal
**Client/Site:** Triad
**Account:** NA
**Issue:**
We have gone through the list and highlighted some things.
The accts that has valid that are highlighted are invalid and the accounts that are highlighted with invalid are actually valid and then all the rest is correct. Please use billing code 7882 - The Late Fee Adjustment code.
[Triad+1-15-19-final.xlsx](/uploads/29c8b65222003509a565019ae9983642/Triad+1-15-19-final.xlsx)
|
process
|
sa billing triad invalid late fees in gitlab by kdjstudios on jan submitted by amecia snelling helpdesk server internal client site triad account na issue we have gone through the list and highlighted some things the accts that has valid that are highlighted are invalid and the accounts that are highlighted with invalid are actually valid and then all the rest is correct please use billing code the late fee adjustment code uploads triad final xlsx
| 1
|
2,475
| 5,251,950,573
|
IssuesEvent
|
2017-02-02 01:47:42
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Unable to optionally cast to array in SQL Editor mode
|
Bug Query Processor
|
We have a few queries where users want to be able to add in multiple conditions (i.e., ID is in (`1, 2, 3`) or type is (`Ball, Bat`)).
I've been able to restructure a text variable as an array using type casts in Postgres,
`select * from foobars where foobars.id in (string_to_array({{foobar_id}}, ',')::integer[])`,
However, as soon as I try and make that optional;
`select * from foobars [[ where foobars.id in (string_to_array({{foobar_id}}, ',')::integer[]) ]]`
I start getting syntax errors. I imagine that it has to do with the coercion to an array including the `[` and `]` characters which may be running amok of the variable parsing, but I don't know for sure.
Also, if there is a better way of doing this, I'd love to learn! This feels a bit hacky and like I might be overlooking a simpler solution.
Thanks!
|
1.0
|
Unable to optionally cast to array in SQL Editor mode - We have a few queries where users want to be able to add in multiple conditions (i.e., ID is in (`1, 2, 3`) or type is (`Ball, Bat`)).
I've been able to restructure a text variable as an array using type casts in Postgres,
`select * from foobars where foobars.id in (string_to_array({{foobar_id}}, ',')::integer[])`,
However, as soon as I try and make that optional;
`select * from foobars [[ where foobars.id in (string_to_array({{foobar_id}}, ',')::integer[]) ]]`
I start getting syntax errors. I imagine that it has to do with the coercion to an array including the `[` and `]` characters which may be running amok of the variable parsing, but I don't know for sure.
Also, if there is a better way of doing this, I'd love to learn! This feels a bit hacky and like I might be overlooking a simpler solution.
Thanks!
|
process
|
unable to optionally cast to array in sql editor mode we have a few queries where users want to be able to add in multiple conditions i e id is in or type is ball bat i ve been able to restructure a text variable as an array using type casts in postgres select from foobars where foobars id in string to array foobar id integer however as soon as i try and make that optional select from foobars i start getting syntax errors i imagine that it has to do with the coercion to an array including the characters which may be running amok of the variable parsing but i don t know for sure also if there is a better way of doing this i d love to learn this feels a bit hacky and like i might be overlooking a simpler solution thanks
| 1
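The Postgres cast in the Metabase row above, `string_to_array({{foobar_id}}, ',')::integer[]`, can be mimicked in plain Python to make the intended filter explicit. This sketch is illustrative only; the reporter's suspicion that the literal `[` and `]` in `::integer[]` collide with Metabase's `[[ ... ]]` optional-clause markers is plausible but unconfirmed here:

```python
def parse_id_list(raw):
    """Mimic Postgres string_to_array(raw, ',')::integer[]:
    split on commas, trim whitespace, cast each piece to int."""
    return [int(part.strip()) for part in raw.split(",") if part.strip()]

print(parse_id_list("1, 2, 3"))  # [1, 2, 3]
```

Any driver-side parameterization (e.g. binding the parsed list with `= ANY(...)`) would sidestep embedding the bracketed cast in the SQL text, though whether that fits Metabase's variable templating is an open question.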
|
706,836
| 24,285,627,658
|
IssuesEvent
|
2022-09-28 21:48:13
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
opened
|
storage: TestIntegration_SignedURL_Bucket failed
|
type: bug priority: p1 flakybot: issue
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 2f6ad022a015fafcd6ada199a17a0fe49586aca1
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d67820ba-e232-44a5-85e1-0c52feafb49f), [Sponge](http://sponge2/d67820ba-e232-44a5-85e1-0c52feafb49f)
status: failed
<details><summary>Test output</summary><br><pre> integration_test.go:4750: NewClient: dialing: multiple credential options provided</pre></details>
|
1.0
|
storage: TestIntegration_SignedURL_Bucket failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 2f6ad022a015fafcd6ada199a17a0fe49586aca1
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d67820ba-e232-44a5-85e1-0c52feafb49f), [Sponge](http://sponge2/d67820ba-e232-44a5-85e1-0c52feafb49f)
status: failed
<details><summary>Test output</summary><br><pre> integration_test.go:4750: NewClient: dialing: multiple credential options provided</pre></details>
|
non_process
|
storage testintegration signedurl bucket failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output integration test go newclient dialing multiple credential options provided
| 0