| Column | Dtype | Summary |
|---|---|---|
| Unnamed: 0 | int64 | values 0 to 832k |
| id | float64 | values 2.49B to 32.1B |
| type | string | 1 distinct value |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 distinct values |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 distinct values |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 distinct values |
| text | string | lengths 96 to 188k |
| binary_label | int64 | values 0 to 1 |
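A minimal sketch of loading a table with this schema and checking that `binary_label` is the 0/1 encoding of the two `label` classes seen in the records below; the `issues.csv` file name is a placeholder, since the dump does not say where the data lives.

```python
import pandas as pd

# Hypothetical file name; adjust to wherever the dump is stored.
df = pd.read_csv("issues.csv")

# `label` has two classes, and `binary_label` should mirror them as 0/1.
assert set(df["label"]) == {"process", "non_process"}
assert ((df["label"] == "process") == (df["binary_label"] == 1)).all()
```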
117,596
| 15,128,851,841
|
IssuesEvent
|
2021-02-10 00:06:44
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
General Analytics Request for [VAOS - data backfill]
|
analytics-insights analytics-request vaos-product-design
|
# Metrics Reporting Request
## What this form is for
Use this template to request one-off performance, usability, or outcomes va.gov metrics that are difficult to access or that need analytics interpretation. If you have your KPIs set, please [request a KPI dashboard](https://github.com/department-of-veterans-affairs/va.gov-team/issues/new?assignees=joanneesteban&labels=analytics-insights%2C+analytics-request%2C+kpi-dashboard&template=analytics-request-kpi-dashboard.md&title=Analytics+KPI+dashboard+request+for+%5BTeam%5D).
Insights will usually provide data reports within the current sprint, but if there is time-sensitivity, please note it in "additional comments."
## Instructions
Please fill out the following sections:
- [ ] Description
- [ ] Timeframe for Analytics Requested
- [ ] Analytics to Track Down
- [ ] Which Product Questions are you trying to answer?
- [ ] Additional Comments
## Description
- Who is this request for?
- Lauren Ernest
- What team/project is this work for?
- VAOS
- What type of analytics support are you looking for?
- [ ] one time metrics report (this is not a KPI dashboard, but a temporary or a one time report)
- [x] metrics clarification (verification that the metrics you are looking at are being read accurately)
- [x] analytics discussion (please share if you're looking for a general conversation, a KPI setting meeting, etc. in the "additional comments" section)
## Timeframe for Analytics Requested
_Please provide the timeframe for the metrics you are requesting._
What timeframes are you looking to analyze (e.g. before vs. after dd/mm/yy)?
- As far back as possible to present
## Analytics to Track Down
- Number of users on the VAOS app homepage (/health-care/schedule-view-va-appointments/appointments/)
## Which Product Questions are you trying to answer?
## Other Helpful Questions to think through
- Who is your audience?
- Why do you need this data?
- What are you hoping to find?
- What is your hypothesis that you're hoping the data will support?
- What area of the VA are you hoping to assess?
- How frequently do you need this data?
## Link to Metrics
https://va-gov.domo.com/page/1315231862 "Number of Veterans who log in / access VAOS"
## Additional Comments
- This is the GA equivalent I think (I hope) it's supposed to represent- https://analytics.google.com/analytics/web/template?uid=W_utzG6JTdeRU777VhHfdw
- The data stops at Nov 2020. Can we get a current view?
> Please leave the following blank
## Acceptance Criteria
- [ ] The data has been identified and documented in this ticket - items that we don't have access to are noted with _cannot be accessed_
- [ ] The metrics are shared back with users.
## Definition of Done
- [ ] All appropriate issue tagging is completed
- [ ] All AC completed
|
1.0
|
General Analytics Request for [VAOS - data backfill] - # Metrics Reporting Request
## What this form is for
Use this template to request one-off performance, usability, or outcomes va.gov metrics that are difficult to access or that need analytics interpretation. If you have your KPIs set, please [request a KPI dashboard](https://github.com/department-of-veterans-affairs/va.gov-team/issues/new?assignees=joanneesteban&labels=analytics-insights%2C+analytics-request%2C+kpi-dashboard&template=analytics-request-kpi-dashboard.md&title=Analytics+KPI+dashboard+request+for+%5BTeam%5D).
Insights will usually provide data reports within the current sprint, but if there is time-sensitivity, please note it in "additional comments."
## Instructions
Please fill out the following sections:
- [ ] Description
- [ ] Timeframe for Analytics Requested
- [ ] Analytics to Track Down
- [ ] Which Product Questions are you trying to answer?
- [ ] Additional Comments
## Description
- Who is this request for?
- Lauren Ernest
- What team/project is this work for?
- VAOS
- What type of analytics support are you looking for?
- [ ] one time metrics report (this is not a KPI dashboard, but a temporary or a one time report)
- [x] metrics clarification (verification that the metrics you are looking at are being read accurately)
- [x] analytics discussion (please share if you're looking for a general conversation, a KPI setting meeting, etc. in the "additional comments" section)
## Timeframe for Analytics Requested
_Please provide the timeframe for the metrics you are requesting._
What timeframes are you looking to analyze (e.g. before vs. after dd/mm/yy)?
- As far back as possible to present
## Analytics to Track Down
- Number of users on the VAOS app homepage (/health-care/schedule-view-va-appointments/appointments/)
## Which Product Questions are you trying to answer?
## Other Helpful Questions to think through
- Who is your audience?
- Why do you need this data?
- What are you hoping to find?
- What is your hypothesis that you're hoping the data will support?
- What area of the VA are you hoping to assess?
- How frequently do you need this data?
## Link to Metrics
https://va-gov.domo.com/page/1315231862 "Number of Veterans who log in / access VAOS"
## Additional Comments
- This is the GA equivalent I think (I hope) it's supposed to represent- https://analytics.google.com/analytics/web/template?uid=W_utzG6JTdeRU777VhHfdw
- The data stops at Nov 2020. Can we get a current view?
> Please leave the following blank
## Acceptance Criteria
- [ ] The data has been identified and documented in this ticket - items that we don't have access to are noted with _cannot be accessed_
- [ ] The metrics are shared back with users.
## Definition of Done
- [ ] All appropriate issue tagging is completed
- [ ] All AC completed
|
non_process
|
general analytics request for metrics reporting request what this form is for use this template to request one off performance usability or outcomes va gov metrics that are difficult to access or that need analytics interpretation if you have your kpis set please insights will usually provide data reports within the current sprint but if there is time sensitivity please note it in additional comments instructions please fill out the following sections description timeframe for analytics requested analytics to track down which product questions are you trying to answer additional comments description who is this request for lauren ernest what team project is this work for vaos what type of analytics support are you looking for one time metrics report this is not a kpi dashboard but a temporary or a one time report metrics clarification verification that the metrics you are looking at are being read accurately analytics discussion please share if you re looking for a general conversation a kpi setting meeting etc in the additional comments section timeframe for analytics requested please provide the timeframe for the metrics you are requesting what timeframes are you looking to analyze e g before vs after dd mm yy as far back as possible to present analytics to track down number of users on the vaos app homepage health care schedule view va appointments appointments which product questions are you trying to answer other helpful questions to think through who is your audience why do you need this data what are you hoping to find what is your hypothesis that you re hoping the data will support what area of the va are you hoping to assess how frequently do you need this data link to metrics number of veterans who log in access vaos additional comments this is the ga equivalent i think i hope it s supposed to represent the data stops at nov can we get a current view please leave the following blank acceptance criteria the data has been identified and documented in this ticket items that we don t have access to are noted with cannot be accessed the metrics are shared back with users definition of done all appropriate issue tagging is completed all ac completed
| 0
|
166,342
| 26,341,240,409
|
IssuesEvent
|
2023-01-10 17:50:53
|
department-of-veterans-affairs/vets-design-system-documentation
|
https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation
|
opened
|
Tagalog translations for va-date
|
component-update vsp-design-system-team va-date dst-engineering
|
## Description
The [`va-date` component](https://design.va.gov/storybook/?path=/docs/components-va-date--default) needs Tagalog translations for all of the currently hard-coded English text, and that text needs to be updated in the component to display using `i18next` (if it has not been updated yet). Use the [va-text-input](https://github.com/department-of-veterans-affairs/component-library/blob/main/packages/web-components/src/components/va-text-input/va-text-input.tsx) component as an example of how to integrate the `i18next` functionality.
This is not an exhaustive list so please confirm in the component itself ([some translations may already be available](https://github.com/department-of-veterans-affairs/component-library/blob/main/packages/core/src/i18n/translations/tl.js)):
- (*Required)
- Please enter two digits for the month
- Month
- Please enter two digits for the day
- Please enter four digits for the year
- All months (January, February, March, etc)
Please update the [component translations spreadsheet](https://docs.google.com/spreadsheets/d/1iIg9pP5GG-QFk7DrsyoKJv9GebtYmDuzKf_heqxWks4/edit#gid=0) when complete.
Related Spanish translation ticket: https://github.com/department-of-veterans-affairs/vets-design-system-documentation/issues/1422
## Details
- Design documents: [add links to any design documents]
- Review accessibility considerations added to component design ticket (if any)
- Review [Design System Engineering Best Practices document](https://vfs.atlassian.net/wiki/spaces/DST/pages/2176516116/Design+System+Engineering+Best+Practices)
## Tasks
- [ ] Review DST backlog for outstanding issues with this component, if necessary
- [ ] Create web component and add to Storybook
- [ ] Write any necessary tests
- [ ] Add Chromatic link to #[add accessibility ticket number] and request review from an accessibility specialist
- [ ] Ping designer for design review
- [ ] Display the appropriate [maturity scale](https://design.va.gov/about/maturity-scale) option in Storybook (once this feature is available)
- If this is a new component that has not gone through Staging Review, it should be labeled "Use with Caution: Candidate"
- [ ] Merge component
- [ ] Create a new release of component-library
- [ ] Update component-library dependency in vets-design-system-documentation to get the updated component-docs.json
- [ ] Add analytics set-up to `vets-website` repository. See [guidance here](https://vfs.atlassian.net/wiki/spaces/DST/pages/2079817745/Component+development+process#Analytics%5BinlineExtension%5D).
## Acceptance Criteria
- [ ] Component is written and added to Storybook
- [ ] Component has had accessibility and design reviews
- [ ] Design.va.gov has the latest version of component-library
- [ ] Analytics has been configured for the component in the `vets-website` repo
|
1.0
|
Tagalog translations for va-date - ## Description
The [`va-date` component](https://design.va.gov/storybook/?path=/docs/components-va-date--default) needs Tagalog translations for all of the currently hard-coded English text, and that text needs to be updated in the component to display using `i18next` (if it has not been updated yet). Use the [va-text-input](https://github.com/department-of-veterans-affairs/component-library/blob/main/packages/web-components/src/components/va-text-input/va-text-input.tsx) component as an example of how to integrate the `i18next` functionality.
This is not an exhaustive list so please confirm in the component itself ([some translations may already be available](https://github.com/department-of-veterans-affairs/component-library/blob/main/packages/core/src/i18n/translations/tl.js)):
- (*Required)
- Please enter two digits for the month
- Month
- Please enter two digits for the day
- Please enter four digits for the year
- All months (January, February, March, etc)
Please update the [component translations spreadsheet](https://docs.google.com/spreadsheets/d/1iIg9pP5GG-QFk7DrsyoKJv9GebtYmDuzKf_heqxWks4/edit#gid=0) when complete.
Related Spanish translation ticket: https://github.com/department-of-veterans-affairs/vets-design-system-documentation/issues/1422
## Details
- Design documents: [add links to any design documents]
- Review accessibility considerations added to component design ticket (if any)
- Review [Design System Engineering Best Practices document](https://vfs.atlassian.net/wiki/spaces/DST/pages/2176516116/Design+System+Engineering+Best+Practices)
## Tasks
- [ ] Review DST backlog for outstanding issues with this component, if necessary
- [ ] Create web component and add to Storybook
- [ ] Write any necessary tests
- [ ] Add Chromatic link to #[add accessibility ticket number] and request review from an accessibility specialist
- [ ] Ping designer for design review
- [ ] Display the appropriate [maturity scale](https://design.va.gov/about/maturity-scale) option in Storybook (once this feature is available)
- If this is a new component that has not gone through Staging Review, it should be labeled "Use with Caution: Candidate"
- [ ] Merge component
- [ ] Create a new release of component-library
- [ ] Update component-library dependency in vets-design-system-documentation to get the updated component-docs.json
- [ ] Add analytics set-up to `vets-website` repository. See [guidance here](https://vfs.atlassian.net/wiki/spaces/DST/pages/2079817745/Component+development+process#Analytics%5BinlineExtension%5D).
## Acceptance Criteria
- [ ] Component is written and added to Storybook
- [ ] Component has had accessibility and design reviews
- [ ] Design.va.gov has the latest version of component-library
- [ ] Analytics has been configured for the component in the `vets-website` repo
|
non_process
|
tagalog translations for va date description the needs tagalog translations for all of the currently hard coded english text and that text updated in the component to display using if it has not been updated yet use the component as an example of how to integrate the functionality this is not an exhaustive list so please confirm in the component itself required please enter two digits for the month month please enter two digits for the day please enter four digits for the year all months january february march etc please update the when complete related spanish translation ticket details design documents review accessibility considerations added to component design ticket if any review tasks review dst backlog for outstanding issues with this component if necessary create web component and add to storybook write any necessary tests add chromatic link to and request review from an accessibility specialist ping designer for design review display the appropriate option in storybook once this feature is available if this is a new component that has not gone through staging review it should be labeled use with caution candidate merge component create a new release of component library update component library dependency in vets design system documentation to get the updated component docs json add analytics set up to vets website repository see acceptance criteria component is written and added to storybook component has had accessibility and design reviews design va gov has the latest version of component library analytics has been configured for the component in the vets website repo
| 0
|
3,871
| 6,808,648,250
|
IssuesEvent
|
2017-11-04 06:10:46
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
Non field tokens are not documented in ethslurp
|
apps-ethslurp status-inprocess type-enhancement
|
Like [{NOW}], [{DATE:MONTH}] [{ADDR}] [{HEADER}][{RECORDS}] etc.
Also--these should be able to accept arbitrary internal text such as [Generated on {NOW}.] Perhaps this requires a new class called something like 'Generic'
Now that Slurp is a full fledged class it can take many additional fields for summary:
name
nTransactions
toValue
avgValue
largestValue
smallestValue
totalGasUsed
averageGasPrice
etc.
From https://github.com/Great-Hill-Corporation/ethslurp/issues/23
|
1.0
|
Non field tokens are not documented in ethslurp - Like [{NOW}], [{DATE:MONTH}] [{ADDR}] [{HEADER}][{RECORDS}] etc.
Also--these should be able to accept arbitrary internal text such as [Generated on {NOW}.] Perhaps this requires a new class called something like 'Generic'
Now that Slurp is a full fledged class it can take many additional fields for summary:
name
nTransactions
toValue
avgValue
largestValue
smallestValue
totalGasUsed
averageGasPrice
etc.
From https://github.com/Great-Hill-Corporation/ethslurp/issues/23
|
process
|
non field tokens are not documented in ethslurp like etc also these should be able to accept arbitrary internal text such as perhaps this requires a new class called something like generic now that slurp is a full fledged class it can take many additional fields for summary name ntransactions tovalue avgvalue largestvalue smallestvalue totalgasused averagegasprice etc from
| 1
|
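The ethslurp issue above sketches a token language ([{NOW}], [{DATE:MONTH}], plus arbitrary surrounding text such as [Generated on {NOW}.]). A minimal Python sketch of that "Generic" idea, not ethslurp's actual implementation; the token table is illustrative only.

```python
import re
from datetime import datetime

# Illustrative token table; the real token set would be much larger.
TOKENS = {
    "NOW": lambda: datetime.now().isoformat(timespec="seconds"),
    "DATE:MONTH": lambda: datetime.now().strftime("%B"),
}

def expand(template):
    """Replace {TOKEN} occurrences, leaving unknown tokens untouched."""
    return re.sub(
        r"\{([^{}]+)\}",
        lambda m: TOKENS[m.group(1)]() if m.group(1) in TOKENS else m.group(0),
        template,
    )

print(expand("[Generated on {NOW}.]"))  # e.g. [Generated on 2017-11-04T05:44:42.]
```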
18,475
| 3,691,465,806
|
IssuesEvent
|
2016-02-26 00:10:41
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
opened
|
e2e flake: failed to create instance template resource already exists
|
kind/flake team/test-infra
|
http://kubekins.dls.corp.google.com:8081/job/kubernetes-pull-build-test-e2e-gce/30552/console
```
Attempt 5 failed to create instance template e2e-gce-builder-3-1-minion-template. Retrying.
Attempt 6 to create e2e-gce-builder-3-1-minion-template
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks/persistent-disks#pdperformance.
ERROR: (gcloud.compute.instance-templates.create) Some requests did not succeed:
- The resource 'projects/kubernetes-jenkins-pull/global/instanceTemplates/e2e-gce-builder-3-1-minion-template' already exists
```
|
1.0
|
e2e flake: failed to create instance template resource already exists - http://kubekins.dls.corp.google.com:8081/job/kubernetes-pull-build-test-e2e-gce/30552/console
```
Attempt 5 failed to create instance template e2e-gce-builder-3-1-minion-template. Retrying.
Attempt 6 to create e2e-gce-builder-3-1-minion-template
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks/persistent-disks#pdperformance.
ERROR: (gcloud.compute.instance-templates.create) Some requests did not succeed:
- The resource 'projects/kubernetes-jenkins-pull/global/instanceTemplates/e2e-gce-builder-3-1-minion-template' already exists
```
|
non_process
|
flake failed to create instance template resource already exists attempt failed to create instance template gce builder minion template retrying attempt to create gce builder minion template warning you have selected a disk size of under this may result in poor i o performance for more information see error gcloud compute instance templates create some requests did not succeed the resource projects kubernetes jenkins pull global instancetemplates gce builder minion template already exists
| 0
|
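The flake above is a retry loop that is not idempotent: an earlier attempt creates the template but is judged failed, so the next attempt hits "already exists". A hedged sketch of one way to tolerate that, treating "already exists" as success; this is not the actual kubernetes-jenkins harness code, only the `gcloud compute instance-templates create` command from the log.

```python
import subprocess

def create_instance_template(name):
    # Treat "already exists" as success: a previous attempt may have
    # created the resource before the harness declared it failed.
    result = subprocess.run(
        ["gcloud", "compute", "instance-templates", "create", name],
        capture_output=True, text=True)
    if result.returncode == 0 or "already exists" in result.stderr:
        return
    raise RuntimeError(f"create failed: {result.stderr.strip()}")
```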
429
| 2,859,212,587
|
IssuesEvent
|
2015-06-03 09:15:51
|
genomizer/genomizer-server
|
https://api.github.com/repos/genomizer/genomizer-server
|
opened
|
UNKNOWN_AUTHOR for processing
|
api BL Medium priority Processing
|
A field "author should be added to the `processCommands` API entry and Process instance in process pool updated accordingly
|
1.0
|
UNKNOWN_AUTHOR for processing - A field "author" should be added to the `processCommands` API entry and the Process instance in the process pool updated accordingly
|
process
|
unknown author for processing a field author should be added to the processcommands api entry and process instance in process pool updated accordingly
| 1
|
86,888
| 3,734,679,875
|
IssuesEvent
|
2016-03-08 08:31:52
|
ufal/lindat-dspace
|
https://api.github.com/repos/ufal/lindat-dspace
|
closed
|
Showing i18n keys instead of values
|
bug high priority
|
Submission -> review
```
input_forms.value_pairs.metashare_detailed_toolService.tool
input_forms.value_pairs.common_types.toolService
```
|
1.0
|
Showing i18n keys instead of values - Submission -> review
```
input_forms.value_pairs.metashare_detailed_toolService.tool
input_forms.value_pairs.common_types.toolService
```
|
non_process
|
showing keys instead of values submission review input forms value pairs metashare detailed toolservice tool input forms value pairs common types toolservice
| 0
|
17,484
| 23,301,272,169
|
IssuesEvent
|
2022-08-07 11:08:17
|
Battle-s/battle-school-backend
|
https://api.github.com/repos/Battle-s/battle-school-backend
|
closed
|
[FEAT] Swagger setup
|
feature :computer: processing :hourglass_flowing_sand:
|
## Description
> Write a description of the issue. Ideally, write it together with the assignee.
## Checklist
> List the conditions required to close the issue as checkboxes.
- Swagger setup
## References
> Add any reference materials needed to resolve the issue.
## Related discussion
> If there was any discussion about the issue, briefly summarize it here.
|
1.0
|
[FEAT] Swagger setup - ## Description
> Write a description of the issue. Ideally, write it together with the assignee.
## Checklist
> List the conditions required to close the issue as checkboxes.
- Swagger setup
## References
> Add any reference materials needed to resolve the issue.
## Related discussion
> If there was any discussion about the issue, briefly summarize it here.
|
process
|
swagger setup description write a description of the issue ideally write it together with the assignee checklist list the conditions required to close the issue as checkboxes swagger setup references add any reference materials needed to resolve the issue related discussion if there was any discussion about the issue briefly summarize it here
| 1
|
66,459
| 20,201,944,547
|
IssuesEvent
|
2022-02-11 16:04:45
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
opened
|
Content releases fail with "AMI <id...> is pending, and cannot be run"
|
Defect Unplanned work
|
## Describe the defect
Late in the day on 02/10, content releases started failing with errors like the following:
<img width="749" alt="Screen Shot 2022-02-10 at 5 51 20 PM" src="https://user-images.githubusercontent.com/1318579/153625531-e7f9e4c9-1f7e-4e4f-8c5c-6988a79f419c.png">
This ended up being a result of incorrect syntax in the content release GitHub action.
### CMS Team
Please check the team(s) that will do this work.
- [ ] `CMS Program`
- [x] `Platform CMS Team`
- [ ] `Sitewide CMS Team ` (leave Sitewide unchecked and check the specific team instead)
- [ ] `Content ops`
- [ ] `CMS experience`
- [ ] `Offices`
- [ ] `Product support`
- [ ] `User support`
|
1.0
|
Content releases fail with "AMI <id...> is pending, and cannot be run" - ## Describe the defect
Late in the day on 02/10, content releases started failing with errors like the following:
<img width="749" alt="Screen Shot 2022-02-10 at 5 51 20 PM" src="https://user-images.githubusercontent.com/1318579/153625531-e7f9e4c9-1f7e-4e4f-8c5c-6988a79f419c.png">
This ended up being a result of incorrect syntax in the content release GitHub action.
### CMS Team
Please check the team(s) that will do this work.
- [ ] `CMS Program`
- [x] `Platform CMS Team`
- [ ] `Sitewide CMS Team ` (leave Sitewide unchecked and check the specific team instead)
- [ ] `Content ops`
- [ ] `CMS experience`
- [ ] `Offices`
- [ ] `Product support`
- [ ] `User support`
|
non_process
|
content releases fail with ami is pending and cannot be run describe the defect late in the day on content releases started failing with errors like the following img width alt screen shot at pm src this ended up being a result of incorrect syntax in the content release github action cms team please check the team s that will do this work cms program platform cms team sitewide cms team leave sitewide unchecked and check the specific team instead content ops cms experience offices product support user support
| 0
|
15,467
| 19,681,108,709
|
IssuesEvent
|
2022-01-11 16:50:46
|
alexrp/system-terminal
|
https://api.github.com/repos/alexrp/system-terminal
|
opened
|
Provide convenient APIs for piping between child processes
|
type: feature state: approved area: samples area: processes
|
It should be easy to pipe from one child process to another, as well as to/from files, streams, collections, etc.
Overloaded operators such as `|`, `>`, and `<` should be provided to easily achieve shell-like piping semantics.
|
1.0
|
Provide convenient APIs for piping between child processes - It should be easy to pipe from one child process to another, as well as to/from files, streams, collections, etc.
Overloaded operators such as `|`, `>`, and `<` should be provided to easily achieve shell-like piping semantics.
|
process
|
provide convenient apis for piping between child processes it should be easy to pipe from one child process to another as well as to from files streams collections etc overloaded operators such as and should be provided to easily achieve shell like piping semantics
| 1
|
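For comparison with the system-terminal feature requested above, the same shell-like semantics (`a | b > file`) look like this in plain Python using only the standard library; this is an analogy, not the system-terminal API.

```python
import subprocess

# Rough equivalent of the shell pipeline `ls | sort > out.txt` (Unix).
ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
with open("out.txt", "w") as out:
    subprocess.run(["sort"], stdin=ls.stdout, stdout=out, check=True)
ls.stdout.close()
ls.wait()
```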
3,857
| 6,808,623,387
|
IssuesEvent
|
2017-11-04 05:44:42
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
Testing in the presence of a cache
|
build-cmake status-inprocess type-enhancement
|
We definitely want to remove all 'temporal' data from the 'gold' test cases, but in the presence of a cache, this becomes complicated.
When one runs a test case for the first time (with ethslurp for example), the behaviour of the program is different than if one runs it thereafter. For example, ethslurp needs to download the transactions from http://etherscan.io the first time, but it reads that same data from its cache thereafter.
I think we need a CMakeLists.txt function called something like `clearCacheEntry` that takes an Ethereum address. It will simply delete the cache file so that each test case starts in the same state.
Here's a problem: some of the accounts we are using for testing may be accounts that I need for other (unrelated) work. If we remove the caches, my other code will have to re-create them. This is solved by using 'rare' accounts such as EEAR (use `ethName -a EEAR` to find address). EEAR was created about a year ago, and has not been used since.
You can tell if a test case has this problem by removing the file ~/.quickBlocks/slurps/ADDRESS.bin and ~/.quickBlocks/abis/ADDRESS.json or .abi prior to the test. If you get different results on the first run after deleting than on the second, this problem is present.
TODO: Write a function in CMakeList (or a python script -- I would prefer a python script so I can use it from the command line) to delete an address from the cache.
|
1.0
|
Testing in the presence of a cache - We definitely want to remove all 'temporal' data from the 'gold' test cases, but in the presence of a cache, this becomes complicated.
When one runs a test case for the first time (with ethslurp for example), the behaviour of the program is different than if one runs it thereafter. For example, ethslurp needs to download the transactions from http://etherscan.io the first time, but it reads that same data from its cache thereafter.
I think we need a CMakeLists.txt function called something like `clearCacheEntry` that takes an Ethereum address. It will simply delete the cache file so that each test case starts in the same state.
Here's a problem: some of the accounts we are using for testing may be accounts that I need for other (unrelated) work. If we remove the caches, my other code will have to re-create them. This is solved by using 'rare' accounts such as EEAR (use `ethName -a EEAR` to find address). EEAR was created about a year ago, and has not been used since.
You can tell if a test case has this problem by removing the file ~/.quickBlocks/slurps/ADDRESS.bin and ~/.quickBlocks/abis/ADDRESS.json or .abi prior to the test. If you get different results on the first run after deleting than on the second, this problem is present.
TODO: Write a function in CMakeList (or a python script -- I would prefer a python script so I can use it from the command line) to delete an address from the cache.
|
process
|
testing in the presence of a cache we definitely want to remove all temporal data from the gold test cases but in the presence of a cache this becomes complicated when one runs a test case for the first time with ethslurp for example the behaviour of the program is different than if one runs it thereafter for example ethslurp needs to download the transactions from the first time but it reads that same data from its cache thereafter i think we need a cmakelists txt function called something like clearcacheentry that takes an ethereum address it will simply delete the cache file so that each test case starts in the same state here s a problem some of the accounts we are using for testing may be accounts that i need for other unrelated work if we remove the caches my other code will have to re create them this is solved by using rare accounts such as eear use ethname a eear to find address eear was created about a year ago and has not been used since you can tell if a test case has this problem by removing the file quickblocks slurps address bin and quickblocks abis address json or abi prior to the test if you get different results on the first run after deleting than the second this problem is here todo write a function in cmakelist or a python script i would prefer a python script so i can use it from the command line to delete an address from the cache
| 1
|
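A minimal sketch of the command-line Python script the TODO above asks for. The cache layout (~/.quickBlocks/slurps, ~/.quickBlocks/abis) is taken from the issue text; everything else, including the function name, is an assumption.

```python
import sys
from pathlib import Path

def clear_cache_entry(address):
    # Remove the slurp and ABI cache files for one address so the next
    # test run starts from the same (empty-cache) state.
    root = Path.home() / ".quickBlocks"
    for rel in (f"slurps/{address}.bin",
                f"abis/{address}.json",
                f"abis/{address}.abi"):
        path = root / rel
        if path.exists():
            path.unlink()
            print(f"removed {path}")

if __name__ == "__main__":
    clear_cache_entry(sys.argv[1])
```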
74,232
| 15,325,437,330
|
IssuesEvent
|
2021-02-26 01:18:44
|
idonthaveafifaaddiction/MapLoom
|
https://api.github.com/repos/idonthaveafifaaddiction/MapLoom
|
opened
|
WS-2020-0091 (High) detected in http-proxy-0.10.4.tgz
|
security vulnerability
|
## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-0.10.4.tgz</b></p></summary>
<p>A full-featured http reverse proxy for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-0.10.4.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-0.10.4.tgz</a></p>
<p>Path to dependency file: MapLoom/vendor/angular-ui-router/package.json</p>
<p>Path to vulnerable library: MapLoom/vendor/angular-ui-router/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- karma-0.10.10.tgz (Root Library)
- :x: **http-proxy-0.10.4.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"http-proxy","packageVersion":"0.10.4","packageFilePaths":["/vendor/angular-ui-router/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:0.10.10;http-proxy:0.10.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"http-proxy - 1.18.1 "}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2020-0091","vulnerabilityDetails":"Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.","vulnerabilityUrl":"https://github.com/http-party/node-http-proxy/pull/1447","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2020-0091 (High) detected in http-proxy-0.10.4.tgz - ## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-0.10.4.tgz</b></p></summary>
<p>A full-featured http reverse proxy for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-0.10.4.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-0.10.4.tgz</a></p>
<p>Path to dependency file: MapLoom/vendor/angular-ui-router/package.json</p>
<p>Path to vulnerable library: MapLoom/vendor/angular-ui-router/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- karma-0.10.10.tgz (Root Library)
- :x: **http-proxy-0.10.4.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"http-proxy","packageVersion":"0.10.4","packageFilePaths":["/vendor/angular-ui-router/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:0.10.10;http-proxy:0.10.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"http-proxy - 1.18.1 "}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2020-0091","vulnerabilityDetails":"Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.","vulnerabilityUrl":"https://github.com/http-party/node-http-proxy/pull/1447","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws high detected in http proxy tgz ws high severity vulnerability vulnerable library http proxy tgz a full featured http reverse proxy for node js library home page a href path to dependency file maploom vendor angular ui router package json path to vulnerable library maploom vendor angular ui router node modules http proxy package json dependency hierarchy karma tgz root library x http proxy tgz vulnerable library found in base branch master vulnerability details versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http proxy isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree karma http proxy isminimumfixversionavailable true minimumfixversion http proxy basebranches vulnerabilityidentifier ws vulnerabilitydetails versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function vulnerabilityurl
| 0
|
16,305
| 20,960,721,721
|
IssuesEvent
|
2022-03-27 19:05:27
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Black Hammer, White Lightning
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Black Hammer, White Lightning
Type (film/tv show): film
Film or show in which it appears: Major League II
Is the parent film/show streaming anywhere? Starz, DirectTV, Spectrum on Demand
About when in the parent film/show does it appear? 12:52
Actual footage of the film/show can be seen (yes/no)? Yes
https://www.youtube.com/watch?v=NoZzyL1KEis
|
1.0
|
Add Black Hammer, White Lightning - Please add as much of the following info as you can:
Title: Black Hammer, White Lightning
Type (film/tv show): film
Film or show in which it appears: Major League II
Is the parent film/show streaming anywhere? Starz, DirectTV, Spectrum on Demand
About when in the parent film/show does it appear? 12:52
Actual footage of the film/show can be seen (yes/no)? Yes
https://www.youtube.com/watch?v=NoZzyL1KEis
|
process
|
add black hammer white lightning please add as much of the following info as you can title black hammer white lightning type film tv show film film or show in which it appears major league ii is the parent film show streaming anywhere starz directtv spectrum on demand about when in the parent film show does it appear actual footage of the film show can be seen yes no yes
| 1
|
15,455
| 19,668,481,525
|
IssuesEvent
|
2022-01-11 02:49:51
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
SQLite queries don't work correctly with multiple columns with same name but different case
|
Type:Bug Priority:P2 Database/SQLite Querying/Processor .Correctness
|
e.g. if you try to select a column named `COUNT` and add an aggregation named `count`, you'll only get back one or the other.
Apparently SQLite treats identifiers as case-insensitive even if they are quoted. This was actually the root cause of #14255, but our fix in #18065 was sort of a band-aid fix that worked around the problem by avoiding adding in columns that weren't needed in some cases.
In #19384 when I tore out all the band aid code again the test for #14255 popped back up. With the refactored code I was able to fix it the right way this time and make sure column names are unique for SQLite regardless of case.
|
1.0
|
SQLite queries don't work correctly with multiple columns with same name but different case - e.g. if you try to select a column named `COUNT` and add an aggregation named `count`, you'll only get back one or the other.
Apparently SQLite treats identifiers as case-insensitive even if they are quoted. This was actually the root cause of #14255, but our fix in #18065 was sort of a band-aid fix that worked around the problem by avoiding adding in columns that weren't needed in some cases.
In #19384 when I tore out all the band aid code again the test for #14255 popped back up. With the refactored code I was able to fix it the right way this time and make sure column names are unique for SQLite regardless of case.
|
process
|
sqlite queries don t work correctly with multiple columns with same name but different case e g if you try to select a column named count and add an aggregation named count you ll only get back one or the other apparently sqlite treats identifiers as case insensitive even if they are quoted this was actually the root cause of but our fix in was sort of a band aid fix that worked around the problem by avoiding adding in columns that weren t needed in some cases in when i tore out all the band aid code again the test for popped back up with the refactored code i was able to fix it the right way this time and make sure column names are unique for sqlite regardless of case
| 1
|
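The root cause described in the Metabase issue above is easy to reproduce: SQLite resolves identifiers case-insensitively even when they are quoted. A minimal sketch, assuming a hypothetical table name `t`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t ("COUNT" INTEGER)')
conn.execute("INSERT INTO t VALUES (42)")

# Despite the case mismatch, the quoted identifier "count" still
# resolves to the "COUNT" column instead of raising an error.
print(conn.execute('SELECT "count" FROM t').fetchone())  # (42,)
```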
277,468
| 21,045,559,233
|
IssuesEvent
|
2022-03-31 15:42:42
|
docktie/docktie
|
https://api.github.com/repos/docktie/docktie
|
opened
|
Define managing the project
|
documentation PRIORITY_HIGH
|
Should have answers to the following:
* How to add features that can be in both v0.2.x and v0.1.x
-- add a separate feature branch so it can be merged to both branches when done.
* How many minimum tickets should be done before bumping the version
-- maybe minimum of 10 (minor) or 3 for major ones?
* where is `wip` for each v0.2.x and v0.1.x
-- v0.2.x_wip - ongoing project's wip
-- wip - current stable release (for master) or v0.1.x at the moment
|
1.0
|
Define managing the project - Should have answers to the following:
* How to add features that can be in both v0.2.x and v0.1.x
-- add a separate feature branch so it can be merged to both branches when done.
* How many minimum tickets should be done before bumping the version
-- maybe minimum of 10 (minor) or 3 for major ones?
* where is `wip` for each v0.2.x and v0.1.x
-- v0.2.x_wip - ongoing project's wip
-- wip - current stable release (for master) or v0.1.x at the moment
|
non_process
|
define managing the project should have answers to the following how to add features that can be in both x and x add separate feature branch so it can be merged to both branches when done how many minimum tickets should be done before bumping the version maybe minimum of minor or for major ones where is wip for each x and x x wip ongoing project s wip wip current stable release for master or x at the moment
| 0
|
772,105
| 27,106,507,137
|
IssuesEvent
|
2023-02-15 12:31:05
|
KDT3-MiniProject-8/MiniProject-BE
|
https://api.github.com/repos/KDT3-MiniProject-8/MiniProject-BE
|
closed
|
Feat: Member service - develop member info update/retrieval
|
For: API Priority: Medium Status: In Progress Type: Feature
|
## Description
Develop member info update
Develop member info retrieval
## Tasks (New feature)
- [ ] Member info update feature
- [ ] Encrypt the password so that it can be updated
- [ ] Member info retrieval feature
## References
|
1.0
|
Feat: Member service - develop member info update/retrieval - ## Description
Develop member info update
Develop member info retrieval
## Tasks (New feature)
- [ ] Member info update feature
- [ ] Encrypt the password so that it can be updated
- [ ] Member info retrieval feature
## References
|
non_process
|
feat member service develop member info update retrieval description develop member info update develop member info retrieval tasks new feature member info update feature encrypt the password so that it can be updated member info retrieval feature references
| 0
|
58,438
| 11,880,540,338
|
IssuesEvent
|
2020-03-27 10:52:42
|
fac19/week4-JICG
|
https://api.github.com/repos/fac19/week4-JICG
|
opened
|
Nice 404 page!
|
code review
|
LOL. Though chrome complains `SameSite=None` and `Secure` headers ought to be set.
|
1.0
|
Nice 404 page! - LOL. Though chrome complains `SameSite=None` and `Secure` headers ought to be set.
|
non_process
|
nice page lol though chrome complains samesite none and secure headers ought to be set
| 0
|
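For reference, the cookie attributes Chrome warns about in the review above look like this when set from Python's standard library (the cookie name and value are made up); a cookie used in a cross-site context needs both `SameSite=None` and `Secure`.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"            # hypothetical cookie name/value
cookie["session"]["samesite"] = "None"  # samesite support requires Python 3.8+
cookie["session"]["secure"] = True
print(cookie.output())  # e.g. Set-Cookie: session=abc123; SameSite=None; Secure
```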
368,038
| 25,774,732,570
|
IssuesEvent
|
2022-12-09 10:57:41
|
CardanoSolutions/kupo
|
https://api.github.com/repos/CardanoSolutions/kupo
|
closed
|
Documentation on rollbacks.
|
documentation
|
How does kupo handle rollbacks?
Out of my depth here.
I assume that there are cases where the data provider (node or ogmios?)
change their mind about whether some previously handed over set of blocks are actually on-chain.
I did a quick search for "roll" here and in the docs, but I can't extract an answer to
"What happens when ... "
Will entries appear in a `matches` response before a rollback, and then be removed from the kupo db on rollback, so no longer appear?
|
1.0
|
Documentation on rollbacks. - How does kupo handle rollbacks?
Out of my depth here.
I assume that there are cases where the data provider (node or ogmios?)
change their mind about whether some previously handed over set of blocks are actually on-chain.
I did a quick search for "roll" here and in the docs, but I can't extract an answer to
"What happens when ... "
Will entries appear in a `matches` response before a rollback, and then be removed from the kupo db on rollback, so no longer appear?
|
non_process
|
documentation on rollbacks how does kupo handle rollbacks out of my depth here i assume that there are cases where the data provider node or ogmios change their mind about whether some previously handed over set of blocks are actually on chain i did a quick search for roll here and in the docs but i can t extract an answer to what happens when will entries appear in a matches response before a rollback and then be removed from the kupo db on rollback so no longer appear
| 0
|
19,548
| 25,866,353,970
|
IssuesEvent
|
2022-12-13 21:16:01
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
[multiprocessing] Multiprocessing in spawn mode doesn't work when the target is a method in a unittest.TestCase subclass, when run either with unittest or with pytest
|
type-bug stdlib 3.8 3.7 expert-multiprocessing
|
BPO | [33884](https://bugs.python.org/issue33884)
--- | :---
Nosy | @pitrou, @applio
Files | <li>[mp_pickle_issues.py](https://bugs.python.org/file47645/mp_pickle_issues.py "Uploaded as text/plain at 2018-06-17.10:45:32 by Yoni Rozenshein")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2018-06-17.10:45:32.688>
labels = ['3.7', '3.8', 'type-bug', 'library', 'invalid']
title = "[multiprocessing] Multiprocessing in spawn mode doesn't work when the target is a method in a unittest.TestCase subclass, when run either with unittest or with pytest"
updated_at = <Date 2018-06-17.18:51:59.760>
user = 'https://bugs.python.org/YoniRozenshein'
```
bugs.python.org fields:
```python
activity = <Date 2018-06-17.18:51:59.760>
actor = 'pitrou'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2018-06-17.10:45:32.688>
creator = 'Yoni Rozenshein'
dependencies = []
files = ['47645']
hgrepos = []
issue_num = 33884
keywords = []
message_count = 2.0
messages = ['319811', '319826']
nosy_count = 3.0
nosy_names = ['pitrou', 'davin', 'Yoni Rozenshein']
pr_nums = []
priority = 'normal'
resolution = 'not a bug'
stage = 'resolved'
status = 'pending'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue33884'
versions = ['Python 3.6', 'Python 3.7', 'Python 3.8']
```
</p></details>
|
1.0
|
[multiprocessing] Multiprocessing in spawn mode doesn't work when the target is a method in a unittest.TestCase subclass, when run either with unittest or with pytest - BPO | [33884](https://bugs.python.org/issue33884)
--- | :---
Nosy | @pitrou, @applio
Files | <li>[mp_pickle_issues.py](https://bugs.python.org/file47645/mp_pickle_issues.py "Uploaded as text/plain at 2018-06-17.10:45:32 by Yoni Rozenshein")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2018-06-17.10:45:32.688>
labels = ['3.7', '3.8', 'type-bug', 'library', 'invalid']
title = "[multiprocessing] Multiprocessing in spawn mode doesn't work when the target is a method in a unittest.TestCase subclass, when run either with unittest or with pytest"
updated_at = <Date 2018-06-17.18:51:59.760>
user = 'https://bugs.python.org/YoniRozenshein'
```
bugs.python.org fields:
```python
activity = <Date 2018-06-17.18:51:59.760>
actor = 'pitrou'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2018-06-17.10:45:32.688>
creator = 'Yoni Rozenshein'
dependencies = []
files = ['47645']
hgrepos = []
issue_num = 33884
keywords = []
message_count = 2.0
messages = ['319811', '319826']
nosy_count = 3.0
nosy_names = ['pitrou', 'davin', 'Yoni Rozenshein']
pr_nums = []
priority = 'normal'
resolution = 'not a bug'
stage = 'resolved'
status = 'pending'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue33884'
versions = ['Python 3.6', 'Python 3.7', 'Python 3.8']
```
</p></details>
|
process
|
multiprocessing in spawn mode doesn t work when the target is a method in a unittest testcase subclass when run either with unittest or with pytest bpo nosy pitrou applio files uploaded as text plain at by yoni rozenshein note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title multiprocessing in spawn mode doesn t work when the target is a method in a unittest testcase subclass when run either with unittest or with pytest updated at user bugs python org fields python activity actor pitrou assignee none closed false closed date none closer none components creation creator yoni rozenshein dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution not a bug stage resolved status pending superseder none type behavior url versions
| 1
|
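The "not a bug" resolution of the CPython issue above comes down to pickling: under the spawn start method the target callable must be pickled into the child process, and a bound method drags its whole TestCase instance along with it. A hedged sketch (not the original reproducer) of the usual workaround, moving the target to module level:

```python
import multiprocessing as mp
import unittest

def worker(x):
    # Module-level functions pickle by qualified name, so spawn can
    # import and call them in the child process.
    return x * 2

class SpawnTest(unittest.TestCase):
    def test_spawn_with_module_level_target(self):
        ctx = mp.get_context("spawn")
        # Using `target=self.some_method` here would require pickling
        # `self`, which is what the original report tripped over.
        p = ctx.Process(target=worker, args=(21,))
        p.start()
        p.join()
        self.assertEqual(p.exitcode, 0)

if __name__ == "__main__":
    unittest.main()
```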
13,917
| 16,676,226,095
|
IssuesEvent
|
2021-06-07 16:31:05
|
googleapis/sphinx-docfx-yaml
|
https://api.github.com/repos/googleapis/sphinx-docfx-yaml
|
closed
|
Allow generate-docs Kokoro job to run for multi-version releases on failures
|
type: process
|
Currently, it is likely expected for the `generate-docs` Kokoro job to fail on multi-version releases when the option `GITHUB_TAG=all` is given. While the option is supported and will retrieve all the tags and run against all releases, there probably are releases where it doesn't have nox builds for `docs` or `docfx` jobs, which would fail the run prematurely.
This issue should be closed if the Kokoro job doesn't fail on all the tags for every repo, or once it is updated to not fail on the tags (should skip on tags that we don't expect it to be upgraded on).
|
1.0
|
Allow generate-docs Kokoro job to run for multi-version releases on failures - Currently, it is likely expected for the `generate-docs` Kokoro job to fail on multi-version releases when the option `GITHUB_TAG=all` is given. While the option is supported and will retrieve all the tags and run against all releases, there probably are releases where it doesn't have nox builds for `docs` or `docfx` jobs, which would fail the run prematurely.
This issue should be closed if the Kokoro job doesn't fail on all the tags for every repo, or once it is updated to not fail on the tags (should skip on tags that we don't expect it to be upgraded on).
|
process
|
allow generate docs kokoro job to run for multi version releases on failures currently it is likely expected for the generate docs kokoro job to fail on multi version releases when the option github tag all is given while the option is supported and will retrieve all the tags and run against all releases there probably are releases where it doesn t have nox builds for docs or docfx jobs which would fail the run prematurely this issue should be closed if the kokoro job doesn t fail on all the tags for every repo or once it is updated to not fail on the tags should skip on tags that we don t expect it to be upgraded on
| 1
|
22,543
| 31,717,826,051
|
IssuesEvent
|
2023-09-10 03:48:59
|
Flow-Glow/Code-Jam-2023-Async-Aggregators
|
https://api.github.com/repos/Flow-Glow/Code-Jam-2023-Async-Aggregators
|
closed
|
Real time motion distortions
|
image processing
|
Task to come up with various types of motion distortions (spike effect, swirl effect, etc) and how to apply them to an image in a quick way for real time rendering.
If I can manage to render them very quickly, any movement of the slider to adjust one of these effects could be animated with multiple frames as the image approaches the newly selected values.
|
1.0
|
Real time motion distortions - Task to come up with various types of motion distortions (spike effect, swirl effect, etc) and how to apply them to an image in a quick way for real time rendering.
If I can manage to render them very quickly, any movement of the slider to adjust one of these effects could be animated with multiple frames as the image approaches the newly selected values.
|
process
|
real time motion distortions task to come up with various types of motion distortions spike effect swirl effect etc and how to apply them to an image in a quick way for real time rendering if i can manage to render them very quickly any movement of the slider to adjust one of these effects could be animated with multiple frames as the image approaches the newly selected values
| 1
|
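As an illustration of the kind of distortion the issue above describes, a swirl can be implemented as a single coordinate remap, which is usually cheap enough to recompute per frame. This sketch assumes NumPy and SciPy and a 2D grayscale image array; it is not the project's actual code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def swirl(image, strength=3.0, radius=100.0):
    # Rotate each pixel's sampling position around the image centre by
    # an angle that decays with distance from the centre.
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = h / 2.0, w / 2.0
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) + strength * np.exp(-r / radius)
    # Bilinear resampling at the swirled source coordinates.
    return map_coordinates(image, [cy + r * np.sin(theta), cx + r * np.cos(theta)],
                           order=1, mode="nearest")
```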
63,589
| 15,651,691,210
|
IssuesEvent
|
2021-03-23 10:29:18
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Bug] [Property Pane] Dropdown control should get icon and sub text
|
Bug High Property Pane QA Quick effort UI Building Pod
|
## Description
This was not covered in the redesign of the property pane, but a dropdown option can also show an icon and some subtext
<img width="258" alt="Screenshot 2021-03-16 at 2 48 02 PM" src="https://user-images.githubusercontent.com/12022471/111288844-9175bd80-866a-11eb-82c5-5c2dbbc258eb.png">
|
1.0
|
[Bug] [Property Pane] Dropdown control should get icon and sub text - ## Description
This was not covered in the redesign of the property pane, but a dropdown option can also show an icon and some subtext
<img width="258" alt="Screenshot 2021-03-16 at 2 48 02 PM" src="https://user-images.githubusercontent.com/12022471/111288844-9175bd80-866a-11eb-82c5-5c2dbbc258eb.png">
|
non_process
|
dropdown control should get icon and sub text description this was not covered in the redesign of the property pane but a dropdown option can also show an icon and some subtext img width alt screenshot at pm src
| 0
|
15,248
| 19,186,043,863
|
IssuesEvent
|
2021-12-05 07:51:24
|
DSE511-Project3-Team/DSE511-Project-3-Code-Repo
|
https://api.github.com/repos/DSE511-Project3-Team/DSE511-Project-3-Code-Repo
|
closed
|
Data: Update Download Raw Data Script
|
Preprocess
|
We need to update the raw data generation script to directly download data from a web URL. This will help streamline the process.
|
1.0
|
Data: Update Download Raw Data Script - We need to update the raw data generation script to directly download data from a web URL. This will help streamline the process.
|
process
|
data update download raw data script we need to update the raw data generation script to directly download data from a web url this will help streamline the process
| 1
|
9,710
| 12,705,222,981
|
IssuesEvent
|
2020-06-23 03:55:21
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
CUDA IPC garbage collection hangs when disposing of LSTMs
|
module: cuda module: multiprocessing topic: deadlock triaged
|
## 🐛 Bug
Copying an LSTM's state dict to another process via a torch Queue will hang when the source process tries to dispose of the underlying tensors.
## To Reproduce
Below we have two processes. The learner process passes an LSTM's state dict to the actor process via a Queue.
```python
import os
from torch import nn
from torch.multiprocessing import Queue, Process, set_start_method
from time import sleep
from copy import deepcopy

def actor(queue):
    print(f'Actor started on PID #{os.getpid()}')
    while True:
        if not queue.empty():
            queue.get()
            print('Actor stepped')
        sleep(.01)

def learner(queue):
    print(f'Learner started on PID #{os.getpid()}')
    net = nn.LSTM(1, 1).cuda()
    while True:
        if not queue.full():
            queue.put(deepcopy(net.state_dict()))
            print('Learner stepped')
        sleep(.01)

def run():
    queue = Queue(1)
    processes = dict(
        a=Process(target=actor, args=(queue,)),
        l=Process(target=learner, args=(queue,)))
    for p in processes.values():
        p.start()
    for p in processes.values():
        p.join()

if __name__ == '__main__':
    set_start_method('spawn')
    run()
```
When run, the actor and learner will each loop a few times and then the learner process will hang. The output will be something like
```
Learner started on PID #1193
Actor started on PID #1192
Learner stepped
Learner stepped
Actor stepped
Actor stepped
Learner stepped
```
## Expected behavior
The expected behaviour is that the learner doesn't hang, and the actor/learner stepped messages carry on forever. You can verify the script itself isn't broken by removing the `.cuda()` call - it works fine with a CPU network.
## Environment
```
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 430.40
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.2.0
[pip] torchfile==0.1.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.0.2 py37h7b6447c_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchfile 0.1.0 pypi_0 pypi
```
## Additional context
The bug goes away when using a CPU LSTM rather than a CUDA LSTM
The bug goes away when using an `nn.Linear(1, 1)` rather than an LSTM
In my research code, the bug takes longer to turn up when the `deepcopy` is removed, but it still appears eventually.
Attaching GDB to the learner process after the hang, `info threads` will show that some thread is sitting at `__lll_lock_wait ()`, and checking that thread's backtrace gets
```
#0 0x00007fa14e96b10d in __lll_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007fa14e964023 in pthread_mutex_lock () from /lib/x86_64-linux-gnu/libpthread.so.0
#2 0x00007fa13fe19307 in torch::(anonymous namespace)::CudaIPCSentDataLimbo::collect() () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#3 0x00007fa13fe195ec in torch::(anonymous namespace)::CudaIPCSentDataDelete(void*) () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#4 0x00007fa13fe17131 in torch::CudaIPCSentData::~CudaIPCSentData() () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#5 0x00007fa13fe19365 in torch::(anonymous namespace)::CudaIPCSentDataLimbo::collect() () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#6 0x00007fa13fe195d8 in torch::(anonymous namespace)::CudaIPCSentDataDelete(void*) () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#7 0x00007fa10ebb9fa4 in c10::TensorImpl::release_resources() [clone .localalias.182] () from /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so
#8 0x00007fa13fcba014 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_() () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#9 0x00007fa13ff0042b in THPVariable_clear(THPVariable*) () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#10 0x00007fa13ff00461 in THPVariable_dealloc(THPVariable*) () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#11 0x0000562bd5e2698f in subtype_dealloc () at /tmp/build/80754af9/python_1553721932202/work/Objects/typeobject.c:1256
#12 0x0000562bd5d88a0c in free_keys_object (keys=0x7fa0fb6e24c8) at /tmp/build/80754af9/python_1553721932202/work/Objects/dictobject.c:559
#13 dict_dealloc () at /tmp/build/80754af9/python_1553721932202/work/Objects/dictobject.c:1913
#14 0x0000562bd5e90521 in odict_dealloc () at /tmp/build/80754af9/python_1553721932202/work/Objects/odictobject.c:1376
```
so the issue seems to arise from the locks taken by `CudaIPCSentDataLimbo::collect()`.
|
1.0
|
CUDA IPC garbage collection hangs when disposing of LSTMs - ## 🐛 Bug
Copying an LSTM's state dict to another process via a torch Queue will hang when the source process tries to dispose of the underlying tensors.
## To Reproduce
Below we have two processes. The learner process passes an LSTM's state dict to the actor process via a Queue.
```python
import os
from torch import nn
from torch.multiprocessing import Queue, Process, set_start_method
from time import sleep
from copy import deepcopy

def actor(queue):
    print(f'Actor started on PID #{os.getpid()}')
    while True:
        if not queue.empty():
            queue.get()
            print('Actor stepped')
        sleep(.01)

def learner(queue):
    print(f'Learner started on PID #{os.getpid()}')
    net = nn.LSTM(1, 1).cuda()
    while True:
        if not queue.full():
            queue.put(deepcopy(net.state_dict()))
            print('Learner stepped')
        sleep(.01)

def run():
    queue = Queue(1)
    processes = dict(
        a=Process(target=actor, args=(queue,)),
        l=Process(target=learner, args=(queue,)))
    for p in processes.values():
        p.start()
    for p in processes.values():
        p.join()

if __name__ == '__main__':
    set_start_method('spawn')
    run()
```
When run, the actor and learner will each loop a few times and then the learner process will hang. The output will be something like
```
Learner started on PID #1193
Actor started on PID #1192
Learner stepped
Learner stepped
Actor stepped
Actor stepped
Learner stepped
```
## Expected behavior
The expected behaviour is that the learner doesn't hang, and the actor/learner stepped messages carry on forever. You can verify the script itself isn't broken by removing the `.cuda()` call - it works fine with a CPU network.
## Environment
```
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 430.40
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.2.0
[pip] torchfile==0.1.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.0.2 py37h7b6447c_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchfile 0.1.0 pypi_0 pypi
```
## Additional context
The bug goes away when using a CPU LSTM rather than a CUDA LSTM
The bug goes away when using an `nn.Linear(1, 1)` rather than an LSTM
In my research code, the bug takes longer to turn up when the `deepcopy` is removed, but it still appears eventually.
Attaching GDB to the learner process after the hang, `info threads` will show that some thread is sitting at `__lll_lock_wait ()`, and checking that thread's backtrace gets
```
#0 0x00007fa14e96b10d in __lll_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007fa14e964023 in pthread_mutex_lock () from /lib/x86_64-linux-gnu/libpthread.so.0
#2 0x00007fa13fe19307 in torch::(anonymous namespace)::CudaIPCSentDataLimbo::collect() () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#3 0x00007fa13fe195ec in torch::(anonymous namespace)::CudaIPCSentDataDelete(void*) () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#4 0x00007fa13fe17131 in torch::CudaIPCSentData::~CudaIPCSentData() () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#5 0x00007fa13fe19365 in torch::(anonymous namespace)::CudaIPCSentDataLimbo::collect() () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#6 0x00007fa13fe195d8 in torch::(anonymous namespace)::CudaIPCSentDataDelete(void*) () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#7 0x00007fa10ebb9fa4 in c10::TensorImpl::release_resources() [clone .localalias.182] () from /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so
#8 0x00007fa13fcba014 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_() () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#9 0x00007fa13ff0042b in THPVariable_clear(THPVariable*) () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#10 0x00007fa13ff00461 in THPVariable_dealloc(THPVariable*) () from /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so
#11 0x0000562bd5e2698f in subtype_dealloc () at /tmp/build/80754af9/python_1553721932202/work/Objects/typeobject.c:1256
#12 0x0000562bd5d88a0c in free_keys_object (keys=0x7fa0fb6e24c8) at /tmp/build/80754af9/python_1553721932202/work/Objects/dictobject.c:559
#13 dict_dealloc () at /tmp/build/80754af9/python_1553721932202/work/Objects/dictobject.c:1913
#14 0x0000562bd5e90521 in odict_dealloc () at /tmp/build/80754af9/python_1553721932202/work/Objects/odictobject.c:1376
```
so the issue seems to arise from the locks taken by `CudaIPCSentDataLimbo::collect()`.
|
process
|
cuda ipc garbage collection hangs when disposing of lstms π bug copying a lstm s state dict to another process via a torch queue will hang when the source process tries to dispose of the underlying tensors to reproduce below we have two processes the learner process passes an lstm s state dict to the actor process via a queue python import os from torch import nn from torch multiprocessing import queue process set start method from time import sleep from copy import deepcopy def actor queue print f actor started on pid os getpid while true if not queue empty queue get print actor stepped sleep def learner queue print f learner started on pid os getpid net nn lstm cuda while true if not queue full queue put deepcopy net state dict print learner stepped sleep def run queue queue processes dict a process target actor args queue l process target learner args queue for p in processes values p start for p in processes values p join if name main set start method spawn run when run the actor and learner will each loop a few times an then the learner process will hang the output will be something like learner started on pid actor started on pid learner stepped learner stepped actor stepped actor stepped learner stepped expected behavior the expected behaviour is that the learner doesn t hang and the actor learner stepped messages carry on forever you can verify the script itself isn t broken by removing the cuda call it works fine with a cpu network environment pytorch version is debug build no cuda used to build pytorch os ubuntu lts gcc version ubuntu cmake version could not collect python version is cuda available yes cuda runtime version gpu models and configuration gpu geforce rtx ti nvidia driver version cudnn version could not collect versions of relevant libraries numpy torch torchfile blas mkl mkl mkl service mkl fft mkl random pytorch pytorch torchfile pypi pypi additional context the bug goes away when using a cpu lstm rather than a cuda lstm the bug goes away when using a nn linear rather than a lstm in my research code the bug takes longer to turn up when the deepcopy is removed but it still appears eventually attaching gdb to the learner process after the hang info threads will show that some thread is sitting at lll lock wait and checking that thread s backtrace gets in lll lock wait from lib linux gnu libpthread so in pthread mutex lock from lib linux gnu libpthread so in torch anonymous namespace cudaipcsentdatalimbo collect from opt conda lib site packages torch lib libtorch python so in torch anonymous namespace cudaipcsentdatadelete void from opt conda lib site packages torch lib libtorch python so in torch cudaipcsentdata cudaipcsentdata from opt conda lib site packages torch lib libtorch python so in torch anonymous namespace cudaipcsentdatalimbo collect from opt conda lib site packages torch lib libtorch python so in torch anonymous namespace cudaipcsentdatadelete void from opt conda lib site packages torch lib libtorch python so in tensorimpl release resources from opt conda lib site packages torch lib so in intrusive ptr reset from opt conda lib site packages torch lib libtorch python so in thpvariable clear thpvariable from opt conda lib site packages torch lib libtorch python so in thpvariable dealloc thpvariable from opt conda lib site packages torch lib libtorch python so in subtype dealloc at tmp build python work objects typeobject c in free keys object keys at tmp build python work objects dictobject c dict dealloc at tmp build python work objects dictobject c in 
odict dealloc at tmp build python work objects odictobject c so the issue seems to arise from the locks taken by cudaipcsentdatalimbo collect
| 1
|
17,809
| 23,737,346,686
|
IssuesEvent
|
2022-08-31 09:16:06
|
threefoldtech/tfgrid_dashboard
|
https://api.github.com/repos/threefoldtech/tfgrid_dashboard
|
closed
|
UI changes - Shades of blue need to be the same
|
process_wontfix type_bug
|
Looks a bit off with the two shades in the title and the account

|
1.0
|
UI changes - Shades of blue need to be the same - Looks a bit off with the two shades in the title and the account

|
process
|
ui changes shades of blue need to be the same looks a bit off with the two shades in the title and the account
| 1
|
87,927
| 15,790,357,610
|
IssuesEvent
|
2021-04-02 01:14:21
|
hudsonnog/locadora
|
https://api.github.com/repos/hudsonnog/locadora
|
opened
|
CVE-2019-0232 (High) detected in tomcat-embed-core-8.5.23.jar
|
security vulnerability
|
## CVE-2019-0232 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.23.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: locadora/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.23/tomcat-embed-core-8.5.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.0.M5.jar (Root Library)
- spring-boot-starter-tomcat-2.0.0.M5.jar
- :x: **tomcat-embed-core-8.5.23.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When running on Windows with enableCmdLineArguments enabled, the CGI Servlet in Apache Tomcat 9.0.0.M1 to 9.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 is vulnerable to Remote Code Execution due to a bug in the way the JRE passes command line arguments to Windows. The CGI Servlet is disabled by default. The CGI option enableCmdLineArguments is disabled by default in Tomcat 9.0.x (and will be disabled by default in all versions in response to this vulnerability). For a detailed explanation of the JRE behaviour, see Markus Wulftange's blog (https://codewhitesec.blogspot.com/2016/02/java-and-command-line-injections-in-windows.html) and this archived MSDN blog (https://web.archive.org/web/20161228144344/https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/).
<p>Publish Date: 2019-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232>CVE-2019-0232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232</a></p>
<p>Release Date: 2019-04-15</p>
<p>Fix Resolution: 9.0.18,8.5.40,7.0.94</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-0232 (High) detected in tomcat-embed-core-8.5.23.jar - ## CVE-2019-0232 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.23.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: locadora/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.23/tomcat-embed-core-8.5.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.0.M5.jar (Root Library)
- spring-boot-starter-tomcat-2.0.0.M5.jar
- :x: **tomcat-embed-core-8.5.23.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When running on Windows with enableCmdLineArguments enabled, the CGI Servlet in Apache Tomcat 9.0.0.M1 to 9.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 is vulnerable to Remote Code Execution due to a bug in the way the JRE passes command line arguments to Windows. The CGI Servlet is disabled by default. The CGI option enableCmdLineArguments is disabled by default in Tomcat 9.0.x (and will be disabled by default in all versions in response to this vulnerability). For a detailed explanation of the JRE behaviour, see Markus Wulftange's blog (https://codewhitesec.blogspot.com/2016/02/java-and-command-line-injections-in-windows.html) and this archived MSDN blog (https://web.archive.org/web/20161228144344/https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/).
<p>Publish Date: 2019-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232>CVE-2019-0232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232</a></p>
<p>Release Date: 2019-04-15</p>
<p>Fix Resolution: 9.0.18,8.5.40,7.0.94</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file locadora pom xml path to vulnerable library root repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web jar root library spring boot starter tomcat jar x tomcat embed core jar vulnerable library vulnerability details when running on windows with enablecmdlinearguments enabled the cgi servlet in apache tomcat to to and to is vulnerable to remote code execution due to a bug in the way the jre passes command line arguments to windows the cgi servlet is disabled by default the cgi option enablecmdlinearguments is disable by default in tomcat x and will be disabled by default in all versions in response to this vulnerability for a detailed explanation of the jre behaviour see markus wulftange s blog and this archived msdn blog publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
22,163
| 30,706,963,603
|
IssuesEvent
|
2023-07-27 07:03:03
|
googleapis/google-cloud-dotnet
|
https://api.github.com/repos/googleapis/google-cloud-dotnet
|
closed
|
Warning: a recent release failed
|
type: process
|
The following release PRs may have failed:
* #10692 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #10692 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
|
process
|
warning a recent release failed the following release prs may have failed the release job is autorelease tagged but expected autorelease published
| 1
|
30,828
| 11,854,542,322
|
IssuesEvent
|
2020-03-25 01:10:36
|
raindigi/room-booking-system
|
https://api.github.com/repos/raindigi/room-booking-system
|
opened
|
CVE-2020-7608 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-4.2.1.tgz</b>, <b>yargs-parser-5.0.0.tgz</b>, <b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>
<details><summary><b>yargs-parser-4.2.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-4.2.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-4.2.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/room-booking-system/web/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/room-booking-system/web/node_modules/webpack-dev-server/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.17.tgz (Root Library)
- webpack-dev-server-2.9.4.tgz
- yargs-6.6.0.tgz
- :x: **yargs-parser-4.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-5.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/room-booking-system/web/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/room-booking-system/web/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.17.tgz (Root Library)
- jest-20.0.4.tgz
- jest-cli-20.0.4.tgz
- yargs-7.1.0.tgz
- :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/room-booking-system/web/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/room-booking-system/web/node_modules/webpack/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.17.tgz (Root Library)
- webpack-3.8.1.tgz
- yargs-8.0.2.tgz
- :x: **yargs-parser-7.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/raindigi/room-booking-system/commit/e4d34b3e5d801c8731917ae72a01b713a25f21e7">e4d34b3e5d801c8731917ae72a01b713a25f21e7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7608 (Medium) detected in multiple libraries - ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-4.2.1.tgz</b>, <b>yargs-parser-5.0.0.tgz</b>, <b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>
<details><summary><b>yargs-parser-4.2.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-4.2.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-4.2.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/room-booking-system/web/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/room-booking-system/web/node_modules/webpack-dev-server/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.17.tgz (Root Library)
- webpack-dev-server-2.9.4.tgz
- yargs-6.6.0.tgz
- :x: **yargs-parser-4.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-5.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/room-booking-system/web/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/room-booking-system/web/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.17.tgz (Root Library)
- jest-20.0.4.tgz
- jest-cli-20.0.4.tgz
- yargs-7.1.0.tgz
- :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/room-booking-system/web/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/room-booking-system/web/node_modules/webpack/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.17.tgz (Root Library)
- webpack-3.8.1.tgz
- yargs-8.0.2.tgz
- :x: **yargs-parser-7.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/raindigi/room-booking-system/commit/e4d34b3e5d801c8731917ae72a01b713a25f21e7">e4d34b3e5d801c8731917ae72a01b713a25f21e7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries yargs parser tgz yargs parser tgz yargs parser tgz yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm room booking system web package json path to vulnerable library tmp ws scm room booking system web node modules webpack dev server node modules yargs parser package json dependency hierarchy react scripts tgz root library webpack dev server tgz yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm room booking system web package json path to vulnerable library tmp ws scm room booking system web node modules yargs parser package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm room booking system web package json path to vulnerable library tmp ws scm room booking system web node modules webpack node modules yargs parser package json dependency hierarchy react scripts tgz root library webpack tgz yargs tgz x yargs parser tgz vulnerable library found in head commit a href vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
2,086
| 4,912,683,184
|
IssuesEvent
|
2016-11-23 09:56:50
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
opened
|
Comments and refresh are not inline
|
browser: all bug comp: activiti-processList
|
Also, the refresh button could be on the right of the comments to match the one in the task list
**Process list**

**Task list**
<img width="833" alt="screen shot 2016-11-23 at 09 52 07" src="https://cloud.githubusercontent.com/assets/13200338/20557263/b4b546f8-b162-11e6-8e8e-b17cffa23560.png">
|
1.0
|
Comments and refresh are not inline - Also, the refresh button could be on the right of the comments to match the one in the task list
**Process list**

**Task list**
<img width="833" alt="screen shot 2016-11-23 at 09 52 07" src="https://cloud.githubusercontent.com/assets/13200338/20557263/b4b546f8-b162-11e6-8e8e-b17cffa23560.png">
|
process
|
comments and refresh are not inline also the refresh button could be on the right of comments to match that in task list process list task list img width alt screen shot at src
| 1
|
64,994
| 6,928,375,141
|
IssuesEvent
|
2017-12-01 04:26:11
|
golang/go
|
https://api.github.com/repos/golang/go
|
closed
|
os: TestChtimes fails on netbsd
|
OS-NetBSD Testing
|
The NetBSD builders are back up and running (now on GCE instead of dedicated machine singular) and are failing with:
```
ok net/url 0.009s
--- FAIL: TestChtimes (0.00s)
os_test.go:1067: AccessTime didn't go backwards; was={486366670 63623730007 6480864}, after={486366670 63623730007 6480864}
--- FAIL: TestChtimesDir (0.00s)
os_test.go:1067: AccessTime didn't go backwards; was={486603380 63623730007 6480864}, after={486603380 63623730007 6480864}
FAIL
FAIL os 0.363s
```
(NetBSD 7.0)
/cc @bsiegert @mdempsky
|
1.0
|
os: TestChtimes fails on netbsd - The NetBSD builders are back up and running (now on GCE instead of dedicated machine singular) and are failing with:
```
ok net/url 0.009s
--- FAIL: TestChtimes (0.00s)
os_test.go:1067: AccessTime didn't go backwards; was={486366670 63623730007 6480864}, after={486366670 63623730007 6480864}
--- FAIL: TestChtimesDir (0.00s)
os_test.go:1067: AccessTime didn't go backwards; was={486603380 63623730007 6480864}, after={486603380 63623730007 6480864}
FAIL
FAIL os 0.363s
```
(NetBSD 7.0)
/cc @bsiegert @mdempsky
|
non_process
|
os testchtimes fails on netbsd the netbsd builders are back up and running now on gce instead of dedicated machine singular and are failing with ok net url fail testchtimes os test go accesstime didn t go backwards was after fail testchtimesdir os test go accesstime didn t go backwards was after fail fail os netbsd cc bsiegert mdempsky
| 0
|
21,435
| 10,608,223,008
|
IssuesEvent
|
2019-10-11 06:56:36
|
fufunoyu/example-maven-travis
|
https://api.github.com/repos/fufunoyu/example-maven-travis
|
opened
|
CVE-2016-1000352 (High) detected in bcprov-ext-jdk15on-1.49.jar
|
security vulnerability
|
## CVE-2016-1000352 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-ext-jdk15on-1.49.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.7. Note: this package includes the IDEA and NTRU encryption algorithms.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /tmp/ws-scm/example-maven-travis/pom.xml</p>
<p>Path to vulnerable library: epository/org/bouncycastle/bcprov-ext-jdk15on/1.49/bcprov-ext-jdk15on-1.49.jar</p>
<p>
Dependency Hierarchy:
- :x: **bcprov-ext-jdk15on-1.49.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/example-maven-travis/commit/48c9533d746c2e0017ea5f7739a8c4b5eadc874a">48c9533d746c2e0017ea5f7739a8c4b5eadc874a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Bouncy Castle JCE Provider version 1.55 and earlier the ECIES implementation allowed the use of ECB mode. This mode is regarded as unsafe and support for it has been removed from the provider.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000352>CVE-2016-1000352</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000352">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000352</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 1.56</p>
</p>
</details>
<p></p>
|
True
|
CVE-2016-1000352 (High) detected in bcprov-ext-jdk15on-1.49.jar - ## CVE-2016-1000352 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-ext-jdk15on-1.49.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.7. Note: this package includes the IDEA and NTRU encryption algorithms.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /tmp/ws-scm/example-maven-travis/pom.xml</p>
<p>Path to vulnerable library: epository/org/bouncycastle/bcprov-ext-jdk15on/1.49/bcprov-ext-jdk15on-1.49.jar</p>
<p>
Dependency Hierarchy:
- :x: **bcprov-ext-jdk15on-1.49.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/example-maven-travis/commit/48c9533d746c2e0017ea5f7739a8c4b5eadc874a">48c9533d746c2e0017ea5f7739a8c4b5eadc874a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Bouncy Castle JCE Provider version 1.55 and earlier the ECIES implementation allowed the use of ECB mode. This mode is regarded as unsafe and support for it has been removed from the provider.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000352>CVE-2016-1000352</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000352">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000352</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 1.56</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in bcprov ext jar cve high severity vulnerability vulnerable library bcprov ext jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk to jdk note this package includes the idea and ntru encryption algorithms library home page a href path to dependency file tmp ws scm example maven travis pom xml path to vulnerable library epository org bouncycastle bcprov ext bcprov ext jar dependency hierarchy x bcprov ext jar vulnerable library found in head commit a href vulnerability details in the bouncy castle jce provider version and earlier the ecies implementation allowed the use of ecb mode this mode is regarded as unsafe and support for it has been removed from the provider publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
6,089
| 8,950,928,681
|
IssuesEvent
|
2019-01-25 12:21:57
|
enKryptIO/ethvm
|
https://api.github.com/repos/enKryptIO/ethvm
|
closed
|
Some statistics are not being calculated on Kafka
|
enhancement milestone:1 priority:high project:processing
|
- [x] Average block time / day
- [x] Average Gas Limit / day
- [x] Average Gas Price / day
- [x] Average Tx Fees / day
- [x] Average block difficulty / day
- [x] Avg Hash Rate / day
- [x] Total Failed Tx / day
- [x] Total Succesful Tx / day
- [x] Total Tx / day
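Purely as an illustration (not from the issue, which computes these in Kafka), a minimal pandas sketch of the per-day aggregations over hypothetical block data:
```python
import pandas as pd

# Hypothetical per-block table; column names are illustrative only.
blocks = pd.DataFrame({
    "timestamp": pd.to_datetime([1546300800, 1546300815, 1546387200], unit="s"),
    "gas_limit": [8_000_000, 8_000_029, 8_000_000],
    "difficulty": [2.5e15, 2.6e15, 2.4e15],
})
# Block time = gap between consecutive block timestamps.
blocks["block_time"] = blocks["timestamp"].diff().dt.total_seconds()
# Daily averages, analogous to the per-day stats listed above.
daily = blocks.set_index("timestamp").resample("1D").mean()
print(daily)
```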
|
1.0
|
Some statistics are not being calculated on Kafka - - [x] Average block time / day
- [x] Average Gas Limit / day
- [x] Average Gas Price / day
- [x] Average Tx Fees / day
- [x] Average block difficulty / day
- [x] Avg Hash Rate / day
- [x] Total Failed Tx / day
- [x] Total Succesful Tx / day
- [x] Total Tx / day
|
process
|
some statistics are not being calculated on kafka average block time day average gas limit day average gas price day average tx fees day average block difficulty day avg hash rate day total failed tx day total succesful tx day total tx day
| 1
|
2,839
| 5,795,730,490
|
IssuesEvent
|
2017-05-02 17:47:26
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
opened
|
Re-enable SwiftLint
|
is:Blocked type:Process
|
We disabled SwiftLint as it was blocking our release from landing internally.
We should re-enable once we can support SwiftLint internally.
|
1.0
|
Re-enable SwiftLint - We disabled SwiftLint as it was blocking our release from landing internally.
We should re-enable once we can support SwiftLint internally.
|
process
|
re enable swiftlint we disabled swiftlint as it was blocking our release from landing internally we should re enable once we can support swiftlint internally
| 1
|
116
| 2,546,526,972
|
IssuesEvent
|
2015-01-30 00:51:25
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
opened
|
Be nice to have a shorter method name than values(...)
|
enhancement process
|
```java
g.V().out().v("name")
```
:|
|
1.0
|
Be nice to have a shorter method name than values(...) - ```java
g.V().out().v("name")
```
:|
|
process
|
be nice to have a shorter method name than values java g v out v name
| 1
|
4,907
| 7,784,846,759
|
IssuesEvent
|
2018-06-06 14:22:02
|
GoogleCloudPlatform/google-cloud-dotnet
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-dotnet
|
closed
|
Work remaining before NLog beta release
|
type: process
|
* [x] Top-level documentation.
* [x] Make timeout configurable, not hardcoded to 1.5 seconds.
* ~~[ ] Consider allowing multiple concurrent `WriteLogEntriesAsync(...)` RPCs. Currently they occur sequentially.~~
* [x] Improve testing for situations where `WriteLogEntriesAsync(...)` RPCs are taking a long time, or failing.
|
1.0
|
Work remaining before NLog beta release - * [x] Top-level documentation.
* [x] Make timeout configurable, not hardcoded to 1.5 seconds.
* ~~[ ] Consider allowing multiple concurrent `WriteLogEntriesAsync(...)` RPCs. Currently they occur sequentially.~~
* [x] Improve testing for situations where `WriteLogEntriesAsync(...)` RPCs are taking a long time, or failing.
|
process
|
work remaining before nlog beta release top level documentation make timeout configurable not hardcoded to seconds consider allowing multiple concurrent writelogentriesasync rpcs currently they occur sequentially improve testing for situations where writelogentriesasync rpcs are taking a long time or failing
| 1
|
441,508
| 30,787,136,040
|
IssuesEvent
|
2023-07-31 13:55:40
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
Collaboration Cycle - Needs a workflow/process map
|
documentation-support pw-footer-feedback
|
### Description
[This page](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/) offers a "flow image":

but as a relatively new joiner, I still find the process VERY opaque. One thing from prior jobs that I'm used to seeing is a more detailed process map with swim-lanes that calls attention to specific responsible parties, to help a new person understand a complicated process.
While the documentation is thorough for each specific item (e.g., the questions you have to answer for a kickoff ticket), there's nothing that delineates responsibilities, steps, and the stages when those things should occur. Even for more experienced folks, this would help elevate everyone's understanding of the process and might help increase engagement at the 'right' times.
### Relevant URLs
https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/
### Which type of team are you on? (Platform team, VFS team, or Leadership)
VFS Team - AE profile
|
1.0
|
Collaboration Cycle - Needs a workflow/process map - ### Description
[This page](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/) offers a "flow image":

but as a relatively new joiner, I still find the process VERY opaque. One thing from prior jobs that I'm used to seeing is a more detailed process map with swim-lanes that calls attention to specific responsible parties, to help a new person understand a complicated process.
While the documentation is thorough for each specific item (e.g., the questions you have to answer for a kickoff ticket), there's nothing that delineates responsibilities, steps, and the stages when those things should occur. Even for more experienced folks, this would help elevate everyone's understanding of the process and might help increase engagement at the 'right' times.
### Relevant URLs
https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/
### Which type of team are you on? (Platform team, VFS team, or Leadership)
VFS Team - AE profile
|
non_process
|
collaboration cycle needs a workflow process map description while offering a flow image as a relatively new joiner the process is very opaque one thing from prior jobs that i m used to seeing is a process map that s more detailed with swim lanes to call attention to specific responsible parties to help a new person understand a complicated process while the documentation is thorough for each specific item eg questions you have to answer for a kickoff ticket there s not something that delineates responsibilities and steps and the stages when those things should occur even for a more experience folks this would help elevate everyone s understanding of the process and might help increase engagement at the right times relevant urls which type of team are you on platform team vfs team or leadership vfs team ae profile
| 0
|
18,329
| 24,445,890,270
|
IssuesEvent
|
2022-10-06 17:55:57
|
gobuffalo/buffalo
|
https://api.github.com/repos/gobuffalo/buffalo
|
closed
|
milestone v1.1.0 or v1.0.1
|
process
|
Starting with #2306, stabilizing the Buffalo core library is almost done. Now it is time to clean up the Buffalo core library itself. The remaining issues to be (hopefully) addressed by this release are as follows:
* [x] #2306
* [x] #1653
* [x] gobuffalo/cli#228
* [x] gobuffalo/cli#229
* [x] gobuffalo/genny#51
* [x] #1769
* [x] #2264
* [x] #2287
* [x] #2300
* [x] #2334
* [x] #1206
https://github.com/gobuffalo/buffalo/milestone/77
|
1.0
|
milestone v1.1.0 or v1.0.1 - Starting with #2306, stabilizing the Buffalo core library is almost done. Now it is time to clean up the Buffalo core library itself. The remaining issues to be (hopefully) addressed by this release are as follows:
* [x] #2306
* [x] #1653
* [x] gobuffalo/cli#228
* [x] gobuffalo/cli#229
* [x] gobuffalo/genny#51
* [x] #1769
* [x] #2264
* [x] #2287
* [x] #2300
* [x] #2334
* [x] #1206
https://github.com/gobuffalo/buffalo/milestone/77
|
process
|
milestone or starting with stabilizing the buffalo core library is almost done now time to clean up the buffalo core library itself the remaining issues to be hopefully addressed by this release are as follows gobuffalo cli gobuffalo cli gobuffalo genny
| 1
|
12,116
| 14,740,644,188
|
IssuesEvent
|
2021-01-07 09:24:42
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Customer Invite Email
|
anc-process anp-2 ant-bug has attachment
|
In GitLab by @kdjstudios on Nov 20, 2018, 10:44
Hello Team,
I just sent out a customer invite to myself and, when it was received, the buttons were not showing. This was on Merge2. I also noticed that the time in the message on the left says 10:16am, which is when it was really sent, but the time in the message on the right says 5:15am, which is not correct. Can we confirm what is going on with that too?

|
1.0
|
Customer Invite Email - In GitLab by @kdjstudios on Nov 20, 2018, 10:44
Hello Team,
I just sent out a customer invite to myself and, when it was received, the buttons were not showing. This was on Merge2. I also noticed that the time in the message on the left says 10:16am, which is when it was really sent, but the time in the message on the right says 5:15am, which is not correct. Can we confirm what is going on with that too?

|
process
|
customer invite email in gitlab by kdjstudios on nov hello team i just sent out a customer invite to myself and when received the buttons were not showing this was on i also noticed that the time in the message on the left says which is when it was really sent but the time in the message on the right says which is not correct can we confirm what is going on with that too uploads image png
| 1
|
294,777
| 22,162,591,921
|
IssuesEvent
|
2022-06-04 18:26:25
|
pinojs/pino-pretty
|
https://api.github.com/repos/pinojs/pino-pretty
|
closed
|
Typo in Readme
|
good first issue documentation
|
In the Options [section](https://github.com/pinojs/pino-pretty#options) there is a typo in the function for pid
```
...
hostname: hostname => colorGreen(hostname)
pid: pid => colorRed(hostname) <- typo
name: name => colorBlue(name)
caller: caller => colorCyan(caller)
...
```
Maybe it should be `colorRed(pid)` instead.
|
1.0
|
Typo in Readme - In the Options [section](https://github.com/pinojs/pino-pretty#options) there is a typo in the function for pid
```
...
hostname: hostname => colorGreen(hostname)
pid: pid => colorRed(hostname) <- typo
name: name => colorBlue(name)
caller: caller => colorCyan(caller)
...
```
Maybe it should be `colorRed(pid)` instead.
|
non_process
|
typo in readme in the options is a typo in function for pid hostname hostname colorgreen hostname pid pid colorred hostname typo name name colorblue name caller caller colorcyan caller maybe there should be colorred pid
| 0
|
3,996
| 6,923,255,545
|
IssuesEvent
|
2017-11-30 08:16:24
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
"Past" and "Active" processes overlap
|
component: processes good first issue hacktoberfest type: bug
|
# This is a Bug Report
#### :tophat: Description
Some processes appear under both the "past" and "active" filters, which doesn't make sense.
Example: "Pla d'usos ciutat vella" - https://www.decidim.barcelona/processes
We should mark as "past" those processes whose last step's end date is in the past, and as "active" those whose first step's end date is in the past but whose last step's end date is in the future.
#### :pushpin: Related issues
*None*
#### :clipboard: Additional Data
* ***Decidim deployment where you found the issue***: Decidim Barcelona
* ***URL to reproduce the error***: https://www.decidim.barcelona/processes
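Purely as an illustration of the proposed rule (the data shapes and names here are hypothetical, not Decidim's actual model), a small sketch:
```python
from datetime import date

def classify(step_end_dates, today=None):
    # step_end_dates: ordered end dates of a process's steps (hypothetical shape).
    today = today or date.today()
    if step_end_dates[-1] < today:
        return "past"      # last step already ended
    if step_end_dates[0] < today:
        return "active"    # first step ended, last step still in the future
    return "upcoming"      # neither rule matched; label is illustrative
```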
|
1.0
|
"Past" and "Active" processes overlap - # This is a Bug Report
#### :tophat: Description
Some processes appear under both the "past" and "active" filters, which doesn't make sense.
Example: "Pla d'usos ciutat vella" - https://www.decidim.barcelona/processes
We should mark as "past" those processes whose last step's end date is in the past, and as "active" those whose first step's end date is in the past but whose last step's end date is in the future.
#### :pushpin: Related issues
*None*
#### :clipboard: Additional Data
* ***Decidim deployment where you found the issue***: Decidim Barcelona
* ***URL to reproduce the error***: https://www.decidim.barcelona/processes
|
process
|
past and active processes overlap this is a bug report tophat description some processes appear under both the past and active filters which doesn t make sense example pla d usos ciutat vella we should mark as past processes where its last step s end date is in the past and active those whose first step s end date is in the past but its last step s end date is in the future pushpin related issues none clipboard additional data decidim deployment where you found the issue decidim barcelona url to reproduce the error
| 1
|
15,490
| 19,698,664,693
|
IssuesEvent
|
2022-01-12 14:39:51
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Why is Multi-processing Pool slower than Singleprocessing?
|
module: multiprocessing triaged
|
### 🐛 Describe the bug
when I try to use `torch.multiprocessing` to accelerate computing, it becomes slower...
Using `torch.multiprocessing` costs 5.29s, versus 0.21s with `multi_p=0`.
If I set `cuda=1`, it gets even slower.
Code:
```
import torch
import torch.nn.functional as F
from multiprocessing import Pool
import torch.multiprocessing as mp
import os, time, random
def worker(i, norm_feat, all_feat):
print(f"{i} start")
sim = torch.mm(norm_feat[i][None, :], norm_feat.t()).squeeze()
init_rank = torch.topk(sim, 5)[1]
weights = sim[init_rank].view(-1, 1)
weights = torch.pow(weights, 2)
ret = torch.mean(all_feat[init_rank, :] * weights, dim=0)
print(f"{i} end")
return ret
if __name__ == "__main__":
multi_p = 1
torch_pool = 1
cuda = 0
t_start = time.time()
all_feat = torch.randn(1000, 2048)
if cuda: all_feat = all_feat.cuda()
norm_feat = F.normalize(all_feat, p=2, dim=1)
# all_feat.share_memory_()
# norm_feat.share_memory_()
num = len(norm_feat)
if multi_p:
if torch_pool:
ctx = mp.get_context('spawn')
pool = ctx.Pool(8)
else:
pool = Pool(8)
pool_list = []
for i in range(num):
if multi_p:
res = pool.apply_async(worker, args=(i, norm_feat, all_feat,))
else:
res = worker(i, norm_feat, all_feat)
pool_list.append(res)
if multi_p:
pool.close()
pool.join()
# print(pool_list)
cost = time.time()-t_start
print(f"Cost: {cost:.2f}")
```
### Versions
python3.8
torch | 1.10.0+cu113 |
cc @VitalyFedyunin
|
1.0
|
Why is Multi-processing Pool slower than Singleprocessing? - ### 🐛 Describe the bug
When I try to use `torch.multiprocessing` to accelerate computing, it becomes slower...
Using `torch.multiprocessing` costs 5.29s, versus 0.21s when setting `multi_p=0`.
If setting 'cuda=1', it is even slower.
Code:
```
import torch
import torch.nn.functional as F
from multiprocessing import Pool
import torch.multiprocessing as mp
import os, time, random
def worker(i, norm_feat, all_feat):
print(f"{i} start")
sim = torch.mm(norm_feat[i][None, :], norm_feat.t()).squeeze()
init_rank = torch.topk(sim, 5)[1]
weights = sim[init_rank].view(-1, 1)
weights = torch.pow(weights, 2)
ret = torch.mean(all_feat[init_rank, :] * weights, dim=0)
print(f"{i} end")
return ret
if __name__ == "__main__":
multi_p = 1
torch_pool = 1
cuda = 0
t_start = time.time()
all_feat = torch.randn(1000, 2048)
if cuda: all_feat = all_feat.cuda()
norm_feat = F.normalize(all_feat, p=2, dim=1)
# all_feat.share_memory_()
# norm_feat.share_memory_()
num = len(norm_feat)
if multi_p:
if torch_pool:
ctx = mp.get_context('spawn')
pool = ctx.Pool(8)
else:
pool = Pool(8)
pool_list = []
for i in range(num):
if multi_p:
res = pool.apply_async(worker, args=(i, norm_feat, all_feat,))
else:
res = worker(i, norm_feat, all_feat)
pool_list.append(res)
if multi_p:
pool.close()
pool.join()
# print(pool_list)
cost = time.time()-t_start
print(f"Cost: {cost:.2f}")
```
### Versions
python3.8
torch | 1.10.0+cu113 |
cc @VitalyFedyunin
|
process
|
why is multi processing pool slower than singleprocessing 🐛 describe the bug when i try to use torch multiprocessing to accelerate computing it becomes slower using torch multiprocessing costs versus when setting multi p if setting cuda it is even slower code import torch import torch nn functional as f from multiprocessing import pool import torch multiprocessing as mp import os time random def worker i norm feat all feat print f i start sim torch mm norm feat norm feat t squeeze init rank torch topk sim weights sim view weights torch pow weights ret torch mean all feat weights dim print f i end return ret if name main multi p torch pool cuda t start time time all feat torch randn if cuda all feat all feat cuda norm feat f normalize all feat p dim all feat share memory norm feat share memory num len norm feat if multi p if torch pool ctx mp get context spawn pool ctx pool else pool pool pool list for i in range num if multi p res pool apply async worker args i norm feat all feat else res worker i norm feat all feat pool list append res if multi p pool close pool join print pool list cost time time t start print f cost cost versions torch cc vitalyfedyunin
| 1
|
337,468
| 24,541,152,555
|
IssuesEvent
|
2022-10-12 03:58:02
|
mikezimm/ALVFinMan
|
https://api.github.com/repos/mikezimm/ALVFinMan
|
closed
|
Fix crash when clicking some Pivots in Panel
|
bug documentation complete priority
|
## Notably the empty ones that should not be there.
NOTE that they have Tabs but no content

|
1.0
|
Fix crash when clicking some Pivots in Panel - ## Notably the empty ones that should not be there.
NOTE that they have Tabs but no content

|
non_process
|
fix crash when clicking some pivots in panel notably the empty ones that should not be there note that they have tabs but no content
| 0
|
13,772
| 16,528,789,398
|
IssuesEvent
|
2021-05-27 01:10:51
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
opened
|
Create a once-a-month team meeting
|
:label: team-process type: enhancement
|
# Summary
We now have a few team practices around sharing with one another what we are up to, coordinating work, etc. However, since we're still a remote team, and since most of this coordination is done asynchronously, we rarely see one another face-to-face (well, face-to-screen-to-face). I would personally enjoy (and wonder if others would as well) having one monthly "all-hands" meeting with everybody on the @2i2c-org/tech-team. This issue is to discuss and plan for it.
# What format would the meeting have?
I'd imagine a meeting format similar to [the JupyterHub Team Meeting](https://hackmd.io/u2ghJJUCRWK-zRidCFid_Q?view).
We could use a hackmd to structure the meeting itself, and a Zoom room to hold the actual conversation. In the days leading up to the meeting, anybody could add an agenda item that they'd like to discuss. The agenda could be anything - ideally a combination of informal and formal conversation, as desired by the team members. If we had extra time in the agenda, we could just chat for a bit and enjoy one another's company.
The point of the meeting would *not* be to make major decisions, do extensive work that is specific to one project, etc. The main goal is to have some face time and chat with one another, and a secondary goal might be to discuss something interesting or important.
# How often and when would this happen?
I'd propose that we shoot for **once a month** at 8:00AM California time. (note: this is pushing it a bit for @yuvipanda, but I'm not sure of any other time where we could get everybody on the team on the call at once)
In the longer term, we might play around with different kinds of formats for this (e.g., alternating time zones, or planning more frequent and informal coffee chats).
# Tasks to complete
- [ ] Discuss what people think about a meeting like this. Would it be helpful or fun? Would it be inconvenient or noisy?
- [ ] Decide on whether to implement this as a team practice
- [ ] Have our first team meeting :-)
|
1.0
|
Create a once-a-month team meeting - # Summary
We now have a few team practices around sharing with one another what we are up to, coordinating work, etc. However, since we're still a remote team, and since most of this coordination is done asynchronously, we rarely see one another face-to-face (well, face-to-screen-to-face). I would personally enjoy (and wonder if others would as well) having one monthly "all-hands" meeting with everybody on the @2i2c-org/tech-team. This issue is to discuss and plan for it.
# What format would the meeting have?
I'd imagine a meeting format similar to [the JupyterHub Team Meeting](https://hackmd.io/u2ghJJUCRWK-zRidCFid_Q?view).
We could use a hackmd to structure the meeting itself, and a Zoom room to hold the actual conversation. In the days leading up to the meeting, anybody could add an agenda item that they'd like to discuss. The agenda could be anything - ideally a combination of informal and formal conversation, as desired by the team members. If we had extra time in the agenda, we could just chat for a bit and enjoy one another's company.
The point of the meeting would *not* be to make major decisions, do extensive work that is specific to one project, etc. The main goal is to have some face time and chat with one another, and a secondary goal might be to discuss something interesting or important.
# How often and when would this happen?
I'd propose that we shoot for **once a month** at 8:00AM California time. (note: this is pushing it a bit for @yuvipanda, but I'm not sure of any other time where we could get everybody on the team on the call at once)
In the longer term, we might play around with different kinds of formats for this (e.g., alternating time zones, or planning more frequent and informal coffee chats).
# Tasks to complete
- [ ] Discuss what people think about a meeting like this. Would it be helpful or fun? Would it be inconvenient or noisy?
- [ ] Decide on whether to implement this as a team practice
- [ ] Have our first team meeting :-)
|
process
|
create a once a month team meeting summary we now have a few team practices around sharing with one another what we are up to coordinating work etc however since we re still a remote team and since most of this coordination is done asynchronously we rarely see one another face to face well face to screen to face i would personally enjoy and wonder if others would as well having one monthly all hands meeting with everybody on the org tech team this issue is to discuss and plan for it what format would the meeting have i d imagine a meeting format similar to we could use a hackmd to structure the meeting itself and a zoom room to hold the actual conversation in the days leading up to the meeting anybody could add an agenda item that they d like to discuss the agenda could be anything ideally a combination of informal and formal conversation as desired by the team members if we had extra time in the agenda we could just chat for a bit and enjoy one another s company the point of the meeting would not be to make major decisions do extensive work that is specific to one project etc the main goal is to have some face time and chat with one another and a secondary goal might be to discuss something interesting or important how often and when would this happen i d propose that we shoot for once a month at california time note this is pushing it a bit for yuvipanda but i m not sure of any other time where we could get everybody on the team on the call at once in the longer term we might play around with different kinds of formats for this e g alternating time zones or planning more frequent and informal coffee chats tasks to complete discuss what people think about a meeting like this would it be helpful or fun would it be inconvenient or noisy decide on whether to implement this as a team practice have our first team meeting
| 1
|
27,276
| 4,049,241,346
|
IssuesEvent
|
2016-05-23 13:33:24
|
reactor10/figroll-app
|
https://api.github.com/repos/reactor10/figroll-app
|
opened
|
Design SSL
|
Design
|
Need to design SSL flow.
Global modal?
Progress bar?
How do we display all of the different states of uploading?
- Create a site
- Get SSL
- Upload a site
- CDN Upload
- Make site live
* see image attached.
|
1.0
|
Design SSL - Need to design SSL flow.
Global modal?
Progress bar?
How do we display all of the different states of uploading?
- Create a site
- Get SSL
- Upload a site
- CDN Upload
- Make site live
* see image attached.
|
non_process
|
design ssl need to design ssl flow global modal progress bar how do we display all of the different states of uploading create a site get ssl upload a site cdn upload make site live see image attached
| 0
|
604,758
| 18,718,233,360
|
IssuesEvent
|
2021-11-03 08:45:33
|
kheeyaa/Who-ate-my-fish
|
https://api.github.com/repos/kheeyaa/Who-ate-my-fish
|
closed
|
Implement JailUsers deactivation logic
|
enhancement priority: medium status: pending
|
- [x] Check whether I am the dead cat
- [x] Disable the input window and the voting window
- [x] Change the profile to the jail version
- [x] Change to the jail cat for other users as well
- [x] Change voting so the jail cat cannot be selected
|
1.0
|
Implement JailUsers deactivation logic - - [x] Check whether I am the dead cat
- [x] Disable the input window and the voting window
- [x] Change the profile to the jail version
- [x] Change to the jail cat for other users as well
- [x] Change voting so the jail cat cannot be selected
|
non_process
|
implement jailusers deactivation logic check whether i am the dead cat disable the input window and the voting window change the profile to the jail version change to the jail cat for other users as well change voting so the jail cat cannot be selected
| 0
|
21,555
| 29,870,601,675
|
IssuesEvent
|
2023-06-20 08:14:20
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
Module request NEXO DTD T N
|
NOT YET PROCESSED
|
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
NEXO DTD-T N (Digital amp controller by Nexo)
What you would like to be able to make it do from Companion:
MUTE and ON/OFF
I have NXamp (module already created) and some DTD-T N controllers that I want to shut down or at least mute the output.
Thanks
|
1.0
|
Module request NEXO DTD T N - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
NEXO DTD-T N (Digital amp controller by Nexo)
What you would like to be able to make it do from Companion:
MUTE and ON/OFF
I have NXamp (module already created) and some DTD-T N controllers that I want to shut down or at least mute the output.
Thanks
|
process
|
module request nexo dtd t n i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control nexo dtd t n digital amp controller by nexo what you would like to be able to make it do from companion mute and on off i have nxamp module already created and some dtd t n controllers that i want to shut down or at least mute the output thanks
| 1
|
9,920
| 4,682,461,870
|
IssuesEvent
|
2016-10-09 08:55:35
|
StargateMC/StargateMCPublic
|
https://api.github.com/repos/StargateMC/StargateMCPublic
|
closed
|
Connect tunnel in SW Hills on Lacun to remove reliance on rings
|
Bug Build
|
The tunnel on the way to the hunters area relies on Rings.
This needs to be replaced with a connection in the path (even if it's stairs).
|
1.0
|
Connect tunnel in SW Hills on Lacun to remove reliance on rings - The tunnel on the way to the hunters area relies on Rings.
This needs to be replaced with a connection in the path (even if it's stairs).
|
non_process
|
connect tunnel in sw hills on lacun to remove reliance on rings the tunnel on the way to the hunters area relies on rings this needs to be replaced with a connection in the path even if it s stairs
| 0
|
220,079
| 7,349,670,524
|
IssuesEvent
|
2018-03-08 11:31:44
|
phovea/generator-phovea
|
https://api.github.com/repos/phovea/generator-phovea
|
closed
|
How to use a `app-slib` in the phovea_product.json
|
priority: high type: question
|
[Malevo](https://github.com/Caleydo/malevo/) is defined as `app-slib` (see [.yo-rc.json](https://github.com/Caleydo/malevo/blob/develop/.yo-rc.json)), because it consists of an app as client and a server part. But I'm not sure how to use the `app-slib` in the [phovea_product.json](https://github.com/Caleydo/malevo_product/blob/api-test/phovea_product.json) correctly.
I referenced the `app` part in the web section: https://github.com/Caleydo/malevo_product/blob/d74a474d20cf93e9080bbf10bbc7b8cbc6124e5e/phovea_product.json#L2-L8
However, then the server part (i.e., `/api/malevo/...`) is not available after build/deployment.
When I add now the same additional repository to the api section the build fails: https://github.com/Caleydo/malevo_product/compare/1af0f6dafb13b810a713bc7af1ae12da00e26a95...d74a474d20cf93e9080bbf10bbc7b8cbc6124e5e
CircleCI build log
```
npm ERR! Linux 4.4.0-111-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "test:python"
npm ERR! node v6.11.1
npm ERR! npm v3.10.10
npm ERR! code ELIFECYCLE
npm ERR! malevo@1.0.0-SNAPSHOT test:python: `test ! -d tests || python setup.py test`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the malevo@1.0.0-SNAPSHOT test:python script 'test ! -d tests || python setup.py test'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the malevo package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! test ! -d tests || python setup.py test
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs malevo
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls malevo
npm ERR! There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! Please include the following file with any support request:
npm ERR! /home/circleci/phovea/tmp1/malevo/npm-debug.log
npm ERR! Linux 4.4.0-111-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "build:python"
npm ERR! node v6.11.1
npm ERR! npm v3.10.10
npm ERR! code ELIFECYCLE
npm ERR! malevo@1.0.0-SNAPSHOT prebuild:python: `node -e "process.exit(process.env.PHOVEA_SKIP_TESTS === undefined?1:0)" || npm run test:python`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the malevo@1.0.0-SNAPSHOT prebuild:python script 'node -e "process.exit(process.env.PHOVEA_SKIP_TESTS === undefined?1:0)" || npm run test:python'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the malevo package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node -e "process.exit(process.env.PHOVEA_SKIP_TESTS === undefined?1:0)" || npm run test:python
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs malevo
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls malevo
npm ERR! There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! Please include the following file with any support request:
npm ERR! /home/circleci/phovea/tmp1/malevo/npm-debug.log
npm status code 1 null
ERROR building { type: 'api',
label: 'malevo_server',
repo: 'phovea/phovea_server',
branch: 'develop',
additional:
[ { name: 'malevo',
repo: 'Caleydo/malevo',
branch: 'gfrogat/features/api',
pluginType: 'app-slib',
isHybridType: true } ],
data:
[ { type: 'url', url: 'malevo_cifar10_data.mdb' },
{ type: 'url', url: 'malevo_cifar10_rundata.db' },
{ type: 'url', url: 'malevo_cifar10_lock.mdb' },
{ type: 'repo',
repo: 'Caleydo/malevo',
branch: 'gfrogat/features/api' } ],
name: 'phovea_server',
image: 'malevo/malevo_server:1.0.0-20180206-132828',
pluginType: 'service',
isHybridType: false,
error: 'npm failed with status code 1 null' } npm failed with status code 1 null
ERROR extra building npm failed with status code 1 null
Exited with code 1
```
Any idea what the phovea_product.json looks like for an `app-slib`?
|
1.0
|
How to use a `app-slib` in the phovea_product.json - [Malevo](https://github.com/Caleydo/malevo/) is defined as `app-slib` (see [.yo-rc.json](https://github.com/Caleydo/malevo/blob/develop/.yo-rc.json)), because it consists of an app as client and a server part. But I'm not sure how to use the `app-slib` in the [phovea_product.json](https://github.com/Caleydo/malevo_product/blob/api-test/phovea_product.json) correctly.
I referenced the `app` part in the web section: https://github.com/Caleydo/malevo_product/blob/d74a474d20cf93e9080bbf10bbc7b8cbc6124e5e/phovea_product.json#L2-L8
However, then the server part (i.e., `/api/malevo/...`) is not available after build/deployment.
When I add now the same additional repository to the api section the build fails: https://github.com/Caleydo/malevo_product/compare/1af0f6dafb13b810a713bc7af1ae12da00e26a95...d74a474d20cf93e9080bbf10bbc7b8cbc6124e5e
CircleCI build log
```
npm ERR! Linux 4.4.0-111-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "test:python"
npm ERR! node v6.11.1
npm ERR! npm v3.10.10
npm ERR! code ELIFECYCLE
npm ERR! malevo@1.0.0-SNAPSHOT test:python: `test ! -d tests || python setup.py test`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the malevo@1.0.0-SNAPSHOT test:python script 'test ! -d tests || python setup.py test'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the malevo package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! test ! -d tests || python setup.py test
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs malevo
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls malevo
npm ERR! There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! Please include the following file with any support request:
npm ERR! /home/circleci/phovea/tmp1/malevo/npm-debug.log
npm ERR! Linux 4.4.0-111-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "build:python"
npm ERR! node v6.11.1
npm ERR! npm v3.10.10
npm ERR! code ELIFECYCLE
npm ERR! malevo@1.0.0-SNAPSHOT prebuild:python: `node -e "process.exit(process.env.PHOVEA_SKIP_TESTS === undefined?1:0)" || npm run test:python`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the malevo@1.0.0-SNAPSHOT prebuild:python script 'node -e "process.exit(process.env.PHOVEA_SKIP_TESTS === undefined?1:0)" || npm run test:python'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the malevo package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node -e "process.exit(process.env.PHOVEA_SKIP_TESTS === undefined?1:0)" || npm run test:python
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs malevo
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls malevo
npm ERR! There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! Please include the following file with any support request:
npm ERR! /home/circleci/phovea/tmp1/malevo/npm-debug.log
npm status code 1 null
ERROR building { type: 'api',
label: 'malevo_server',
repo: 'phovea/phovea_server',
branch: 'develop',
additional:
[ { name: 'malevo',
repo: 'Caleydo/malevo',
branch: 'gfrogat/features/api',
pluginType: 'app-slib',
isHybridType: true } ],
data:
[ { type: 'url', url: 'malevo_cifar10_data.mdb' },
{ type: 'url', url: 'malevo_cifar10_rundata.db' },
{ type: 'url', url: 'malevo_cifar10_lock.mdb' },
{ type: 'repo',
repo: 'Caleydo/malevo',
branch: 'gfrogat/features/api' } ],
name: 'phovea_server',
image: 'malevo/malevo_server:1.0.0-20180206-132828',
pluginType: 'service',
isHybridType: false,
error: 'npm failed with status code 1 null' } npm failed with status code 1 null
ERROR extra building npm failed with status code 1 null
Exited with code 1
```
Any idea what the phovea_product.json looks like for an `app-slib`?
|
non_process
|
how to use a app slib in the phovea product json is defined as app slib see because it consists of an app as client and a server part but i m not sure how to use the app slib in the correctly i referenced the app part in the web section however then the server part i e api malevo is not available after build deployment when i add now the same additional repository to the api section the build fails circleci build log npm err linux generic npm err argv usr bin nodejs usr bin npm run test python npm err node npm err npm npm err code elifecycle npm err malevo snapshot test python test d tests python setup py test npm err exit status npm err npm err failed at the malevo snapshot test python script test d tests python setup py test npm err make sure you have the latest version of node js and npm installed npm err if you do this is most likely a problem with the malevo package npm err not with npm itself npm err tell the author that this fails on your system npm err test d tests python setup py test npm err you can get information on how to open an issue for this project with npm err npm bugs malevo npm err or if that isn t available you can get their info via npm err npm owner ls malevo npm err there is likely additional logging output above npm warn local package json exists but node modules missing did you mean to install npm err please include the following file with any support request npm err home circleci phovea malevo npm debug log npm err linux generic npm err argv usr bin nodejs usr bin npm run build python npm err node npm err npm npm err code elifecycle npm err malevo snapshot prebuild python node e process exit process env phovea skip tests undefined npm run test python npm err exit status npm err npm err failed at the malevo snapshot prebuild python script node e process exit process env phovea skip tests undefined npm run test python npm err make sure you have the latest version of node js and npm installed npm err if you do this is most likely a problem with the malevo package npm err not with npm itself npm err tell the author that this fails on your system npm err node e process exit process env phovea skip tests undefined npm run test python npm err you can get information on how to open an issue for this project with npm err npm bugs malevo npm err or if that isn t available you can get their info via npm err npm owner ls malevo npm err there is likely additional logging output above npm warn local package json exists but node modules missing did you mean to install npm err please include the following file with any support request npm err home circleci phovea malevo npm debug log npm status code null error building type api label malevo server repo phovea phovea server branch develop additional name malevo repo caleydo malevo branch gfrogat features api plugintype app slib ishybridtype true data type url url malevo data mdb type url url malevo rundata db type url url malevo lock mdb type repo repo caleydo malevo branch gfrogat features api name phovea server image malevo malevo server plugintype service ishybridtype false error npm failed with status code null npm failed with status code null error extra building npm failed with status code null exited with code any idea how the phovea product json looks like for an app slib
| 0
|
20,401
| 27,061,418,282
|
IssuesEvent
|
2023-02-13 20:06:02
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Release 33.0.0 of arrow/arrow-flight/parquet/parquet-derive (next release after 32.0.0)
|
development-process
|
Follow on from https://github.com/apache/arrow-rs/issues/3584
- Planned Release Candidate: 2023-02-10
- Planned Release and Publish to crates.io: 2023-02-13
Items (from [dev/release/README.md](https://github.com/apache/arrow-rs/blob/master/dev/release/README.md)):
- [x] PR to update version and CHANGELOG: https://github.com/apache/arrow-rs/pull/3686
- [ ] Release candidate created:
- [ ] Release candidate approved:
- [ ] Release to crates.io:
- [ ] Make ticket for next release
See full list here:
https://github.com/apache/arrow-rs/compare/32.0.0...master
cc @alamb @tustvold @viirya
|
1.0
|
Release 33.0.0 of arrow/arrow-flight/parquet/parquet-derive (next release after 32.0.0) - Follow on from https://github.com/apache/arrow-rs/issues/3584
- Planned Release Candidate: 2023-02-10
- Planned Release and Publish to crates.io: 2023-02-13
Items (from [dev/release/README.md](https://github.com/apache/arrow-rs/blob/master/dev/release/README.md)):
- [x] PR to update version and CHANGELOG: https://github.com/apache/arrow-rs/pull/3686
- [ ] Release candidate created:
- [ ] Release candidate approved:
- [ ] Release to crates.io:
- [ ] Make ticket for next release
See full list here:
https://github.com/apache/arrow-rs/compare/32.0.0...master
cc @alamb @tustvold @viirya
|
process
|
release of arrow arrow flight parquet parquet derive next release after follow on from planned release candidate planned release and publish to crates io items from pr to update version and changelog release candidate created release candidate approved release to crates io make ticket for next release see full list here cc alamb tustvold viirya
| 1
|
209,549
| 7,177,065,575
|
IssuesEvent
|
2018-01-31 12:18:50
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
Build warnings [-Wshift-overflow] with LLVM/icx (K_MEM_POOL_DEFINE)
|
area: Samples bug priority: low
|
**_Reported by Sharron LIU:_**
K_MEM_POOL_DEFINE with build warnings raised from LLVM/icx:
tests/kernel/mem_pool/*
```
/home/sharron/workspace/views/iot/zephyr/tests/kernel/mem_pool/test_mpool/src/pool.c:38:1: warning:
signed shift result (0x140000000) requires 34 bits to represent, but 'int'
only has 32 bits [-Wshift-overflow]
K_MEM_POOL_DEFINE(SECOND_POOL_ID, 16, 1024, 5, 4);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3274:9: note:
expanded from macro 'K_MEM_POOL_DEFINE'
+ _MPOOL_BITS_SIZE(maxsz, minsz, nmax)]; \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3242:2: note:
expanded from macro '_MPOOL_BITS_SIZE'
_MPOOL_LBIT_BYTES(maxsz, minsz, 15, n_max))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3223:7: note:
expanded from macro '_MPOOL_LBIT_BYTES'
4 * _MPOOL_LBIT_WORDS((n_max), l) : 0)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3217:3: note:
expanded from macro '_MPOOL_LBIT_WORDS'
(_MPOOL_LBIT_WORDS_UNCLAMPED(n_max, l) < 2 ? 0 \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3211:13: note:
expanded from macro '_MPOOL_LBIT_WORDS_UNCLAMPED'
((((n_max) << (2*(l))) + 31) / 32)
~~~~~~~ ^ ~~~~~~~
```
how-to-reproduce:
# source zephyr-env.sh
# export ZEPHYR_GCC_VARIANT=issm
# cd tests/kernel/mem_pool/test_mpool
# make pristine
# make BOARD=quark_se_c1000_devboard CC=icx
(Imported from Jira ZEP-2179)
|
1.0
|
Build warnings [-Wshift-overflow] with LLVM/icx (K_MEM_POOL_DEFINE) - **_Reported by Sharron LIU:_**
K_MEM_POOL_DEFINE with build warnings raised from LLVM/icx:
tests/kernel/mem_pool/*
```
/home/sharron/workspace/views/iot/zephyr/tests/kernel/mem_pool/test_mpool/src/pool.c:38:1: warning:
signed shift result (0x140000000) requires 34 bits to represent, but 'int'
only has 32 bits [-Wshift-overflow]
K_MEM_POOL_DEFINE(SECOND_POOL_ID, 16, 1024, 5, 4);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3274:9: note:
expanded from macro 'K_MEM_POOL_DEFINE'
+ _MPOOL_BITS_SIZE(maxsz, minsz, nmax)]; \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3242:2: note:
expanded from macro '_MPOOL_BITS_SIZE'
_MPOOL_LBIT_BYTES(maxsz, minsz, 15, n_max))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3223:7: note:
expanded from macro '_MPOOL_LBIT_BYTES'
4 * _MPOOL_LBIT_WORDS((n_max), l) : 0)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3217:3: note:
expanded from macro '_MPOOL_LBIT_WORDS'
(_MPOOL_LBIT_WORDS_UNCLAMPED(n_max, l) < 2 ? 0 \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/sharron/workspace/views/iot/zephyr/include/kernel.h:3211:13: note:
expanded from macro '_MPOOL_LBIT_WORDS_UNCLAMPED'
((((n_max) << (2*(l))) + 31) / 32)
~~~~~~~ ^ ~~~~~~~
```
how-to-reproduce:
# source zephyr-env.sh
# export ZEPHYR_GCC_VARIANT=issm
# cd tests/kernel/mem_pool/test_mpool
# make pristine
# make BOARD=quark_se_c1000_devboard CC=icx
(Imported from Jira ZEP-2179)
|
non_process
|
build warnings with llvm icx k mem pool define reported by sharron liu k mem pool define with build warnings raised from llvm icx tests kernel mem pool home sharron workspace views iot zephyr tests kernel mem pool test mpool src pool c warning signed shift result requires bits to represent but int only has bits k mem pool define second pool id home sharron workspace views iot zephyr include kernel h note expanded from macro k mem pool define mpool bits size maxsz minsz nmax home sharron workspace views iot zephyr include kernel h note expanded from macro mpool bits size mpool lbit bytes maxsz minsz n max home sharron workspace views iot zephyr include kernel h note expanded from macro mpool lbit bytes mpool lbit words n max l home sharron workspace views iot zephyr include kernel h note expanded from macro mpool lbit words mpool lbit words unclamped n max l home sharron workspace views iot zephyr include kernel h note expanded from macro mpool lbit words unclamped n max l how to reproduce source zephyr env sh export zephyr gcc variant issm cd tests kernel mem pool test mpool make pristine make board quark se devboard cc icx imported from jira zep
| 0
|
19,145
| 25,208,708,773
|
IssuesEvent
|
2022-11-14 00:11:14
|
Harry25R/CPOP2
|
https://api.github.com/repos/Harry25R/CPOP2
|
closed
|
data(Pathways) not imported
|
component:PreProcess_Frank
|
This is needed from the directPA package. Chat with Nick about how to do this.
|
1.0
|
data(Pathways) not imported - This is needed from the directPA package. Chat with Nick about how to do this.
|
process
|
data pathways not imported this is needed from the directpa package chat with nick about how to do this
| 1
|
3,816
| 6,800,636,126
|
IssuesEvent
|
2017-11-02 14:34:26
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
opened
|
Add a "default shortcut menu" to Processing algorithm description
|
Processing help question
|
I've suggested this in #2123 but I think it deserves its own feature request report and needs a decision now that processing refactoring is going on.
Some of the Processing algorithms have a default menu shortcut, in the Vector or Raster menu. It would be nice to add a section (when applicable) in the description to show that shortcut path.
The question is: where could we place that?
@ghtmtt @SrNetoChan @yjacolin
|
1.0
|
Add a "default shortcut menu" to Processing algorithm description - I've suggested this in #2123 but I think it deserves its own feature request report and needs a decision now that processing refactoring is going on.
Some of the Processing algorithms have a default menu shortcut, in the Vector or Raster menu. It would be nice to add a section (when applicable) in the description to show that shortcut path.
The question is: where could we place that?
@ghtmtt @SrNetoChan @yjacolin
|
process
|
add a default shortcut menu to processing algorithm description i ve suggested this in but i think it deserves its own feature request report and needs a decision now that processing refactoring is going on some of the processing algorithms have a default menu shortcut in vector or raster menu it could be nice to add a section when applicable in the description to show that shortcut path question is where could we place that ghtmtt srnetochan yjacolin
| 1
|
88,441
| 8,142,437,830
|
IssuesEvent
|
2018-08-21 07:36:57
|
kartoza/healthyrivers
|
https://api.github.com/repos/kartoza/healthyrivers
|
closed
|
Search
|
bug testing
|
# Bug report
1. Click on search icon
2. Under the "category" filter, select "translocated".
3. Click on "apply filter"
4. Click to open "sites(195)"
5. Select "amandel"
# Error
The only thing on screen is a navigation icon

# Expected behaviour
The map should not disappear when doing a search.
|
1.0
|
Search - # Bug report
1. Click on search icon
2. Under the "category" filter, select "translocated".
3. Click on "apply filter"
4. Click to open "sites(195)"
5. Select "amandel"
# Error
The only thing on screen is a navigation icon

# Expected behaviour
The map should not disappear when doing a search.
|
non_process
|
search bug report click on search icon under the category filter select translocated click on apply filter click to open sites select amandel error the only thing on screen is a navigation icon expected behaviour the map should not disappear when doing a search
| 0
|
38,783
| 10,241,447,362
|
IssuesEvent
|
2019-08-20 00:23:58
|
DynamoRIO/dynamorio
|
https://api.github.com/repos/DynamoRIO/dynamorio
|
opened
|
Compile error with gcc 8.2
|
Component-Build OpSys-Linux
|
All builds with this version are now failing with
```
FAILED: core/CMakeFiles/drdecode.dir/string.c.o
/usr/bin/cc -I/usr/local/google/home/hgreving/dynamorio/src/core/drlibc -I/usr/local/google/home/hgreving/dynamorio/src/core/arch/x86 -I/usr/local/google/home/hgreving/dynamorio/src/core/unix -I/usr/local/google/home/hgreving/dynamorio/src/core/arch -I/usr/local/google/home/hgreving/dynamorio/src/core/lib -I. -Iinclude/annotations -m32 -fno-strict-aliasing -fno-stack-protector -mpreferred-stack-boundary=2 -fvisibility=internal -std=gnu99 -g3 -fno-omit-frame-pointer -fno-builtin-strcmp -Wall -Werror -Wwrite-strings -Wno-unused-but-set-variable -DNOT_DYNAMORIO_CORE_PROPER -DSTANDALONE_DECODER -fPIC -MD -MT core/CMakeFiles/drdecode.dir/string.c.o -MF core/CMakeFiles/drdecode.dir/string.c.o.d -o core/CMakeFiles/drdecode.dir/string.c.o -c /usr/local/google/home/hgreving/dynamorio/src/core/string.c
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:176:1: error: ‘__memmove_chk’ alias between functions of incompatible types ‘void *(void *, const void *, size_t, size_t)’ {aka ‘void *(void *, const
void *, unsigned int, unsigned int)’} and ‘void *(void *, const void *, size_t)’ {aka ‘void *(void *, const void *, unsigned int)’} [-Werror=attribute-alias]
__memmove_chk(void *dst, const void *src, size_t n, size_t dst_len)
^~~~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:155:1: note: aliased declaration here
d_r_memmove(void *dst, const void *src, size_t n)
^~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:200:1: error: ‘__strncat_chk’ alias between functions of incompatible types ‘void *(char *, const char *, size_t, size_t)’ {aka ‘void *(char *, const
char *, unsigned int, unsigned int)’} and ‘char *(char *, const char *, size_t)’ {aka ‘char *(char *, const char *, unsigned int)’} [-Werror=attribute-alias]
__strncat_chk(char *dst, const char *src, size_t n, size_t dst_len)
^~~~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:134:1: note: aliased declaration here
d_r_strncat(char *dest, const char *src, size_t n)
^~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:188:1: error: ‘__strncpy_chk’ alias between functions of incompatible types ‘void *(char *, const char *, size_t, size_t)’ {aka ‘void *(char *, const
char *, unsigned int, unsigned int)’} and ‘char *(char *, const char *, size_t)’ {aka ‘char *(char *, const char *, unsigned int)’} [-Werror=attribute-alias]
__strncpy_chk(char *dst, const char *src, size_t n, size_t dst_len)
^~~~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:121:1: note: aliased declaration here
d_r_strncpy(char *dst, const char *src, size_t n)
^~~~~~~~~~~
cc1: all warnings being treated as errors
[24/1227] Building C object core/CMakeFiles/dynamorio_static_nohide.dir/fragment.c.o
ninja: build stopped: subcommand failed.
```
I have not tested whether this affects only the gcc version I am working with or any > 8.2.
|
1.0
|
Compile error with gcc 8.2 - All builds with this version are now failing with
```
FAILED: core/CMakeFiles/drdecode.dir/string.c.o
/usr/bin/cc -I/usr/local/google/home/hgreving/dynamorio/src/core/drlibc -I/usr/local/google/home/hgreving/dynamorio/src/core/arch/x86 -I/usr/local/google/home/hgreving/dynamorio/src/core/unix -I/usr/local/google/home/hgreving/dynamorio/src/core/arch -I/usr/local/google/home/hgreving/dynamorio/src/core/lib -I. -Iinclude/annotations -m32 -fno-strict-aliasing -fno-stack-protector -mpreferred-stack-boundary=2 -fvisibility=internal -std=gnu99 -g3 -fno-omit-frame-pointer -fno-builtin-strcmp -Wall -Werror -Wwrite-strings -Wno-unused-but-set-variable -DNOT_DYNAMORIO_CORE_PROPER -DSTANDALONE_DECODER -fPIC -MD -MT core/CMakeFiles/drdecode.dir/string.c.o -MF core/CMakeFiles/drdecode.dir/string.c.o.d -o core/CMakeFiles/drdecode.dir/string.c.o -c /usr/local/google/home/hgreving/dynamorio/src/core/string.c
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:176:1: error: ‘__memmove_chk’ alias between functions of incompatible types ‘void *(void *, const void *, size_t, size_t)’ {aka ‘void *(void *, const
void *, unsigned int, unsigned int)’} and ‘void *(void *, const void *, size_t)’ {aka ‘void *(void *, const void *, unsigned int)’} [-Werror=attribute-alias]
__memmove_chk(void *dst, const void *src, size_t n, size_t dst_len)
^~~~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:155:1: note: aliased declaration here
d_r_memmove(void *dst, const void *src, size_t n)
^~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:200:1: error: ‘__strncat_chk’ alias between functions of incompatible types ‘void *(char *, const char *, size_t, size_t)’ {aka ‘void *(char *, const
char *, unsigned int, unsigned int)’} and ‘char *(char *, const char *, size_t)’ {aka ‘char *(char *, const char *, unsigned int)’} [-Werror=attribute-alias]
__strncat_chk(char *dst, const char *src, size_t n, size_t dst_len)
^~~~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:134:1: note: aliased declaration here
d_r_strncat(char *dest, const char *src, size_t n)
^~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:188:1: error: ‘__strncpy_chk’ alias between functions of incompatible types ‘void *(char *, const char *, size_t, size_t)’ {aka ‘void *(char *, const
char *, unsigned int, unsigned int)’} and ‘char *(char *, const char *, size_t)’ {aka ‘char *(char *, const char *, unsigned int)’} [-Werror=attribute-alias]
__strncpy_chk(char *dst, const char *src, size_t n, size_t dst_len)
^~~~~~~~~~~~~
/usr/local/google/home/hgreving/dynamorio/src/core/string.c:121:1: note: aliased declaration here
d_r_strncpy(char *dst, const char *src, size_t n)
^~~~~~~~~~~
cc1: all warnings being treated as errors
[24/1227] Building C object core/CMakeFiles/dynamorio_static_nohide.dir/fragment.c.o
ninja: build stopped: subcommand failed.
```
I have not tested whether this affects only the gcc version I am working with or any > 8.2.
|
non_process
|
compile error with gcc all builds with this version are now failing with failed core cmakefiles drdecode dir string c o usr bin cc i usr local google home hgreving dynamorio src core drlibc i usr local google home hgreving dynamorio src core arch i usr local google home hgreving dynamorio src core unix i usr local google home hgreving dynamorio src core arch i usr local google home hgreving dynamorio src core lib i iinclude annotations fno strict aliasing fno stack protector mpreferred stack boundary fvisibility internal std fno omit frame pointer fno builtin strcmp wall werror wwrite strings wno unused but set variable dnot dynamorio core proper dstandalone decoder fpic md mt core cmakefiles drdecode dir string c o mf core cmakefiles drdecode dir string c o d o core cmakefiles drdecode dir string c o c usr local google home hgreving dynamorio src core string c usr local google home hgreving dynamorio src core string c error memmove chk alias between functions of incompatible types void void const void size t size t aka void void const void unsigned int unsigned int and void void const void size t aka void void const void unsigned int memmove chk void dst const void src size t n size t dst len usr local google home hgreving dynamorio src core string c note aliased declaration here d r memmove void dst const void src size t n usr local google home hgreving dynamorio src core string c error strncat chk alias between functions of incompatible types void char const char size t size t aka void char const char unsigned int unsigned int and char char const char size t aka char char const char unsigned int strncat chk char dst const char src size t n size t dst len usr local google home hgreving dynamorio src core string c note aliased declaration here d r strncat char dest const char src size t n usr local google home hgreving dynamorio src core string c error strncpy chk alias between functions of incompatible types void char const char size t size t aka void char const char unsigned int unsigned int and char char const char size t aka char char const char unsigned int strncpy chk char dst const char src size t n size t dst len usr local google home hgreving dynamorio src core string c note aliased declaration here d r strncpy char dst const char src size t n all warnings being treated as errors building c object core cmakefiles dynamorio static nohide dir fragment c o ninja build stopped subcommand failed i have not tested whether this affects only the gcc version i am working with or any
| 0
|
2,428
| 5,204,089,524
|
IssuesEvent
|
2017-01-24 14:44:04
|
AffiliateWP/AffiliateWP
|
https://api.github.com/repos/AffiliateWP/AffiliateWP
|
closed
|
Introduce a generic, Ajax-based batch processor
|
batch-processing enhancement Has PR
|
Scope in progress. Right now, we have a bunch of export and migration scripts that all rely on server-side step processing. Realistically, abstracting the processing step to a generic batch system has a lot of benefits in terms of efficiency, performance, and UX.
And for this glorious work, we get the opportunity to reuse the aforementioned batch processor for other potential intensive tasks like database upgrades, importers (!), etc.
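For illustration, a minimal client-side sketch of step-based Ajax batch processing (the endpoint, payload, and response shape are hypothetical, not AffiliateWP's actual API):
```ts
// Posts one "step" at a time to a hypothetical batch endpoint until the
// server reports completion, surfacing progress after each round trip.
async function runBatch(
  url: string,
  onProgress: (pct: number) => void,
): Promise<void> {
  let step = 1;
  let done = false;
  while (!done) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ step }),
    });
    // Hypothetical response shape: { done: boolean, percentage: number }
    const data: { done: boolean; percentage: number } = await res.json();
    onProgress(data.percentage);
    done = data.done;
    step += 1;
  }
}
```
Each round trip stays short, so the server never times out on a long-running job, and the UI can report progress between steps.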
|
1.0
|
Introduce a generic, Ajax-based batch processor - Scope in progress. Right now, we have a bunch of export and migration scripts that all rely on server-side step processing. Realistically, abstracting the processing step to a generic batch system has a lot of benefits in terms of efficiency, performance, and UX.
And for this glorious work, we get the opportunity to reuse the aforementioned batch processor for other potential intensive tasks like database upgrades, importers (!), etc.
|
process
|
introduce a generic ajax based batch processor scope in progress right now we have a bunch of export and migration scripts that all rely on server side step processing realistically abstracting the processing step to a generic batch system has a lot of benefits in terms of efficiency performance and ux and for this glorious work we get the opportunity to reuse the aforementioned batch processor for other potential intensive tasks like database upgrades importers etc
| 1
|
55,173
| 6,891,306,689
|
IssuesEvent
|
2017-11-22 16:38:10
|
dpopp07/bartop
|
https://api.github.com/repos/dpopp07/bartop
|
closed
|
Decide on login page UI
|
authentication design ui
|
Auth0 has (and recommends) a hosted login page that they allow us to customize. Our options are either to
1. Use that and add fields for metadata we want to store (like real name, etc.)
2. Write our own UI and set it up with the Auth0 API
I feel like Option 1 is preferred because they really recommend that, but let me know your thoughts. Here is the code given for the hosted login page, which we are allowed to customize:
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>Sign In with Auth0</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>
<body>
<!--[if IE 8]>
<script src="//cdnjs.cloudflare.com/ajax/libs/ie8/0.2.5/ie8.js"></script>
<![endif]-->
<!--[if lte IE 9]>
<script src="https://cdn.auth0.com/js/base64.js"></script>
<script src="https://cdn.auth0.com/js/es5-shim.min.js"></script>
<![endif]-->
<script src="https://cdn.auth0.com/js/lock/10.18/lock.min.js"></script>
<script>
// Decode utf8 characters properly
var config = JSON.parse(decodeURIComponent(escape(window.atob('@@config@@'))));
config.extraParams = config.extraParams || {};
var connection = config.connection;
var prompt = config.prompt;
var languageDictionary;
var language;
if (config.dict && config.dict.signin && config.dict.signin.title) {
languageDictionary = { title: config.dict.signin.title };
} else if (typeof config.dict === 'string') {
language = config.dict;
}
var loginHint = config.extraParams.login_hint;
var lock = new Auth0Lock(config.clientID, config.auth0Domain, {
auth: {
redirectUrl: config.callbackURL,
responseType: (config.internalOptions || {}).response_type ||
config.callbackOnLocationHash ? 'token' : 'code',
params: config.internalOptions
},
assetsUrl: config.assetsUrl,
allowedConnections: connection ? [connection] : null,
rememberLastLogin: !prompt,
language: language,
languageDictionary: languageDictionary,
theme: {
//logo: 'YOUR LOGO HERE',
//primaryColor: 'green'
},
prefill: loginHint ? { email: loginHint, username: loginHint } : null,
closable: false,
// uncomment if you want small buttons for social providers
// socialButtonStyle: 'small'
});
lock.show();
</script>
</body>
</html>
```
|
1.0
|
Decide on login page UI - Auth0 has (and recommends) a hosted login page that they allow us to customize. Our options are either to
1. Use that and add fields for metadata we want to store (like real name, etc.)
2. Write our own UI and set it up with the Auth0 API
I feel like Option 1 is preferred because they really recommend that, but let me know your thoughts. Here is the code given for the hosted login page, which we are allowed to customize:
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>Sign In with Auth0</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>
<body>
<!--[if IE 8]>
<script src="//cdnjs.cloudflare.com/ajax/libs/ie8/0.2.5/ie8.js"></script>
<![endif]-->
<!--[if lte IE 9]>
<script src="https://cdn.auth0.com/js/base64.js"></script>
<script src="https://cdn.auth0.com/js/es5-shim.min.js"></script>
<![endif]-->
<script src="https://cdn.auth0.com/js/lock/10.18/lock.min.js"></script>
<script>
// Decode utf8 characters properly
var config = JSON.parse(decodeURIComponent(escape(window.atob('@@config@@'))));
config.extraParams = config.extraParams || {};
var connection = config.connection;
var prompt = config.prompt;
var languageDictionary;
var language;
if (config.dict && config.dict.signin && config.dict.signin.title) {
languageDictionary = { title: config.dict.signin.title };
} else if (typeof config.dict === 'string') {
language = config.dict;
}
var loginHint = config.extraParams.login_hint;
var lock = new Auth0Lock(config.clientID, config.auth0Domain, {
auth: {
redirectUrl: config.callbackURL,
responseType: (config.internalOptions || {}).response_type ||
config.callbackOnLocationHash ? 'token' : 'code',
params: config.internalOptions
},
assetsUrl: config.assetsUrl,
allowedConnections: connection ? [connection] : null,
rememberLastLogin: !prompt,
language: language,
languageDictionary: languageDictionary,
theme: {
//logo: 'YOUR LOGO HERE',
//primaryColor: 'green'
},
prefill: loginHint ? { email: loginHint, username: loginHint } : null,
closable: false,
// uncomment if you want small buttons for social providers
// socialButtonStyle: 'small'
});
lock.show();
</script>
</body>
</html>
```
|
non_process
|
decide on login page ui has and recommends a hosted login page that they allow us to customize our options are either to use that and add fields for metadata we want to store like real name etc write our own ui and set it up with the api i feel like option is preferred because they really recommend that but let me know your thoughts here is the code given for the hosted login page that we are allowed to customize html sign in with script src script src script src decode characters properly var config json parse decodeuricomponent escape window atob config config extraparams config extraparams var connection config connection var prompt config prompt var languagedictionary var language if config dict config dict signin config dict signin title languagedictionary title config dict signin title else if typeof config dict string language config dict var loginhint config extraparams login hint var lock new config clientid config auth redirecturl config callbackurl responsetype config internaloptions response type config callbackonlocationhash token code params config internaloptions assetsurl config assetsurl allowedconnections connection null rememberlastlogin prompt language language languagedictionary languagedictionary theme logo your logo here primarycolor green prefill loginhint email loginhint username loginhint null closable false uncomment if you want small buttons for social providers socialbuttonstyle small lock show
| 0
|
21,978
| 30,470,773,052
|
IssuesEvent
|
2023-07-17 13:30:58
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
[MLv2] [Bug] Join `fields` generate unexpected field references
|
.metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
When selecting just a few columns from a join RHS table, MLv2 produces field references that can't be processed by the BE. For example, if we join the sample Orders table with the Products table, and only pick Products' ID and Title columns, here's how joins will look in MLv1 and MLv2:
### MLv1
```js
{
alias: "Products",
"source-table": 1,
condition: [ ... ],
fields: [
["field", 5, { "join-alias": "Products" } ],
["field", 1, { "join-alias": "Products" } ],
]
}
```
### MLv2
```js
{
alias: "Products",
"source-table": 1,
condition: [ ... ],
fields: [
["field", 5, { "base-type": "type/BigInteger" } ],
["field", 12, { "base-type": "type/Text" } ],
]
}
```
Notice that the `join-alias` option is missing in MLv2, which has `base-type` options instead.
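Presumably the expected MLv2 output would carry both options; a sketch, not confirmed by this issue:
```ts
// Expected field references: join-alias retained alongside base-type.
const expectedFields = [
  ["field", 5, { "join-alias": "Products", "base-type": "type/BigInteger" }],
  ["field", 12, { "join-alias": "Products", "base-type": "type/Text" }],
];
```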
|
1.0
|
[MLv2] [Bug] Join `fields` generate unexpected field references - When selecting just a few columns from a join RHS table, MLv2 produces field references that can't be processed by the BE. For example, if we join the sample Orders table with the Products table, and only pick Products' ID and Title columns, here's how joins will look in MLv1 and MLv2:
### MLv1
```js
{
alias: "Products",
"source-table": 1,
condition: [ ... ],
fields: [
["field", 5, { "join-alias": "Products" } ],
["field", 1, { "join-alias": "Products" } ],
]
}
```
### MLv2
```js
{
alias: "Products",
"source-table": 1,
condition: [ ... ],
fields: [
["field", 5, { "base-type": "type/BigInteger" } ],
["field", 12, { "base-type": "type/Text" } ],
]
}
```
Notice that the `join-alias` option is missing in MLv2, which has `base-type` options instead.
|
process
|
join fields generate unexpected field references when selecting just a few columns from a join rhs table produces field references that can t be processed by the be for example if we join the sample orders table with the products table and only pick products id and title columns here s how joins will look in and js alias products source table condition fields js alias products source table condition fields notice that the join alias option is missing in which has base type options instead
| 1
|
22,243
| 30,795,443,795
|
IssuesEvent
|
2023-07-31 19:29:31
|
gsoft-inc/ov-igloo-ui
|
https://api.github.com/repos/gsoft-inc/ov-igloo-ui
|
closed
|
[Feature Request]: Add infinite scroll capabilities to combobox
|
in backlog in process
|
### Component that this feature request involves
combobox
### Is your feature request related to a problem? Please describe
When we don't want to load all options at once because there's too much
### Describe the solution you'd like
An optional infinite scroll feature would allow us to iterate through a big list without having to load all results at once and load more results as needed.
### Describe alternatives you've considered
A custom component was made instead, calling LoadMore(...) if HasMore(...) when the user gets near the end of the options.
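A minimal sketch of that scroll-trigger logic (the `hasMore`/`loadMore` names follow the report; the 80% threshold is an assumption):
```ts
// Calls loadMore() when the user scrolls near the end of the option list.
function onOptionListScroll(
  el: HTMLElement,
  hasMore: () => boolean,
  loadMore: () => Promise<void>,
): void {
  const nearEnd = el.scrollTop + el.clientHeight >= el.scrollHeight * 0.8;
  if (nearEnd && hasMore()) {
    void loadMore(); // appends the next page of options
  }
}
```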
### Additional context
_No response_
|
1.0
|
[Feature Request]: Add infinite scroll capabilities to combobox - ### Component that this feature request involves
combobox
### Is your feature request related to a problem? Please describe
When we don't want to load all options at once because there's too much
### Describe the solution you'd like
An optional infinite scroll feature would allow us to iterate through a big list without having to load all results at once and load more results as needed.
### Describe alternatives you've considered
A custom component was made instead, calling LoadMore(...) if HasMore(...) when the user gets near the end of the options.
### Additional context
_No response_
|
process
|
add infinite scroll capabilities to combobox component that this feature request involves combobox is your feature request related to a problem please describe when we don t want to load all options at once because there s too much describe the solution you d like an optional infinite scroll feature would allow us to iterate through a big list without having to load all results at once and load more results as needed describe alternatives you ve considered a custom component was made instead calling a loadmore if it hasmore when the user gets near the end of options additional context no response
| 1
|
765,601
| 26,853,315,123
|
IssuesEvent
|
2023-02-03 12:48:21
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
compute.client_library.snippets.tests.test_create_vm: test_create_from_snapshot failed
|
priority: p2 type: bug api: compute samples flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 0caa6618920c788cfece5d30e840b19c122ba426
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/25348d67-981f-497d-9ad3-a77f82e01436), [Sponge](http://sponge2/25348d67-981f-497d-9ad3-a77f82e01436)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/compute/client_library/snippets/tests/test_create_vm.py", line 96, in snapshot
op = snapshot_client.delete_unary(project=PROJECT, snapshot=snapshot.name)
File "/workspace/compute/client_library/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/compute_v1/services/snapshots/client.py", line 496, in delete_unary
response = rpc(
File "/workspace/compute/client_library/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
File "/workspace/compute/client_library/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
File "/workspace/compute/client_library/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/compute_v1/services/snapshots/transports/rest.py", line 493, in __call__
raise core_exceptions.from_http_response(response)
google.api_core.exceptions.ServiceUnavailable: 503 DELETE https://compute.googleapis.com/compute/v1/projects/python-docs-samples-tests-py39/global/snapshots/test-snap-80d20362f7: Internal error. Please try again or contact Google Support. (Code: '5EF28BA1FB6D8.A3C4043.D9097FC8')</pre></details>
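Since the failure is a transient 503 from the snapshots API, one possible mitigation (a sketch only, not necessarily what the samples repo adopted) is to wrap the cleanup call in `google.api_core.retry.Retry` so that only `ServiceUnavailable` errors are retried:
```python
from google.api_core import exceptions, retry

# Retry only transient 503s, with exponential backoff.
transient_retry = retry.Retry(
    predicate=retry.if_exception_type(exceptions.ServiceUnavailable),
    initial=1.0, maximum=32.0, multiplier=2.0,
)

def delete_snapshot(snapshot_client, project, name):
    # snapshot_client is assumed to be a compute_v1 SnapshotsClient
    wrapped = transient_retry(
        lambda: snapshot_client.delete_unary(project=project, snapshot=name)
    )
    return wrapped()
```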
|
1.0
|
compute.client_library.snippets.tests.test_create_vm: test_create_from_snapshot failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 0caa6618920c788cfece5d30e840b19c122ba426
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/25348d67-981f-497d-9ad3-a77f82e01436), [Sponge](http://sponge2/25348d67-981f-497d-9ad3-a77f82e01436)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/compute/client_library/snippets/tests/test_create_vm.py", line 96, in snapshot
op = snapshot_client.delete_unary(project=PROJECT, snapshot=snapshot.name)
File "/workspace/compute/client_library/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/compute_v1/services/snapshots/client.py", line 496, in delete_unary
response = rpc(
File "/workspace/compute/client_library/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
File "/workspace/compute/client_library/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
File "/workspace/compute/client_library/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/compute_v1/services/snapshots/transports/rest.py", line 493, in __call__
raise core_exceptions.from_http_response(response)
google.api_core.exceptions.ServiceUnavailable: 503 DELETE https://compute.googleapis.com/compute/v1/projects/python-docs-samples-tests-py39/global/snapshots/test-snap-80d20362f7: Internal error. Please try again or contact Google Support. (Code: '5EF28BA1FB6D8.A3C4043.D9097FC8')</pre></details>
|
non_process
|
compute client library snippets tests test create vm test create from snapshot failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output traceback most recent call last file workspace compute client library snippets tests test create vm py line in snapshot op snapshot client delete unary project project snapshot snapshot name file workspace compute client library nox py lib site packages google cloud compute services snapshots client py line in delete unary response rpc file workspace compute client library nox py lib site packages google api core gapic method py line in call return wrapped func args kwargs file workspace compute client library nox py lib site packages google api core grpc helpers py line in error remapped callable return callable args kwargs file workspace compute client library nox py lib site packages google cloud compute services snapshots transports rest py line in call raise core exceptions from http response response google api core exceptions serviceunavailable delete internal error please try again or contact google support code
| 0
|
68,821
| 3,292,887,106
|
IssuesEvent
|
2015-10-30 16:30:48
|
PowerPointLabs/powerpointlabs
|
https://api.github.com/repos/PowerPointLabs/powerpointlabs
|
closed
|
Remove 'recreate animation' button
|
Feature.AutoAnimate Priority.Low status.releaseCandidate
|
Removing it simplifies the user experience, although it makes things slightly inconvenient. Users will have to delete the created slide and re-create the animation.
|
1.0
|
Remove 'recreate animation' button - Removing it simplifies the user experience, although it makes things slightly inconvenient. Users will have to delete the created slide and re-create the animation.
|
non_process
|
remove recreate animation button removing it simplifies the user experience although it makes things slightly inconvenient users will have to delete the created slide and re create the animation
| 0
|
199,344
| 22,693,307,110
|
IssuesEvent
|
2022-07-05 01:10:50
|
LalithK90/prisonManagement
|
https://api.github.com/repos/LalithK90/prisonManagement
|
opened
|
CVE-2022-21363 (Medium) detected in mysql-connector-java-8.0.19.jar
|
security vulnerability
|
## CVE-2022-21363 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-8.0.19.jar</b></p></summary>
<p>JDBC Type 4 driver for MySQL</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /20210501165408_JLXWCQ/downloadResource_NMVCOW/20210501165443/mysql-connector-java-8.0.19.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-8.0.19.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/LalithK90/prisonManagement/commit/308974356a217e6fe93223a6aa51989a2e2bab4e">308974356a217e6fe93223a6aa51989a2e2bab4e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.27 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.1 Base Score 6.6 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H).
<p>Publish Date: 2022-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21363>CVE-2022-21363</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-g76j-4cxx-23h9">https://github.com/advisories/GHSA-g76j-4cxx-23h9</a></p>
<p>Release Date: 2022-01-19</p>
<p>Fix Resolution: mysql:mysql-connector-java:8.0.28</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-21363 (Medium) detected in mysql-connector-java-8.0.19.jar - ## CVE-2022-21363 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-8.0.19.jar</b></p></summary>
<p>JDBC Type 4 driver for MySQL</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /20210501165408_JLXWCQ/downloadResource_NMVCOW/20210501165443/mysql-connector-java-8.0.19.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-8.0.19.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/LalithK90/prisonManagement/commit/308974356a217e6fe93223a6aa51989a2e2bab4e">308974356a217e6fe93223a6aa51989a2e2bab4e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.27 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.1 Base Score 6.6 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H).
<p>Publish Date: 2022-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21363>CVE-2022-21363</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-g76j-4cxx-23h9">https://github.com/advisories/GHSA-g76j-4cxx-23h9</a></p>
<p>Release Date: 2022-01-19</p>
<p>Fix Resolution: mysql:mysql-connector-java:8.0.28</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in mysql connector java jar cve medium severity vulnerability vulnerable library mysql connector java jar jdbc type driver for mysql library home page a href path to dependency file build gradle path to vulnerable library jlxwcq downloadresource nmvcow mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr h ui n s u c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mysql mysql connector java step up your open source security game with mend
| 0
|
403,567
| 27,423,991,626
|
IssuesEvent
|
2023-03-01 18:47:33
|
k3s-io/k3s
|
https://api.github.com/repos/k3s-io/k3s
|
closed
|
k3s with private registry
|
kind/documentation
|
<!-- Thanks for helping us to improve k3s! We welcome all bug reports. Please fill out each area of the template so we can better help you. ***You can delete this message portion of the bug report.*** -->
**Version:**
k3s version v0.8.1 (d116e74a)
and k3s version v1.18.2+k3s1 (698e444a)
**K3s arguments:**
I ran k3s with docker compose as described in the manual with a minor change concerning TLS
```
version: '3.2'
services:
server:
image: rancher/k3s:latest
command: server --disable-agent --tls-san 192.168.2.110
environment:
- K3S_CLUSTER_SECRET=somethingtotallyrandom
- K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
- K3S_KUBECONFIG_MODE=666
volumes:
- k3s-server:/var/lib/rancher/k3s
# get the kubeconfig file
- .:/output
- ./registries.yaml:/etc/rancher/k3s/registries.yaml
ports:
- 192.168.2.110:6443:6443
node:
image: rancher/k3s:latest
volumes:
- ./registries.yaml:/etc/rancher/k3s/registries.yaml
tmpfs:
- /run
- /var/run
privileged: true
environment:
- K3S_URL=https://server:6443
- K3S_CLUSTER_SECRET=somethingtotallyrandom
ports:
- 31000-32000:31000-32000
volumes:
k3s-server: {}
```
My registries.yaml file looks as follows
```
mirrors:
docker.io:
endpoint:
- "http://192.168.2.110:5055"
```
my private insecure docker registry is defined as
```
version: '3'
services:
registry:
image: registry:2
ports:
- 192.168.2.110:5055:5000
```
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The problem is that, even though registries.yaml is written in accordance with the manual, I cannot pull Docker images from a private insecure registry.
**To Reproduce**
<!-- Steps to reproduce the behavior: -->
Start the registry and the k3s cluster with docker-compose up
Execute
e.g.
```
docker exec -it $(docker ps |grep "k3s agent" |awk -F\ '{print $1}') crictl pull 192.168.2.110:5055/bla
FATA[2020-05-16T08:47:46.093261937Z] pulling image failed: rpc error: code = Unknown desc = failed to resolve image "192.168.2.110:5055/bla:latest": no available registry endpoint: failed to do request: Head https://192.168.2.110:5055/v2/bla/manifests/latest: http: server gave HTTP response to HTTPS client
```
(reproducible independently of whether image `bla` exists)
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
the image is pulled
**Actual behavior**
<!-- A clear and concise description of what actually happened. -->
`FATA[2020-05-16T08:47:46.093261937Z] pulling image failed: rpc error: code = Unknown desc = failed to resolve image "192.168.2.110:5055/bla:latest": no available registry endpoint: failed to do request: Head https://192.168.2.110:5055/v2/bla/manifests/latest: http: server gave HTTP response to HTTPS client`
**Additional context / logs**
<!-- Add any other context and/or logs about the problem here. -->
`docker exec -it $(docker ps |grep "k3s agent" |awk -F\ '{print $1}') ctr image pull --plain-http 192.168.2.110:5055/bla` works like charm (to be reproducible image needs to be pushed to registry first)
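One thing worth noting: the `mirrors` keys in registries.yaml are matched against the registry host in the image reference, so a `docker.io` entry alone does not cover pulls addressed directly to `192.168.2.110:5055`. A minimal sketch of generating a config that also covers the private host (assuming PyYAML is available; the exact endpoint semantics should be checked against the k3s docs):
```python
import yaml  # PyYAML

# registries.yaml entries are keyed by the registry host as it appears
# in image references; docker.io alone does not match 192.168.2.110:5055/bla.
config = {
    "mirrors": {
        "docker.io": {"endpoint": ["http://192.168.2.110:5055"]},
        "192.168.2.110:5055": {"endpoint": ["http://192.168.2.110:5055"]},
    }
}
print(yaml.safe_dump(config, default_flow_style=False))
```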
|
1.0
|
k3s with private registry - <!-- Thanks for helping us to improve k3s! We welcome all bug reports. Please fill out each area of the template so we can better help you. ***You can delete this message portion of the bug report.*** -->
**Version:**
k3s version v0.8.1 (d116e74a)
and k3s version v1.18.2+k3s1 (698e444a)
**K3s arguments:**
I ran k3s with docker compose as described in the manual with a minor change concerning TLS
```
version: '3.2'
services:
server:
image: rancher/k3s:latest
command: server --disable-agent --tls-san 192.168.2.110
environment:
- K3S_CLUSTER_SECRET=somethingtotallyrandom
- K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
- K3S_KUBECONFIG_MODE=666
volumes:
- k3s-server:/var/lib/rancher/k3s
# get the kubeconfig file
- .:/output
- ./registries.yaml:/etc/rancher/k3s/registries.yaml
ports:
- 192.168.2.110:6443:6443
node:
image: rancher/k3s:latest
volumes:
- ./registries.yaml:/etc/rancher/k3s/registries.yaml
tmpfs:
- /run
- /var/run
privileged: true
environment:
- K3S_URL=https://server:6443
- K3S_CLUSTER_SECRET=somethingtotallyrandom
ports:
- 31000-32000:31000-32000
volumes:
k3s-server: {}
```
My registries.yaml file looks as follows
```
mirrors:
docker.io:
endpoint:
- "http://192.168.2.110:5055"
```
my private insecure docker registry is defined as
```
version: '3'
services:
registry:
image: registry:2
ports:
- 192.168.2.110:5055:5000
```
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The problem is that, even though registries.yaml is written in accordance with the manual, I cannot pull Docker images from a private insecure registry.
**To Reproduce**
<!-- Steps to reproduce the behavior: -->
Start the registry and the k3s cluster with docker-compose up
Execute
e.g.
```
docker exec -it $(docker ps |grep "k3s agent" |awk -F\ '{print $1}') crictl pull 192.168.2.110:5055/bla
FATA[2020-05-16T08:47:46.093261937Z] pulling image failed: rpc error: code = Unknown desc = failed to resolve image "192.168.2.110:5055/bla:latest": no available registry endpoint: failed to do request: Head https://192.168.2.110:5055/v2/bla/manifests/latest: http: server gave HTTP response to HTTPS client
```
(reproducible independently of whether image `bla` exists)
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
the image is pulled
**Actual behavior**
<!-- A clear and concise description of what actually happened. -->
`FATA[2020-05-16T08:47:46.093261937Z] pulling image failed: rpc error: code = Unknown desc = failed to resolve image "192.168.2.110:5055/bla:latest": no available registry endpoint: failed to do request: Head https://192.168.2.110:5055/v2/bla/manifests/latest: http: server gave HTTP response to HTTPS client`
**Additional context / logs**
<!-- Add any other context and/or logs about the problem here. -->
`docker exec -it $(docker ps |grep "k3s agent" |awk -F\ '{print $1}') ctr image pull --plain-http 192.168.2.110:5055/bla` works like charm (to be reproducible image needs to be pushed to registry first)
|
non_process
|
with private registry version version and version arguments i ran with docker compose as described in the manual with a minor change concerning tls version services server image rancher latest command server disable agent tls san environment cluster secret somethingtotallyrandom kubeconfig output output kubeconfig yaml kubeconfig mode volumes server var lib rancher get the kubeconfig file output registries yaml etc rancher registries yaml ports node image rancher latest volumes registries yaml etc rancher registries yaml tmpfs run var run privileged true environment url cluster secret somethingtotallyrandom ports volumes server my registry yaml file looks as follows mirrors docker io endpoint my private insecure docker registry is defined as version services registry image registry ports describe the bug the problem is that even though the registry yaml is in accordance to the manual i can not pull docker images from a private insecure registry to reproduce start the registry and the cluster with docker compose up execute e g docker exec it docker ps grep agent awk f print crictl pull bla fata pulling image failed rpc error code unknown desc failed to resolve image bla latest no available registry endpoint failed to do request head http server gave http response to https client reproducible independent wether image bla exists expected behavior the image is pulled actual behavior fata pulling image failed rpc error code unknown desc failed to resolve image bla latest no available registry endpoint failed to do request head http server gave http response to https client additional context logs docker exec it docker ps grep agent awk f print ctr image pull plain http bla works like charm to be reproducible image needs to be pushed to registry first
| 0
|
15,280
| 19,271,442,668
|
IssuesEvent
|
2021-12-10 06:13:48
|
DSE511-Project3-Team/DSE511-Project-3-Code-Repo
|
https://api.github.com/repos/DSE511-Project3-Team/DSE511-Project-3-Code-Repo
|
closed
|
Consolidate the preprocessing into one .py file for easier use.
|
Preprocess
|
Consolidate the preprocessing into one .py file for easier use.
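A minimal sketch of what such a consolidated module could look like (all function names are hypothetical placeholders, assuming pandas-based preprocessing):
```python
# preprocess.py -- single entry point for all preprocessing steps
import sys

import pandas as pd

def load_data(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def clean(df: pd.DataFrame) -> pd.DataFrame:
    return df.dropna().drop_duplicates()

def preprocess(path: str) -> pd.DataFrame:
    """Run the full pipeline: load, then clean."""
    return clean(load_data(path))

if __name__ == "__main__":
    print(preprocess(sys.argv[1]).head())
```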
|
1.0
|
Consolidate the preprocessing into one .py file for easier use. - Consolidate the preprocessing into one .py file for easier use.
|
process
|
consolidate the preprocessing into one py file for easier use consolidate the preprocessing into one py file for easier use
| 1
|
342,736
| 10,321,086,535
|
IssuesEvent
|
2019-08-30 23:12:28
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Need proper fix for pkg/master timeout
|
kind/bug kind/flake lifecycle/rotten priority/important-longterm sig/api-machinery
|
/kind bug
/kind flake
**What happened**:
2 months ago the pkg/master test reliably took less than 60 seconds to complete.
It now reliably takes 90 seconds on a decently powerful box and hits the 5-minute timeout on the CI boxes.
Please see #59685 and #59441.
As @mikedanese notes :-
The Validate calls into the vendored go-openapi library are what make the test slow:
https://github.com/kubernetes/kubernetes/blob/master/pkg/master/master_openapi_test.go#L91
We should probably do more perf and send a patch to upstream.
**What you expected to happen**:
Someone needs to fully investigate and fix the issue causing the significant slowdown of this test.
**How to reproduce it (as minimally and precisely as possible)**:
Run the test and note how long it takes.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
|
1.0
|
Need proper fix for pkg/master timeout - /kind bug
/kind flake
**What happened**:
2 months ago the pkg/master test reliably took less than 60 seconds to complete.
It now reliably takes 90 seconds on a decently powerful box and hits the 5-minute timeout on the CI boxes.
Please see #59685 and #59441.
As @mikedanese notes :-
The Validate calls into the vendored go-openapi library are what make the test slow:
https://github.com/kubernetes/kubernetes/blob/master/pkg/master/master_openapi_test.go#L91
We should probably do more perf and send a patch to upstream.
**What you expected to happen**:
Someone needs to fully investigate and fix the issue causing the significant slowdown of this test.
**How to reproduce it (as minimally and precisely as possible)**:
Run the test and note how long it takes.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
|
non_process
|
need proper fix for pkg master timeout kind bug kind flake what happened months ago the pkg master test was reliably taking less the seconds to complete it now reliably takes seconds on decently powerful box and times out a minute timeout on the ci boxes please see and as mikedanese notes the validate calls to the vendored go openapi library are which make the test slow we should probably do more perf and send a patch to upstream what you expected to happen someone needs to fully investigate the fix the issue causing the significant slow down of this test how to reproduce it as minimally and precisely as possible run the test and note how long it takes anything else we need to know environment kubernetes version use kubectl version cloud provider or hardware configuration os e g from etc os release kernel e g uname a install tools others
| 0
|
30,488
| 5,807,564,118
|
IssuesEvent
|
2017-05-04 08:13:26
|
axemclion/IndexedDBShim
|
https://api.github.com/repos/axemclion/IndexedDBShim
|
closed
|
PouchDB Example
|
Awaiting information Documentation
|
On your example page, please mention that PouchDB no longer requires this shim on Android, since PouchDB now uses WebSQL when the database name includes the WebSQL adapter prefix, like this: 'websql://[my db]'.
|
1.0
|
PouchDB Example - On your example page, please mention that PouchDB no longer requires this shim on Android, since PouchDB now uses WebSQL when the database name includes the WebSQL adapter prefix, like this: 'websql://[my db]'.
|
non_process
|
pouchdb example on your example page please mention that pouchdb no longer requires this shim on android since pouchdb now uses websql if the database name includes the websql adapter in the name like this websql
| 0
|
16,160
| 3,508,970,700
|
IssuesEvent
|
2016-01-08 20:22:06
|
czerkies/location
|
https://api.github.com/repos/czerkies/location
|
reopened
|
Improvements
|
Ok Test Todo
|
- [x] Email on sign-up
- [x] Automatically log the user in after sign-up.
- [x] Change the password.
- [x] Allow unsubscribing from the newsletter.
|
1.0
|
Improvements - - [x] Email on sign-up
- [x] Automatically log the user in after sign-up.
- [x] Change the password.
- [x] Allow unsubscribing from the newsletter.
|
non_process
|
improvements email on sign up automatically log the user in after sign up change the password allow unsubscribing from the newsletter
| 0
|
6,795
| 9,934,203,735
|
IssuesEvent
|
2019-07-02 14:00:39
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
opened
|
Module class and name of process functions not properly defined
|
priority/important topic/engine topic/processes type/bug
|
The `process_type` of a process function is meant to be set to the corresponding entry point string, based on its module and name if it exists, or otherwise the concatenation of module and function name. When the function is run, the process node created gets the value for `process_type` by calling `build_process_type`, which tries to do exactly this. However, the `FunctionProcess` class that is dynamically built for the decorated function does not correctly inherit the `__module__` and `__name__` attributes. Therefore, for example, the function `from aiida_quantumespresso.workflows.functions.create_kpoints_from_distance import create_kpoints_from_distance`, which should get the `process_type` of `'aiida_quantumespresso.workflows.functions.create_kpoints_from_distance.create_kpoints_from_distance'`, instead gets the generic `'abc.create_kpoints_from_distance'`
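For context, classes built dynamically with `type()` do not automatically carry over the decorated function's module, which is roughly what goes wrong here. A toy sketch of the kind of fix involved (illustrative only, not AiiDA's actual code):
```python
def build_function_process(func):
    """Dynamically build a process class that keeps the decorated
    function's module and name, so a build_process_type-style lookup
    can derive 'module.function' instead of a generic prefix."""
    cls = type(func.__name__, (object,), {})
    cls.__module__ = func.__module__  # explicitly carry over the module
    cls.__name__ = func.__name__      # and the function name
    return cls

def create_kpoints_from_distance():
    pass

proc = build_function_process(create_kpoints_from_distance)
print(f"{proc.__module__}.{proc.__name__}")
# e.g. '__main__.create_kpoints_from_distance' when run as a script
```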
|
1.0
|
Module class and name of process functions not properly defined - The `process_type` of a process function is meant to be set to the corresponding entry point string, based on its module and name if it exists, or otherwise the concatenation of module and function name. When the function is run, the process node created gets the value for `process_type` by calling `build_process_type`, which tries to do exactly this. However, the `FunctionProcess` class that is dynamically built for the decorated function does not correctly inherit the `__module__` and `__name__` attributes. Therefore, for example, the function `from aiida_quantumespresso.workflows.functions.create_kpoints_from_distance import create_kpoints_from_distance`, which should get the `process_type` of `'aiida_quantumespresso.workflows.functions.create_kpoints_from_distance.create_kpoints_from_distance'`, instead gets the generic `'abc.create_kpoints_from_distance'`
|
process
|
module class and name of process functions not properly defined the process type of a process function is meant to be set to the corresponding entry point string based on its module and name if it exists or otherwise the concatenation of module and function name when the function is ran the process node created will get the value for process type by calling build process type which tries to do exactly this however the functionprocess class that is dynamically built for the decorated function does not correctly inherit the module and name attribute therefore for example for the function from aiida quantumespresso workflows functions create kpoints from distance import create kpoints from distance which should be getting the process type of aiida quantumespresso workflows functions create kpoints from distance create kpoints from distance instead gets the generic abc create kpoints from distance
| 1
|
17,017
| 22,389,150,914
|
IssuesEvent
|
2022-06-17 05:19:48
|
PyCQA/pylint
|
https://api.github.com/repos/PyCQA/pylint
|
opened
|
Multiprocessing is not very efficient
|
High Effort topic-multiprocessing
|
### Bug description
See: https://github.com/PyCQA/pylint/issues/6965#issuecomment-1158128082
> I've noticed that spawning the child processes is extremely slow - about 1-2 processes per second. So, 60-way parallelism generally means that there is about a 30 s delay while things spin up, then almost no time spent doing work. This slow startup time was happening despite my CPUs being about 70% idle. This slow startup time presumably explains the bug I saw about how ineffectual multiprocessing was for pylint. You would need a huge batch of files to justify doing significant multi-processing.
> I don't know whether the slow startup is a bug in multiprocessing or in pylint. I just measured some of our presubmits and 60-way parallelism more than doubles the time that they take, from ~30 to ~70 s.
Possible solution in https://github.com/PyCQA/pylint/issues/6965#issuecomment-1158158384
> We would need a complete rewrite of the parallel code. We currently spin up a new PyLinter class for every job. That is taking way too long (probably). But I'm not sure what the best approach is to create a PyLinterLite...
I personally think that a refactor of PyLinter will be required, and we'd have to classify checkers to know whether they can benefit from multiprocessing or not. ``duplicate-code`` or ``cyclic import`` won't, for example, as they need information on the imports of a file. Some checks are file-based, like ``unused-private-member`` (the scope is a single class) or ``while-used`` (it just has to check whether a ``while`` node exists), and can benefit from multiprocessing if done at the right time.
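As a sketch of one general direction (not pylint's actual API), expensive per-worker state can be built once in a `multiprocessing.Pool` initializer instead of once per job, which avoids paying the setup cost for every file:
```python
import multiprocessing as mp

_linter = None  # one expensive object per worker process

def build_linter():
    # stand-in for an expensive PyLinter-like constructor
    return object()

def _init_worker():
    global _linter
    _linter = build_linter()  # built once per worker, not once per job

def lint_file(path):
    # pretend to lint using the worker's shared linter instance
    return (path, id(_linter))

if __name__ == "__main__":
    files = [f"file_{i}.py" for i in range(8)]
    with mp.Pool(processes=4, initializer=_init_worker) as pool:
        for path, linter_id in pool.map(lint_file, files):
            print(path, linter_id)
```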
### Configuration
```ini
We should use a full configuration for this, with a lot to parse, as we're probably parsing the configuration in each fork and this would make it apparent.
```
### Command used
```shell
``lint.Run(['--jobs', '42'] + argv)``
```
### Pylint output
```shell
NA
```
### Expected behavior
Run time decrease with more core (when there is more files to lint than cores available).
### Pylint version
```shell
2.14.2
```
### OS / Environment
_No response_
### Additional dependencies
_No response_
|
1.0
|
Multiprocessing is not very efficient - ### Bug description
See: https://github.com/PyCQA/pylint/issues/6965#issuecomment-1158128082
> I've noticed that spawning the child processes is extremely slow - about 1-2 processes per second. So, 60-way parallelism generally means that there is about a 30 s delay while things spin up, then almost no time spent doing work. This slow startup time was happening despite my CPUs being about 70% idle. This slow startup time presumably explains the bug I saw about how ineffectual multiprocessing was for pylint. You would need a huge batch of files to justify doing significant multi-processing.
> I don't know whether the slow startup is a bug in multiprocessing or in pylint. I just measured some of our presubmits and 60-way parallelism more than doubles the time that they take, from ~30 to ~70 s.
Possible solution in https://github.com/PyCQA/pylint/issues/6965#issuecomment-1158158384
> We would need a complete rewrite of the parallel code. We currently spin up a new PyLinter class for every job. That is taking way too long (probably). But I'm not sure what the best approach is to create a PyLinterLite...
I personally think that a refactor of PyLinter will be required, and we'd have to classify checkers to know whether they can benefit from multiprocessing or not. ``duplicate-code`` or ``cyclic import`` won't, for example, as they need information on the imports of a file. Some checks are file-based, like ``unused-private-member`` (the scope is a single class) or ``while-used`` (it just has to check whether a ``while`` node exists), and can benefit from multiprocessing if done at the right time.
### Configuration
```ini
We should use a full configuration for this, with a lot to parse, as we're probably parsing the configuration in each fork and this would make it apparent.
```
### Command used
```shell
``lint.Run(['--jobs', '42'] + argv)``
```
### Pylint output
```shell
NA
```
### Expected behavior
Run time decrease with more core (when there is more files to lint than cores available).
### Pylint version
```shell
2.14.2
```
### OS / Environment
_No response_
### Additional dependencies
_No response_
|
process
|
multiprocessing is not very efficient bug description see i ve noticed that spawning the child processes is extremely slow about processes per second so way parallelism generally means that there is about a s delay while things spin up then almost no time spent doing work this slow startup time was happening despite my cpus being about idle this slow startup time presumably explains the bug i saw about how ineffectual multiprocessing was for pylint you would need a huge batch of files to justify doing significant multi processing i don t know whether the slow startup is a bug in multiprocessing or in pylint i just measured some of our presubmits and way parallelism more than doubles the time that they take from to s possible solution in we would need a complete rewrite of the parallel code we currently spin up a new pylinter class for every job that is taking way too long probably but i m not sure what the best approach is to create a pylinterlite i personally think that a refactor of pylinter will be required and we d have to classify checkers to know if they can benefit from multiprocessing or not duplicate code or cyclic import won t for example as they need information on the imports of a file some check are are file based like unused private member the scope is a single class or while used it just has to check if a while node exists and can benefit from multiprocessing if done at the right time configuration ini we should use a full configuration for this with a lot to parse as we re probably parsing the configuration in each forks and this would make it apparent command used shell lint run argv pylint output shell na expected behavior run time decrease with more core when there is more files to lint than cores available pylint version shell os environment no response additional dependencies no response
| 1
|
22,140
| 30,684,352,030
|
IssuesEvent
|
2023-07-26 11:18:18
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Add/Update Log Records Definition
|
automation/svc triaged assigned-to-author doc-idea process-automation/subsvc Pri2
|
Azure Automation supports new diagnostic log categories, but these are not documented. Please add a log description of what the category 'AuditEvent' records.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a52eb74e-6880-eb97-8cfd-c7c88feb64d7
* Version Independent ID: 612bb48d-0b43-1dbf-9053-14e55edb5095
* Content: [Forward Azure Automation job data to Azure Monitor logs](https://docs.microsoft.com/en-us/azure/automation/automation-manage-send-joblogs-log-analytics)
* Content Source: [articles/automation/automation-manage-send-joblogs-log-analytics.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-manage-send-joblogs-log-analytics.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Add/Update Log Records Definition -
Azure Automation supports new diagnostic log categories, but these are not documented. Please add a log description of what the category 'AuditEvent' records.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a52eb74e-6880-eb97-8cfd-c7c88feb64d7
* Version Independent ID: 612bb48d-0b43-1dbf-9053-14e55edb5095
* Content: [Forward Azure Automation job data to Azure Monitor logs](https://docs.microsoft.com/en-us/azure/automation/automation-manage-send-joblogs-log-analytics)
* Content Source: [articles/automation/automation-manage-send-joblogs-log-analytics.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-manage-send-joblogs-log-analytics.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
add update log records definition azure automation supports new diagnostic log categories but these are not documented please add log description what the category auditevent is recording document details do not edit this section it is required for docs microsoft com github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
662,852
| 22,154,514,876
|
IssuesEvent
|
2022-06-03 20:45:07
|
googleapis/repo-automation-bots
|
https://api.github.com/repos/googleapis/repo-automation-bots
|
closed
|
canary-bot-googleapis scheduled task is failing
|
type: bug priority: p2
|
Last succeeded on Jun 1st. Failing on Jun 2nd and 3rd.
|
1.0
|
canary-bot-googleapis scheduled task is failing - Last succeeded on Jun 1st. Failing on Jun 2nd and 3rd.
|
non_process
|
canary bot googleapis scheduled task is failing last succeeded on jun failing on jun and
| 0
|
21,293
| 11,625,230,484
|
IssuesEvent
|
2020-02-27 12:17:06
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Error executing illustrated command
|
Pri2 container-service/svc cxp product-question triaged
|
This command:
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
didn't work on my computer. I get this response:
Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a14a3f84-28b4-0a2a-4da7-47cb4d66689f
* Version Independent ID: d1ffdd88-ab9a-9f55-8689-526e7415b326
* Content: [Kubernetes on Azure tutorial - Upgrade a cluster - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-upgrade-cluster)
* Content Source: [articles/aks/tutorial-kubernetes-upgrade-cluster.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/tutorial-kubernetes-upgrade-cluster.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
1.0
|
Error executing illustrated command - This command:
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
didn't work on my computer. I get this response:
Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a14a3f84-28b4-0a2a-4da7-47cb4d66689f
* Version Independent ID: d1ffdd88-ab9a-9f55-8689-526e7415b326
* Content: [Kubernetes on Azure tutorial - Upgrade a cluster - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-upgrade-cluster)
* Content Source: [articles/aks/tutorial-kubernetes-upgrade-cluster.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/tutorial-kubernetes-upgrade-cluster.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
non_process
|
error executing illustrated command this command az aks get upgrades resource group myresourcegroup name myakscluster output table didn t work on my computer i get this response table output unavailable use the query option to specify an appropriate query use debug for more info document details do not edit this section it is required for docs microsoft com github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned
| 0
|
18,184
| 24,235,733,337
|
IssuesEvent
|
2022-09-26 22:59:50
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
closed
|
Advice provider refactoring
|
assembly processor v0.3
|
After #393 is done, we will have refactored all IO operations except for the ones which deal with the advice tape. These operations are: `push.adv.n` and `loadw.adv`.
First, we should probably rename these operations to be consistent with the new naming conventions. Specifically:
* `loadw_adv` should be `adv_loadw`.
* Renaming `push_adv.n` is a bit trickier since we use only load/store verbs in other places, but for lack of a better option, I think we can replace it with `adv_push.n` (unless someone has better suggestions).
Second, a more fundamental issue with the advice provider is that we have only a single advice tape. For simple programs this tape can be pre-loaded with some values, and then the program can read these values one by one. However, for more complex programs, pre-loading the tape would be far from trivial.
Imagine there is a program which needs to "un-hash" one of two values based on some condition which it needs to compute dynamically. To know which of the pre-images needs to be put on the advice tape, we'd first need to execute the program up to the point the condition is computed, then once we know the condition, we could initialize the tape appropriately, and only after that, we could execute the program to the end. Needless to say, this approach is not workable for even moderately complicated programs.
One relatively simple way to address the above issue is to have many advice tapes. That is, we could replace a single tape in the advice provider with a key-value map where a key is some tape identifier and the value is the tape itself. This map would always have at least one tape (i.e., for key `0`), but could be initialized with any number of tapes under various keys.
To identify which tape to read values from, we could introduce a concept of an _active tape_. That is, only one tape can be active at a given time, and `adv_loadw` and `adv_push` instructions would always read from the active advice tape.
To change the active tape we could use a decorator - maybe something like `adv_config` (other name suggestions are welcome). The effect of this decorator would be to set the active tape to the tape with the key equal to the top 4 items on the stack. The reason for using 4 elements as the key is that we frequently want to "un-hash" some value, and it is convenient to be able to look up a hash pre-image by its hash.
A quick example. Let's say we have a Merkle tree. We want to get a leaf at position 1 from the tree, and then get hash pre-image of this leaf. Denoting leaf at position 1 as $n$, we assume that pre-image of $n$ is a tuple $a$, $b$. That is $n = hash(a, b)$. The program to do this could look like so:
```
begin
push.1.2.3.4 # push Merkle root of the tree onto the stack
push.1 # push node index onto the stack
push.16 # push the depth of the tree onto the stack
mtree.get # load the node value onto the stack
adv_config # set the active tape to the tape with key equal to the node value
swapw
adv_loadw # load the first 4 elements (a) from the advice tape onto the stack
adv_push.4 # load another 4 elements (b) from the advice tape onto the stack
repeat.8
dup.7 # make a copy of the top 8 elements
end
rphash # compute hash(a, b)
swapw
swapw.3
eqw # make sure hash(a, b) = n
assert
swapw # at this point, the stack will look like [b, a, ...]
end
```
To run the above program, we need to initialize the advice provider with a map of tapes where one of the entries is `(n, [a, b])` (i.e., the tape containing values `[a, b]` is under the key `n`).
Side note: the above program is rather cumbersome and we would probably want something better than that. My hope is that the methodology discussed in #336 will make it much more efficient, but having "tape maps" as described above would be very useful there too.
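To make the proposal concrete, here is a minimal Python sketch of a multi-tape advice provider; the method names mirror the proposed instructions but everything else is hypothetical:
```python
class AdviceProvider:
    """Key-value map of advice tapes with one active tape at a time."""

    def __init__(self, tapes):
        # tapes: dict mapping a key (e.g. a 4-element tuple) to a list of values
        self.tapes = dict(tapes)
        self.active = self.tapes.setdefault(0, [])  # tape 0 always exists

    def adv_config(self, key):
        """Make the tape stored under `key` the active tape."""
        self.active = self.tapes[key]

    def adv_push(self, n):
        """Read the next n values from the active tape."""
        values = self.active[:n]
        del self.active[:n]
        return values

# Pre-image [a, b] of node n = hash(a, b), keyed by the node value itself
n = (1, 2, 3, 4)
adv = AdviceProvider({n: [10, 11, 12, 13, 20, 21, 22, 23]})
adv.adv_config(n)
a = adv.adv_push(4)  # [10, 11, 12, 13]
b = adv.adv_push(4)  # [20, 21, 22, 23]
```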
|
1.0
|
Advice provider refactoring - After #393 is done, we will have refactored all IO operations except for the ones which deal with the advice tape. These operations are: `push.adv.n` and `loadw.adv`.
First, we should probably rename these operations to be consistent with the new naming conventions. Specifically:
* `loadw_adv` should be `adv_loadw`.
* Renaming `push_adv.n` is a bit trickier since we use only load/store verbs in other places, but for lack of a better option, I think we can replace it with `adv_push.n` (unless someone has better suggestions).
Second, a more fundamental issue with the advice provider is that we have only a single advice tape. For simple programs this tape can be pre-loaded with some values, and then the program can read these values one by one. However, for more complex programs, pre-loading the tape would be far from trivial.
Imagine there is a program which needs to "un-hash" one of two values based on some condition which it needs to compute dynamically. To know which of the pre-images needs to be put on the advice tape, we'd first need to execute the program up to the point the condition is computed, then once we know the condition, we could initialize the tape appropriately, and only after that, we could execute the program to the end. Needless to say, this approach is not workable for even moderately complicated programs.
One relatively simple way to address the above issue is to have many advice tapes. That is, we could replace a single tape in the advice provider with a key-value map where a key is some tape identifier and the value is the tape itself. This map would always have at least one tape (i.e., for key `0`), but could be initialized with any number of tapes under various keys.
To identify which tape to read values from, we could introduce a concept of an _active tape_. That is, only one tape can be active at a given time, and `adv_loadw` and `adv_push` instructions would always read from the active advice tape.
To change the active tape we could use a decorator - maybe something like `adv_config` (other name suggestions are welcome). The effect of this decorator would be to set the active tape to the tape with the key equal to the top 4 items on the stack. The reason for using 4 elements as the key is that we frequently want to "un-hash" some value, and it is convenient to be able to look up a hash pre-image by its hash.
A quick example. Let's say we have a Merkle tree. We want to get a leaf at position 1 from the tree, and then get hash pre-image of this leaf. Denoting leaf at position 1 as $n$, we assume that pre-image of $n$ is a tuple $a$, $b$. That is $n = hash(a, b)$. The program to do this could look like so:
```
begin
push.1.2.3.4 # push Merkle root of the tree onto the stack
push.1 # push node index onto the stack
push.16 # push the depth of the tree onto the stack
mtree.get # load the node value onto the stack
adv_config # set the active tape to the tape with key equal to the node value
swapw
adv_loadw # load the first 4 elements (a) from the advice tape onto the stack
adv_push.4 # load another 4 elements (b) from the advice tape onto the stack
repeat.8
dup.7 # make a copy of the top 8 elements
end
rphash # compute hash(a, b)
swapw
swapw.3
eqw # make sure hash(a, b) = n
assert
swapw # at this point, the stack will look like [b, a, ...]
end
```
To run the above program, we need to initialize the advice provider with a map of tapes where one of the entries is `(n, [a, b])` (i.e., the tape containing values `[a, b]` is under the key `n`).
Side note: the above program is rather cumbersome and we would probably want something better than that. My hope is that the methodology discussed in #336 will make it much more efficient, but having "tape maps" as described above would be very useful there too.
|
process
|
advice provider refactoring after is done we will have refactored all io operations except for the ones which deal with the advice tape these operations are push adv n and loadw adv first we should probably rename these operations to be consistent with the new naming conventions specifically loadw adv should be adv loadw renaming push adv n is a bit more tricky since we use only load store verbs in other place but for the lack of a better option i think we can replace it with adv push n unless someone has better suggestions second a more fundamental issue with the advice provider is that we have only a single advice tape for simple programs this tape can be pre loaded with some values and then the program can read these values one by one however for more complex program pre loading the tape would be far from trivial imagine there is a program which needs to un hash one of two values based on some condition which it needs to compute dynamically to know which one of pre images need to be put on the advice tape we d first need to execute the program up to the point the condition is computed then once we know the condition we could initialize the tape appropriately and only after that we could execute the program to the end needless to say that this approach is not workable for even moderately complicated programs one relatively simple way to address the above issue is to have many advice tapes that is we could replace a single tape in the advice provider with a key value map where a key is some tape identifier and the value is the tape itself this map would always have at least one tape i e for key but could be initialized with any number of tapes under various keys to identify which tape to read values from we could introduce a concept of an active tape that is only one tape can be active at a given time and adv loadw and adv push instructions would always read from the active advice tape to change the active tape we could use a decorator maybe something like adv config other name suggestions are welcome the effect of this decorator would be set the active tape to the tape with the key equal to the top items on the stack the reason for using elements as the key is because we frequently want to un hash some value and it is convenient to be able to look up hash pre image by its hash a quick example let s say we have a merkle tree we want to get a leaf at position from the tree and then get hash pre image of this leaf denoting leaf at position as n we assume that pre image of n is a tuple a b that is n hash a b the program to do this could look like so begin push push merkle root of the tree onto the stack push push node index onto the stack push push the depth of the tree onto the stack mtree get load the node value onto the stack adv config set the active tape to the tape with key equal to the node value swapw adv loadw load the first elements a from the advice tape onto the stack adv push load another elements b from the advice tape onto the stack repeat dup make a copy of the top elements end rphash compute hash a b swapw swapw eqw make sure hash a b n assert swapw at this point the stack will look like end to run the above program we need to initialize advice provider with a map of tapes where one of the entries is n i e the tape containing values is under the key n side note the above program is rather cumbersome and we probably would want something better than that my hope that methodology discussed in will make it much more efficient but having tape maps as described above would be very 
useful there too
| 1
|
804,174
| 29,478,020,433
|
IssuesEvent
|
2023-06-02 01:12:18
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
Fix performance issues for organizations with 5000 streams
|
area: stream settings priority: high area: performance
|
I did some performance testing with 5000 streams. The backend works great. The frontend has a few issues:
* [x] The `populate_subscriptions` process ends up consuming several seconds in this block of `create_sub_from_server_data`):
```
if (!sub.color) {
var used_colors = exports.get_colors(); <------ problem
sub.color = stream_color.pick_color(used_colors);
}
```
The issue is that `exports.get_colors()` iterates through all previously added streams (including ones the user has never been subscribed to) to determine the set of colors that have been used, which ends up being effectively a quadratic process when run in a loop.
We should probably just replace `exports.get_colors()` with something backed by a sensible data structure. The following results are done using `var used_colors = [];`, with which page load is pretty fast.
* [x] With that resolved, `stream_data.initialize_from_page_params` still takes O(160ms) to run; half of that is `marked` for rendering stream descriptions (which we'll eventually want to move to the backend for other reasons anyway, but is minor enough). This is #11272.
With those settled, page load would be fast enough. But "manage streams" still loads really slowly. The top issues there are:
* [ ] `populate_stream_settings_left_panel` is kinda slow (~700ms). That seems to be dominated by just generating and inserting the 5000 rows of HTML to filter.
* [ ] Almost all the time (~7s) is in `filter_table`, specifically doing jquery stuff inside this loop:
```
_.each(all_stream_ids, function third (stream_id) {
$('#subscriptions_table .streams-list').append(widgets[stream_id]);
});
```
(I verified this by adding the name "third" to the anonymous function for that loop).
One can reproduce these issues using e.g. `./manage.py populate_db --extra-streams=5000`, though you'll probably want to create a new user in the realm after doing so, to avoid the atypical state where Iago is subscribed to all 5000 streams.
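As an illustration of the "sensible data structure" fix for `exports.get_colors()` (a sketch of the approach only, not Zulip's code), the set of used colors can be maintained incrementally so each assignment scans only the palette instead of rescanning every stream:
```python
class ColorAssigner:
    """Keeps the set of used colors up to date as streams are added,
    instead of rescanning all streams on every assignment."""

    def __init__(self, palette):
        self.palette = list(palette)
        self.used = set()

    def add_stream(self, color=None):
        if color is None:
            # pick the first unused palette color, else fall back to reuse
            color = next((c for c in self.palette if c not in self.used),
                         self.palette[0])
        self.used.add(color)
        return color

assigner = ColorAssigner(["#76ce90", "#fae589", "#a6c7e5"])
print([assigner.add_stream() for _ in range(4)])
# ['#76ce90', '#fae589', '#a6c7e5', '#76ce90']
```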
|
1.0
|
Fix performance issues for organizations with 5000 streams - I did some performance testing with 5000 streams. The backend works great. The frontend has a few issues:
* [x] The `populate_subscriptions` process ends up consuming several seconds in this block of `create_sub_from_server_data`):
```
if (!sub.color) {
var used_colors = exports.get_colors(); <------ problem
sub.color = stream_color.pick_color(used_colors);
}
```
The issue is that `exports.get_colors()` iterates through all previously added streams (including ones the user has never been subscribed to) to determine the set of colors that have been used, which ends up being effectively a quadratic process when run in a loop.
We should probably just replace `exports.get_colors()` with something backed by a sensible data structure. The results below were gathered using `var used_colors = [];`, with which page load is pretty fast.
* [x] With that resolved, `stream_data.initialize_from_page_params` still takes O(160ms) to run; half of that is `marked` for rendering stream descriptions (which we'll eventually want to move to the backend for other reasons anyway, but is minor enough). This is #11272.
With those settled, page load would be fast enough. But "manage streams" still loads really slowly. The top issues there are:
* [ ] `populate_stream_settings_left_panel` is kinda slow (~700ms). That seems to be dominated by just generating and inserting the 5000 rows of HTML to filter.
* [ ] Almost all the time (~7s) is in `filter_table`, specifically doing jquery stuff inside this loop:
```
_.each(all_stream_ids, function third (stream_id) {
$('#subscriptions_table .streams-list').append(widgets[stream_id]);
});
```
(I verified this by adding the name "third" to the anonymous function for that loop).
One can reproduce these issues using e.g. `./manage.py populate_db --extra-streams=5000`, though you'll probably want to create a new user in the realm after doing so, to avoid the atypical state where Iago is subscribed to all 5000 streams.
|
non_process
|
fix performance issues for organizations with streams i did some performance testing with streams the backend works great the frontend has a few issues the populate subscriptions process ends up consuming several seconds in this block of create sub from server data if sub color var used colors exports get colors problem sub color stream color pick color used colors the issue is that exports get colors iterates through all previously added streams including ones the user has never been subscribed to to determine the set of colors that have been used which ends up being effectively a quadratic process when run in a loop we should probably just replace exports get colors with something backed by a sensible data structure the following results are done using var used colors with which page load is pretty fast with that resolved stream data initialize from page params still takes o to run half of that is marked for rendering stream descriptions which we ll eventually want to move to the backend for other reasons anyway but is minor enough this is with those settled page load would be fast enough but manage streams still loads really slowly the top issues there are populate stream settings left panel is kinda slow that seems to be dominated by just generated and entering the rows of html to filter almost all the time is in filter table specifically doing jquery stuff inside this loop each all stream ids function third stream id subscriptions table streams list append widgets i verified this by adding the name third to the anonymous function for that loop one can reproduce these issues using e g manage py populate db extra streams though you ll probably want to create a new user in the realm after doing so to avoid the atypical state where iago is subscribed to all streams
| 0
|
134,712
| 19,307,724,447
|
IssuesEvent
|
2021-12-13 13:21:07
|
psf/black
|
https://api.github.com/repos/psf/black
|
closed
|
Insert parentheses to get better formatting
|
T: design
|
While handling fluent interfaces, if the instruction is within parentheses, Black formats it this way:
* Original
```python
(spark.read.parquet(path).select(columns).filter('column is not null').filter((f.size('id_list') > 1) & (f.col('operating_system') != 'iOS')))
```
* Formatted
```python
(
spark.read.parquet(path)
.select(columns)
.filter("column is not null")
.filter((f.size("id_list") > 1) & (f.col("operating_system") != "iOS"))
)
```
However, if the instruction is not within parentheses, it formats it differently:
* Original
```python
spark.read.parquet(path).select(columns).filter('column is not null').filter((f.size('id_list') > 1) & (f.col('operating_system') != 'iOS'))
```
* Formatted
```python
spark.read.parquet(path).select(columns).filter("column is not null").filter(
(f.size("id_list") > 1) & (f.col("operating_system") != "iOS")
)
```
I believe it would be beneficial to **force the use of parentheses** so that the better formatting style is used instead of the other one.
|
1.0
|
Insert parentheses to get better formatting - While handling fluent interfaces, if the instruction is within parentheses, Black formats it this way:
* Original
```python
(spark.read.parquet(path).select(columns).filter('column is not null').filter((f.size('id_list') > 1) & (f.col('operating_system') != 'iOS')))
```
* Formatted
```python
(
spark.read.parquet(path)
.select(columns)
.filter("column is not null")
.filter((f.size("id_list") > 1) & (f.col("operating_system") != "iOS"))
)
```
However, if the instruction is not within parentheses, it formats it differently:
* Original
```python
spark.read.parquet(path).select(columns).filter('column is not null').filter((f.size('id_list') > 1) & (f.col('operating_system') != 'iOS'))
```
* Formatted
```python
spark.read.parquet(path).select(columns).filter("column is not null").filter(
(f.size("id_list") > 1) & (f.col("operating_system") != "iOS")
)
```
I believe it would be beneficial to **force the use of parentheses** so that the better formatting style is used instead of the other one.
|
non_process
|
insert parenthesis to get better formatting while handling fluent interfaces if the instruction is within parenthesis black formats it this way original python spark read parquet path select columns filter column is not null filter f size id list f col operating system ios formatted python spark read parquet path select columns filter column is not null filter f size id list f col operating system ios however if the instruction is not within parenthesis it formats it differently original python spark read parquet path select columns filter column is not null filter f size id list f col operating system ios formatted python spark read parquet path select columns filter column is not null filter f size id list f col operating system ios i believe it would be beneficial to force the use of parenthesis to use the better formatting style instead of the other
| 0
|
258,797
| 19,573,893,489
|
IssuesEvent
|
2022-01-04 13:20:53
|
secureCodeBox/documentation
|
https://api.github.com/repos/secureCodeBox/documentation
|
closed
|
Keeping docs complete and up-to-date
|
documentation good first issue maintainance
|
The documentation is in some places very out of date or incomplete. To give a few examples:
- [ ] Incomplete:
* [automatically-repeating-scans.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/how-tos/automatically-repeating-scans.md) #127
* [scanning-web-applications.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/how-tos/scanning-web-applications.md) #130
* [conventions.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/contributing/conventions.md)
- [x] Out-of-date #126 :
* [integrating-a-scanner.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/contributing/integrating-a-scanner.md) incorrect tree
* [integrating-a-hook.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/contributing/integrating-a-hook.md) incorrect tree
* [scanners.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/scanners.md) does not include nuclei, whatweb, typo3scan
What are the guidelines for keeping the documentation up to date? How is it decided which topics may be published as incomplete? And how is it ensured that newly implemented features update the docs as well?
This issue is in no way a shame and blame post, but I'd just like to get some more insight into how this process is currently maintained.
Maybe it would be a good idea to make issues for the currently incorrect documentation so that contributors can pick them up more easily.
|
1.0
|
Keeping docs complete and up-to-date - The documentation is in some places very out of date or incomplete. To give a few examples:
- [ ] Incomplete:
* [automatically-repeating-scans.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/how-tos/automatically-repeating-scans.md) #127
* [scanning-web-applications.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/how-tos/scanning-web-applications.md) #130
* [conventions.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/contributing/conventions.md)
- [x] Out-of-date #126 :
* [integrating-a-scanner.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/contributing/integrating-a-scanner.md) incorrect tree
* [integrating-a-hook.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/contributing/integrating-a-hook.md) incorrect tree
* [scanners.md](https://github.com/secureCodeBox/documentation/blob/9acdf4004b36fbf63c55f64b19811b06ba4ff7ed/docs/scanners.md) does not include nuclei, whatweb, typo3scan
What are the guidelines for keeping the documentation up to date? How is it decided which topics may be published as incomplete? And how is it ensured that newly implemented features update the docs as well?
This issue is in no way a shame and blame post, but I'd just like to get some more insight into how this process is currently maintained.
Maybe it would be a good idea to make issues for the currently incorrect documentation so that contributors can pick them up more easily.
|
non_process
|
keeping docs complete and up to date the documentation is in some place very out of date or incomplete to give a few examples incomplete out of date incorrect tree incorrect tree does not include nuclei whatweb what are the guidelines concerning keeping the documentation up to date how is decided what topics may be published as incomplete how are newly implemented features required to update the docs too this issue is in no way a shame and blame post but i d just like to get some more insight into how this process is currently maintained maybe it would be a good idea to make issues for the currently incorrect documentation so that contributors can pick them up more easily
| 0
|
7,088
| 10,237,084,456
|
IssuesEvent
|
2019-08-19 13:11:02
|
codacy/codacy-meta
|
https://api.github.com/repos/codacy/codacy-meta
|
opened
|
Create tool for testing metrics
|
Enhancement Processes
|
Could be nice to have a tool, like codacy-plugins-test, to do the same for metrics tools.
|
1.0
|
Create tool for testing metrics - Could be nice to have a tool, like codacy-plugins-test, to do the same for metrics tools.
|
process
|
create tool for testing metrics could be nice to have a tool like codacy plugins test to do the same for metrics tools
| 1
|
634,760
| 20,372,311,059
|
IssuesEvent
|
2022-02-21 12:27:16
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.livejasmin.com - video or audio doesn't play
|
priority-critical browser-fenix engine-gecko
|
<!-- @browser: Firefox Mobile 96.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:96.0) Gecko/96.0 Firefox/96.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/99868 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.livejasmin.com/en/girls
**Browser / Version**: Firefox Mobile 96.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Video or audio doesn't play
**Description**: Media controls are broken or missing
**Steps to Reproduce**:
Browse Doesn't work for me what
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/2/b03b9db1-0cc9-4949-acba-4a6a1e5b60cc.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20211223202418</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/2/c29af534-a646-48c9-b48e-69575268f634)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.livejasmin.com - video or audio doesn't play - <!-- @browser: Firefox Mobile 96.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:96.0) Gecko/96.0 Firefox/96.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/99868 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.livejasmin.com/en/girls
**Browser / Version**: Firefox Mobile 96.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Video or audio doesn't play
**Description**: Media controls are broken or missing
**Steps to Reproduce**:
Browse Doesn't work for me what
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/2/b03b9db1-0cc9-4949-acba-4a6a1e5b60cc.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20211223202418</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/2/c29af534-a646-48c9-b48e-69575268f634)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes chrome problem type video or audio doesn t play description media controls are broken or missing steps to reproduce browse doesn t work for me what view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
201,789
| 23,039,651,929
|
IssuesEvent
|
2022-07-23 01:08:31
|
turkdevops/snyk
|
https://api.github.com/repos/turkdevops/snyk
|
opened
|
CVE-2022-31090 (Medium) detected in guzzlehttp/guzzle-6.3.0
|
security vulnerability
|
## CVE-2022-31090 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>guzzlehttp/guzzle-6.3.0</b></p></summary>
<p>Guzzle, an extensible PHP HTTP client</p>
<p>
Dependency Hierarchy:
- aws/aws-sdk-php-3.0.0 (Root Library)
- :x: **guzzlehttp/guzzle-6.3.0** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/snyk/commit/9505f4ca92405cc9273dc3726c2d274ce28a4407">9505f4ca92405cc9273dc3726c2d274ce28a4407</a></p>
<p>Found in base branch: <b>ALL_HANDS/major-secrets</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Guzzle, an extensible PHP HTTP client. `Authorization` headers on requests are sensitive information. In affected versions when using our Curl handler, it is possible to use the `CURLOPT_HTTPAUTH` option to specify an `Authorization` header. On making a request which responds with a redirect to a URI with a different origin (change in host, scheme or port), if we choose to follow it, we should remove the `CURLOPT_HTTPAUTH` option before continuing, stopping curl from appending the `Authorization` header to the new request. Affected Guzzle 7 users should upgrade to Guzzle 7.4.5 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.8 or 7.4.5. Note that a partial fix was implemented in Guzzle 7.4.2, where a change in host would trigger removal of the curl-added Authorization header; however, this earlier fix did not cover change in scheme or change in port. If you do not require or expect redirects to be followed, one should simply disable redirects altogether. Alternatively, one can specify to use the Guzzle stream handler backend, rather than curl.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31090>CVE-2022-31090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-25mq-v84q-4j7r">https://github.com/guzzle/guzzle/security/advisories/GHSA-25mq-v84q-4j7r</a></p>
<p>Release Date: 2022-05-19</p>
<p>Fix Resolution: 6.5.8,7.4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
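As an aside, here is a minimal PHP sketch (not taken from the advisory; the endpoint and token below are placeholders) of the simplest mitigation it mentions: disabling redirect following via Guzzle's documented `allow_redirects` option, so curl never re-sends the `Authorization` header to another origin:
```php
<?php
// Hypothetical sketch of the "disable redirects" mitigation described above.
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client([
    'allow_redirects' => false, // never follow redirects, so auth cannot leak
]);

$response = $client->get('https://api.example.com/resource', [
    'headers' => ['Authorization' => 'Bearer ' . getenv('API_TOKEN')],
]);
```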
|
True
|
CVE-2022-31090 (Medium) detected in guzzlehttp/guzzle-6.3.0 - ## CVE-2022-31090 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>guzzlehttp/guzzle-6.3.0</b></p></summary>
<p>Guzzle, an extensible PHP HTTP client</p>
<p>
Dependency Hierarchy:
- aws/aws-sdk-php-3.0.0 (Root Library)
- :x: **guzzlehttp/guzzle-6.3.0** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/snyk/commit/9505f4ca92405cc9273dc3726c2d274ce28a4407">9505f4ca92405cc9273dc3726c2d274ce28a4407</a></p>
<p>Found in base branch: <b>ALL_HANDS/major-secrets</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Guzzle, an extensible PHP HTTP client. `Authorization` headers on requests are sensitive information. In affected versions when using our Curl handler, it is possible to use the `CURLOPT_HTTPAUTH` option to specify an `Authorization` header. On making a request which responds with a redirect to a URI with a different origin (change in host, scheme or port), if we choose to follow it, we should remove the `CURLOPT_HTTPAUTH` option before continuing, stopping curl from appending the `Authorization` header to the new request. Affected Guzzle 7 users should upgrade to Guzzle 7.4.5 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.8 or 7.4.5. Note that a partial fix was implemented in Guzzle 7.4.2, where a change in host would trigger removal of the curl-added Authorization header; however, this earlier fix did not cover change in scheme or change in port. If you do not require or expect redirects to be followed, one should simply disable redirects altogether. Alternatively, one can specify to use the Guzzle stream handler backend, rather than curl.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31090>CVE-2022-31090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-25mq-v84q-4j7r">https://github.com/guzzle/guzzle/security/advisories/GHSA-25mq-v84q-4j7r</a></p>
<p>Release Date: 2022-05-19</p>
<p>Fix Resolution: 6.5.8,7.4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in guzzlehttp guzzle cve medium severity vulnerability vulnerable library guzzlehttp guzzle guzzle an extensible php http client dependency hierarchy aws aws sdk php root library x guzzlehttp guzzle vulnerable library found in head commit a href found in base branch all hands major secrets vulnerability details guzzle an extensible php http client authorization headers on requests are sensitive information in affected versions when using our curl handler it is possible to use the curlopt httpauth option to specify an authorization header on making a request which responds with a redirect to a uri with a different origin change in host scheme or port if we choose to follow it we should remove the curlopt httpauth option before continuing stopping curl from appending the authorization header to the new request affected guzzle users should upgrade to guzzle as soon as possible affected users using any earlier series of guzzle should upgrade to guzzle or note that a partial fix was implemented in guzzle where a change in host would trigger removal of the curl added authorization header however this earlier fix did not cover change in scheme or change in port if you do not require or expect redirects to be followed one should simply disable redirects all together alternatively one can specify to use the guzzle steam handler backend rather than curl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
13,988
| 16,762,928,949
|
IssuesEvent
|
2021-06-14 03:39:18
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Model is not working correctly if opened with QGIS browser on Windows
|
Bug Modeller Processing Windows
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
A QGIS model is not working correctly if opened with the QGIS browser on Windows, i.e. the available fields in a vector field input are not updated accordingly if the selected layer for a vector layer input is changed by the user.
**How to Reproduce**
1. Download and extract [model_and_sample_data.zip](https://github.com/qgis/QGIS/files/5474419/model_and_sample_data.zip).
2. Add both layers of the GPKG to a QGIS project and have a look at the field names.
3. Open the model included in the zip file with QGIS browser.
4. Switch the layers for the vector layer inputs in the model UI and have a look at the fields shown in the vector field inputs.
5. See error: The vector fields in the vector field inputs are not all the time matching the fields of a selected layer.

**QGIS and OS versions**
QGIS version
3.16.0-Hannover
QGIS code revision
43b64b13f3
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.1.4
Running against GDAL/OGR
3.1.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.2
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
QuickOSM;
db_manager;
MetaSearch;
processing
**Additional context**
- If the model is opened and run from the processing toolbox panel (_Open Existing Model..._), it works as expected on Windows.
- On Linux it also works as expected when opened with the QGIS browser.
|
1.0
|
Model is not working correctly if opened with QGIS browser on Windows - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
A QGIS model is not working correctly if opened with the QGIS browser on Windows, i.e. the available fields in a vector field input are not updated accordingly if the selected layer for a vector layer input is changed by the user.
**How to Reproduce**
1. Download and extract [model_and_sample_data.zip](https://github.com/qgis/QGIS/files/5474419/model_and_sample_data.zip).
2. Add both layers of the GPKG to a QGIS project and have a look at the field names.
3. Open the model included in the zip file with QGIS browser.
4. Switch the layers for the vector layer inputs in the model UI and have a look at the fields shown in the vector field inputs.
5. See error: The vector fields in the vector field inputs are not all the time matching the fields of a selected layer.

**QGIS and OS versions**
QGIS version
3.16.0-Hannover
QGIS code revision
43b64b13f3
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.1.4
Running against GDAL/OGR
3.1.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.2
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
QuickOSM;
db_manager;
MetaSearch;
processing
**Additional context**
- If the model is opened and run from the processing toolbox panel (_Open Existing Model..._), it works as expected on Windows.
- On Linux it also works as expected when opened with the QGIS browser.
|
process
|
model is not working correctly if opened with qgis browser on windows bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug qgis model is not working correctly if opened with qgis browser on windows i e available fields in vector field input are not updated accordingly if selected layer for vector layer input is changed by the user how to reproduce download and extract add both layers of the gpkg to a qgis project and have a look at the field names open the model included in the zip file with qgis browser switch the layers for the vector layer inputs in the model ui and have a look at the fields shown in the vector field inputs see error the vector fields in the vector field inputs are not all the time matching the fields of a selected layer qgis and os versions qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows active python plugins quickosm db manager metasearch processing additional context if the model is opened and run from the proccesing toolbox panel open existing model it is working as expected on windows on linux it is working as expected also if opened with qgis browser
| 1
|
63,174
| 7,698,242,799
|
IssuesEvent
|
2018-05-18 22:07:39
|
OfficeDev/office-ui-fabric-react
|
https://api.github.com/repos/OfficeDev/office-ui-fabric-react
|
closed
|
[Nav] Item Groups option to not expand on click
|
Type: enhancement no-recent-activity π¨ Needs: design
|
I have a Nav structure where I don't want my parent ItemGroups navigating to a new page or picking up the Active style as they are purely categories and I want the user selecting something under them to proceed. The current behavior (as of 1.7.0) is to only expand an Item Group when the accompanying arrow is clicked and the Active status is set to true on click.
I'd like to propose an option for the ItemGroup that puts it into a mode where:
1. Clicking anywhere on the object area (i.e. arrow or label) will toggle the isExpanded status
2. The Active flag will stay false
This would be in line with other Web UI frameworks such as Bootstrap or Foundation.
|
1.0
|
[Nav] Item Groups option to not expand on click - I have a Nav structure where I don't want my parent ItemGroups navigating to a new page or picking up the Active style as they are purely categories and I want the user selecting something under them to proceed. The current behavior (as of 1.7.0) is to only expand an Item Group when the accompanying arrow is clicked and the Active status is set to true on click.
I'd like to propose an option for the ItemGroup that puts it into a mode where:
1. Clicking anywhere on the object area (i.e. arrow or label) will toggle the isExpanded status
2. The Active flag will stay false
This would be in line with other Web UI frameworks such as Bootstrap or Foundation.
|
non_process
|
item groups option to not expand on click i have a nav structure where i don t want my parent itemgroups navigating to a new page or picking up the active style as they are purely categories and i want the user selecting something under them to proceed the current behavior as of is to only expand an item group when the accompanying arrow is clicked and the active status is set to true on click i d like to propose an option for the itemgroup that puts it into a mode where clicking anywhere on the object area i e arrow or label will toggle the isexpanded status the active flag will stay false this would be in line with other web ui frameworks such as bootstrap or foundation
| 0
|
16,707
| 21,868,148,717
|
IssuesEvent
|
2022-05-19 01:42:32
|
googleapis/nodejs-binary-authorization
|
https://api.github.com/repos/googleapis/nodejs-binary-authorization
|
closed
|
GA release of @google-cloud/binary-authorization
|
type: process api: binaryauthorization
|
Package name: **@google-cloud/binary-authorization**
Current release: **GA**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] 28 days elapsed since last beta release with new API surface
- [ ] Server API is GA
- [ ] Package API is stable, and we can commit to backward compatibility
- [ ] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
GA release of @google-cloud/binary-authorization - Package name: **@google-cloud/binary-authorization**
Current release: **GA**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] 28 days elapsed since last beta release with new API surface
- [ ] Server API is GA
- [ ] Package API is stable, and we can commit to backward compatibility
- [ ] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
ga release of google cloud binary authorization package name google cloud binary authorization current release ga proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
740,464
| 25,752,941,134
|
IssuesEvent
|
2022-12-08 14:29:31
|
Public-Health-Scotland/source-linkage-files
|
https://api.github.com/repos/Public-Health-Scotland/source-linkage-files
|
closed
|
Move cost uplift code from C10 into C01
|
SPSS to R Priority: High
|
Lines (4-8)
This could be put into a function and included in #332
|
1.0
|
Move cost uplift code from C10 into C01 - Lines (4-8)
This could be put into a function and included in #332
|
non_process
|
move cost uplift code from into lines this could be put into a function and included in
| 0
|
448,147
| 31,770,784,142
|
IssuesEvent
|
2023-09-12 11:39:48
|
arista-netdevops-community/anta
|
https://api.github.com/repos/arista-netdevops-community/anta
|
closed
|
doc: Update contributions page
|
documentation
|
Contributions page of main documentation needs an update for the unit-tests section:
https://www.anta.ninja/main/contribution/#unit-tests
|
1.0
|
doc: Update contributions page - Contributions page of main documentation needs an update for the unit-tests section:
https://www.anta.ninja/main/contribution/#unit-tests
|
non_process
|
doc update contributions page contributions page of main documentation needs an update for the unit tests section
| 0
|
262,140
| 22,797,046,981
|
IssuesEvent
|
2022-07-10 21:45:16
|
ossf/scorecard-action
|
https://api.github.com/repos/ossf/scorecard-action
|
closed
|
Failing e2e tests - scorecard-bash on ossf-tests/scorecard-action
|
e2e automated-tests
|
Matrix: {
"results_format": "sarif",
"publish_results": true,
"upload_result": true
}
Repo: https://github.com/ossf-tests/scorecard-action/tree/main
Run: https://github.com/ossf-tests/scorecard-action/actions/runs/2646091693
Workflow name: scorecard-bash
Workflow file: https://github.com/ossf-tests/scorecard-action/tree/main/.github/workflows/scorecards-bash.yml
Trigger: push
Branch: main
|
1.0
|
Failing e2e tests - scorecard-bash on ossf-tests/scorecard-action - Matrix: {
"results_format": "sarif",
"publish_results": true,
"upload_result": true
}
Repo: https://github.com/ossf-tests/scorecard-action/tree/main
Run: https://github.com/ossf-tests/scorecard-action/actions/runs/2646091693
Workflow name: scorecard-bash
Workflow file: https://github.com/ossf-tests/scorecard-action/tree/main/.github/workflows/scorecards-bash.yml
Trigger: push
Branch: main
|
non_process
|
failing tests scorecard bash on ossf tests scorecard action matrix results format sarif publish results true upload result true repo run workflow name scorecard bash workflow file trigger push branch main
| 0
|
348,556
| 31,627,224,334
|
IssuesEvent
|
2023-09-06 06:35:51
|
masters2023-project-06-second-hand/be-a
|
https://api.github.com/repos/masters2023-project-06-second-hand/be-a
|
closed
|
[BE] region service integration test
|
test
|
## ✨ What tasks are needed to implement this feature?
- [ ] regionQueryService test
- [ ] regionService test
|
1.0
|
[BE] region service integration test - ## ✨ What tasks are needed to implement this feature?
- [ ] regionQueryService test
- [ ] regionService test
|
non_process
|
region service integration test ✨ what tasks are needed to implement this feature regionqueryservice test regionservice test
| 0
|
65,892
| 12,693,994,424
|
IssuesEvent
|
2020-06-22 05:19:12
|
esp8266/Arduino
|
https://api.github.com/repos/esp8266/Arduino
|
opened
|
Large stack usage in core and MDNS libraries
|
component: MDNS component: core type: code cleanup
|
I've been playing locally with GCC10 and the `-Wstack-usage=` option which emits a compile-time warning when stacks are larger than the size given. There's only 4K total stack to play with so I set the warning limit to 300 bytes.
This isn't an exhaustive list (need to do a local CI and aggregate warnings), but I'm seeing very high use in the flash_hal and MDNS:
Flash_write() has a 512-byte buffer allocated on the stack in the case of an unaligned write, which seems pretty massive and ripe for reduction:
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\flash_hal.cpp: In function 'int32_t flash_hal_write(uint32_t, uint32_t, const uint8_t*)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\flash_hal.cpp:102:9: warning: stack usage is 576 bytes [-Wstack-usage=]
102 | int32_t flash_hal_write(uint32_t addr, uint32_t size, const uint8_t *src) {
| ^~~~~~~~~~~~~~~
````
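One hedged option (a hypothetical sketch, not the actual core code; the function name is illustrative) would be to take that scratch buffer off the 4K task stack and heap-allocate it instead:
````
// Hypothetical sketch: heap-allocate the unaligned-write scratch buffer
// instead of declaring a 512-byte array on the stack.
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int32_t flash_hal_write_unaligned(uint32_t addr, uint32_t size, const uint8_t *src) {
    (void)addr;                          // the real flash write would use this
    uint8_t *scratch = (uint8_t *)malloc(512);
    if (!scratch) {
        return -1;                       // allocation failed
    }
    memcpy(scratch, src, size < 512 ? size : 512);
    // ... perform the aligned flash write from 'scratch' here ...
    free(scratch);
    return 0;
}
````
The trade-off is a possible allocation failure under heap pressure, so the error path above matters.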
MDNS (old and new) have stack usages up to almost 700 bytes, which might explain some issues seen at runtime.
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'uint8_t esp8266::MDNSImplementation::MDNSResponder::_ZNK7esp826618MDNSImplementation13MDNSResponder17_replyMaskForHostERKNS1_16stcMDNS_RRHeaderEPb$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRHeader&, bool*) const':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:1991:9: warning: stack usage is 304 bytes [-Wstack-usage=]
1991 | uint8_t MDNSResponder::_replyMaskForHost(const MDNSResponder::stcMDNS_RRHeader& p_RRHeader,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'uint8_t esp8266::MDNSImplementation::MDNSResponder::_ZNK7esp826618MDNSImplementation13MDNSResponder20_replyMaskForServiceERKNS1_16stcMDNS_RRHeaderERKNS1_14stcMDNSServiceEPb$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRHeader&, const esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, bool*) const':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:2069:9: warning: stack usage is 576 bytes [-Wstack-usage=]
2069 | uint8_t MDNSResponder::_replyMaskForService(const MDNSResponder::stcMDNS_RRHeader& p_RRHeader,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_parseQuery(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_MsgHeader&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:179:6: warning: stack usage is 608 bytes [-Wstack-usage=]
179 | bool MDNSResponder::_parseQuery(const MDNSResponder::stcMDNS_MsgHeader& p_MsgHeader)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_ZN7esp826618MDNSImplementation13MDNSResponder15_processAnswersEPKNS1_16stcMDNS_RRAnswerE$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRAnswer*)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:749:6: warning: stack usage is 320 bytes [-Wstack-usage=]
749 | bool MDNSResponder::_processAnswers(const MDNSResponder::stcMDNS_RRAnswer* p_pAnswers)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_parseResponse(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_MsgHeader&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:628:6: warning: stack usage is 320 bytes [-Wstack-usage=]
628 | bool MDNSResponder::_parseResponse(const MDNSResponder::stcMDNS_MsgHeader& p_MsgHeader)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\ESP8266mDNS_Legacy.cpp: In member function 'void Legacy_MDNSResponder::MDNSResponder::_parsePacket()':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\ESP8266mDNS_Legacy.cpp:567:6: warning: stack usage is 688 bytes [-Wstack-usage=]
567 | void MDNSResponder::_parsePacket()
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSHostDomain(const char*, bool, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1367:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1367 | bool MDNSResponder::_writeMDNSHostDomain(const char* p_pcHostname,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSServiceDomain(const esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, bool, bool, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1410:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1410 | bool MDNSResponder::_writeMDNSServiceDomain(const MDNSResponder::stcMDNSService& p_Service,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_PTR_IP4(IPAddress, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1517:6: warning: stack usage is 560 bytes [-Wstack-usage=]
1517 | bool MDNSResponder::_writeMDNSAnswer_PTR_IP4(IPAddress p_IPAddress,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_PTR_TYPE(esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1550:6: warning: stack usage is 544 bytes [-Wstack-usage=]
1550 | bool MDNSResponder::_writeMDNSAnswer_PTR_TYPE(MDNSResponder::stcMDNSService& p_rService,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_SRV(esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1725:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1725 | bool MDNSResponder::_writeMDNSAnswer_SRV(MDNSResponder::stcMDNSService& p_rService,
| ^~~~~~~~~~~~~
````
Crypto.cpp has some large stack allocations (which might be unavoidable given the algorithms), but we may want to consider moving them to the heap or refactoring the code:
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void* {anonymous}::createBearsslHmac(const br_hash_class*, const void*, size_t, const void*, size_t, void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:67:7: warning: stack usage is 432 bytes [-Wstack-usage=]
67 | void *createBearsslHmac(const br_hash_class *hashType, const void *data, const size_t dataLength, const void *hashKey, const size_t hashKeyLength, void *resultArray, const size_t outputLength)
| ^~~~~~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void* {anonymous}::createBearsslHmacCT(const br_hash_class*, const void*, size_t, const void*, size_t, void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:110:7: warning: stack usage is 464 bytes [-Wstack-usage=]
110 | void *createBearsslHmacCT(const br_hash_class *hashType, const void *data, const size_t dataLength, const void *hashKey, const size_t hashKeyLength, void *resultArray, const size_t outputLength)
| ^~~~~~~~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void experimental::crypto::chacha20Poly1305Kernel(int, void*, size_t, const void*, const void*, size_t, const void*, void*, const void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:507:6: warning: stack usage is 464 bytes [-Wstack-usage=]
507 | void chacha20Poly1305Kernel(const int encrypt, void *data, const size_t dataLength, const void *key, const void *keySalt, const size_t keySaltLength,
| ^~~~~~~~~~~~~~~~~~~~~~
````
|
1.0
|
Large stack usage in core and MDNS libraries - I've been playing locally with GCC10 and the `-Wstack-usage=` option which emits a compile-time warning when stacks are larger than the size given. There's only 4K total stack to play with so I set the warning limit to 300 bytes.
This isn't an exhaustive list (need to do a local CI and aggregate warnings), but I'm seeing very high use in the flash_hal and MDNS:
Flash_write() has a 512-byte buffer allocated on the stack in the case of an unaligned write, which seems pretty massive and ripe for reduction:
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\flash_hal.cpp: In function 'int32_t flash_hal_write(uint32_t, uint32_t, const uint8_t*)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\flash_hal.cpp:102:9: warning: stack usage is 576 bytes [-Wstack-usage=]
102 | int32_t flash_hal_write(uint32_t addr, uint32_t size, const uint8_t *src) {
| ^~~~~~~~~~~~~~~
````
MDNS (old and new) have stack usages up to almost 700 bytes, which might explain some issues seen at runtime.
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'uint8_t esp8266::MDNSImplementation::MDNSResponder::_ZNK7esp826618MDNSImplementation13MDNSResponder17_replyMaskForHostERKNS1_16stcMDNS_RRHeaderEPb$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRHeader&, bool*) const':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:1991:9: warning: stack usage is 304 bytes [-Wstack-usage=]
1991 | uint8_t MDNSResponder::_replyMaskForHost(const MDNSResponder::stcMDNS_RRHeader& p_RRHeader,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'uint8_t esp8266::MDNSImplementation::MDNSResponder::_ZNK7esp826618MDNSImplementation13MDNSResponder20_replyMaskForServiceERKNS1_16stcMDNS_RRHeaderERKNS1_14stcMDNSServiceEPb$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRHeader&, const esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, bool*) const':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:2069:9: warning: stack usage is 576 bytes [-Wstack-usage=]
2069 | uint8_t MDNSResponder::_replyMaskForService(const MDNSResponder::stcMDNS_RRHeader& p_RRHeader,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_parseQuery(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_MsgHeader&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:179:6: warning: stack usage is 608 bytes [-Wstack-usage=]
179 | bool MDNSResponder::_parseQuery(const MDNSResponder::stcMDNS_MsgHeader& p_MsgHeader)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_ZN7esp826618MDNSImplementation13MDNSResponder15_processAnswersEPKNS1_16stcMDNS_RRAnswerE$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRAnswer*)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:749:6: warning: stack usage is 320 bytes [-Wstack-usage=]
749 | bool MDNSResponder::_processAnswers(const MDNSResponder::stcMDNS_RRAnswer* p_pAnswers)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_parseResponse(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_MsgHeader&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:628:6: warning: stack usage is 320 bytes [-Wstack-usage=]
628 | bool MDNSResponder::_parseResponse(const MDNSResponder::stcMDNS_MsgHeader& p_MsgHeader)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\ESP8266mDNS_Legacy.cpp: In member function 'void Legacy_MDNSResponder::MDNSResponder::_parsePacket()':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\ESP8266mDNS_Legacy.cpp:567:6: warning: stack usage is 688 bytes [-Wstack-usage=]
567 | void MDNSResponder::_parsePacket()
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSHostDomain(const char*, bool, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1367:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1367 | bool MDNSResponder::_writeMDNSHostDomain(const char* p_pcHostname,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSServiceDomain(const esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, bool, bool, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1410:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1410 | bool MDNSResponder::_writeMDNSServiceDomain(const MDNSResponder::stcMDNSService& p_Service,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_PTR_IP4(IPAddress, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1517:6: warning: stack usage is 560 bytes [-Wstack-usage=]
1517 | bool MDNSResponder::_writeMDNSAnswer_PTR_IP4(IPAddress p_IPAddress,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_PTR_TYPE(esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1550:6: warning: stack usage is 544 bytes [-Wstack-usage=]
1550 | bool MDNSResponder::_writeMDNSAnswer_PTR_TYPE(MDNSResponder::stcMDNSService& p_rService,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_SRV(esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1725:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1725 | bool MDNSResponder::_writeMDNSAnswer_SRV(MDNSResponder::stcMDNSService& p_rService,
| ^~~~~~~~~~~~~
````
Crypto.cpp has some large stack allocations (which might be unavoidable given the algorithms), but we may want to consider moving them to the heap or refactoring the code:
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void* {anonymous}::createBearsslHmac(const br_hash_class*, const void*, size_t, const void*, size_t, void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:67:7: warning: stack usage is 432 bytes [-Wstack-usage=]
67 | void *createBearsslHmac(const br_hash_class *hashType, const void *data, const size_t dataLength, const void *hashKey, const size_t hashKeyLength, void *resultArray, const size_t outputLength)
| ^~~~~~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void* {anonymous}::createBearsslHmacCT(const br_hash_class*, const void*, size_t, const void*, size_t, void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:110:7: warning: stack usage is 464 bytes [-Wstack-usage=]
110 | void *createBearsslHmacCT(const br_hash_class *hashType, const void *data, const size_t dataLength, const void *hashKey, const size_t hashKeyLength, void *resultArray, const size_t outputLength)
| ^~~~~~~~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void experimental::crypto::chacha20Poly1305Kernel(int, void*, size_t, const void*, const void*, size_t, const void*, void*, const void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:507:6: warning: stack usage is 464 bytes [-Wstack-usage=]
507 | void chacha20Poly1305Kernel(const int encrypt, void *data, const size_t dataLength, const void *key, const void *keySalt, const size_t keySaltLength,
| ^~~~~~~~~~~~~~~~~~~~~~
````
|
non_process
|
large stack usage in core and mdns libraries i ve been playing locally with and the wstack usage option which emits a compile time warning when stacks are larger than the size given there s only total stack to play with so i set the warning limit to bytes this isn t an exhaustive list need to do a local ci and aggregate warnings but i m seeing very high use in the flash hal and mdns flash write has a byte buffer allocated on the stack in the case of an unaligned write which seems pretty massive and ripe for reduction c users earle documents arduino hardware cores flash hal cpp in function t flash hal write t t const t c users earle documents arduino hardware cores flash hal cpp warning stack usage is bytes t flash hal write t addr t size const t src mdns old and new have stack usages up to almost bytes which might explain some issues seen at runtime c users earle documents arduino hardware libraries src leamdns control cpp in member function t mdnsimplementation mdnsresponder rrheaderepb part const mdnsimplementation mdnsresponder stcmdns rrheader bool const c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes t mdnsresponder replymaskforhost const mdnsresponder stcmdns rrheader p rrheader c users earle documents arduino hardware libraries src leamdns control cpp in member function t mdnsimplementation mdnsresponder part const mdnsimplementation mdnsresponder stcmdns rrheader const mdnsimplementation mdnsresponder stcmdnsservice bool const c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes t mdnsresponder replymaskforservice const mdnsresponder stcmdns rrheader p rrheader c users earle documents arduino hardware libraries src leamdns control cpp in member function bool mdnsimplementation mdnsresponder parsequery const mdnsimplementation mdnsresponder stcmdns msgheader c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes bool mdnsresponder parsequery const mdnsresponder stcmdns msgheader p msgheader c users earle documents arduino hardware libraries src leamdns control cpp in member function bool mdnsimplementation mdnsresponder rranswere part const mdnsimplementation mdnsresponder stcmdns rranswer c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes bool mdnsresponder processanswers const mdnsresponder stcmdns rranswer p panswers c users earle documents arduino hardware libraries src leamdns control cpp in member function bool mdnsimplementation mdnsresponder parseresponse const mdnsimplementation mdnsresponder stcmdns msgheader c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes bool mdnsresponder parseresponse const mdnsresponder stcmdns msgheader p msgheader c users earle documents arduino hardware libraries src legacy cpp in member function void legacy mdnsresponder mdnsresponder parsepacket c users earle documents arduino hardware libraries src legacy cpp warning stack usage is bytes void mdnsresponder parsepacket c users earle documents arduino hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnshostdomain const char bool mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnshostdomain const char p pchostname c users earle documents arduino 
hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnsservicedomain const mdnsimplementation mdnsresponder stcmdnsservice bool bool mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnsservicedomain const mdnsresponder stcmdnsservice p service c users earle documents arduino hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnsanswer ptr ipaddress mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnsanswer ptr ipaddress p ipaddress c users earle documents arduino hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnsanswer ptr type mdnsimplementation mdnsresponder stcmdnsservice mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnsanswer ptr type mdnsresponder stcmdnsservice p rservice c users earle documents arduino hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnsanswer srv mdnsimplementation mdnsresponder stcmdnsservice mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnsanswer srv mdnsresponder stcmdnsservice p rservice crypto c has some large stacks which might be unavoidable given the algorithm but we may want to consider moving it to heap or refactoring the code c users earle documents arduino hardware cores crypto cpp in function void anonymous createbearsslhmac const br hash class const void size t const void size t void size t c users earle documents arduino hardware cores crypto cpp warning stack usage is bytes void createbearsslhmac const br hash class hashtype const void data const size t datalength const void hashkey const size t hashkeylength void resultarray const size t outputlength c users earle documents arduino hardware cores crypto cpp in function void anonymous createbearsslhmacct const br hash class const void size t const void size t void size t c users earle documents arduino hardware cores crypto cpp warning stack usage is bytes void createbearsslhmacct const br hash class hashtype const void data const size t datalength const void hashkey const size t hashkeylength void resultarray const size t outputlength c users earle documents arduino hardware cores crypto cpp in function void experimental crypto int void size t const void const void size t const void void const void size t c users earle documents arduino hardware cores crypto cpp warning stack usage is bytes void const int encrypt void data const size t datalength const void key const void keysalt const size t keysaltlength
| 0
|
130,709
| 5,120,720,609
|
IssuesEvent
|
2017-01-09 05:50:53
|
buttercup-pw/buttercup
|
https://api.github.com/repos/buttercup-pw/buttercup
|
opened
|
Page titles are visible in ubuntu
|
Effort: Medium Priority: High Status: Available Type: Bug
|
The page titles used for programmatic location detection are visible in the titlebars on linux (and presumably windows):

|
1.0
|
Page titles are visible in ubuntu - The page titles used for programmatic location detection are visible in the titlebars on linux (and presumably windows):

|
non_process
|
page titles are visible in ubuntu the page titles used for programmatic location detection are visible in the titlebars on linux and presumably windows
| 0
|
74,842
| 7,447,010,631
|
IssuesEvent
|
2018-03-28 11:01:06
|
SatelliteQE/robottelo
|
https://api.github.com/repos/SatelliteQE/robottelo
|
opened
|
[py3.6] RuntimeError: dictionary changed size during iteration
|
6.2 6.3 6.4 High test-failure
|
This is an issue to track all tests failing with the [subject]
<more details to be provided by @rplevka>
|
1.0
|
[py3.6] RuntimeError: dictionary changed size during iteration - This is an issue to track all tests failing with the [subject]
<more details to be provided by @rplevka>
|
non_process
|
runtimeerror dictionary changed size during iteration this is a issue to track all tests failing with the
| 0
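As background for the error class this issue tracks: in Python 3, dict views are live, so mutating a dict while iterating over it raises exactly this RuntimeError. A minimal reproduction and the usual fix (illustrative only, not taken from the robottelo codebase):
```python
settings = {"a": 1, "b": None, "c": 3}

# Reproduction: mutating a dict while iterating over it raises
# "RuntimeError: dictionary changed size during iteration" in Python 3.
try:
    for key in settings:
        if settings[key] is None:
            del settings[key]      # mutation mid-iteration -> RuntimeError
except RuntimeError as err:
    print(err)

# Fix: snapshot the keys before mutating.
settings = {"a": 1, "b": None, "c": 3}
for key in list(settings):         # list() copies the keys up front
    if settings[key] is None:
        del settings[key]
print(settings)                    # {'a': 1, 'c': 3}
```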
|
22,446
| 31,165,862,566
|
IssuesEvent
|
2023-08-16 19:38:31
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
add threading.RWLock
|
type-feature stdlib topic-multiprocessing
|
BPO | [8800](https://bugs.python.org/issue8800)
--- | :---
Nosy | @jcea, @pitrou, @kristjanvalur, @tiran, @njsmith, @asvetlov, @ofek, @elarivie
Files | <li>[rwlock.patch](https://bugs.python.org/file17448/rwlock.patch "Uploaded as text/plain at 2010-05-24.09:55:37 by @kristjanvalur")</li><li>[Added-ShrdExclLock-to-threading-and-multiprocessing.patch](https://bugs.python.org/file27350/Added-ShrdExclLock-to-threading-and-multiprocessing.patch "Uploaded as text/plain at 2012-09-30.02:01:39 by Sebastian.Noack")</li><li>[sharablelock.patch](https://bugs.python.org/file27359/sharablelock.patch "Uploaded as text/plain at 2012-09-30.17:19:05 by @kristjanvalur")</li><li>[Added-ShrdExclLock-to-threading-and-multiprocessing-2.patch](https://bugs.python.org/file27363/Added-ShrdExclLock-to-threading-and-multiprocessing-2.patch "Uploaded as text/plain at 2012-09-30.18:40:15 by Sebastian.Noack")</li><li>[rwlock.patch](https://bugs.python.org/file27385/rwlock.patch "Uploaded as text/plain at 2012-10-02.11:37:48 by @kristjanvalur")</li><li>[rwlock.patch](https://bugs.python.org/file27412/rwlock.patch "Uploaded as text/plain at 2012-10-04.10:29:43 by @kristjanvalur")</li><li>[rwlock-sbt.patch](https://bugs.python.org/file27422/rwlock-sbt.patch "Uploaded as text/plain at 2012-10-04.17:24:23 by sbt")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2010-05-24.09:55:40.864>
labels = ['type-feature', 'library']
title = 'add threading.RWLock'
updated_at = <Date 2020-01-15.09:20:07.038>
user = 'https://github.com/kristjanvalur'
```
bugs.python.org fields:
```python
activity = <Date 2020-01-15.09:20:07.038>
actor = 'asvetlov'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2010-05-24.09:55:40.864>
creator = 'kristjan.jonsson'
dependencies = []
files = ['17448', '27350', '27359', '27363', '27385', '27412', '27422']
hgrepos = []
issue_num = 8800
keywords = ['patch', 'needs review']
message_count = 63.0
messages = ['106350', '106372', '106373', '106376', '106377', '106386', '168077', '168149', '171567', '171568', '171599', '171600', '171626', '171639', '171653', '171659', '171667', '171669', '171674', '171693', '171695', '171696', '171697', '171698', '171699', '171700', '171703', '171708', '171709', '171710', '171713', '171714', '171716', '171717', '171718', '171721', '171780', '171781', '171782', '171783', '171785', '171786', '171788', '171789', '171790', '171792', '171793', '171877', '171883', '171891', '171914', '171915', '171930', '171975', '171979', '172064', '172071', '172074', '274765', '274795', '287745', '360026', '360031']
nosy_count = 15.0
nosy_names = ['jcea', 'pitrou', 'kristjan.jonsson', 'christian.heimes', 'jyasskin', 'njs', 'asvetlov', 'neologix', 'vrutsky', 'sbt', 'mklauber', 'Sebastian.Noack', 'dan.oreilly', 'Ofekmeister', 'elarivie']
pr_nums = []
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue8800'
versions = []
```
</p></details>
|
1.0
|
add threading.RWLock - BPO | [8800](https://bugs.python.org/issue8800)
--- | :---
Nosy | @jcea, @pitrou, @kristjanvalur, @tiran, @njsmith, @asvetlov, @ofek, @elarivie
Files | <li>[rwlock.patch](https://bugs.python.org/file17448/rwlock.patch "Uploaded as text/plain at 2010-05-24.09:55:37 by @kristjanvalur")</li><li>[Added-ShrdExclLock-to-threading-and-multiprocessing.patch](https://bugs.python.org/file27350/Added-ShrdExclLock-to-threading-and-multiprocessing.patch "Uploaded as text/plain at 2012-09-30.02:01:39 by Sebastian.Noack")</li><li>[sharablelock.patch](https://bugs.python.org/file27359/sharablelock.patch "Uploaded as text/plain at 2012-09-30.17:19:05 by @kristjanvalur")</li><li>[Added-ShrdExclLock-to-threading-and-multiprocessing-2.patch](https://bugs.python.org/file27363/Added-ShrdExclLock-to-threading-and-multiprocessing-2.patch "Uploaded as text/plain at 2012-09-30.18:40:15 by Sebastian.Noack")</li><li>[rwlock.patch](https://bugs.python.org/file27385/rwlock.patch "Uploaded as text/plain at 2012-10-02.11:37:48 by @kristjanvalur")</li><li>[rwlock.patch](https://bugs.python.org/file27412/rwlock.patch "Uploaded as text/plain at 2012-10-04.10:29:43 by @kristjanvalur")</li><li>[rwlock-sbt.patch](https://bugs.python.org/file27422/rwlock-sbt.patch "Uploaded as text/plain at 2012-10-04.17:24:23 by sbt")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2010-05-24.09:55:40.864>
labels = ['type-feature', 'library']
title = 'add threading.RWLock'
updated_at = <Date 2020-01-15.09:20:07.038>
user = 'https://github.com/kristjanvalur'
```
bugs.python.org fields:
```python
activity = <Date 2020-01-15.09:20:07.038>
actor = 'asvetlov'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2010-05-24.09:55:40.864>
creator = 'kristjan.jonsson'
dependencies = []
files = ['17448', '27350', '27359', '27363', '27385', '27412', '27422']
hgrepos = []
issue_num = 8800
keywords = ['patch', 'needs review']
message_count = 63.0
messages = ['106350', '106372', '106373', '106376', '106377', '106386', '168077', '168149', '171567', '171568', '171599', '171600', '171626', '171639', '171653', '171659', '171667', '171669', '171674', '171693', '171695', '171696', '171697', '171698', '171699', '171700', '171703', '171708', '171709', '171710', '171713', '171714', '171716', '171717', '171718', '171721', '171780', '171781', '171782', '171783', '171785', '171786', '171788', '171789', '171790', '171792', '171793', '171877', '171883', '171891', '171914', '171915', '171930', '171975', '171979', '172064', '172071', '172074', '274765', '274795', '287745', '360026', '360031']
nosy_count = 15.0
nosy_names = ['jcea', 'pitrou', 'kristjan.jonsson', 'christian.heimes', 'jyasskin', 'njs', 'asvetlov', 'neologix', 'vrutsky', 'sbt', 'mklauber', 'Sebastian.Noack', 'dan.oreilly', 'Ofekmeister', 'elarivie']
pr_nums = []
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue8800'
versions = []
```
</p></details>
|
process
|
add threading rwlock bpo nosy jcea pitrou kristjanvalur tiran njsmith asvetlov ofek elarivie files uploaded as text plain at by kristjanvalur uploaded as text plain at by sebastian noack uploaded as text plain at by kristjanvalur uploaded as text plain at by sebastian noack uploaded as text plain at by kristjanvalur uploaded as text plain at by kristjanvalur uploaded as text plain at by sbt note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title add threading rwlock updated at user bugs python org fields python activity actor asvetlov assignee none closed false closed date none closer none components creation creator kristjan jonsson dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage patch review status open superseder none type enhancement url versions
| 1
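Since the request above never landed in the stdlib, here is a minimal sketch of the semantics being discussed (shared readers, exclusive writer) built from existing threading primitives. The class name and the readers-preference policy are my own choices for illustration, not taken from the attached patches:
```python
import threading

class RWLock:
    """Many readers or one writer. Readers-preference: a steady stream of
    readers can starve writers, which the real patches tried to avoid."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```
A fair implementation (one that does not starve writers) is exactly what the patches attached to the issue iterate on.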
|
636,956
| 20,614,692,327
|
IssuesEvent
|
2022-03-07 12:04:57
|
kubernetes-sigs/cluster-api-provider-aws
|
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-aws
|
closed
|
pull-cluster-api-provider-aws-apidiff-main test is failing
|
kind/bug needs-triage needs-priority
|
/kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
The prow job [pull-cluster-api-provider-aws-apidiff-main test is failing ](https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-cluster-api-provider-aws-apidiff-main)
**What did you expect to happen:**
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
Related to #3169
**Environment:**
- Cluster-api-provider-aws version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
/assign
|
1.0
|
pull-cluster-api-provider-aws-apidiff-main test is failing - /kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
The prow job [pull-cluster-api-provider-aws-apidiff-main test is failing ](https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-cluster-api-provider-aws-apidiff-main)
**What did you expect to happen:**
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
Related to #3169
**Environment:**
- Cluster-api-provider-aws version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
/assign
|
non_process
|
pull cluster api provider aws apidiff main test is failing kind bug what steps did you take and what happened the prow job what did you expect to happen anything else you would like to add related to environment cluster api provider aws version kubernetes version use kubectl version os e g from etc os release assign
| 0
|
10,816
| 13,609,291,106
|
IssuesEvent
|
2020-09-23 04:50:41
|
googleapis/java-os-config
|
https://api.github.com/repos/googleapis/java-os-config
|
closed
|
Dependency Dashboard
|
api: osconfig type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-os-config-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-os-config to v1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-os-config-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-os-config to v1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any chore deps update dependency com google cloud google cloud os config to check this box to trigger a request for renovate to run again on this repository
| 1
|
5,668
| 8,552,376,863
|
IssuesEvent
|
2018-11-07 20:53:20
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Storage: 'TestStorageListFiles' systest teardown flakes w/ 503
|
api: storage flaky testing type: process
|
From: https://source.cloud.google.com/results/invocations/8522729a-9bdc-480f-8dd5-4f24a1a2071f/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fstorage/log
```python
_______ ERROR at teardown of TestStoragePseudoHierarchy.test_third_level _______
cls = <class 'tests.system.TestStoragePseudoHierarchy'>
@classmethod
def tearDownClass(cls):
for blob in cls.suite_blobs_to_delete:
> blob.delete()
tests/system.py:646:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/storage/blob.py:411: in delete
return self.bucket.delete_blob(self.name, client=client)
google/cloud/storage/bucket.py:804: in delete_blob
_target_object=None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
...
if not 200 <= response.status_code < 300:
> raise exceptions.from_http_response(response)
E google.api_core.exceptions.ServiceUnavailable: 503 DELETE https://www.googleapis.com/storage/v1/b/new_1541601288920/o/parent%2Fchild%2Ffile21.txt: Backend Error
../core/google/cloud/_http.py:293: ServiceUnavailable
```
|
1.0
|
Storage: 'TestStorageListFiles' systest teardown flakes w/ 503 - From: https://source.cloud.google.com/results/invocations/8522729a-9bdc-480f-8dd5-4f24a1a2071f/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fstorage/log
```python
_______ ERROR at teardown of TestStoragePseudoHierarchy.test_third_level _______
cls = <class 'tests.system.TestStoragePseudoHierarchy'>
@classmethod
def tearDownClass(cls):
for blob in cls.suite_blobs_to_delete:
> blob.delete()
tests/system.py:646:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/storage/blob.py:411: in delete
return self.bucket.delete_blob(self.name, client=client)
google/cloud/storage/bucket.py:804: in delete_blob
_target_object=None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
...
if not 200 <= response.status_code < 300:
> raise exceptions.from_http_response(response)
E google.api_core.exceptions.ServiceUnavailable: 503 DELETE https://www.googleapis.com/storage/v1/b/new_1541601288920/o/parent%2Fchild%2Ffile21.txt: Backend Error
../core/google/cloud/_http.py:293: ServiceUnavailable
```
|
process
|
storage teststoragelistfiles systest teardown flakes w from python error at teardown of teststoragepseudohierarchy test third level cls classmethod def teardownclass cls for blob in cls suite blobs to delete blob delete tests system py google cloud storage blob py in delete return self bucket delete blob self name client client google cloud storage bucket py in delete blob target object none if not response status code raise exceptions from http response response e google api core exceptions serviceunavailable delete backend error core google cloud http py serviceunavailable
| 1
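A common mitigation for this kind of teardown flake is to retry the delete on transient 503s with backoff. A hedged sketch; the helper name and parameters are invented for illustration, and newer google-cloud-storage releases ship built-in retry configuration that should be preferred:
```python
import time

from google.api_core import exceptions


def delete_blob_with_retry(blob, attempts=5, base_delay=1.0):
    """Delete a blob, retrying on transient 503s with exponential backoff.

    Illustrative helper only; check the library's own retry support
    before hand-rolling this.
    """
    for attempt in range(attempts):
        try:
            blob.delete()
            return
        except exceptions.ServiceUnavailable:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```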
|
18,319
| 24,438,398,810
|
IssuesEvent
|
2022-10-06 13:06:40
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
opened
|
Update Development Status classifier in setup.cfg
|
process + tools
|
Currently it is [`Development Status :: 3 - Alpha`](https://github.com/pystatgen/sgkit/blob/main/setup.cfg#L13), should we change it to [Beta or Production/Stable](https://pypi.org/classifiers/), or perhaps remove entirely?
|
1.0
|
Update Development Status classifier in setup.cfg - Currently it is [`Development Status :: 3 - Alpha`](https://github.com/pystatgen/sgkit/blob/main/setup.cfg#L13), should we change it to [Beta or Production/Stable](https://pypi.org/classifiers/), or perhaps remove entirely?
|
process
|
update development status classifier in setup cfg currently it is should we change it to or perhaps remove entirely
| 1
|
12,604
| 15,008,122,828
|
IssuesEvent
|
2021-01-31 08:35:42
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Improve S3Select Performance in alerts api for large numbers of files close in time
|
story team:data processing
|
### Description
If there are hundreds of thousands of files in a short timespan for an alert, the alerts API times out due to the listing time (we list serially).
NOTE: we may want to wait on this until AppSync is removed, since we can then have more than a 30-second timeout; this is a corner case, and waiting a minute or two could be acceptable.
Suggested approach:
- raise alerts lambda size from 512 to 2048 to get more speed and concurrency
- raise s3 select concurrency to 100
- refactor s3 select search to list concurrently
- this can be done by driving the S3 Select worker goroutines via a channel using concurrent S3 listings, where each listing divides the time span of the alert (or starts from the next token) by the concurrency and searches each range segment at the same time. Since we want the first N returned, this change means we have to wait until all listings are done and then sort the results (we cannot stop early).
### Acceptance Criteria
- Alerts API does not timeout on a test of 200k objects searched
|
1.0
|
Improve S3Select Performance in alerts api for large numbers of files close in time - ### Description
If there are hundreds of thousands of files in a short timespan for an alert, the alerts API times out due to the listing time (we list serially).
NOTE: we may want to wait on this until AppSync is removed, since we can then have more than a 30-second timeout; this is a corner case, and waiting a minute or two could be acceptable.
Suggested approach:
- raise alerts lambda size from 512 to 2048 to get more speed and concurrency
- raise s3 select concurrency to 100
- refactor s3 select search to list concurrently
- this can be done by driving the S3 Select worker goroutines via a channel using concurrent S3 listings, where each listing divides the time span of the alert (or starts from the next token) by the concurrency and searches each range segment at the same time. Since we want the first N returned, this change means we have to wait until all listings are done and then sort the results (we cannot stop early).
### Acceptance Criteria
- Alerts API does not timeout on a test of 200k objects searched
|
process
|
improve performance in alerts api for large numbers of files close in time description if there are of thousands of files in a short timespan for an alert the alerts api times out due to the listing time we list serially note we may want to wait on this until appsync is removed since we can have more than a timeout and this is a corner case waiting a min or two could be ok suggested approach raise alerts lambda size from to to get more speed and concurrency raise select concurrency to refactor select search to list concurrently this can be done by driving the worker go routines via a channel using concurrent listings where each listing divides the time span of the alert or starting by next token by the concurrency and searches each range segment at the same time since we want the first n returned this change means we have to wait until all listings are done sort results cannot stop early acceptance criteria alerts api does not timeout on a test of objects searched
| 1
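Panther itself is written in Go, so purely as an illustration of the suggested approach (divide the alert's time span by the concurrency, list each segment in parallel, then merge and sort because there is no early stop), a small Python sketch with invented helper names:
```python
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime

def list_segment(start: datetime, end: datetime) -> list:
    """Stand-in for one S3 listing over [start, end); hypothetical helper."""
    return []

def concurrent_list(alert_start: datetime, alert_end: datetime,
                    concurrency: int = 100) -> list:
    span = (alert_end - alert_start) / concurrency
    segments = [(alert_start + i * span, alert_start + (i + 1) * span)
                for i in range(concurrency)]
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        parts = pool.map(lambda seg: list_segment(*seg), segments)
    # We want the first N overall, so we must wait for every segment,
    # flatten, and sort -- we cannot stop early.
    objects = [obj for part in parts for obj in part]
    return sorted(objects, key=lambda o: o["LastModified"])  # field name assumed
```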
|
18,766
| 24,670,599,611
|
IssuesEvent
|
2022-10-18 13:32:09
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Consent API] Consent record with revoked state should be created, when participant account is deleted in the mobile app
|
Bug P2 Response datastore Process: Fixed Process: Tested QA Process: Tested dev
|
**Pre-condition:** Closed/combined/iOS open study should be created in the study builder
**Steps:**
1. Sign in / Sign up
2. Enroll for the closed / combined study and then withdrawn from the study
3. Open the PM , and add same user account for different site in the PM
4. Then Open the mobile app
5. Again enroll for the same study
6. Go to 'My account' section and delete the user account
7. Go to that particular consent record and Verify
**AR:** Consent record with revoked state is not getting created, when participant account is deleted in the mobile app
**ER:** Consent record with revoked state should be created, when participant account is deleted in the mobile app
**Note:**
1. Issue is observed only when same user account is added in different sites in the participant manager
2. Issue is not observed when same user account is added in the same site in the participant manager
|
3.0
|
[Consent API] Consent record with revoked state should be created, when participant account is deleted in the mobile app -
**Pre-condition:** Closed/combined/iOS open study should be created in the study builder
**Steps:**
1. Sign in / Sign up
2. Enroll for the closed / combined study and then withdrawn from the study
3. Open the PM , and add same user account for different site in the PM
4. Then Open the mobile app
5. Again enroll for the same study
6. Go to 'My account' section and delete the user account
7. Go to that particular consent record and Verify
**AR:** Consent record with revoked state is not getting created, when participant account is deleted in the mobile app
**ER:** Consent record with revoked state should be created, when participant account is deleted in the mobile app
**Note:**
1. Issue is observed only when same user account is added in different sites in the participant manager
2. Issue is not observed when same user account is added in the same site in the participant manager
|
process
|
consent record with revoked state should be created when participant account is deleted in the mobile app pre condition closed combined ios open study should be created in the study builder steps sign in sign up enroll for the closed combined study and then withdrawn from the study open the pm and add same user account for different site in the pm then open the mobile app again enroll for the same study go to my account section and delete the user account go to that particular consent record and verify ar consent record with revoked state is not getting created when participant account is deleted in the mobile app er consent record with revoked state should be created when participant account is deleted in the mobile app note issue is observed only when same user account is added in different sites in the participant manager issue is not observed when same user account is added in the same site in the participant manager
| 1
|
189,723
| 15,193,990,477
|
IssuesEvent
|
2021-02-16 02:19:38
|
michaelrsweet/pappl
|
https://api.github.com/repos/michaelrsweet/pappl
|
closed
|
manpages have groff warnings
|
documentation priority-low
|
**Describe the bug**
`lintian` Debian linting tool shows manpage formatting warnings:
```
groff-message usr/share/man/man3/pappl-client.3.gz 235: warning: macro 'IN' not defined
groff-message usr/share/man/man3/pappl-device.3.gz 275: warning: macro 'IN' not defined
groff-message usr/share/man/man3/pappl-job.3.gz 156: warning: macro 'aborted-by-system'' not defined (possibly missing space after 'ab')
groff-message usr/share/man/man3/pappl-job.3.gz 160: warning: macro 'compression-error'' not defined
groff-message usr/share/man/man3/pappl-job.3.gz 164: warning: macro 'document-format-error'' not defined (possibly missing space after 'do')
groff-message usr/share/man/man3/pappl-job.3.gz 168: warning: macro 'document-password-error'' not defined (possibly missing space after 'do')
…
```
I'm by far not a (g)roff expert, but it seems that single quotes (') mark macro starters and need escaping. I also haven't determined whether this is a direct formatting issue in `man/pappl-job-body.man` or in codedoc.
|
1.0
|
manpages have groff warnings - **Describe the bug**
`lintian` Debian linting tool shows manpage formatting warnings:
```
groff-message usr/share/man/man3/pappl-client.3.gz 235: warning: macro 'IN' not defined
groff-message usr/share/man/man3/pappl-device.3.gz 275: warning: macro 'IN' not defined
groff-message usr/share/man/man3/pappl-job.3.gz 156: warning: macro 'aborted-by-system'' not defined (possibly missing space after 'ab')
groff-message usr/share/man/man3/pappl-job.3.gz 160: warning: macro 'compression-error'' not defined
groff-message usr/share/man/man3/pappl-job.3.gz 164: warning: macro 'document-format-error'' not defined (possibly missing space after 'do')
groff-message usr/share/man/man3/pappl-job.3.gz 168: warning: macro 'document-password-error'' not defined (possibly missing space after 'do')
…
```
I'm by far not a (g)roff expert, but it seems that single quotes (') mark macro starters and need escaping. I also haven't determined whether this is a direct formatting issue in `man/pappl-job-body.man` or in codedoc.
|
non_process
|
manpages have groff warnings describe the bug lintian debian linting tool shows manpage formatting warnings groff message usr share man pappl client gz warning macro in not defined groff message usr share man pappl device gz warning macro in not defined groff message usr share man pappl job gz warning macro aborted by system not defined possibly missing space after ab groff message usr share man pappl job gz warning macro compression error not defined groff message usr share man pappl job gz warning macro document format error not defined possibly missing space after do groff message usr share man pappl job gz warning macro document password error not defined possibly missing space after do β¦ i m by far not a g roff expert but it seems that single quotes mark macro starters and need escapes i also haven t determined whether this is a direct formatting issue in man pappl job body man or in codedoc
| 0
|
21,612
| 30,016,066,379
|
IssuesEvent
|
2023-06-26 18:49:43
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Root span logic in the tailsampling processor
|
enhancement Stale processor/tailsampling
|
### Component(s)
processor/tailsampling
### Is your feature request related to a problem? Please describe.
The tail sampling processor offers a rich set of logic functionality. The building blocks are highly technical, and it is sometimes quite hard to translate them into the complex use cases that make sense for users.
For example - if I want to sample a specific service by 50% and a different service by 30% I can create a policy that seems straightforward:
```
tail_sampling:
policies:
[
{
name: service-a-traces,
type: and,
and: {
and_sub_policy:
[
{
name: service-a,
type: string_attribute,
string_attribute: { key: service.name, values: [ "service-a" ]}
},
{
name: probability-policy-1,
type: probabilistic,
probabilistic: {sampling_percentage: 50}
}
]
}
},
{
name: service-b-traces,
type: and,
and: {
and_sub_policy:
[
{
name: service-b,
type: string_attribute,
string_attribute: { key: service.name, values: [ "service-b" ]}
},
{
name: probability-policy-2,
type: probabilistic,
probabilistic: {sampling_percentage: 30}
}
]
}
}
]
```
In fact, setting probability values on different services does not guarantee a given percentage: ALL rules are evaluated, and since decisions apply to full traces, spans from different services may be sampled by these rules, resulting in a percentage that is not absolute.
It seems very useful to have logic that allows setting a condition applied only to "root spans". Currently this is not available in the processor.
Root spans are identified by checking that the parentSpanId field, which exists on every span, is empty (""). Currently there is no logic item in the processor that checks for this.
### Describe the solution you'd like
Add a rule type that will check for root span existence:
`{
name: test-policy-8,
type: root_span
}`
### Describe alternatives you've considered
I checked all the current rules to see whether they cover the parentSpanId field, and found that Attributes (used for string_attribute) does not consider this field; I also checked trace_state.
I also examined the possibility of using the attribute processor to expose it in some way that would make it available. This is awkward in any case, and I could not find an easy way to add it.
Adding a policy that checks for this in the processor is quite simple.
### Additional context
_No response_
|
1.0
|
Root span logic in the tailsampling processor - ### Component(s)
processor/tailsampling
### Is your feature request related to a problem? Please describe.
The tail sampling processor offers a rich set of logic functionality. The building blocks are highly technical, and it is sometimes quite hard to translate them into the complex use cases that make sense for users.
For example - if I want to sample a specific service by 50% and a different service by 30% I can create a policy that seems straightforward:
```
tail_sampling:
policies:
[
{
name: service-a-traces,
type: and,
and: {
and_sub_policy:
[
{
name: service-a,
type: string_attribute,
string_attribute: { key: service.name, values: [ "service-a" ]}
},
{
name: probability-policy-1,
type: probabilistic,
probabilistic: {sampling_percentage: 50}
}
]
}
},
{
name: service-b-traces,
type: and,
and: {
and_sub_policy:
[
{
name: service-b,
type: string_attribute,
string_attribute: { key: service.name, values: [ "service-b" ]}
},
{
name: probability-policy-2,
type: probabilistic,
probabilistic: {sampling_percentage: 30}
}
]
}
}
]
```
In fact, setting probability values on different services does not guarantee a given percentage: ALL rules are evaluated, and since decisions apply to full traces, spans from different services may be sampled by these rules, resulting in a percentage that is not absolute.
It seems very useful to have logic that allows setting a condition applied only to "root spans". Currently this is not available in the processor.
Root spans are identified by checking that the parentSpanId field, which exists on every span, is empty (""). Currently there is no logic item in the processor that checks for this.
### Describe the solution you'd like
Add a rule type that will check for root span existence:
`{
name: test-policy-8,
type: root_span
}`
### Describe alternatives you've considered
I checked all the current rules to see whether they cover the parentSpanId field, and found that Attributes (used for string_attribute) does not consider this field; I also checked trace_state.
I also examined the possibility of using the attribute processor to expose it in some way that would make it available. This is awkward in any case, and I could not find an easy way to add it.
Adding a policy that checks for this in the processor is quite simple.
### Additional context
_No response_
|
process
|
root span logic in the tailsampling processor component s processor tailsampling is your feature request related to a problem please describe the tail sampling processor offers a reach set of logic functionality the building blocks are highly technical and it is sometimes quite hard to translate them to complex use cases that make sense for users for example if i want to sample a specific service by and a different service by i can create a policy that seems straightforward tail sampling policies name service a traces type and and and sub policy name service a type string attribute string attribute key service name values name probability policy type probabilistic probabilistic sampling percentage name service b traces type and and and sub policy name service b type string attribute string attribute key service name values name probability policy type probabilistic probabilistic sampling percentage the fact is that setting probability values on different services does not ensure a certain percentage because all rules are evaluated and since the decisions include full traces spans from different services may be sampled by these rules resulting in a percentage that is not absolute it seems very useful to have logic that allows setting a condition that will only be applied on root spans currently this is not available for the processor root spans are set by checking that the field parentspanid that exists for every span is empty currently there is no logic item in the processor that checks for it describe the solution you d like add a rule type that will check for root span existance name test policy type root span describe alternatives you ve considered i checked for all the current rules if they cover the parentspanid field found that attributes used for string attribute is not considering this field also checked for trace state examined also the possibility to use the attribute processor to add it in some way that will make it available this is an awkward way in any case but nevertheless i could not find an easy way to add it adding a processor that checks on is quite simple additional context no response
| 1
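The collector is Go, so as a language-neutral illustration of the requested logic (detect a root span via an empty parentSpanId and apply the per-service percentage once per trace), a Python sketch; the span field names and helpers are assumptions, not the collector's API:
```python
import random

def is_root_span(span: dict) -> bool:
    # A root span has an empty parent span ID (field name assumed).
    return span.get("parent_span_id", "") == ""

def sample_decision(trace: list, rates: dict) -> bool:
    """Apply the per-service rate to the root span only, so each trace is
    counted exactly once. Real samplers typically hash the trace ID rather
    than drawing a random number, to keep decisions deterministic."""
    roots = [s for s in trace if is_root_span(s)]
    if not roots:
        return False
    rate = rates.get(roots[0]["service_name"], 0.0)
    return random.random() * 100 < rate

# e.g. sample_decision(trace, {"service-a": 50.0, "service-b": 30.0})
```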
|
126,551
| 4,997,378,344
|
IssuesEvent
|
2016-12-09 16:36:13
|
google/fonts
|
https://api.github.com/repos/google/fonts
|
opened
|
Japanese Early Access font issues
|
Early Access Priority 2 - Important but not Urgent
|
https://twitter.com/jutha3_21/status/806666758450913280 mentions a few issues (text via Google Translate)
> Well, as early as 2017 version of the Japanese free font summary has come out. The one I often use is Hanpei Ming Dynasty, M +, around a cutaway. Also, why is not there in this list, but Y.OzFont (http://yozvox.web.fc2.com) too.
>
> Since the poetry of http://tanakajutha.tumblr.com gives priority to Ming Dynasty, it is glad if you can install it and view it. Other places prioritize Yu Gothic (font - weight: 500px;) or Yu Mincho entered in the latest Windows and Mac.
>
> Hannari Mincho tried the web font of Google Fonts + (https://googlefonts.github.io/japanese/), but in my parents' environment (Windows 10, Google Chrome), for example, only Hiragana was applied So abandon. There is also a disadvantage that capacity increases.
|
1.0
|
Japanese Early Access font issues - https://twitter.com/jutha3_21/status/806666758450913280 mentions a few issues (text via Google Translate)
> Well, as early as 2017 version of the Japanese free font summary has come out. The one I often use is Hanpei Ming Dynasty, M +, around a cutaway. Also, why is not there in this list, but Y.OzFont (http://yozvox.web.fc2.com) too.
>
> Since the poetry of http://tanakajutha.tumblr.com gives priority to Ming Dynasty, it is glad if you can install it and view it. Other places prioritize Yu Gothic (font - weight: 500px;) or Yu Mincho entered in the latest Windows and Mac.
>
> Hannari Mincho tried the web font of Google Fonts + (https://googlefonts.github.io/japanese/), but in my parents' environment (Windows 10, Google Chrome), for example, only Hiragana was applied So abandon. There is also a disadvantage that capacity increases.
|
non_process
|
japanese early access font issues mentions a few issues text via google translate well as early as version of the japanese free font summary has come out the one i often use is hanpei ming dynasty m around a cutaway also why is not there in this list but y ozfont too since the poetry of gives priority to ming dynasty it is glad if you can install it and view it other places prioritize yu gothic font weight or yu mincho entered in the latest windows and mac hannari mincho tried the web font of google fonts but in my parents environment windows google chrome for example only hiragana was applied so abandon there is also a disadvantage that capacity increases
| 0
|
3,773
| 6,743,153,082
|
IssuesEvent
|
2017-10-20 10:45:14
|
lijiarui/ticket-bot
|
https://api.github.com/repos/lijiarui/ticket-bot
|
closed
|
user input data preprocessing
|
pre process
|
- [ ] change all blanks to English `,`
- [ ] change Chinese `,`, `。`, `、`, `〉`, `〈` to English version `,`, `.`, `,`, `>`, `<`
### Related Issue
#24
|
1.0
|
user input data preprocessing - - [ ] change all blanks to English `,`
- [ ] change Chinese `,`, `。`, `、`, `〉`, `〈` to English version `,`, `.`, `,`, `>`, `<`
### Related Issue
#24
|
process
|
user input data preprocessing change all blanks to english change chinese οΌ γ γ γ γ to english version related issue
| 1
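A minimal sketch of the preprocessing step described above, using `str.maketrans`; the punctuation list is inferred from the issue's mis-encoded characters and the ASCII targets it gives, so treat the mapping as an assumption:
```python
# Map Chinese/full-width punctuation to ASCII, and turn blanks into ","
# per the first checkbox in the issue.
PUNCT_MAP = str.maketrans({
    ",": ",",   # full-width comma
    "。": ".",   # ideographic full stop
    "、": ",",   # enumeration comma
    "〉": ">",
    "〈": "<",
    " ": ",",
})

def normalize(text: str) -> str:
    return text.translate(PUNCT_MAP)

print(normalize("北京 上海、广州。"))  # -> 北京,上海,广州.
```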
|
18,886
| 24,825,116,444
|
IssuesEvent
|
2022-10-25 19:54:42
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
closed
|
refactor Hasher lookup handling in the chiplets bus
|
processor v0.3
|
As discussed in #348, the first version of the hasher handling in the chiplets bus was straightforward and left room for several future optimizations.
In particular, the following should be refactored and optimized in the future:
- how hasher lookup values are computed
- how lookups are requested from the decoder
- how lookups are stored for "providing" them to the bus from the hash chiplet module.
Each item is described below. In all cases, there are also inline TODOs in the code with comments.
## how hasher lookup values are computed
The hasher state is currently stored in the `HasherLookup` struct, which makes it heavy and means that other things which contain it become correspondingly much heavier (e.g. `ChipletsLookupRow`). Similarly, the "next" hasher state is held in the `Absorb` variant of the `HasherLookupContext` enum, which has the same knock-on effects. The aforementioned struct & enum are [here](https://github.com/maticnetwork/miden/blob/next/processor/src/chiplets/hasher/lookups.rs).
Instead, we could get the state from the trace when the lookup values are included in the $b_{chip}$ column. However, because the trace is in column-major form, this might not be more performant, so any change should be benchmarked.
Edit (tohrnii):
Once we remove the hasher state from `HasherLookup`, we should refactor the [hash_span_block](https://github.com/maticnetwork/miden/blob/next/processor/src/chiplets/hasher/mod.rs#L192) method to avoid doing `is_memoized` checks multiple times.
## how lookups are requested from the decoder
Currently, the entire hash computation is done when the decoder makes its initialization request and all intermediate lookups required for the correctness of $b_{chip}$ are queued. When the decoder needs subsequent lookups (e.g. as it absorbs new operations during `RESPAN` or when it completes code blocks and needs the return hash), it sends a request, and they are dequeued and sent to the $b_{chip}$ bus.
Instead, it might be better to compute the lookups at the time they are needed. This requires refactoring the decoder and the functions in the hasher more extensively.
See the relevant discussions in the previous PR:
- https://github.com/maticnetwork/miden/pull/348#discussion_r937418159
- https://github.com/maticnetwork/miden/pull/348#discussion_r937400503
## how lookups are stored for "providing" them to the bus from the hash chiplet module.
Currently, all lookups are saved as they are computed [during hash computations](https://github.com/maticnetwork/miden/blob/7c86a2e57050009a276e5adc1e647ba17cfa1e7f/processor/src/chiplets/hasher/mod.rs#L97). At the end, during `fill_trace`, the hash chiplet iterates through and provides each lookup to the bus $b_{chip}$ one by one.
There are a few different options here:
1. provide the lookups to the bus as soon as they are computed, instead of saving them and sending them later. This would require refactoring the request/response handling a bit in the chiplet bus module since there is currently an assumption that all requests come first. It's a simple refactor, but it does also mean that the Hash chiplet would work differently from the Bitwise and Memory chiplets
2. during the `fill_trace` function, add the lookup "responses" to the chiplet bus in bulk instead of individually.
There are a few relevant comments about this:
- https://github.com/maticnetwork/miden/pull/348#discussion_r936660086
- https://github.com/maticnetwork/miden/pull/348#discussion_r936663453
- https://github.com/maticnetwork/miden/pull/348#discussion_r937437489
|
1.0
|
refactor Hasher lookup handling in the chiplets bus - As discussed in #348, the first version of the hasher handling in the chiplets bus was straightforward and left room for several future optimizations.
In particular, the following should be refactored and optimized in the future:
- how hasher lookup values are computed
- how lookups are requested from the decoder
- how lookups are stored for "providing" them to the bus from the hash chiplet module.
Each item is described below. In all cases, there are also inline TODOs in the code with comments.
## how hasher lookup values are computed
The hasher state is currently stored in the `HasherLookup` struct, which makes it heavy and means that other things which contain it become correspondingly much heavier (e.g. `ChipletsLookupRow`). Similarly, the "next" hasher state is held in the `Absorb` variant of the `HasherLookupContext` enum, which has the same knock-on effects. The aforementioned struct & enum are [here](https://github.com/maticnetwork/miden/blob/next/processor/src/chiplets/hasher/lookups.rs).
Instead, we could get the state from the trace when the lookup values are included in the $b_{chip}$ column. However, because the trace is in column-major form, this might not be more performant, so any change should be benchmarked.
Edit (tohrnii):
Once we remove the hasher state from `HasherLookup`, we should refactor the [hash_span_block](https://github.com/maticnetwork/miden/blob/next/processor/src/chiplets/hasher/mod.rs#L192) method to avoid doing `is_memoized` checks multiple times.
## how lookups are requested from the decoder
Currently, the entire hash computation is done when the decoder makes its initialization request and all intermediate lookups required for the correctness of $b_{chip}$ are queued. When the decoder needs subsequent lookups (e.g. as it absorbs new operations during `RESPAN` or when it completes code blocks and needs the return hash), it sends a request, and they are dequeued and sent to the $b_{chip}$ bus.
Instead, it might be better to compute the lookups at the time they are needed. This requires refactoring the decoder and the functions in the hasher more extensively.
See the relevant discussions in the previous PR:
- https://github.com/maticnetwork/miden/pull/348#discussion_r937418159
- https://github.com/maticnetwork/miden/pull/348#discussion_r937400503
## how lookups are stored for "providing" them to the bus from the hash chiplet module.
Currently, all lookups are saved as they are computed [during hash computations](https://github.com/maticnetwork/miden/blob/7c86a2e57050009a276e5adc1e647ba17cfa1e7f/processor/src/chiplets/hasher/mod.rs#L97). At the end, during `fill_trace`, the hash chiplet iterates through and provides each lookup to the bus $b_{chip}$ one by one.
There are a few different options here:
1. provide the lookups to the bus as soon as they are computed, instead of saving them and sending them later. This would require refactoring the request/response handling a bit in the chiplet bus module since there is currently an assumption that all requests come first. It's a simple refactor, but it does also mean that the Hash chiplet would work differently from the Bitwise and Memory chiplets
2. during the `fill_trace` function, add the lookup "responses" to the chiplet bus in bulk instead of individually.
There are a few relevant comments about this:
- https://github.com/maticnetwork/miden/pull/348#discussion_r936660086
- https://github.com/maticnetwork/miden/pull/348#discussion_r936663453
- https://github.com/maticnetwork/miden/pull/348#discussion_r937437489
|
process
|
refactor hasher lookup handling in the chiplets bus as discussed in the first version of the hasher handling in the chiplets bus was straightforward and left room for several future optimizations in particular the following should be refactored and optimized in the future how hasher lookup values are computed how lookups are requested from the decoder how lookups are stored for providing them to the bus from the hash chiplet module each item is described below in all cases there are also inline todos in the code with comments how hasher lookup values are computed the hasher state is currently stored in the hasherlookup struct which makes it heavy and means that other things which contain it become correspondingly much heavier e g chipletslookuprow similarly the next hasher state is held in the absorb variant of the hasherlookupcontext enum which has the same knock on effects the aforementioned struct enum are instead we could get the state from the trace when the lookup values are included in the b chip column however because the trace is in column major form this might not be more performant so any change should be benchmarked edit tohrnii once we remove the hasher state from hasherlookup we should refactor the method to avoid doing is memoized checks multiple times how lookups are requested from the decoder currently the entire hash computation is done when the decoder makes its initialization request and all intermediate lookups required for the correctness of b chip are queued when the decoder needs subsequent lookups e g as it absorbs new operations during respan or when the completes code blocks and needs the return hash it sends a request and they are dequeued and sent to the b chip bus instead it might be better to compute the lookups at the time they are needed this requires refactoring the decoder and the functions in the hasher more extensively see the relevant discussions in the previous pr how lookups are stored for providing them to the bus from the hash chiplet module currently all lookups are saved as they are computed at the end during fill trace the hash chiplet iterates through and provides each lookup to the bus b chip one by one there are a few different options here provide the lookups to the bus as soon as they are computed instead of saving them and sending them later this would require refactoring the request response handling a bit in the chiplet bus module since there is currently an assumption that all requests come first it s a simple refactor but it does also mean that the hash chiplet would work differently from the bitwise and memory chiplets during the fill trace function add the lookup responses to the chiplet bus in bulk instead of individually there are a few relevant comments about this
| 1
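Miden is a Rust codebase; purely to illustrate the current queue-then-dequeue pattern the issue wants to rethink (all lookups computed eagerly at hash time, dequeued later as the decoder requests them), a toy Python sketch with invented names:
```python
from collections import deque

class HasherLookups:
    """Toy model: lookups are computed eagerly at hash time and queued;
    the decoder dequeues them later when it actually needs each one."""

    def __init__(self):
        self._queue = deque()

    def hash_block(self, words):
        # Eagerly compute every intermediate lookup for this computation.
        for i, w in enumerate(words):
            self._queue.append(("lookup", i, w))

    def request_lookup(self):
        # Decoder-side: dequeue the next precomputed lookup for b_chip.
        return self._queue.popleft()
```
The refactor options in the issue amount to either computing entries lazily at request time or handing the queued entries to the bus in bulk during `fill_trace`.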
|
9,647
| 12,609,747,370
|
IssuesEvent
|
2020-06-12 02:38:48
|
OUDcollective/twenty20times
|
https://api.github.com/repos/OUDcollective/twenty20times
|
closed
|
Annotate the Annotation: CONTINUOUS INTEGRATION | PROJECT MGMT | WORLD CLASS BEST PRACTICES file:Update README.md - this is a 'Pull Request' #1.png - Google Drive
|
good first issue with-wind workflow-process
|

---
# ANNOTATE THE ANNOTATION
## TOOLS :
1. [GitHub.com](https://GitHub.com)
2. [Awesome Screenshot](https://awesomescreenshot.com)
3. [Google Drive: link to img - how do you get rid of this >(mark-down intricacies](https://drive.google.com/file/d/1EL24a_hBYVrWLPSGK6lfHhSPBOdu_5a2/view))
4. [with-wind](https://my.bio/thewindllc)
---
---
> Integration: Now onto the continuous part.
---
**Source URL**:
[Google Drive Link to File](https://drive.google.com/file/d/1EL24a_hBYVrWLPSGK6lfHhSPBOdu_5a2/view)
<table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.38</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>2560x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>2560x888</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>80%</td></tr></table>
|
1.0
|
Annotate the Annotation: CONTINUOUS INTEGRATION | PROJECT MGMT | WORLD CLASS BEST PRACTICES file:Update README.md - this is a 'Pull Request' #1.png - Google Drive - 
---
# ANNOTATE THE ANNOTATION
## TOOLS :
1. [GitHub.com](https://GitHub.com)
2. [Awesome Screenshot](https://awesomescreenshot.com)
3. [Google Drive: link to img - how do you get rid of this >(mark-down intricacies](https://drive.google.com/file/d/1EL24a_hBYVrWLPSGK6lfHhSPBOdu_5a2/view))
4. [with-wind](https://my.bio/thewindllc)
---
---
> Integration: Now onto the continuous part.
---
**Source URL**:
[Google Drive Link to File](https://drive.google.com/file/d/1EL24a_hBYVrWLPSGK6lfHhSPBOdu_5a2/view)
<table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.38</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>2560x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>2560x888</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>80%</td></tr></table>
|
process
|
annotate the annotation continuous integration project mgmt world class best practices file update readme md this is a pull request png google drive annotate the annotation tools π integration now onto the continuous part source url browser chrome os windows bit screen size viewport size pixel ratio zoom level
| 1
|