Columns:
- Unnamed: 0 (int64): 0 to 832k
- id (float64): 2.49B to 32.1B
- type (string): 1 class
- created_at (string): length 19
- repo (string): lengths 7 to 112
- repo_url (string): lengths 36 to 141
- action (string): 3 classes
- title (string): lengths 1 to 744
- labels (string): lengths 4 to 574
- body (string): lengths 9 to 211k
- index (string): 10 classes
- text_combine (string): lengths 96 to 211k
- label (string): 2 classes
- text (string): lengths 96 to 188k
- binary_label (int64): 0 to 1
9,684
| 12,685,271,559
|
IssuesEvent
|
2020-06-20 03:15:15
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Process and remove copy-to during preprocessing
|
feature preprocess priority/medium stale
|
Process `copy-to` attributes by creating the copied resource, but also remove the `copy-to` attribute and replace the value of `href` with the new resource URI. This will allow removing copy-to handling from transtype-specific code.
|
1.0
|
Process and remove copy-to during preprocessing - Process `copy-to` attributes by creating the copied resource, but also remove the `copy-to` attribute and replace the value of `href` with the new resource URI. This will allow removing copy-to handling from transtype-specific code.
|
process
|
process and remove copy to during preprocessing process copy to attributes by creating the copied resource but also remove the copy to attribute and replace with value of href with the new resource uri this will allow removing copy to handling from transtype specific code
| 1
|
148,892
| 23,394,181,607
|
IssuesEvent
|
2022-08-11 21:04:58
|
department-of-veterans-affairs/vets-design-system-documentation
|
https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation
|
closed
|
Minor updates to Alert - Expandable component
|
vsp-design-system-team va-alert-expandable
|
## What
Based on our experience implementing the new "Alert - Expandable" component for the new Facility COVID status, we have a few suggestions for improvements.
### Issue #1: At certain combinations of viewport and text length, the caret wraps on a line by itself
<details><summary>Screenshot</summary>
https://www.va.gov/alaska-health-care/locations/
<img width="765" alt="Locations___VA_Alaska_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170289179-4a0e10ec-6d94-4308-a17a-81e578ff688b.png">
</details>
Proposed solution: Don't let the caret wrap on a line by itself. Either keep it aligned right, like the Accordion component, or prevent text wrap immediately before the caret.
### Issue #2: Long text wraps under the icon
<details><summary>Screenshot</summary>
<img width="296" alt="Locations___VA_Alaska_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170289470-35d65edd-f6ca-41ae-ac1e-752b424bb0e8.png">
</details>
Proposed solutions:
a) Limit text length (this will be very difficult from a content governance perspective)
b) Indent the 2nd line so that it aligns with the first line, like in the non-experimental [Alert component](https://design.va.gov/components/alert)
<details><summary>Alert</summary>
<img width="579" alt="Alert_-_VA_gov_Design_System" src="https://user-images.githubusercontent.com/643678/170290467-de8e9479-2499-4ab3-ab35-958f868ce4a9.png">
</details>
### Issue #3: Defect with the height (UPDATE: this issue is now being tracked in #1093)
In certain situations, the `--calc-max-height:calc(192px + 2rem);` incorrectly calculates the height.
<details><summary>Screenshot</summary>
<img width="928" alt="Locations___VA_Alaska_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170291173-c8a05d6d-0832-45c5-8e82-d53d30763e3a.png">
</details>
This seems to happen when the `alert-body` is longer, like the "Levels high" variant in [Alaska](https://www.va.gov/alaska-health-care/locations/). It doesn't happen with the Low variant (e.g. [Memphis](https://www.va.gov/memphis-health-care/locations/)). It's only happening on the Locations List page, but not on the individual facility pages.
Solution: CSS or js updates?
### Issue #4: Vertical rhythm / padding issue
Depending on the HTML of the `alert-body`, the top and bottom padding seem excessive. `<p>` or `<ul>` each contain top and bottom margins, which may be creating unintended padding.
<details><summary>Screenshot</summary>
<img width="596" alt="Locations___VA_Memphis_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170292334-1bafa2d7-4dc6-44be-8758-c3fad2c1c5c4.png">
</details>
Proposed solution: Unclear. Possibly make padding exceptions for `p:first` and `ul:first`, and `p:last` and `ul:last` in the component, but there's probably a better way?
### Issue #5: Remove left and right margin
This one we're really not sure about, but submitting anyway.
In `margin: 0px 1.2rem 0.8rem;`, the left/right 1.2rem seems like an assumption; `va-alert` itself appears to have no left/right margin.
<details><summary>Screenshot</summary>
<img width="1338" alt="Anchorage_VA_Medical_Center___VA_Alaska_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170293396-e7fd9236-870c-4689-8ae6-dc222176d68d.png">
</details>
## Why
> Explain why you think this should be added to the VA.gov design system.
>
> - What evidence do you have that it's needed by multiple services across VA?
> - What evidence do you have that it meets the needs of the users of those services?
> - Have you checked that it doesn't already exist in the VA.gov Design System?
See above.
## Anything else
> Include links to any examples, research or code to support your proposal, if available.
## Next steps
You may present your work to the Design System Council at an upcoming meeting. If you do not or cannot attend the Design System Council meeting, you can opt to get asynchronous approval.
Submit requests to join an upcoming Design System Council meeting in #platform-design-system.
During the meeting, the Design System Council Working Group will evaluate the request and make a decision.
If your request is approved, you can [add your component or pattern to the system](https://design.va.gov/about/contributing-to-the-design-system#4-add-your-component-or-pattern-to-the-system). If you have any questions on how to add your component or pattern to the system, please reach out to the Design System Team at #platform-design-system.
|
1.0
|
Minor updates to Alert - Expandable component - ## What
Based on our experience implementing the new "Alert - Expandable" component for the new Facility COVID status, we have a few suggestions for improvements.
### Issue #1: At certain combinations of viewport and text length, the caret wraps on a line by itself
<details><summary>Screenshot</summary>
https://www.va.gov/alaska-health-care/locations/
<img width="765" alt="Locations___VA_Alaska_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170289179-4a0e10ec-6d94-4308-a17a-81e578ff688b.png">
</details>
Proposed solution: Don't let the caret wrap on a line by itself. Either keep it aligned right, like the Accordion component, or prevent text wrap immediately before the caret.
### Issue #2: Long text wraps under the icon
<details><summary>Screenshot</summary>
<img width="296" alt="Locations___VA_Alaska_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170289470-35d65edd-f6ca-41ae-ac1e-752b424bb0e8.png">
</details>
Proposed solutions:
a) Limit text length (this will be very difficult from a content governance perspective)
b) Indent the 2nd line so that it aligns with the first line, like in the non-experimental [Alert component](https://design.va.gov/components/alert)
<details><summary>Alert</summary>
<img width="579" alt="Alert_-_VA_gov_Design_System" src="https://user-images.githubusercontent.com/643678/170290467-de8e9479-2499-4ab3-ab35-958f868ce4a9.png">
</details>
### Issue #3: Defect with the height (UPDATE: this issue is now being tracked in #1093)
In certain situations, the `--calc-max-height:calc(192px + 2rem);` incorrectly calculates the height.
<details><summary>Screenshot</summary>
<img width="928" alt="Locations___VA_Alaska_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170291173-c8a05d6d-0832-45c5-8e82-d53d30763e3a.png">
</details>
This seems to happen when the `alert-body` is longer, like the "Levels high" variant in [Alaska](https://www.va.gov/alaska-health-care/locations/). It doesn't happen with the Low variant (e.g. [Memphis](https://www.va.gov/memphis-health-care/locations/)). It's only happening on the Locations List page, but not on the individual facility pages.
Solution: CSS or js updates?
### Issue #4: Vertical rhythm / padding issue
Depending on the HTML of the `alert-body`, the top and bottom padding seem excessive. `<p>` or `<ul>` each contain top and bottom margins, which may be creating unintended padding.
<details><summary>Screenshot</summary>
<img width="596" alt="Locations___VA_Memphis_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170292334-1bafa2d7-4dc6-44be-8758-c3fad2c1c5c4.png">
</details>
Proposed solution: Unclear. Possibly make padding exceptions for `p:first` and `ul:first`, and `p:last` and `ul:last` in the component, but there's probably a better way?
### Issue #5: Remove left and right margin
This one we're really not sure about, but submitting anyway.
In `margin: 0px 1.2rem 0.8rem;`, the left/right 1.2rem seems like an assumption; `va-alert` itself appears to have no left/right margin.
<details><summary>Screenshot</summary>
<img width="1338" alt="Anchorage_VA_Medical_Center___VA_Alaska_Health_Care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/170293396-e7fd9236-870c-4689-8ae6-dc222176d68d.png">
</details>
## Why
> Explain why you think this should be added to the VA.gov design system.
>
> - What evidence do you have that it's needed by multiple services across VA?
> - What evidence do you have that it meets the needs of the users of those services?
> - Have you checked that it doesn't already exist in the VA.gov Design System?
See above.
## Anything else
> Include links to any examples, research or code to support your proposal, if available.
## Next steps
You may present your work to the Design System Council at an upcoming meeting. If you do not or cannot attend the Design System Council meeting, you can opt to get asynchronous approval.
Submit requests to join an upcoming Design System Council meeting in #platform-design-system.
During the meeting, the Design System Council Working Group will evaluate the request and make a decision.
If your request is approved, you can [add your component or pattern to the system](https://design.va.gov/about/contributing-to-the-design-system#4-add-your-component-or-pattern-to-the-system). If you have any questions on how to add your component or pattern to the system, please reach out to the Design System Team at #platform-design-system.
|
non_process
|
minor updates to alert expandable component what based on our experience implementing the new alert expandable component for the new facility covid status we have a few suggestions for improvements issue at certain combinations of viewport and text length the carat wraps on a line by itself screenshot img width alt locations va alaska health care veterans affairs src proposed solution don t let the carat wrap on a line by itself either keep it aligned right like the accordion component or prevent text wrap immediately before the carat issue long text wraps under the icon screenshot img width alt locations va alaska health care veterans affairs src proposed solutions a limit text length this will be very difficult from a content govenance perspective b indent the line so that it aligns with the first line like in the non experimental alert img width alt alert va gov design system src issue defect with the height update this issue is now being tracked in in certain situations the calc max height calc incorrectly calculates the height screenshot img width alt locations va alaska health care veterans affairs src this seems to happen when the alert body is longer like the levels high variant in it doesn t happen the low variant eg it s only happening on the locations list page but not on the indivdidual facility pages solution css or js updates issue vertical rhythm padding issue depending on the html of the alert body the top and bottom padding seem excessive or each contain top and bottom margins which may be creating unintended padding screenshot img width alt locations va memphis health care veterans affairs src proposed solution unclear possibly make padding exceptions for p first and ul first and p last and p last in the component but there s probably a better way issue remove left and right margin this one we re really not sure about but submitting anyway in margin the left right seems like an assumption seems like va alert has no left right margin screenshot img 
width alt anchorage va medical center va alaska health care veterans affairs src why explain why you think this should be added to the va gov design system what evidence do you have that it s needed by multiple services across va what evidence do you have that it meets the needs of the users of those services have you checked that it doesn t already exist in the va gov design system see above anything else include links to any examples research or code to support your proposal if available next steps you may present your work to the design system council at an upcoming meeting if you do not or cannot attend the design council meeting you can opt to get an asynchronous approval submit requests to join an upcoming design system council meeting in platform design system during the meeting the design system council working group will evaluate the request and make a decision if your request is approved you can if you have any questions on how to add your component or pattern to the system please reach out to the design system team at platform design system
| 0
|
4,189
| 7,136,311,303
|
IssuesEvent
|
2018-01-23 06:23:59
|
w3c/payment-request
|
https://api.github.com/repos/w3c/payment-request
|
closed
|
Move to an incremental feature release model
|
Process aid
|
With the monolithic "1.0" out of the way, we could consider now moving to an incremental spec maintenance and enhancement model based on [semantic versioning](http://semver.org).
Such a model would allow us to do incremental releases and bug fixes without the pressure of having to do a big "2.0" release. It would allow us to fix bugs quickly, and only rapidly release features that have consensus, are fully tested, and (hopefully) implemented.
|
1.0
|
Move to an incremental feature release model - With the monolithic "1.0" out of the way, we could consider now moving to an incremental spec maintenance and enhancement model based on [semantic versioning](http://semver.org).
Such a model would allow us to do incremental releases and bug fixes without the pressure of having to do a big "2.0" release. It would allow us to fix bugs quickly, and only rapidly release features that have consensus, are fully tested, and (hopefully) implemented.
|
process
|
move to an incremental feature release model with the monolithic out of the way we could consider now moving to an incremental spec maintenance and enhancement model based on such a model would allow us to do incremental releases and bug fixes without the pressure of having to do a big release it would allow us to fix bugs quickly and only rapidly release features that have consensus are fully tested and hopefully implemented
| 1
|
659,759
| 21,940,685,791
|
IssuesEvent
|
2022-05-23 17:45:46
|
idaholab/LOGOS
|
https://api.github.com/repos/idaholab/LOGOS
|
closed
|
update Pyomo library
|
task priority critical
|
**Is your feature request related to a problem? Please describe.**
Currently, we need to pin the Pyomo version to 5.7.3 or lower, since the newer versions have moved the PySP package to an independent library. See https://pyomo.readthedocs.io/en/latest/modeling_extensions/stochastic_programming.html
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [x] 1. Is it tagged with a type: defect or task?
- [x] 2. Is it tagged with a priority: critical, normal or minor?
- [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [x] 1. If the issue is a defect, is the defect fixed?
- [x] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [x] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [x] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [x] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
1.0
|
update Pyomo library - **Is your feature request related to a problem? Please describe.**
Currently, we need to pin the Pyomo version to 5.7.3 or lower, since the newer versions have moved the PySP package to an independent library. See https://pyomo.readthedocs.io/en/latest/modeling_extensions/stochastic_programming.html
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [x] 1. Is it tagged with a type: defect or task?
- [x] 2. Is it tagged with a priority: critical, normal or minor?
- [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [x] 1. If the issue is a defect, is the defect fixed?
- [x] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [x] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [x] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [x] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
non_process
|
update pyomo library is your feature request related to a problem please describe currently we need to fix the pyomo version to or less since the newer versions has moved the pysp package to an independent library see describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
| 0
|
161,856
| 6,137,400,333
|
IssuesEvent
|
2017-06-26 12:13:16
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Missing translation for noDescriptionAvailable in usergroups
|
bug pending review Priority: High Project: C040
|
`usergroups.noDescriptionAvailable` is not present in description files
|
1.0
|
Missing translation for noDescriptionAvailable in usergroups - `usergroups.noDescriptionAvailable` is not present in description files
|
non_process
|
missing translation for nodescriptionavailable in usergroups usergroups nodescriptionavailable is not present in description files
| 0
|
62,732
| 17,186,792,320
|
IssuesEvent
|
2021-07-16 04:04:51
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
White background on .mx_BaseAvatar_image makes dark profile pictures look terrible on dark theme
|
A-Avatar A-Theming P2 S-Minor S-Tolerable T-Defect
|
When you have a profile picture with a black edge, it gets a very thin white border around the image, which looks quite bad against the black background of Riot.
I assume this is here to make images with transparency visible on all background colors, but I would argue that having no default background lets those groups set an icon with a proper background color, whereas the current setup makes it impossible to have a good-looking icon on dark themes.
The background color really should not be visible on the edges of an image with no transparency, though.
|
1.0
|
White background on .mx_BaseAvatar_image makes dark profile pictures look terrible on dark theme - When you have a profile picture with a black edge, it gets a very thin white border around the image, which looks quite bad against the black background of Riot.
I assume this is here to make images with transparency visible on all background colors, but I would argue that having no default background lets those groups set an icon with a proper background color, whereas the current setup makes it impossible to have a good-looking icon on dark themes.
The background color really should not be visible on the edges of an image with no transparency, though.
|
non_process
|
white background on mx baseavatar image makes dark profile pictures look terrible on dark theme when you have a profile picture with a black edge it gets a very thin white border around the image which looks quite bad against the black background of riot i assume this is here to make images with transparency visible on all background colors but i would argue that having no default background lets those groups set an icon with a proper background color where as the current setup makes it impossible to have a good looking icon on dark themes the background color really should not be visible on the edges of an image with no transparency though
| 0
|
10,192
| 13,049,067,275
|
IssuesEvent
|
2020-07-29 13:32:53
|
Arch666Angel/mods
|
https://api.github.com/repos/Arch666Angel/mods
|
closed
|
Crash on invalid lab input definition
|
Angels Bio Processing Impact: Bug
|
**Describe the bug**
A clear and concise description of what the bug is.

**Additional context**
An easy fix for this issue is to replace
https://github.com/Arch666Angel/mods/blob/1a629fbbd112fb95b82fb86d3adfa996ebd1ca27/angelsbioprocessing/prototypes/bio-processing-override.lua#L40
with
```lua
for i, chk in pairs(labs.inputs or {}) do
```
|
1.0
|
Crash on invalid lab input definition - **Describe the bug**
A clear and concise description of what the bug is.

**Additional context**
An easy fix for this issue is to replace
https://github.com/Arch666Angel/mods/blob/1a629fbbd112fb95b82fb86d3adfa996ebd1ca27/angelsbioprocessing/prototypes/bio-processing-override.lua#L40
with
```lua
for i, chk in pairs(labs.inputs or {}) do
```
|
process
|
crash on invalid lab input definition describe the bug a clear and concise description of what the bug is additional context an easy fix for this issue is by replacing with lua for i chk in pairs labs inputs or do
| 1
|
130,749
| 18,149,711,318
|
IssuesEvent
|
2021-09-26 03:52:49
|
loft-sh/devspace
|
https://api.github.com/repos/loft-sh/devspace
|
closed
|
make deploy and purge options consistent
|
kind/design
|
**Is your feature request related to a problem?**
Short option "-d" means different things for purge and deploy.
Currently for deploy:
```
--deployments string Only deploy a specifc deployment (You can specify multiple deployments comma-separated
-d, --force-deploy Forces to (re-)deploy every deployment
```
But for purge
```
-d, --deployments string The deployment to delete (You can specify multiple deployments comma-separated, e.g. devspace-default,devspace-database etc
```
**Which solution do you suggest?**
Use `-d` as the short option for `--deployments` in both commands; the short option for `--force-deploy` should be `-f`.
**Which alternative solutions exist?**
Getting used to the inconsistency? Always using the long options?
**Additional context**
<!-- DO NOT EDIT BELOW THIS LINE -->
/kind feature
|
1.0
|
make deploy and purge options consistent - **Is your feature request related to a problem?**
Short option "-d" means different things for purge and deploy.
Currently for deploy:
```
--deployments string Only deploy a specifc deployment (You can specify multiple deployments comma-separated
-d, --force-deploy Forces to (re-)deploy every deployment
```
But for purge
```
-d, --deployments string The deployment to delete (You can specify multiple deployments comma-separated, e.g. devspace-default,devspace-database etc
```
**Which solution do you suggest?**
Use `-d` as the short option for `--deployments` in both commands; the short option for `--force-deploy` should be `-f`.
**Which alternative solutions exist?**
Getting used to the inconsistency? Always using the long options?
**Additional context**
<!-- DO NOT EDIT BELOW THIS LINE -->
/kind feature
|
non_process
|
make deploy and purge options consistent is your feature request related to a problem short option d means different things for purge and deploy currently for deploy deployments string only deploy a specifc deployment you can specify multiple deployments comma separated d force deploy forces to re deploy every deployment but for purge d deployments string the deployment to delete you can specify multiple deployments comma separated e g devspace default devspace database etc which solution do you suggest use d for short option for deployments for both for force deploy short option should be f which alternative solutions exist getting used to the inconsistency always using the long options additional context kind feature
| 0
|
282,909
| 21,315,986,583
|
IssuesEvent
|
2022-04-16 09:28:15
|
emilysim00/pe
|
https://api.github.com/repos/emilysim00/pe
|
opened
|
Unclear target user in UG
|
type.DocumentationBug severity.Low
|
In the introduction, it was not stated that this application is only meant for students. However, in the section explaining the expected outcome of adding a credit card, the warning section suddenly states that the credit card limit is set based on the student's income. Hence, it is unclear who the target audience of the application is.


<!--session: 1650096030642-71a1af9b-54f1-499a-adfa-381e774dd08e-->
<!--Version: Web v3.4.2-->
|
1.0
|
Unclear target user in UG - In the introduction, it was not stated that this application is only meant for students. However, in the section explaining the expected outcome of adding a credit card, the warning section suddenly states that the credit card limit is set based on the student's income. Hence, it is unclear who the target audience of the application is.


<!--session: 1650096030642-71a1af9b-54f1-499a-adfa-381e774dd08e-->
<!--Version: Web v3.4.2-->
|
non_process
|
unclear target user in ug in the introduction it was not stated that this application was only meant for student however in the section explaining expected outcome of adding a credit card the warning section suddenly state that the credit card limit is set due to the student s income hence is it unclear as to who is the target audience of the application
| 0
|
1,903
| 4,728,501,988
|
IssuesEvent
|
2016-10-18 16:06:32
|
ongroup/mvmason
|
https://api.github.com/repos/ongroup/mvmason
|
closed
|
Prepare MVON# Presentation for EUG
|
4 - Done MVON# Priority: HIGH process
|
Prepare slides for the first of the EUG talks regarding use of MVON# to transpile MV to C#
<!---
@huboard:{"order":2.0010002000200005,"milestone_order":0.999300279916021,"custom_state":""}
-->
|
1.0
|
Prepare MVON# Presentation for EUG - Prepare slides for the first of the EUG talks regarding use of MVON# to transpile MV to C#
<!---
@huboard:{"order":2.0010002000200005,"milestone_order":0.999300279916021,"custom_state":""}
-->
|
process
|
prepare mvon presentation for eug prepare slides for the first of the eug talks regarding use of mvon to transpile mv to c huboard order milestone order custom state
| 1
|
171,854
| 13,250,989,206
|
IssuesEvent
|
2020-08-20 00:45:28
|
microsoft/msquic
|
https://api.github.com/repos/microsoft/msquic
|
closed
|
Free Invalid Ptr in OOM Scenario
|
App: spinquic Area: Testing
|
I implemented the faulting-heap that returns NULL ~1/100th of the time. The patch is attached. These issues are only relevant in OOM scenarios or other resiliency support scenarios.
```
Stopped reason: SIGSEGV
0x0000000000421394 in __asan::Allocator::Deallocate(void*, unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType) ()
gdb-peda$ bt
#0 0x0000000000421394 in __asan::Allocator::Deallocate(void*, unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType) ()
#1 0x00000000004c7c8c in free ()
#2 0x00007ffff7c67c45 in QuicFree (Mem=0xbebebebebebebebe) at /home/max/msquic/src/platform/platform_linux.c:192
#3 0x00007ffff7c75613 in QuicHashFree (Hash=0xbebebebebebebebe) at /home/max/msquic/src/platform/tls_openssl.c:2248
#4 0x00007ffff7bc2224 in QuicBindingInitialize (ShareBinding=0x0, ServerOwned=0x0, LocalAddress=0x0, RemoteAddress=0x61e000015d5c, NewBinding=0x61e000015d38) at /home/max/msquic/src/core/binding.c:164
#5 0x00007ffff7b7dde1 in QuicLibraryGetBinding (Session=0x613000000040, ShareBinding=0x0, ServerOwned=0x0, LocalAddress=0x0, RemoteAddress=0x61e000015d5c, NewBinding=0x61e000015d38)
at /home/max/msquic/src/core/library.c:1168
#6 0x00007ffff7bd4e7e in QuicConnStart (Connection=0x61e000015c80, Family=0x2, ServerName=0x6020000020f0 "127.0.0.1", ServerPort=0x270f) at /home/max/msquic/src/core/connection.c:1735
#7 0x00007ffff7bf47f5 in QuicConnProcessApiOperation (Connection=0x61e000015c80, ApiCtx=0x6060000073a0) at /home/max/msquic/src/core/connection.c:6412
#8 0x00007ffff7bf59bb in QuicConnDrainOperations (Connection=0x61e000015c80) at /home/max/msquic/src/core/connection.c:6586
#9 0x00007ffff7bb0ada in QuicWorkerProcessConnection (Worker=0x628000003448, Connection=0x61e000015c80) at /home/max/msquic/src/core/worker.c:483
#10 0x00007ffff7baea5d in QuicWorkerThread (Context=0x628000003448) at /home/max/msquic/src/core/worker.c:571
#11 0x00007ffff77bbfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#12 0x00007ffff76c64cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
```
### Steps to reproduce the behavior
This repros pretty easily with the following patch (and also turn on ASAN)
```
diff --git a/src/platform/platform_linux.c b/src/platform/platform_linux.c
index 68e7a6d..f5be03e 100644
--- a/src/platform/platform_linux.c
+++ b/src/platform/platform_linux.c
@@ -172,6 +172,8 @@ QuicAlloc(
_In_ size_t ByteCount
)
{
+ if ((rand() % 100) == 1)
+ return NULL;
#ifdef QUIC_PLATFORM_DISPATCH_TABLE
return PlatDispatch->Alloc(ByteCount);
#else
```
There are a few other issues that you'll see, but after running it a few times you'll trigger this issue. This issue seems particularly nasty because it's freeing what looks like a heap poison-fill value... I'm not exactly sure how that's possible :-)
|
1.0
|
Free Invalid Ptr in OOM Scenario - I implemented the faulting-heap that returns NULL ~1/100th of the time. The patch is attached. These issues are only relevant in OOM scenarios or other resiliency support scenarios.
```
Stopped reason: SIGSEGV
0x0000000000421394 in __asan::Allocator::Deallocate(void*, unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType) ()
gdb-peda$ bt
#0 0x0000000000421394 in __asan::Allocator::Deallocate(void*, unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType) ()
#1 0x00000000004c7c8c in free ()
#2 0x00007ffff7c67c45 in QuicFree (Mem=0xbebebebebebebebe) at /home/max/msquic/src/platform/platform_linux.c:192
#3 0x00007ffff7c75613 in QuicHashFree (Hash=0xbebebebebebebebe) at /home/max/msquic/src/platform/tls_openssl.c:2248
#4 0x00007ffff7bc2224 in QuicBindingInitialize (ShareBinding=0x0, ServerOwned=0x0, LocalAddress=0x0, RemoteAddress=0x61e000015d5c, NewBinding=0x61e000015d38) at /home/max/msquic/src/core/binding.c:164
#5 0x00007ffff7b7dde1 in QuicLibraryGetBinding (Session=0x613000000040, ShareBinding=0x0, ServerOwned=0x0, LocalAddress=0x0, RemoteAddress=0x61e000015d5c, NewBinding=0x61e000015d38)
at /home/max/msquic/src/core/library.c:1168
#6 0x00007ffff7bd4e7e in QuicConnStart (Connection=0x61e000015c80, Family=0x2, ServerName=0x6020000020f0 "127.0.0.1", ServerPort=0x270f) at /home/max/msquic/src/core/connection.c:1735
#7 0x00007ffff7bf47f5 in QuicConnProcessApiOperation (Connection=0x61e000015c80, ApiCtx=0x6060000073a0) at /home/max/msquic/src/core/connection.c:6412
#8 0x00007ffff7bf59bb in QuicConnDrainOperations (Connection=0x61e000015c80) at /home/max/msquic/src/core/connection.c:6586
#9 0x00007ffff7bb0ada in QuicWorkerProcessConnection (Worker=0x628000003448, Connection=0x61e000015c80) at /home/max/msquic/src/core/worker.c:483
#10 0x00007ffff7baea5d in QuicWorkerThread (Context=0x628000003448) at /home/max/msquic/src/core/worker.c:571
#11 0x00007ffff77bbfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#12 0x00007ffff76c64cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
```
### Steps to reproduce the behavior
This repros pretty easily with the following patch (and also turn on ASAN)
```
diff --git a/src/platform/platform_linux.c b/src/platform/platform_linux.c
index 68e7a6d..f5be03e 100644
--- a/src/platform/platform_linux.c
+++ b/src/platform/platform_linux.c
@@ -172,6 +172,8 @@ QuicAlloc(
_In_ size_t ByteCount
)
{
+ if ((rand() % 100) == 1)
+ return NULL;
#ifdef QUIC_PLATFORM_DISPATCH_TABLE
return PlatDispatch->Alloc(ByteCount);
#else
```
There are a few other issues that you'll see, but after running it a few times you'll trigger this issue. This issue seems particularly nasty because it's freeing what looks like a heap poison-fill value... I'm not exactly sure how that's possible :-)
|
non_process
|
free invalid ptr in oom scenario i implemented the faulting heap that returns null of the time the patch is attached these issues are only relevant in oom scenarios or other resiliency support scenarios stopped reason sigsegv in asan allocator deallocate void unsigned long unsigned long sanitizer bufferedstacktrace asan alloctype gdb peda bt in asan allocator deallocate void unsigned long unsigned long sanitizer bufferedstacktrace asan alloctype in free in quicfree mem at home max msquic src platform platform linux c in quichashfree hash at home max msquic src platform tls openssl c in quicbindinginitialize sharebinding serverowned localaddress remoteaddress newbinding at home max msquic src core binding c in quiclibrarygetbinding session sharebinding serverowned localaddress remoteaddress newbinding at home max msquic src core library c in quicconnstart connection family servername serverport at home max msquic src core connection c in quicconnprocessapioperation connection apictx at home max msquic src core connection c in quicconndrainoperations connection at home max msquic src core connection c in quicworkerprocessconnection worker connection at home max msquic src core worker c in quicworkerthread context at home max msquic src core worker c in start thread arg at pthread create c in clone at sysdeps unix sysv linux clone s steps to reproduce the behavior this repros pretty easily with the following patch and also turn on asan diff git a src platform platform linux c b src platform platform linux c index a src platform platform linux c b src platform platform linux c quicalloc in size t bytecount if rand return null ifdef quic platform dispatch table return platdispatch alloc bytecount else there are a few other issues that you ll see but after running it a few time you ll trigger this issue this issue seems particularly nasty because it s free ing what looks like a heap poison fill value i m not exactly sure how that s possible
| 0
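The fault-injection patch in the msquic record above makes `QuicAlloc` fail roughly 1% of the time to exercise OOM handling. The same pattern can be sketched language-agnostically; this is an illustration in Python with a stand-in allocator, and none of these names come from msquic itself:

```python
import random

def make_faulty(alloc, rate=0.01, rng=random):
    """Wrap an allocator so it fails (returns None) at the given rate,
    mimicking the `rand() % 100` check patched into QuicAlloc above."""
    def faulty(*args, **kwargs):
        if rng.random() < rate:
            return None  # simulated out-of-memory; callers must cope
        return alloc(*args, **kwargs)
    return faulty

# Stand-in allocator failing ~1% of the time, as in the patch:
quic_alloc = make_faulty(lambda size: bytearray(size), rate=0.01)
```

Setting `rate=1.0` or `rate=0.0` makes the failure deterministic, which is handy for unit-testing the error paths the issue is probing.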
|
183,363
| 14,939,613,389
|
IssuesEvent
|
2021-01-25 17:09:50
|
mspnp/aks-secure-baseline
|
https://api.github.com/repos/mspnp/aks-secure-baseline
|
closed
|
Update HTML version of readme
|
documentation
|
Diego, if you get some time, could you do a quick "sync" with the HTML content you added and the changes that have come about since that's landed. It hasn't been maintained as the various files have changed. If you think having it as an HTML file here is too much to maintain, your PR could be to remove it as well. Really your choice. Thanks @dcasati!
|
1.0
|
Update HTML version of readme - Diego, if you get some time, could you do a quick "sync" with the HTML content you added and the changes that have come about since that's landed. It hasn't been maintained as the various files have changed. If you think having it as an HTML file here is too much to maintain, your PR could be to remove it as well. Really your choice. Thanks @dcasati!
|
non_process
|
update html version of readme diego if you get some time could you do a quick sync with the html content you added and the changes that have come about since that s landed it hasn t been maintained as the various files have changed if you think having it as an html file here is too much to maintain your pr could be to remove it as well really your choice thanks dcasati
| 0
|
3,864
| 6,808,634,880
|
IssuesEvent
|
2017-11-04 05:56:54
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
Improperly handle what appears to be correct input to the wrong smart contract.
|
apps-ethslurp status-inprocess type-bug
|
This transaction: 0x0c49b8ff8365c43e48ed6aeb0bbe79250b386d20fab46caf1404cc6d773b53e2 sends valid-looking input data to a non-existent function in the singularDTV code (that is, this is an end user who sent the right function signature to the wrong contract).
My code reports "Field Not Found" because this looks like (and is) a valid function signature, but the parse returns the empty string. EtherScan (incorrectly, I think) reports the name of the function on the wrong contract.
The input data is 0x861731d5. The address of the smart contract is 0xaec2e87e0a235266d9c5adc9deb4b2e29b54d009. This is singularDTV's contract.
|
1.0
|
Improperly handle what appears to be correct input to the wrong smart contract. - This transaction: 0x0c49b8ff8365c43e48ed6aeb0bbe79250b386d20fab46caf1404cc6d773b53e2 sends valid-looking input data to a non-existent function in the singularDTV code (that is, this is an end user who sent the right function signature to the wrong contract).
My code reports "Field Not Found" because this looks like (and is) a valid function signature, but the parse returns the empty string. EtherScan (incorrectly, I think) reports the name of the function on the wrong contract.
The input data is 0x861731d5. The address of the smart contract is 0xaec2e87e0a235266d9c5adc9deb4b2e29b54d009. This is singularDTV's contract.
|
process
|
improperly handle what appears to be correct input to the wrong smart contract this transaction sends a valid looking input data to a non existant function in the singulardtv code that is this is an end user who sent the right function signature to the wrong contract my code reports field not found because this looks like and is a valid function signature but the parse returns the empty string etherscan incorrectly i think reports the name of the function on the wrong contract the input data is the address of smart contract is this is singular dtv s contract
| 1
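As the record above shows, each issue's lowercase `text` column is derived from the title and body with punctuation removed and digit-bearing tokens (such as the hex addresses) dropped. A rough Python approximation of that normalization, not the dataset's actual pipeline, might look like:

```python
import re

def normalize(text):
    """Approximate the lowercased `text` column: lowercase the input,
    turn punctuation into spaces, and drop any token that still
    contains a digit (hex ids, addresses), keeping pure words only."""
    tokens = re.sub(r"[^a-z0-9\s]", " ", text.lower()).split()
    return " ".join(t for t in tokens if t.isalpha())
```

For example, `normalize('Field Not Found: 0x861731d5 (wrong contract)!')` keeps only the alphabetic tokens, matching how the hex values vanish from the normalized column.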
|
9,859
| 12,865,571,961
|
IssuesEvent
|
2020-07-10 00:45:52
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Running two daemonized real-time-http (port 7890 & 7891): output behaves strangely
|
html report log-processing websocket-server
|
I’m hosting multiple websites on one VPS. I’d like to use GoAccess to offer stats on all of the websites, individually, and just run the daemons pretty much 24/7. I figured to start out I’d just run two at a time, to make sure it works.
So as of right now I’ve got two GoAccess processes going at once. Here is the command I used:
```
[root@localhost ~]# goaccess /var/log/httpd/main.site-access.log* --log-format=COMBINED --real-time-html --daemonize --port=7890 --ssl-cert=/etc/letsencrypt/live/www.main.site/fullchain.pem --ssl-key=/etc/letsencrypt/live/www.main.site/privkey.pem -o /var/www/mainsite/main.site/panels/main.site-stats.html
[root@localhost ~]# goaccess /var/log/httpd/secondary-access.log* --log-format=COMBINED --real-time-html --daemonize --port=7891 --ssl-cert=/etc/letsencrypt/live/www.main.site/fullchain.pem --ssl-key=/etc/letsencrypt/live/www.main.site/privkey.pem -o /var/www/mainsite/main.site/panels/secondary.site-stats.html
```
Netstat says it’s working as planned:
```
[root@localhost ~]# netstat -tlpn | grep goaccess
tcp 0 0 0.0.0.0:7890 0.0.0.0:* LISTEN 306026/goaccess
tcp 0 0 0.0.0.0:7891 0.0.0.0:* LISTEN 306002/goaccess
```
They've been running for about 24 hours now. Everything looks and works fine when you open one html page: the page updates real-time, it's showing stats for the correct website, etc, etc. But it starts to behave strangely if someone is looking at both at the same time.
If someone has a GoAccess output page open in their web browser, and a second GoAccess output page gets opened by someone else, one of three things happens (seemingly at random):
1. The output on the first page suddenly switches to the other website (the wrong set of stats)
2. The output of the page goes a little haywire and seems to _combine_ the stats of both pages
3. Nothing happens and both pages stay on the correct set of stats (this is exceedingly rare, however)
Also to note: if someone visits one of the websites while both browsers are open, and the logs get updated, then one of the open html pages will update correctly (apparently at random) and the other will behave as per 1, 2 and 3 above (also apparently at random). So if someone visits secondary.site, sometimes the output page for main.site will switch to secondary.site while the output page for secondary.site might go haywire and combine both stats, or else stay static and not update at all.
I have tried this with different machines / different IP addresses accessing both html output pages, and with a third machine / IP address visiting the website (to trigger an access log update).
So is this a bug? Or have I set it up incorrectly?
Or am I just asking GoAccess to do something it's not built for?
|
1.0
|
Running two daemonized real-time-http (port 7890 & 7891): output behaves strangely - I’m hosting multiple websites on one VPS. I’d like to use GoAccess to offer stats on all of the websites, individually, and just run the daemons pretty much 24/7. I figured to start out I’d just run two at a time, to make sure it works.
So as of right now I’ve got two GoAccess processes going at once. Here is the command I used:
```
[root@localhost ~]# goaccess /var/log/httpd/main.site-access.log* --log-format=COMBINED --real-time-html --daemonize --port=7890 --ssl-cert=/etc/letsencrypt/live/www.main.site/fullchain.pem --ssl-key=/etc/letsencrypt/live/www.main.site/privkey.pem -o /var/www/mainsite/main.site/panels/main.site-stats.html
[root@localhost ~]# goaccess /var/log/httpd/secondary-access.log* --log-format=COMBINED --real-time-html --daemonize --port=7891 --ssl-cert=/etc/letsencrypt/live/www.main.site/fullchain.pem --ssl-key=/etc/letsencrypt/live/www.main.site/privkey.pem -o /var/www/mainsite/main.site/panels/secondary.site-stats.html
```
Netstat says it’s working as planned:
```
[root@localhost ~]# netstat -tlpn | grep goaccess
tcp 0 0 0.0.0.0:7890 0.0.0.0:* LISTEN 306026/goaccess
tcp 0 0 0.0.0.0:7891 0.0.0.0:* LISTEN 306002/goaccess
```
They've been running for about 24 hours now. Everything looks and works fine when you open one html page: the page updates real-time, it's showing stats for the correct website, etc, etc. But it starts to behave strangely if someone is looking at both at the same time.
If someone has a GoAccess output page open in their web browser, and a second GoAccess output page gets opened by someone else, one of three things happens (seemingly at random):
1. The output on the first page suddenly switches to the other website (the wrong set of stats)
2. The output of the page goes a little haywire and seems to _combine_ the stats of both pages
3. Nothing happens and both pages stay on the correct set of stats (this is exceedingly rare, however)
Also to note: if someone visits one of the websites while both browsers are open, and the logs get updated, then one of the open html pages will update correctly (apparently at random) and the other will behave as per 1, 2 and 3 above (also apparently at random). So if someone visits secondary.site, sometimes the output page for main.site will switch to secondary.site while the output page for secondary.site might go haywire and combine both stats, or else stay static and not update at all.
I have tried this with different machines / different IP addresses accessing both html output pages, and with a third machine / IP address visiting the website (to trigger an access log update).
So is this a bug? Or have I set it up incorrectly?
Or am I just asking GoAccess to do something it's not built for?
|
process
|
running two daemonized real time http port output behaves strangely i’m hosting multiple websites on one vps i’d like to use goaccess to offer stats on all of the websites individually and just run the daemons pretty much i figured to start out i’d just run two at a time to make sure it works so as of right now i’ve got two goaccess processes going at once here is the command i used goaccess var log httpd main site access log log format combined real time html daemonize port ssl cert etc letsencrypt live ssl key etc letsencrypt live o var www mainsite main site panels main site stats html goaccess var log httpd secondary access log log format combined real time html daemonize port ssl cert etc letsencrypt live ssl key etc letsencrypt live o var www mainsite main site panels secondary site stats html netstat says it’s working as planned netstat tlpn grep goaccess tcp listen goaccess tcp listen goaccess they ve been running for about hours now everything looks and works fine when you open one html page the page updates real time it s showing stats for the correct website etc etc but it starts to behave strangely if someone is looking at both at the same time if someone has a goaccess output page open in their web browser and a second goaccess output page gets opened by someone else one of three things happens seemingly at random the output on the first page suddenly switches to the other website the wrong set of stats the output of the page goes a little haywire and seems to combine the stats of both pages nothing happens and both pages stay on the correct set of stats this is exceedingly rare however also to note if someone visits one of the websites while both browsers are open and the logs get updated then one of the open html pages will update correctly apparently at random and the other will behave as per and above also apparently at random so if someone visits secondary site sometimes the output page for main site will switch to secondary site while the output 
page for secondary site might go haywire and combine both stats or else stay static and not update at all i have tried this with different machines different ip addresses accessing both html output pages and with a third machine ip address visiting the website to trigger an access log update so is this a bug or have i set it up incorrectly or am i just asking goaccess to do something it s not built for
| 1
|
147,472
| 13,208,064,925
|
IssuesEvent
|
2020-08-15 02:11:36
|
lucyawrey/reroll
|
https://api.github.com/repos/lucyawrey/reroll
|
closed
|
Create Documentation
|
documentation
|
Commented code and external documentation. Should be a project akin to Bootstrap-Master.
|
1.0
|
Create Documentation - Commented code and external documentation. Should be a project akin to Bootstrap-Master.
|
non_process
|
create documentation commented code and external documentation should be a project akin to bootstrap master
| 0
|
16,694
| 21,792,643,144
|
IssuesEvent
|
2022-05-15 06:10:40
|
tivac/modular-css
|
https://api.github.com/repos/tivac/modular-css
|
closed
|
Try out postcss-values-parser
|
feature pkg:processor
|
https://github.com/shellscape/postcss-values-parser
Not that I have any serious issues w/ postcss-value-parser currently, but that project seems interesting at least.
|
1.0
|
Try out postcss-values-parser - https://github.com/shellscape/postcss-values-parser
Not that I have any serious issues w/ postcss-value-parser currently, but that project seems interesting at least.
|
process
|
try out postcss values parser not that i have any serious issues w postcss value parser currently but that project seems interesting at least
| 1
|
19,253
| 25,451,838,718
|
IssuesEvent
|
2022-11-24 11:03:50
|
saibrotech/mentoria
|
https://api.github.com/repos/saibrotech/mentoria
|
closed
|
Selection Process for the WoMakers Code Data Analytics Bootcamp
|
processo seletivo
|
https://womakerscode.org/bootcamp-dados
- [x] Application form
- [x] Reading comprehension test
- [x] Google Colab test
Result: 26/11/2022
|
1.0
|
Selection Process for the WoMakers Code Data Analytics Bootcamp - https://womakerscode.org/bootcamp-dados
- [x] Application form
- [x] Reading comprehension test
- [x] Google Colab test
Result: 26/11/2022
|
process
|
processo seletivo bootcamp data analytics womakers code formulário de candidatura teste de interpretação de texto teste google colab resultado
| 1
|
713,826
| 24,541,151,924
|
IssuesEvent
|
2022-10-12 03:57:58
|
mikezimm/ALVFinMan
|
https://api.github.com/repos/mikezimm/ALVFinMan
|
closed
|
Layout1Page - Title Sort is different on App vs Classic
|
enhancement Layout1Page complete priority
|
## See attached.... B10 should come at the end.
Look at the custom code for that sort function.

|
1.0
|
Layout1Page - Title Sort is different on App vs Classic - ## See attached.... B10 should come at the end.
Look at the custom code for that sort function.

|
non_process
|
title sort is different on app vs classic see attached should come at the end look at the custom code for that sort function
| 0
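The Layout1Page record above is a classic natural-sort problem: a plain lexicographic sort puts "B10" before "B2". A hypothetical natural-sort key (not the component's actual code, which the issue does not show) that makes "B10" come at the end:

```python
import re

def natural_key(title):
    """Split a title into alternating text/number chunks so the numeric
    parts compare as integers: 'B2' then sorts before 'B10'."""
    return [int(chunk) if chunk.isdigit() else chunk.lower()
            for chunk in re.split(r"(\d+)", title)]

sorted(["B10", "B2", "B1"], key=natural_key)  # ['B1', 'B2', 'B10']
```

The same key also gives a case-insensitive order for the purely textual parts, which is often what "Classic" list views do by default.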
|
5,595
| 5,807,621,065
|
IssuesEvent
|
2017-05-04 08:24:13
|
oppia/oppia
|
https://api.github.com/repos/oppia/oppia
|
closed
|
Audit the site for missing translations.
|
loc: frontend owner: @seanlip TODO: other type: infrastructure
|
We would like Oppia to be available to users in many languages, and have implemented a translation framework for doing this. In order to change the platform language, a user can scroll down to the footer and select a different language from the dropdown menu.
However, there are some inconsistencies. On a number of pages, certain strings are not being translated. For example, on the Preferences page:

This reads really weirdly for people who want to view Oppia in Spanish. The aim of this issue is to do a comprehensive audit of the site and catalog any text that still needs to be translated, then introduce the necessary code changes to make that text translatable. Please put a list of such text here, similar to e.g. how the breakdown is being done in #2394.
Also, please note that the editor pages are out of scope for this issue, especially since the exploration editor is currently being redesigned.
|
1.0
|
Audit the site for missing translations. - We would like Oppia to be available to users in many languages, and have implemented a translation framework for doing this. In order to change the platform language, a user can scroll down to the footer and select a different language from the dropdown menu.
However, there are some inconsistencies. On a number of pages, certain strings are not being translated. For example, on the Preferences page:

This reads really weirdly for people who want to view Oppia in Spanish. The aim of this issue is to do a comprehensive audit of the site and catalog any text that still needs to be translated, then introduce the necessary code changes to make that text translatable. Please put a list of such text here, similar to e.g. how the breakdown is being done in #2394.
Also, please note that the editor pages are out of scope for this issue, especially since the exploration editor is currently being redesigned.
|
non_process
|
audit the site for missing translations we would like oppia to be available to users in many languages and have implemented a translation framework for doing this in order to change the platform language a user can scroll down to the footer and select a different language from the dropdown menu however there are some inconsistencies on a number of pages certain strings are not being translated for example on the preferences page this reads really weirdly for people who want to view oppia in spanish the aim of this issue is to do a comprehensive audit of the site and catalog any text that still needs to be translated then introduce the necessary code changes to make that text translatable please put a list of such text here similar to e g how the breakdown is being done in also please note that the editor pages are out of scope for this issue especially since the exploration editor is currently being redesigned
| 0
|
19,984
| 26,462,582,494
|
IssuesEvent
|
2023-01-16 19:14:11
|
kubernetes-sigs/windows-operational-readiness
|
https://api.github.com/repos/kubernetes-sigs/windows-operational-readiness
|
closed
|
Ability to access the APIServer using pod mounted service accounts
|
kind/feature lifecycle/rotten category/ext.hostprocess
|
Ability to access the APIServer using pod mounted service accounts from a hostProcess pod.
|
1.0
|
Ability to access the APIServer using pod mounted service accounts - Ability to access the APIServer using pod mounted service accounts from a hostProcess pod.
|
process
|
ability to access the apiserver using pod mounted service accounts ability to access the apiserver using pod mounted service accounts from a hostprocess pod
| 1
|
61,094
| 6,725,203,972
|
IssuesEvent
|
2017-10-17 03:39:54
|
przbadu/ezy-accounting
|
https://api.github.com/repos/przbadu/ezy-accounting
|
closed
|
API/rspec for Liabilities controller
|
API enhancement Ready for testing
|
Add API and rspec testcases for Liabilities controller:
## User should be able to
- GET liability by id
- POST liability
- PUT liability
|
1.0
|
API/rspec for Liabilities controller - Add API and rspec testcases for Liabilities controller:
## User should be able to
- GET liability by id
- POST liability
- PUT liability
|
non_process
|
api rspec for liabilities controller add api and rspec testcases for liabilities controller user should be able to get liability by id post liability put liability
| 0
|
4,864
| 3,470,228,907
|
IssuesEvent
|
2015-12-23 06:12:37
|
d-ronin/dRonin
|
https://api.github.com/repos/d-ronin/dRonin
|
closed
|
We should build GCS with -Bsymbolic-functions
|
build/infrastructure enhancement gcs ready
|
It can be expected to improve GCS startup times and initial performance significantly.
|
1.0
|
We should build GCS with -Bsymbolic-functions - It can be expected to improve GCS startup times and initial performance significantly.
|
non_process
|
we should build gcs with bsymbolic functions it can be expected to improve gcs startup times and initial performance significantly
| 0
|
9,190
| 12,228,824,056
|
IssuesEvent
|
2020-05-03 21:06:24
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
closed
|
feat: Switch tests from using httpclient to Faraday (and then future data importers can expect Faraday out-of-the-box)
|
enhancement process
|
## Summary
I'm currently reviewing Bridgetown's plugins/hooks system and iterating towards an out-of-the-box solution for external API data import, starting with REST-style calls. That led me to review what, if anything, we're currently using for an HTTP client. Currently there are some `serve` commands that seem to use the `httpclient` gem, so that's a good place to start refactoring.
## Motivation
My ultimate goal is to allow folks to write tiny plugins (aka barely more than a one-liner in many cases) that will automatically provide data from external APIs to the site build (pages, templates, etc.), so it's important that Bridgetown come with an opinionated default HTTP client that plugins can standardize on. And for a variety of reasons (not least of which is that it's part of the standard Rails bundle), Faraday seems top of the list. The API for instantiating connections can get rather verbose, but we can easily build a thin wrapper around that which will be sufficient for "most" use cases.
|
1.0
|
feat: Switch tests from using httpclient to Faraday (and then future data importers can expect Faraday out-of-the-box) - ## Summary
I'm currently reviewing Bridgetown's plugins/hooks system and iterating towards an out-of-the-box solution for external API data import, starting with REST-style calls. That led me to review what, if anything, we're currently using for an HTTP client. Currently there are some `serve` commands that seem to use the `httpclient` gem, so that's a good place to start refactoring.
## Motivation
My ultimate goal is to allow folks to write tiny plugins (aka barely more than a one-liner in many cases) that will automatically provide data from external APIs to the site build (pages, templates, etc.), so it's important that Bridgetown come with an opinionated default HTTP client that plugins can standardize on. And for a variety of reasons (not least of which is that it's part of the standard Rails bundle), Faraday seems top of the list. The API for instantiating connections can get rather verbose, but we can easily build a thin wrapper around that which will be sufficient for "most" use cases.
|
process
|
feat switch tests from using httpclient to faraday and then future data importers can expect faraday out of the box summary i m currently reviewing bridgetown s plugins hooks system and iterating towards an out of the box solution for external api data import starting with rest style calls that let me to review what if anything we re currently using for an http client currently there are some serve commands that seem to use the httpclient gem so that s a good place to start refactoring motivation my ultimate goal is to allow folks to write tiny plugins aka barely more than a one liner in many cases that will automatically provide data from external apis to the site build pages templates etc so it s important that bridgetown come with an opinionated default of an http client plugins can standardize on and for a variety of reasons not least of which it s part of the standard rails bundle faraday seems top of the list the api for instantiating connections can get rather verbose but we can easily built a thin wrapper around that which will be sufficient for most use cases
| 1
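The Bridgetown record above proposes a thin wrapper around a full HTTP client so plugin authors can fetch external API data in one line. As an illustration only, in Python with the standard library rather than the Ruby/Faraday wrapper the issue actually describes, such a helper might look like:

```python
import json
from urllib.request import Request, urlopen

def get_json(url, headers=None, timeout=10, opener=urlopen):
    """Minimal GET-and-parse helper in the spirit of the proposed thin
    wrapper; `opener` is injectable so the logic can be exercised
    without any network access."""
    req = Request(url, headers=headers or {})
    with opener(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Keeping the verbose client construction behind one small function is the design choice the issue argues for: plugins stay "barely more than a one-liner" while the client library underneath remains swappable.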
|
3,232
| 13,219,386,995
|
IssuesEvent
|
2020-08-17 10:23:07
|
chavarera/python-mini-projects
|
https://api.github.com/repos/chavarera/python-mini-projects
|
closed
|
Write a script to get lat and long for the given address
|
API Automation Json
|
**Problem statement**
Write a script to get lat and long for the given address
|
1.0
|
Write a script to get lat and long for the given address - **Problem statement**
Write a script to get lat and long for the given address
|
non_process
|
write a script to get lat and lang for the given address problem statement write a script to get lat and lang for the given address
| 0
|
11,952
| 14,713,944,206
|
IssuesEvent
|
2021-01-05 11:08:54
|
yuta252/startlens_react_frontend
|
https://api.github.com/repos/yuta252/startlens_react_frontend
|
closed
|
Implement listing, detail view, and update/delete for multilingual profiles
|
dev process
|
## Overview
Businesses that publish listings must upload information in multiple languages, with foreign users in mind.
To support this, implement a feature for managing user profile information in multiple languages.
## Changes
---
- [x] Added profileSlice and updated state management
- [x] Implemented CRUD for multilingual profiles on a single screen by swapping the MultiProfileList, MultiProfileDisplay, and MultiProfileEdit components per operation
## Open issues
- Error handling for failed operations still needs a future fix
- Consider fetching translation data via an external API instead of entering it directly
## References
---
## Notes
---
|
1.0
|
Implement listing, detail view, and update/delete for multilingual profiles - ## Overview
Businesses that publish listings must upload information in multiple languages, with foreign users in mind.
To support this, implement a feature for managing user profile information in multiple languages.
## Changes
---
- [x] Added profileSlice and updated state management
- [x] Implemented CRUD for multilingual profiles on a single screen by swapping the MultiProfileList, MultiProfileDisplay, and MultiProfileEdit components per operation
## Open issues
- Error handling for failed operations still needs a future fix
- Consider fetching translation data via an external API instead of entering it directly
## References
---
## Notes
---
|
process
|
多言語プロフィールの一覧表示・詳細表示・更新削除機能の実装 概要 情報を掲載する事業者は外国人ユーザーも考慮し多言語に対応した情報をアップロードする必要がある。 そのため多言語でのユーザープロフィール情報を管理する機能を実装する。 変更点 profilesliceの追加及び状態管理の更新 multiprofilelist multiprofiledisplay multiprofileeditコンポーネントを処理ごとに入れ替えることで、同一画面で多言語プロフィールのcrudを実装 課題 処理失敗時のエラーハンドリングは今後修正が必要 翻訳データを直接入力せずに外部のapiを利用して取得する処理を検討 参照 備考
| 1
|
12,278
| 8,656,406,203
|
IssuesEvent
|
2018-11-27 18:21:46
|
ChurchCRM/CRM
|
https://api.github.com/repos/ChurchCRM/CRM
|
closed
|
Upgrade to 3.0.11 overwrote .htaccess
|
Backend System In Review Installation / Upgrade Security Web Report
|
I upgraded using the automatic upgrade via a browser. I upgraded from 3.0.10 to 3.0.11. The upgrade was successful but overwrote the RewriteBase directives in the .htaccess files. I was able to fix the files and all is well, but I can see that this could be an issue for people who hadn't seen this problem before.
Collected Value Title | Data
----------------------|----------------
Page Name |/v2/index.php
Screen Size |900x1440
Window Size |729x1440
Page Size |1227x1440
Platform Information | Linux info 3.0 #1337 SMP Tue Jan 01 00:00:00 CEST 2000 all GNU/Linux
PHP Version | 7.1.24
SQL Version | 5.5.60-0+deb7u1-log
ChurchCRM Version |3.0.11
Reporting Browser |Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36
Prerequisite Status |All Prerequisites met
Integrity check status |{"status":"success"}
|
True
|
Upgrade to 3.0.11 overwrote .htaccess - I upgraded using the automatic upgrade via a browser. I upgraded from 3.0.10 to 3.0.11. The upgrade was successful but overwrote the RewriteBase directives in the .htaccess files. I was able to fix the files and all is well, but I can see that this could be an issue for people who hadn't seen this problem before.
Collected Value Title | Data
----------------------|----------------
Page Name |/v2/index.php
Screen Size |900x1440
Window Size |729x1440
Page Size |1227x1440
Platform Information | Linux info 3.0 #1337 SMP Tue Jan 01 00:00:00 CEST 2000 all GNU/Linux
PHP Version | 7.1.24
SQL Version | 5.5.60-0+deb7u1-log
ChurchCRM Version |3.0.11
Reporting Browser |Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36
Prerequisite Status |All Prerequisites met
Integrity check status |{"status":"success"}
|
non_process
|
upgrade to overwrote htaccess i upgraded using the automatic upgrade via a browser i upgraded from to the upgrade was successful but overwrote the rewritebase directives in the htaccess files i was able to fix the files and all is well but i can see that this could be an issue for people who hadn t seen this problem before collected value title data page name index php screen size window size page size platform information linux info smp tue jan cest all gnu linux php version sql version log churchcrm version reporting browser mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari prerequisite status all prerequisites met integrity check status status success
| 0
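Restoring an overwritten directive is scriptable; a stdlib-only sketch (the `/churchcrm` base path is a made-up example, and this is not part of ChurchCRM's actual upgrade code):

```python
# Sketch: re-insert a RewriteBase directive if an upgrade removed it.
# The base value below is a hypothetical example.
from pathlib import Path

def ensure_rewrite_base(htaccess: Path, base: str = "/churchcrm") -> bool:
    """Return True if the file had to be patched."""
    text = htaccess.read_text() if htaccess.exists() else ""
    if any(line.strip().startswith("RewriteBase") for line in text.splitlines()):
        return False  # directive already present, nothing to do
    lines = text.splitlines()
    # Place RewriteBase right after "RewriteEngine On" when present,
    # otherwise append it at the end of the file.
    for i, line in enumerate(lines):
        if line.strip().lower() == "rewriteengine on":
            lines.insert(i + 1, f"RewriteBase {base}")
            break
    else:
        lines.append(f"RewriteBase {base}")
    htaccess.write_text("\n".join(lines) + "\n")
    return True
```

Running it once patches the file; running it again is a no-op, so it is safe to call after every upgrade.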
|
118,079
| 25,249,411,873
|
IssuesEvent
|
2022-11-15 13:39:09
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Epic]: JS Framework Stability
|
JS Evaluation Autocomplete Epic FE Coders Pod Evaluated Value
|
### Objective
Improve the JS experience by making it consistent and bug free
### Success Metrics
Reduction in High / Critical Issues
### Requirements
_No response_
### Out of Scope
_No response_
### Developer Handoff Document in Figma
_No response_
### RACI matrix
| Role | Members |
| ------------- | ------------- |
| Responsible | @hetunandu @ApekshaBhosale @ohansFavour @eco-monk @Rishabh-Rathod |
| Accountable | @ohansFavour |
| Consulted | @Nikhil-Nandagopal @ajinkyakulkarni |
| Informed | @mohanarpit |
|
1.0
|
[Epic]: JS Framework Stability - ### Objective
Improve the JS experience by making it consistent and bug free
### Success Metrics
Reduction in High / Critical Issues
### Requirements
_No response_
### Out of Scope
_No response_
### Developer Handoff Document in Figma
_No response_
### RACI matrix
| Role | Members |
| ------------- | ------------- |
| Responsible | @hetunandu @ApekshaBhosale @ohansFavour @eco-monk @Rishabh-Rathod |
| Accountable | @ohansFavour |
| Consulted | @Nikhil-Nandagopal @ajinkyakulkarni |
| Informed | @mohanarpit |
|
non_process
|
js framework stability objective improve the js experience by making it consistent and bug free success metrics reduction in high critical issues requirements no response out of scope no response developer handoff document in figma no response raci matrix responsible hetunandu apekshabhosale ohansfavour eco monk rishabh rathod accountable ohansfavour consulted nikhil nandagopal ajinkyakulkarni informed mohanarpit
| 0
|
225,327
| 17,262,600,820
|
IssuesEvent
|
2021-07-22 09:41:22
|
mailersend/mailersend-python
|
https://api.github.com/repos/mailersend/mailersend-python
|
closed
|
Missing information on docs? camelCase?
|
documentation feature-request
|
Hi there, I was wondering if the documentation for the package usage is accurate. I've seen that the class receives the api_key, but it is not mentioned in the docs. Also, I'm curious about the reasons to use camelCase in a Python package. Coming from sendgrid, if things work out ok I could also contribute.
Edit: sorry, I missed the `os.environ.get("MAILERSEND_API_KEY")`. Would it make sense to make it so that the api_key is a parameter, instead of necessarily having to use the env_var?
|
1.0
|
Missing information on docs? camelCase? - Hi there, I was wondering if the documentation for the package usage is accurate. I've seen that the class receives the api_key, but it is not mentioned in the docs. Also, I'm curious about the reasons to use camelCase in a Python package. Coming from sendgrid, if things work out ok I could also contribute.
Edit: sorry, I missed the `os.environ.get("MAILERSEND_API_KEY")`. Would it make sense to make it so that the api_key is a parameter, instead of necessarily having to use the env_var?
|
non_process
|
missing information on docs camelcase hi there i was wondering if the documentation for the package usage is accurate i ve the class receives the api key but it is no mentioned in the docs also i m curious on what were the reasons to use camelcase in a python package coming from sendgrid if things workout ok i could also contribute edit sorry i missed the os environ get mailersend api key would it make sense to make it so that the api key is a parameter instead of necessarily having to use the env var
| 0
|
21,254
| 28,376,801,675
|
IssuesEvent
|
2023-04-12 21:35:23
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
`populate_defaults = False` is uneffective when one of the inputs of a namespace is defined
|
type/bug topic/processes
|
### Describe the bug
The `populate_defaults` option can be set in the `expose_inputs` call of a `spec` in order to prevent the defaults of the workchain whose inputs are exposed from being set in the current workchain. The expected behavior is not observed, however, when the exposed inputs are in a `namespace` and one of them is set by the user.
### Steps to reproduce
In verdi shell:
```
from aiida.orm import Float
from aiida.engine import WorkChain, run

class ClassA(WorkChain):
    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.input("www", valid_type=Float, default=lambda: Float(1.0))
        spec.input("ooo", valid_type=Float, default=lambda: Float(3.0))
        spec.outline(cls.brun)

    def brun(self):
        return None

class ClassB(WorkChain):
    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.input("ee", valid_type=Float, default=lambda: Float(2.0))
        spec.expose_inputs(ClassA, namespace="qqq",
                           namespace_options={'required': False, 'populate_defaults': False})
        spec.outline(cls.run)

    def run(self):
        self.report(self.inputs)

a = ClassB.get_builder()
a.qqq.www = Float(2.2)
run(a)
```
The expected behavior would be that `qqq.www` is set to a value, but `qqq.ooo` remains without any value. However what I see (showed in the report) is that `qqq.ooo` assumes the default value. This is undesired since 'populate_defaults' is set to False
### Your environment
- Operating system [e.g. Linux]: Ubuntu 16.04
- Python version [e.g. 3.7.1]: 3.7.10
- aiida-core version [e.g. 1.2.1]: 1.6.5 and develop branch
|
1.0
|
`populate_defaults = False` is ineffective when one of the inputs of a namespace is defined - ### Describe the bug
The `populate_defaults` option can be set in the `expose_inputs` call of a `spec` in order to prevent the defaults of the workchain whose inputs are exposed from being set in the current workchain. The expected behavior is not observed, however, when the exposed inputs are in a `namespace` and one of them is set by the user.
### Steps to reproduce
In verdi shell:
```
from aiida.orm import Float
from aiida.engine import WorkChain, run

class ClassA(WorkChain):
    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.input("www", valid_type=Float, default=lambda: Float(1.0))
        spec.input("ooo", valid_type=Float, default=lambda: Float(3.0))
        spec.outline(cls.brun)

    def brun(self):
        return None

class ClassB(WorkChain):
    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.input("ee", valid_type=Float, default=lambda: Float(2.0))
        spec.expose_inputs(ClassA, namespace="qqq",
                           namespace_options={'required': False, 'populate_defaults': False})
        spec.outline(cls.run)

    def run(self):
        self.report(self.inputs)

a = ClassB.get_builder()
a.qqq.www = Float(2.2)
run(a)
```
The expected behavior would be that `qqq.www` is set to a value, but `qqq.ooo` remains without any value. However what I see (showed in the report) is that `qqq.ooo` assumes the default value. This is undesired since 'populate_defaults' is set to False
### Your environment
- Operating system [e.g. Linux]: Ubuntu 16.04
- Python version [e.g. 3.7.1]: 3.7.10
- aiida-core version [e.g. 1.2.1]: 1.6.5 and develop branch
|
process
|
populate defaults false is uneffective when one of the inputs of a namespace is defined describe the bug the populate defaults option can be insert in the the call expose inputs of a spec in order to avoid that the defaults of the workchain whose inputs are exposed are set in the current workchain the expected behavior is not registered however when the exposed inputs are in a namespace and one of them is set by the user steps to reproduce in verdi shell from aiida import orm from aiida engine import workchain run class classa workchain classmethod def define cls spec super define spec spec input www valid type float default lambda float spec input ooo valid type float default lambda float spec outline cls brun def brun self return none class classb workchain classmethod def define cls spec super define spec spec input ee valid type float default lambda float spec expose inputs classa namespace qqq namespace options required false populate defaults false spec outline cls run def run self self report self inputs a classb get builder a qqq www float run a the expected behavior would be that qqq www is set to a value but qqq ooo remains without any value however what i see showed in the report is that qqq ooo assumes the default value this is undesired since populate defaults is set to false your environment operating system ubuntu python version aiida core version and develop branch
| 1
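The semantics the reporter expects can be stated in a few lines of plain Python (a conceptual sketch only, not aiida's actual port-resolution code): defaults for a namespace with `populate_defaults=False` must never be backfilled, whether the namespace is empty or only partially filled.

```python
# Sketch of the desired populate_defaults semantics: fill namespace
# defaults only when the option allows it; with populate_defaults=False,
# keep exactly what the caller supplied and nothing more.
def resolve_namespace(user_inputs: dict, defaults: dict,
                      populate_defaults: bool = True) -> dict:
    if not populate_defaults:
        # Never backfill: an untouched namespace stays empty, and a
        # partially filled one keeps only the user-supplied values.
        return dict(user_inputs)
    merged = dict(defaults)
    merged.update(user_inputs)
    return merged
```

In this reading, setting `qqq.www` alone should leave `qqq.ooo` absent rather than at its default of 3.0.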
|
32,806
| 4,423,041,488
|
IssuesEvent
|
2016-08-16 06:47:06
|
smartguy1196/regex.objectified
|
https://api.github.com/repos/smartguy1196/regex.objectified
|
opened
|
Design Proposition #1
|
design proposition
|
# Constructor Design for `RegExObj`
### Conversion Order:
1. Regexp
2. String
3. JSON
### JSON Argument Handling:
- Allow `tokens` to be `regexps` and `strings`
- Verify It
|
1.0
|
Design Proposition #1 - # Constructor Design for `RegExObj`
### Conversion Order:
1. Regexp
2. String
3. JSON
### JSON Argument Handling:
- Allow `tokens` to be `regexps` and `strings`
- Verify It
|
non_process
|
design proposition constructor design for regexobj conversion order regexp string json json argument handling allow tokens to be regexps and strings verify it
| 0
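The proposed conversion order (regexp, then string, then JSON) amounts to a type dispatch in the constructor; a hypothetical Python analogue (the original project targets JavaScript, and the token-joining rule below is an assumption, not part of the proposal):

```python
import re

class RegExObj:
    """Hypothetical constructor honouring the order regexp > string > JSON."""
    def __init__(self, source):
        if isinstance(source, re.Pattern):          # 1. already-compiled regexp
            self.pattern = source
        elif isinstance(source, str):               # 2. raw pattern string
            self.pattern = re.compile(source)
        elif isinstance(source, dict):              # 3. JSON-like token spec
            # tokens may themselves be regexps or strings, as proposed
            tokens = [t.pattern if isinstance(t, re.Pattern) else re.escape(t)
                      for t in source.get("tokens", [])]
            self.pattern = re.compile("|".join(tokens))
        else:
            raise TypeError(f"unsupported source type: {type(source).__name__}")
```

Passing anything else raises a TypeError, which makes the precedence explicit and easy to test.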
|
136,208
| 5,273,139,632
|
IssuesEvent
|
2017-02-06 14:53:59
|
ctsit/qipr_approver
|
https://api.github.com/repos/ctsit/qipr_approver
|
opened
|
QIPR registry CSS missing rem
|
bug medium priority
|
Check your css
.btn__logout {
line-height: 2rem;
height: 2rem;
width: 5rem;
padding: 0 .6rem;
padding is missing rem
line 79
|
1.0
|
QIPR registry CSS missing rem -
Check your css
.btn__logout {
line-height: 2rem;
height: 2rem;
width: 5rem;
padding: 0 .6rem;
padding is missing rem
line 79
|
non_process
|
qipr registry css missing rem check your css btn logout line height height width padding padding is missing rem line
| 0
|
21,019
| 27,966,857,798
|
IssuesEvent
|
2023-03-24 20:24:34
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
Reworking the Compute OSLogin SSH samples and tests
|
priority: p2 type: process api: compute samples
|
After tackling the #7277 issue over many weeks, I decided that it's time to invest a lot more attention to the SSH samples and their tests. I want to use this issue as a tracker for following problems I want to fix:
1. The tests mentioned in #7277 are flaky and evade all fix attempts.
2. The samples in [service_account_ssh.py](https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/compute/oslogin/service_account_ssh.py) have incorrect region tags - they don't start with product prefix.
3. The sample uses the old compute library, while it should be using GAPIC lib.
4. The tests are sometimes leaking resources.
To fix all those issues, I'm going to rewrite the samples using GAPIC library, together with rewriting tests. Hopefully, using a new library will make it easier to properly run tests.
|
1.0
|
Reworking the Compute OSLogin SSH samples and tests - After tackling the #7277 issue over many weeks, I decided that it's time to invest a lot more attention to the SSH samples and their tests. I want to use this issue as a tracker for following problems I want to fix:
1. The tests mentioned in #7277 are flaky and evade all fix attempts.
2. The samples in [service_account_ssh.py](https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/compute/oslogin/service_account_ssh.py) have incorrect region tags - they don't start with product prefix.
3. The sample uses the old compute library, while it should be using GAPIC lib.
4. The tests are sometimes leaking resources.
To fix all those issues, I'm going to rewrite the samples using GAPIC library, together with rewriting tests. Hopefully, using a new library will make it easier to properly run tests.
|
process
|
reworking the compute oslogin ssh samples and tests after tackling the issue over many weeks i decided that it s time to invest a lot more attention to the ssh samples and their tests i want to use this issue as a tracker for following problems i want to fix the tests mentioned in are flaky and evade all fix attempts the samples in have incorrect region tags they don t start with product prefix the sample uses the old compute library while it should be using gapic lib the tests are sometimes leaking resources to fix all those issues i m going to rewrite the samples using gapic library together with rewriting tests hopefully using a new library will make it easier to properly run tests
| 1
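Problem 2, region tags missing the product prefix, is easy to detect mechanically. A hypothetical helper (the real samples repo has its own lint tooling; the tag syntax below follows the usual `[START tag]`/`[END tag]` convention):

```python
import re

# Region tags in Google Cloud samples conventionally start with a product
# prefix, e.g. "compute_..." for Compute Engine samples.
TAG_RE = re.compile(r"\[(?:START|END)\s+(?P<tag>[\w-]+)\]")

def untagged_regions(source: str, prefix: str) -> list[str]:
    """Return region tags in `source` that lack the expected prefix."""
    tags = {m.group("tag") for m in TAG_RE.finditer(source)}
    return sorted(t for t in tags if not t.startswith(prefix + "_"))
```

For the OSLogin samples the expected prefix would be `compute`, flagging tags like `oslogin_ssh` that lack it.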
|
695,219
| 23,849,188,726
|
IssuesEvent
|
2022-09-06 16:18:23
|
prometheus/prometheus
|
https://api.github.com/repos/prometheus/prometheus
|
opened
|
histograms: Possible FloatHistogram.Add/.Sub performance issue
|
kind/enhancement priority/P3
|
Vet the performance impact of the `FloatHistogram.Add` and `FloatHistogram.Sub` methods and optimize code if needed (see TODOs in the code).
|
1.0
|
histograms: Possible FloatHistogram.Add/.Sub performance issue - Vet the performance impact of the `FloatHistogram.Add` and `FloatHistogram.Sub` methods and optimize code if needed (see TODOs in the code).
|
non_process
|
histograms possible floathistogram add sub performance issue vet the performance impact of the floathistogram add and floathistogram sub methods and optimize code if needed see todos in the code
| 0
|
8,087
| 11,257,680,588
|
IssuesEvent
|
2020-01-13 00:26:27
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Polygonize Tool doesn't work with big data
|
Bug Feedback Processing
|
<!--
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
If the issue concerns a **third party plugin**, then it **cannot** be fixed by the QGIS team. Please raise your issue in the dedicated bug tracker for that specific plugin (as listed in the plugin's description). -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Using the Polygonize Tool to create Polygons from a line layer
**How to Reproduce**
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome-->
1. I load a gpkg layer with more than 50,000 features

2. then I run the Polygonize Tool
3. after 40 % the process stops

4. I wonder if the tool uses only one core instead of more
**QGIS and OS versions**
<!-- In the QGIS menu help/about, click in the dialog, Ctrl+A and then Ctrl+C. Finally paste here -->
QGIS 3.10-1 with Kubuntu
**Additional context**
<!-- Add any other context about the problem here. -->
|
1.0
|
Polygonize Tool doesn't work with big data - <!--
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
If the issue concerns a **third party plugin**, then it **cannot** be fixed by the QGIS team. Please raise your issue in the dedicated bug tracker for that specific plugin (as listed in the plugin's description). -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Using the Polygonize Tool to create Polygons from a line layer
**How to Reproduce**
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome-->
1. I load a gpkg layer with more than 50,000 features

2. then I run the Polygonize Tool
3. after 40 % the process stops

4. I wonder if the tool uses only one core instead of more
**QGIS and OS versions**
<!-- In the QGIS menu help/about, click in the dialog, Ctrl+A and then Ctrl+C. Finally paste here -->
QGIS 3.10-1 with Kubuntu
**Additional context**
<!-- Add any other context about the problem here. -->
|
process
|
polygonize tool doesn t work with big data w search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue if the issue concerns a third party plugin then it cannot be fixed by the qgis team please raise your issue in the dedicated bug tracker for that specific plugin as listed in the plugin s description describe the bug using the polygonize tool to create polygons from a line layer how to reproduce i loud a gpkg layer with more than features then i run the polygonize tool after the process stops i wonder if the tool uses only one core instead more qgis and os versions qgis with kubuntu additional context
| 1
|
16,517
| 21,527,202,109
|
IssuesEvent
|
2022-04-28 19:44:36
|
googleapis/google-cloud-php-eventarc-publishing
|
https://api.github.com/repos/googleapis/google-cloud-php-eventarc-publishing
|
closed
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* client_documentation must match pattern "^https://.*" in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* client_documentation must match pattern "^https://.*" in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property library type in repo metadata json client documentation must match pattern in repo metadata json release level must be equal to one of the allowed values in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
18,120
| 24,151,113,194
|
IssuesEvent
|
2022-09-22 00:56:35
|
neuropsychology/NeuroKit
|
https://api.github.com/repos/neuropsychology/NeuroKit
|
closed
|
Improve PSD
|
wontfix signal processing :chart_with_upwards_trend: inactive 👻
|
Why this so nice:
```python
import neurokit2 as nk
import mne
raw.plot_psd(fmin=0, fmax=40., picks=["EEG 050"])
```

And ours so ugly 🥲
```python
channel = nk.mne_channel_extract(raw, what=["EEG 050"]).values
psd = nk.signal_psd(
    channel, sampling_rate=raw.info["sfreq"], show=True, max_frequency=40, method="multitapers"
)
```

_Originally posted by @DominiqueMakowski in https://github.com/neuropsychology/NeuroKit/issues/574#issuecomment-962896583_
|
1.0
|
Improve PSD - Why is this so nice:
```python
import neurokit2 as nk
import mne
raw.plot_psd(fmin=0, fmax=40., picks=["EEG 050"])
```

And ours so ugly 🥲
```python
channel = nk.mne_channel_extract(raw, what=["EEG 050"]).values
psd = nk.signal_psd(
    channel, sampling_rate=raw.info["sfreq"], show=True, max_frequency=40, method="multitapers"
)
```

_Originally posted by @DominiqueMakowski in https://github.com/neuropsychology/NeuroKit/issues/574#issuecomment-962896583_
|
process
|
improve psd why this so nice python import as nk import mne raw plot psd fmin fmax picks and ours so ugly 🥲 python channel nk mne channel extract raw what values psd nk signal psd channel sampling rate raw info show true max frequency method multitapers originally posted by dominiquemakowski in
| 1
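Both plots ultimately show a power spectral density. Stripped of windowing and tapering, the core computation is just a discrete Fourier transform; a stdlib-only sketch (deliberately naive O(n²), nothing like NeuroKit's or MNE's actual estimators):

```python
import cmath
import math

def power_spectrum(signal, sampling_rate):
    """Naive O(n^2) DFT power spectrum; returns (frequencies, power)."""
    n = len(signal)
    freqs, power = [], []
    for k in range(n // 2 + 1):          # keep non-negative frequencies only
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        freqs.append(k * sampling_rate / n)
        power.append(abs(s) ** 2 / n)
    return freqs, power
```

A pure 5 Hz sine sampled at 64 Hz produces a single spike at the 5 Hz bin, which is a handy sanity check for any PSD routine.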
|
3,181
| 6,256,204,418
|
IssuesEvent
|
2017-07-14 09:31:54
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
GoAccess SSH Tunnel Realtime Report stops at 275,000 Requests
|
log-processing websocket-server
|
Hi there I am running go access via ssh tunnel using the realtime option:
nohup ssh root@servername 'cat /var/log/httpd/access_log' | goaccess -r --log-format=COMBINED --real-time-html -o /var/www/html/report_servername_webserver.html -g - &
When I run this it crashes at around 275,000 requests; when I add the ssh -T option the same happens.
I am trying to figure out if goaccess has a limitation of requests with the realtime option or is this an issue with SSH and the amount of lines it can take from STDIN or a Kernel limitation, meaning I might have to adjust something?
Thanks for your help and great software.
|
1.0
|
GoAccess SSH Tunnel Realtime Report stops at 275,000 Requests - Hi there I am running go access via ssh tunnel using the realtime option:
nohup ssh root@servername 'cat /var/log/httpd/access_log' | goaccess -r --log-format=COMBINED --real-time-html -o /var/www/html/report_servername_webserver.html -g - &
When I run this it crashes at around 275,000 requests; when I add the ssh -T option the same happens.
I am trying to figure out if goaccess has a limitation of requests with the realtime option or is this an issue with SSH and the amount of lines it can take from STDIN or a Kernel limitation, meaning I might have to adjust something?
Thanks for your help and great software.
|
process
|
goaccess ssh tunnel realtime report stops at requests hi there i am running go access via ssh tunnel using the realtime option nohup ssh root servername cat var log httpd access log goaccess r log format combined real time html o var www html report servername webserver html g when i run this it crashes at around requests when i add the ssh t option the same happens i am trying to figure out if goaccess has a limitation of requests with the realtime option or is this an issue with ssh and the amount of lines it can take from stdin or a kernel limitation meaning i might have to adjust something thanks for your help and great software
| 1
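One way to separate a goaccess limit from a pipe or SSH truncation is to count the lines that actually arrive on the consumer's side of the pipe; a hypothetical stdlib check (substitute the real `ssh … cat` command for the demo one):

```python
import subprocess

def count_streamed_lines(cmd: list[str]) -> int:
    """Count lines a command writes to stdout, consuming the stream
    incrementally the way goaccess reads its stdin."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    n = sum(1 for _ in proc.stdout)
    proc.wait()
    return n
```

If this reports the full line count for the SSH command while goaccess stalls around 275,000 requests, the limit is in goaccess rather than the tunnel.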
|
317,729
| 27,262,199,771
|
IssuesEvent
|
2023-02-22 15:35:43
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
Failing test: Chrome X-Pack UI Functional Tests - transform - creation - index pattern.x-pack/test/functional/apps/transform/creation/index_pattern/creation_index_pattern·ts - transform - creation - index pattern creation_index_pattern batch transform with terms+date_histogram groups and avg agg navigates to discover and displays results of the destination index
|
failed-test
|
A test failed on a tracked branch
```
Error: retry.tryForTime timeout: Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj~="transformListTable"] [data-test-subj~="row-ec_1_1677079387909"] [data-test-subj="euiCollapsedItemActionsButton"])
Wait timed out after 10040ms
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-e936a41231f62d1a/elastic/kibana-on-merge/kibana/node_modules/selenium-webdriver/lib/webdriver.js:908:17
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at onFailure (retry_for_success.ts:17:9)
at retryForSuccess (retry_for_success.ts:59:13)
at RetryService.try (retry.ts:31:12)
at Proxy.clickByCssSelector (find.ts:407:5)
at TestSubjects.click (test_subjects.ts:164:5)
at transform_table.ts:301:11
at runAttempt (retry_for_success.ts:29:15)
at retryForSuccess (retry_for_success.ts:68:21)
at RetryService.tryForTime (retry.ts:22:12)
at TransformTable.ensureTransformActionsMenuOpen (transform_table.ts:297:7)
at TransformTable.assertTransformRowActions (transform_table.ts:336:7)
at Context.<anonymous> (creation_index_pattern.ts:832:11)
at Object.apply (wrap_function.js:73:16)
at onFailure (retry_for_success.ts:17:9)
at retryForSuccess (retry_for_success.ts:59:13)
at RetryService.tryForTime (retry.ts:22:12)
at TransformTable.ensureTransformActionsMenuOpen (transform_table.ts:297:7)
at TransformTable.assertTransformRowActions (transform_table.ts:336:7)
at Context.<anonymous> (creation_index_pattern.ts:832:11)
at Object.apply (wrap_function.js:73:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/27380#01867992-cd8c-48f3-810d-d506ab97ca4e)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests - transform - creation - index pattern.x-pack/test/functional/apps/transform/creation/index_pattern/creation_index_pattern·ts","test.name":"transform - creation - index pattern creation_index_pattern batch transform with terms+date_histogram groups and avg agg navigates to discover and displays results of the destination index","test.failCount":1}} -->
|
1.0
|
Failing test: Chrome X-Pack UI Functional Tests - transform - creation - index pattern.x-pack/test/functional/apps/transform/creation/index_pattern/creation_index_pattern·ts - transform - creation - index pattern creation_index_pattern batch transform with terms+date_histogram groups and avg agg navigates to discover and displays results of the destination index - A test failed on a tracked branch
```
Error: retry.tryForTime timeout: Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj~="transformListTable"] [data-test-subj~="row-ec_1_1677079387909"] [data-test-subj="euiCollapsedItemActionsButton"])
Wait timed out after 10040ms
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-e936a41231f62d1a/elastic/kibana-on-merge/kibana/node_modules/selenium-webdriver/lib/webdriver.js:908:17
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at onFailure (retry_for_success.ts:17:9)
at retryForSuccess (retry_for_success.ts:59:13)
at RetryService.try (retry.ts:31:12)
at Proxy.clickByCssSelector (find.ts:407:5)
at TestSubjects.click (test_subjects.ts:164:5)
at transform_table.ts:301:11
at runAttempt (retry_for_success.ts:29:15)
at retryForSuccess (retry_for_success.ts:68:21)
at RetryService.tryForTime (retry.ts:22:12)
at TransformTable.ensureTransformActionsMenuOpen (transform_table.ts:297:7)
at TransformTable.assertTransformRowActions (transform_table.ts:336:7)
at Context.<anonymous> (creation_index_pattern.ts:832:11)
at Object.apply (wrap_function.js:73:16)
at onFailure (retry_for_success.ts:17:9)
at retryForSuccess (retry_for_success.ts:59:13)
at RetryService.tryForTime (retry.ts:22:12)
at TransformTable.ensureTransformActionsMenuOpen (transform_table.ts:297:7)
at TransformTable.assertTransformRowActions (transform_table.ts:336:7)
at Context.<anonymous> (creation_index_pattern.ts:832:11)
at Object.apply (wrap_function.js:73:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/27380#01867992-cd8c-48f3-810d-d506ab97ca4e)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests - transform - creation - index pattern.x-pack/test/functional/apps/transform/creation/index_pattern/creation_index_pattern·ts","test.name":"transform - creation - index pattern creation_index_pattern batch transform with terms+date_histogram groups and avg agg navigates to discover and displays results of the destination index","test.failCount":1}} -->
|
non_process
|
failing test chrome x pack ui functional tests transform creation index pattern x pack test functional apps transform creation index pattern creation index pattern·ts transform creation index pattern creation index pattern batch transform with terms date histogram groups and avg agg navigates to discover and displays results of the destination index a test failed on a tracked branch error retry tryfortime timeout error retry try timeout timeouterror waiting for element to be located by css selector wait timed out after at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules selenium webdriver lib webdriver js at runmicrotasks at processticksandrejections node internal process task queues at onfailure retry for success ts at retryforsuccess retry for success ts at retryservice try retry ts at proxy clickbycssselector find ts at testsubjects click test subjects ts at transform table ts at runattempt retry for success ts at retryforsuccess retry for success ts at retryservice tryfortime retry ts at transformtable ensuretransformactionsmenuopen transform table ts at transformtable asserttransformrowactions transform table ts at context creation index pattern ts at object apply wrap function js at onfailure retry for success ts at retryforsuccess retry for success ts at retryservice tryfortime retry ts at transformtable ensuretransformactionsmenuopen transform table ts at transformtable asserttransformrowactions transform table ts at context creation index pattern ts at object apply wrap function js first failure
| 0
|
183
| 2,514,115,336
|
IssuesEvent
|
2015-01-15 08:17:09
|
Starcounter/Starcounter
|
https://api.github.com/repos/Starcounter/Starcounter
|
opened
|
How to handle abstract "dynamic" types
|
Code host enhancement G/DynamicSchema question
|
Consider Car and CarModel in [this model](https://github.com/Starcounter/Starcounter/issues/2473#issuecomment-69332832). In #2477, we are assuring there is an instance of ```CarModel``` created for every such declaration.
Now, how should we go about it if ```CarModel``` is declared an abstract class?
a) We refuse to accept such model - the weaver will raise an error when the application is weaved, saying that classes modelling a type can not be abstract.
b) We create the instance in the kernel and throw an exception when it is being read through ```Car.Model```.
c) We create the instance in the kernel and have the code host generate a concrete type extending ```CarModel``` that we use (with any abstract member having an implementation that just raises a ```NotImplementedException```).
Time-wise, (a) is really simple, (b) is probably a couple of hours or so, and (c) is probably a day or two.
Any thoughts? @Starcounter-Jack?
|
1.0
|
How to handle abstract "dynamic" types - Consider Car and CarModel in [this model](https://github.com/Starcounter/Starcounter/issues/2473#issuecomment-69332832). In #2477, we are assuring there is an instance of ```CarModel``` created for every such declaration.
Now, how should we go about it if ```CarModel``` is declared an abstract class?
a) We refuse to accept such model - the weaver will raise an error when the application is weaved, saying that classes modelling a type can not be abstract.
b) We create the instance in the kernel and throw an exception when it is being read through ```Car.Model```.
c) We create the instance in the kernel and have the code host generate a concrete type extending ```CarModel``` that we use (with any abstract member having an implementation that just raises a ```NotImplementedException```).
Time-wise, (a) is really simple, (b) is probably a couple of hours or so, and (c) is probably a day or two.
Any thoughts? @Starcounter-Jack?
|
non_process
|
how to handle abstract dynamic types consider car and carmodel in in we are assuring there is an instance of carmodel created for every such declaration now how should we go about if carmodel is declared an abstract class a we refuse to accept such model the weaver will raise an error when the application is weaved saying that classes modelling a type can not be abstract b we create the instance in the kernel and throw an exception when it is being read through car model c we create the instance in the kernel and have the code host generate a concrete type extending carmodel that we use with any abstract member have an implementation that just raise an notimplementedexception time wise a is really simple b is probably a couple of hours or so and c is probably a day or two any thoughts starcounter jack
| 0
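Option (c) in the record above — generating a concrete type whose abstract members merely raise — can be sketched in Python. This is only an illustration of the technique, not Starcounter's C# implementation; `CarModel`, `horsepower`, and `make_concrete` are hypothetical stand-ins:

```python
from abc import ABC, abstractmethod


def make_concrete(abstract_cls):
    """Generate a concrete subclass of `abstract_cls` in which every
    abstract member is stubbed to raise NotImplementedError."""
    stubs = {}
    for member in getattr(abstract_cls, "__abstractmethods__", frozenset()):
        def stub(self, *args, _member=member, **kwargs):
            # Mirrors option (c): the member exists but has no real body.
            raise NotImplementedError(
                f"{abstract_cls.__name__}.{_member} has no implementation")
        stubs[member] = stub
    # ABCMeta recomputes __abstractmethods__: all members are now
    # overridden with concrete functions, so the subclass is instantiable.
    return type(f"Concrete{abstract_cls.__name__}", (abstract_cls,), stubs)


class CarModel(ABC):  # hypothetical stand-in for the C# model class
    @abstractmethod
    def horsepower(self):
        ...


ConcreteCarModel = make_concrete(CarModel)
instance = ConcreteCarModel()  # instantiable, unlike CarModel itself
```

The generated object can be stored and passed around; the exception only fires if an abstract member is actually invoked, which matches the trade-off between options (b) and (c).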
|
20,574
| 27,234,585,826
|
IssuesEvent
|
2023-02-21 15:28:18
|
AvaloniaUI/Avalonia
|
https://api.github.com/repos/AvaloniaUI/Avalonia
|
closed
|
RTL Ellipsis triple dots are on the wrong side
|
bug area-textprocessing
|
**Describe the bug**
The ellipsis in RTL text appears on the wrong side.
**To Reproduce**
Steps to reproduce the behavior:
1. Create `TextBlock` with `TextTrimming=CharacterEllipsis`
2. Set `Text` to random arabic text
3. Set `Width` to `120`
4. Set `FlowDirection` to `RightToLeft`
**Expected behavior**
Triple dots are on the left side of the text block
**Screenshots**
<img width="335" alt="Screenshot 2023-02-09 at 15 31 31" src="https://user-images.githubusercontent.com/4997065/217814039-e42fa4a4-1a08-44e1-9235-1e01cbea6c8a.png">
**Desktop (please complete the following information):**
- MacOS 13.0, Windows 11
- 11.0.0-preview5
|
1.0
|
RTL Ellipsis triple dots are on the wrong side - **Describe the bug**
The ellipsis in RTL text appears on the wrong side.
**To Reproduce**
Steps to reproduce the behavior:
1. Create `TextBlock` with `TextTrimming=CharacterEllipsis`
2. Set `Text` to random arabic text
3. Set `Width` to `120`
4. Set `FlowDirection` to `RightToLeft`
**Expected behavior**
Triple dots are on the left side of the text block
**Screenshots**
<img width="335" alt="Screenshot 2023-02-09 at 15 31 31" src="https://user-images.githubusercontent.com/4997065/217814039-e42fa4a4-1a08-44e1-9235-1e01cbea6c8a.png">
**Desktop (please complete the following information):**
- MacOS 13.0, Windows 11
- 11.0.0-preview5
|
process
|
rtl ellipsis triple dots are on the wrong side describe the bug rtl text ellipsis are on the wrong side to reproduce steps to reproduce the behavior create textblock with texttrimming characterellipsis set text to random arabic text set width to set flowdirection to righttoleft expected behavior triple dots are on the left side of the text block screenshots img width alt screenshot at src desktop please complete the following information macos windows
| 1
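The four reproduction steps in the record above amount to a single control declaration. A hypothetical minimal XAML repro (property names follow Avalonia's `TextBlock`; the Arabic sample text is a placeholder):

```xml
<!-- Minimal repro sketch: ellipsis should render on the left in RTL -->
<TextBlock Width="120"
           FlowDirection="RightToLeft"
           TextTrimming="CharacterEllipsis"
           Text="نص عربي عشوائي طويل بما يكفي للقص" />
```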
|
4,564
| 7,393,704,259
|
IssuesEvent
|
2018-03-17 00:51:37
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Validation Error while moving one API app from a resource groups with multiple API apps in some Resource Groups
|
app-service-api cxp in-process product-question triaged
|
While moving one azure API app from a resource group with multiple azure API apps within the same subscription, it throws a validation error. Tried the portal as well as PowerShell, no respite.
tracking id '40ea95b4-4c27-4738-9013-187690012a98', request correlation id '2ea747b7-bc73-49d5-9ead-94396a5ecdd5'.",
The error is "The resource '/subscriptions/{sub id}/resourceGroups/test-accounts/providers/Microsoft.Web/sites/test-v1-accounts-pxopt' is missing from the move request" but the api app I'm trying to move is test-v1-accounts-import. Both test-v1-accounts-pxopt and test-v1-accounts-import-pxopt are in same resource group
|
1.0
|
Validation Error while moving one API app from a resource groups with multiple API apps in some Resource Groups - While moving one azure API app from a resource group with multiple azure API apps within the same subscription, it throws a validation error. Tried the portal as well as PowerShell, no respite.
tracking id '40ea95b4-4c27-4738-9013-187690012a98', request correlation id '2ea747b7-bc73-49d5-9ead-94396a5ecdd5'.",
The error is "The resource '/subscriptions/{sub id}/resourceGroups/test-accounts/providers/Microsoft.Web/sites/test-v1-accounts-pxopt' is missing from the move request" but the api app I'm trying to move is test-v1-accounts-import. Both test-v1-accounts-pxopt and test-v1-accounts-import-pxopt are in same resource group
|
process
|
validation error while moving one api app from a resource groups with multiple api apps in some resource groups while moving one azure api app from a resource group with multiple azure api apps within the same subscription it throws a validation error tried the portal as well as powershell no respite tracking id request correlation id the error is the resource subscriptions sub id resourcegroups test accounts providers microsoft web sites test accounts pxopt is missing from the move request but the api app i m trying to move is test accounts import both test accounts pxopt and test accounts import pxopt are in same resource group
| 1
|
60,588
| 7,360,303,884
|
IssuesEvent
|
2018-03-10 17:12:48
|
ifmeorg/ifme
|
https://api.github.com/repos/ifmeorg/ifme
|
opened
|
[Component] Avatar
|
design newbiefriendly react
|
## Overview
We are looking for an awesome human to build this out as a React component, preferably within Storybook. You can view and get details (colors, dimensions, export SVGs, etc.) by visiting [the Figma designs](https://www.figma.com/file/RTjPkO0nSuJcMqqWLjeVdEof/if-me?node-id=65%3A618) and creating a free account.
This is part of the app redesign (#691).
## Screenshot from [the Figma document](https://www.figma.com/file/RTjPkO0nSuJcMqqWLjeVdEof/if-me?node-id=65%3A618):

|
1.0
|
[Component] Avatar - ## Overview
We are looking for an awesome human to build this out as a React component, preferably within Storybook. You can view and get details (colors, dimensions, export SVGs, etc.) by visiting [the Figma designs](https://www.figma.com/file/RTjPkO0nSuJcMqqWLjeVdEof/if-me?node-id=65%3A618) and creating a free account.
This is part of the app redesign (#691).
## Screenshot from [the Figma document](https://www.figma.com/file/RTjPkO0nSuJcMqqWLjeVdEof/if-me?node-id=65%3A618):

|
non_process
|
avatar overview we are looking for an awesome human to build this out as a react component preferably within storybook you can view and get details colors dimensions export svgs etc by visiting and creating a free account this is part of the app redesign screenshot from
| 0
|
121,929
| 10,198,694,533
|
IssuesEvent
|
2019-08-13 06:22:56
|
irisnet/irishub
|
https://api.github.com/repos/irisnet/irishub
|
closed
|
Rewrite automated tests for LCD API
|
API test
|
Finished modules:
bank
distribution
keys
stake
tendermint
governance
To do:
service
|
1.0
|
Rewrite automated tests for LCD API - Finished modules:
bank
distribution
keys
stake
tendermint
governance
To do:
service
|
non_process
|
rewrite automated tests for lcd api finished modules: bank distribution keys stake tendermint governance to do: service
| 0
|
367,184
| 25,724,796,193
|
IssuesEvent
|
2022-12-07 15:57:22
|
stuart-lab/signac
|
https://api.github.com/repos/stuart-lab/signac
|
closed
|
CoveragePlot error: can't plot annotation, multiome vignette
|
documentation
|
Apologies if this has been asked before, but I haven't been able to find a solution to this error elsewhere. I'm running into a problem with the CoveragePlot function as I am working through the [example multiome vignette](https://satijalab.org/signac/articles/pbmc_multiomic.html). No other part of the vignette has given me problems, up until this last step.
I am running the following code:
```r
DefaultAssay(VG.test) <- "ATAC"
CoveragePlot(
object = VG.test,
region = "toxin1",
features = "toxin1",
expression.assay = "SCT",
extend.upstream = 500,
extend.downstream = 10000
)
```
and get the following error:
```r
Error in data.frame(seqnames = annotation[[i]]$seqnames[[1]], start = min(annotation[[i]]$start), :
arguments imply differing number of rows: 1, 0
```
It seems like the error is produced when trying to generate some sort of dataframe from the Granges annotation object? I get the above error no matter what gene I select in regions/features. Importantly, the error disappears when I set ```annotation=F```, and the graph can be plotted, albeit without a gene track.
I'm using a non-reference genome and annotation, which I believe may be part of the issue (though I can't find error-producing differences between it and the human reference), so here's a part of my annotation below. Most columns are not needed by Signac, but the required "type", "gene_id", "gene_name" and "gene_biotype" to match ENSEMBL annotations are present. I've set ``` gene_name <- gene_id```, but I don't think this should cause these particular issues.
```r
> head(gtf)
GRanges object with 6 ranges and 24 metadata columns:
seqnames ranges strand | source type score phase gene_id
<Rle> <IRanges> <Rle> | <factor> <factor> <numeric> <integer> <character>
[1] ma1 1-2076 + | NA gene NA <NA> toxin1
[2] ma1 1-2076 + | NA transcript NA <NA> toxin1
[3] ma1 1-746 + | NA exon NA <NA> toxin1
[4] ma1 1640-1765 + | NA exon NA <NA> toxin1
[5] ma1 1911-2076 + | NA exon NA <NA> toxin1
[6] ma1 1-689 + | NA five_prime_utr NA <NA> toxin1
transcript_id ID Parent original_biotype Anolis_Blast_Type
<character> <character> <character> <character> <character>
[1] toxin_model_1 toxin1 <NA> <NA> <NA>
[2] toxin_model_1 nbis-mrna-1 toxin1 mrna <NA>
[3] toxin_model_1 toxin1:exon:1 nbis-mrna-1 <NA> <NA>
[4] toxin_model_1 toxin1:exon:2 nbis-mrna-1 <NA> <NA>
[5] toxin_model_1 toxin1:exon:3 nbis-mrna-1 <NA> <NA>
[6] toxin_model_1 toxin1:five_prime.. nbis-mrna-1 five_prime_UTR <NA>
Anolis_Homolog Crovir_Transcript_ID Name Python_Blast_Type Python_Homolog
<character> <character> <character> <character> <character>
[1] <NA> <NA> <NA> <NA> <NA>
[2] <NA> <NA> <NA> <NA> <NA>
[3] <NA> <NA> <NA> <NA> <NA>
[4] <NA> <NA> <NA> <NA> <NA>
[5] <NA> <NA> <NA> <NA> <NA>
[6] <NA> <NA> <NA> <NA> <NA>
Thamnophis_Blast_Type Thamnophis_Homolog X_AED X_QI X_eAED Crovir_Protein_ID
<character> <character> <character> <character> <character> <character>
[1] <NA> <NA> <NA> <NA> <NA> <NA>
[2] <NA> <NA> <NA> <NA> <NA> <NA>
[3] <NA> <NA> <NA> <NA> <NA> <NA>
[4] <NA> <NA> <NA> <NA> <NA> <NA>
[5] <NA> <NA> <NA> <NA> <NA> <NA>
[6] <NA> <NA> <NA> <NA> <NA> <NA>
previous_transcript_id gene_biotype gene_name
<character> <character> <character>
[1] <NA> protein_coding toxin1
[2] <NA> protein_coding toxin1
[3] <NA> protein_coding toxin1
[4] <NA> protein_coding toxin1
[5] <NA> protein_coding toxin1
[6] <NA> protein_coding toxin1
-------
seqinfo: 21 sequences from an unspecified genome; no seqlengths
```
My session info:
```r
> sessionInfo()
R version 4.1.2 (2021-11-01)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Catalina 10.15.7
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats4 stats graphics grDevices utils datasets methods base
other attached packages:
[1] BSgenome.Hsapiens.UCSC.hg38_1.4.4 EnsDb.Hsapiens.v86_2.99.0
[3] ensembldb_2.18.3 AnnotationFilter_1.18.0
[5] BSgenome.Cviridis.custom.CroVir_3.1 BSgenome_1.62.0
[7] rtracklayer_1.54.0 Biostrings_2.62.0
[9] XVector_0.34.0 forcats_0.5.1
[11] stringr_1.4.0 dplyr_1.0.7
[13] purrr_0.3.4 readr_2.1.2
[15] tidyr_1.2.0 tibble_3.1.6
[17] ggplot2_3.3.5 tidyverse_1.3.1
[19] GenomicFeatures_1.46.4 AnnotationDbi_1.56.2
[21] Biobase_2.54.0 GenomicRanges_1.46.1
[23] GenomeInfoDb_1.30.1 IRanges_2.28.0
[25] S4Vectors_0.32.3 BiocGenerics_0.40.0
[27] sp_1.4-6 SeuratObject_4.1.0
[29] Seurat_4.1.1 Signac_1.7.0
loaded via a namespace (and not attached):
[1] utf8_1.2.2 reticulate_1.25 tidyselect_1.1.1
[4] RSQLite_2.2.9 htmlwidgets_1.5.4 docopt_0.7.1
[7] grid_4.1.2 BiocParallel_1.28.3 Rtsne_0.16
[10] munsell_0.5.0 codetools_0.2-18 ica_1.0-2
[13] future_1.23.0 miniUI_0.1.1.1 withr_2.4.3
[16] spatstat.random_2.2-0 colorspace_2.0-3 progressr_0.10.0
[19] filelock_1.0.2 knitr_1.37 rstudioapi_0.13
[22] ROCR_1.0-11 tensor_1.5 listenv_0.8.0
[25] labeling_0.4.2 MatrixGenerics_1.6.0 slam_0.1-50
[28] GenomeInfoDbData_1.2.7 polyclip_1.10-0 farver_2.1.0
[31] bit64_4.0.5 parallelly_1.30.0 vctrs_0.3.8
[34] generics_0.1.2 xfun_0.29 biovizBase_1.42.0
[37] BiocFileCache_2.2.1 R6_2.5.1 hdf5r_1.3.5
[40] bitops_1.0-7 spatstat.utils_2.3-1 cachem_1.0.6
[43] DelayedArray_0.20.0 assertthat_0.2.1 promises_1.2.0.1
[46] BiocIO_1.4.0 scales_1.1.1 nnet_7.3-17
[49] rgeos_0.5-9 gtable_0.3.0 globals_0.14.0
[52] goftest_1.2-3 rlang_1.0.2 RcppRoll_0.3.0
[55] splines_4.1.2 rgdal_1.5-29 lazyeval_0.2.2
[58] dichromat_2.0-0 checkmate_2.0.0 spatstat.geom_2.4-0
[61] broom_0.7.12 yaml_2.2.2 reshape2_1.4.4
[64] abind_1.4-5 modelr_0.1.8 backports_1.4.1
[67] httpuv_1.6.5 Hmisc_4.6-0 tools_4.1.2
[70] ellipsis_0.3.2 spatstat.core_2.4-4 RColorBrewer_1.1-2
[73] ggridges_0.5.3 Rcpp_1.0.8 plyr_1.8.6
[76] base64enc_0.1-3 progress_1.2.2 zlibbioc_1.40.0
[79] RCurl_1.98-1.5 prettyunits_1.1.1 rpart_4.1.16
[82] deldir_1.0-6 pbapply_1.5-0 cowplot_1.1.1
[85] zoo_1.8-9 SummarizedExperiment_1.24.0 haven_2.4.3
[88] ggrepel_0.9.1 cluster_2.1.2 fs_1.5.2
[91] magrittr_2.0.2 RSpectra_0.16-1 data.table_1.14.2
[94] scattermore_0.8 reprex_2.0.1 lmtest_0.9-40
[97] RANN_2.6.1 ProtGenerics_1.26.0 fitdistrplus_1.1-8
[100] matrixStats_0.61.0 hms_1.1.1 patchwork_1.1.1
[103] mime_0.12 xtable_1.8-4 XML_3.99-0.8
[106] jpeg_0.1-9 sparsesvd_0.2 readxl_1.3.1
[109] gridExtra_2.3 compiler_4.1.2 biomaRt_2.50.2
[112] KernSmooth_2.23-20 crayon_1.5.0 htmltools_0.5.2
[115] mgcv_1.8-38 later_1.3.0 tzdb_0.2.0
[118] Formula_1.2-4 lubridate_1.8.0 DBI_1.1.2
[121] dbplyr_2.1.1 MASS_7.3-55 rappdirs_0.3.3
[124] Matrix_1.4-0 cli_3.2.0 parallel_4.1.2
[127] igraph_1.3.0 pkgconfig_2.0.3 GenomicAlignments_1.30.0
[130] foreign_0.8-82 plotly_4.10.0 spatstat.sparse_2.1-1
[133] xml2_1.3.3 rvest_1.0.2 VariantAnnotation_1.40.0
[136] digest_0.6.29 sctransform_0.3.3 RcppAnnoy_0.0.19
[139] spatstat.data_2.2-0 cellranger_1.1.0 leiden_0.4.2
[142] fastmatch_1.1-3 htmlTable_2.4.0 uwot_0.1.11
[145] restfulr_0.0.13 curl_4.3.2 shiny_1.7.1
[148] Rsamtools_2.10.0 rjson_0.2.21 lifecycle_1.0.1
[151] nlme_3.1-155 jsonlite_1.7.3 viridisLite_0.4.0
[154] fansi_1.0.2 pillar_1.7.0 lattice_0.20-45
[157] KEGGREST_1.34.0 fastmap_1.1.0 httr_1.4.2
[160] survival_3.2-13 glue_1.6.1 qlcMatrix_0.9.7
[163] png_0.1-7 bit_4.0.4 stringi_1.7.6
[166] blob_1.2.2 latticeExtra_0.6-29 memoise_2.0.1
[169] irlba_2.3.5 future.apply_1.8.1
```
If there is other output that would be useful to include, let me know.
Any advice would be thoroughly appreciated, thanks very much!
|
1.0
|
CoveragePlot error: can't plot annotation, multiome vignette - Apologies if this has been asked before, but I haven't been able to find a solution to this error elsewhere. I'm running into a problem with the CoveragePlot function as I am working through the [example multiome vignette](https://satijalab.org/signac/articles/pbmc_multiomic.html). No other part of the vignette has given me problems, up until this last step.
I am running the following code:
```r
DefaultAssay(VG.test) <- "ATAC"
CoveragePlot(
object = VG.test,
region = "toxin1",
features = "toxin1",
expression.assay = "SCT",
extend.upstream = 500,
extend.downstream = 10000
)
```
and get the following error:
```r
Error in data.frame(seqnames = annotation[[i]]$seqnames[[1]], start = min(annotation[[i]]$start), :
arguments imply differing number of rows: 1, 0
```
It seems like the error is produced when trying to generate some sort of dataframe from the Granges annotation object? I get the above error no matter what gene I select in regions/features. Importantly, the error disappears when I set ```annotation=F```, and the graph can be plotted, albeit without a gene track.
I'm using a non-reference genome and annotation, which I believe may be part of the issue (though I can't find error-producing differences between it and the human reference), so here's a part of my annotation below. Most columns are not needed by Signac, but the required "type", "gene_id", "gene_name" and "gene_biotype" to match ENSEMBL annotations are present. I've set ``` gene_name <- gene_id```, but I don't think this should cause these particular issues.
```r
> head(gtf)
GRanges object with 6 ranges and 24 metadata columns:
seqnames ranges strand | source type score phase gene_id
<Rle> <IRanges> <Rle> | <factor> <factor> <numeric> <integer> <character>
[1] ma1 1-2076 + | NA gene NA <NA> toxin1
[2] ma1 1-2076 + | NA transcript NA <NA> toxin1
[3] ma1 1-746 + | NA exon NA <NA> toxin1
[4] ma1 1640-1765 + | NA exon NA <NA> toxin1
[5] ma1 1911-2076 + | NA exon NA <NA> toxin1
[6] ma1 1-689 + | NA five_prime_utr NA <NA> toxin1
transcript_id ID Parent original_biotype Anolis_Blast_Type
<character> <character> <character> <character> <character>
[1] toxin_model_1 toxin1 <NA> <NA> <NA>
[2] toxin_model_1 nbis-mrna-1 toxin1 mrna <NA>
[3] toxin_model_1 toxin1:exon:1 nbis-mrna-1 <NA> <NA>
[4] toxin_model_1 toxin1:exon:2 nbis-mrna-1 <NA> <NA>
[5] toxin_model_1 toxin1:exon:3 nbis-mrna-1 <NA> <NA>
[6] toxin_model_1 toxin1:five_prime.. nbis-mrna-1 five_prime_UTR <NA>
Anolis_Homolog Crovir_Transcript_ID Name Python_Blast_Type Python_Homolog
<character> <character> <character> <character> <character>
[1] <NA> <NA> <NA> <NA> <NA>
[2] <NA> <NA> <NA> <NA> <NA>
[3] <NA> <NA> <NA> <NA> <NA>
[4] <NA> <NA> <NA> <NA> <NA>
[5] <NA> <NA> <NA> <NA> <NA>
[6] <NA> <NA> <NA> <NA> <NA>
Thamnophis_Blast_Type Thamnophis_Homolog X_AED X_QI X_eAED Crovir_Protein_ID
<character> <character> <character> <character> <character> <character>
[1] <NA> <NA> <NA> <NA> <NA> <NA>
[2] <NA> <NA> <NA> <NA> <NA> <NA>
[3] <NA> <NA> <NA> <NA> <NA> <NA>
[4] <NA> <NA> <NA> <NA> <NA> <NA>
[5] <NA> <NA> <NA> <NA> <NA> <NA>
[6] <NA> <NA> <NA> <NA> <NA> <NA>
previous_transcript_id gene_biotype gene_name
<character> <character> <character>
[1] <NA> protein_coding toxin1
[2] <NA> protein_coding toxin1
[3] <NA> protein_coding toxin1
[4] <NA> protein_coding toxin1
[5] <NA> protein_coding toxin1
[6] <NA> protein_coding toxin1
-------
seqinfo: 21 sequences from an unspecified genome; no seqlengths
```
My session info:
```r
> sessionInfo()
R version 4.1.2 (2021-11-01)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Catalina 10.15.7
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats4 stats graphics grDevices utils datasets methods base
other attached packages:
[1] BSgenome.Hsapiens.UCSC.hg38_1.4.4 EnsDb.Hsapiens.v86_2.99.0
[3] ensembldb_2.18.3 AnnotationFilter_1.18.0
[5] BSgenome.Cviridis.custom.CroVir_3.1 BSgenome_1.62.0
[7] rtracklayer_1.54.0 Biostrings_2.62.0
[9] XVector_0.34.0 forcats_0.5.1
[11] stringr_1.4.0 dplyr_1.0.7
[13] purrr_0.3.4 readr_2.1.2
[15] tidyr_1.2.0 tibble_3.1.6
[17] ggplot2_3.3.5 tidyverse_1.3.1
[19] GenomicFeatures_1.46.4 AnnotationDbi_1.56.2
[21] Biobase_2.54.0 GenomicRanges_1.46.1
[23] GenomeInfoDb_1.30.1 IRanges_2.28.0
[25] S4Vectors_0.32.3 BiocGenerics_0.40.0
[27] sp_1.4-6 SeuratObject_4.1.0
[29] Seurat_4.1.1 Signac_1.7.0
loaded via a namespace (and not attached):
[1] utf8_1.2.2 reticulate_1.25 tidyselect_1.1.1
[4] RSQLite_2.2.9 htmlwidgets_1.5.4 docopt_0.7.1
[7] grid_4.1.2 BiocParallel_1.28.3 Rtsne_0.16
[10] munsell_0.5.0 codetools_0.2-18 ica_1.0-2
[13] future_1.23.0 miniUI_0.1.1.1 withr_2.4.3
[16] spatstat.random_2.2-0 colorspace_2.0-3 progressr_0.10.0
[19] filelock_1.0.2 knitr_1.37 rstudioapi_0.13
[22] ROCR_1.0-11 tensor_1.5 listenv_0.8.0
[25] labeling_0.4.2 MatrixGenerics_1.6.0 slam_0.1-50
[28] GenomeInfoDbData_1.2.7 polyclip_1.10-0 farver_2.1.0
[31] bit64_4.0.5 parallelly_1.30.0 vctrs_0.3.8
[34] generics_0.1.2 xfun_0.29 biovizBase_1.42.0
[37] BiocFileCache_2.2.1 R6_2.5.1 hdf5r_1.3.5
[40] bitops_1.0-7 spatstat.utils_2.3-1 cachem_1.0.6
[43] DelayedArray_0.20.0 assertthat_0.2.1 promises_1.2.0.1
[46] BiocIO_1.4.0 scales_1.1.1 nnet_7.3-17
[49] rgeos_0.5-9 gtable_0.3.0 globals_0.14.0
[52] goftest_1.2-3 rlang_1.0.2 RcppRoll_0.3.0
[55] splines_4.1.2 rgdal_1.5-29 lazyeval_0.2.2
[58] dichromat_2.0-0 checkmate_2.0.0 spatstat.geom_2.4-0
[61] broom_0.7.12 yaml_2.2.2 reshape2_1.4.4
[64] abind_1.4-5 modelr_0.1.8 backports_1.4.1
[67] httpuv_1.6.5 Hmisc_4.6-0 tools_4.1.2
[70] ellipsis_0.3.2 spatstat.core_2.4-4 RColorBrewer_1.1-2
[73] ggridges_0.5.3 Rcpp_1.0.8 plyr_1.8.6
[76] base64enc_0.1-3 progress_1.2.2 zlibbioc_1.40.0
[79] RCurl_1.98-1.5 prettyunits_1.1.1 rpart_4.1.16
[82] deldir_1.0-6 pbapply_1.5-0 cowplot_1.1.1
[85] zoo_1.8-9 SummarizedExperiment_1.24.0 haven_2.4.3
[88] ggrepel_0.9.1 cluster_2.1.2 fs_1.5.2
[91] magrittr_2.0.2 RSpectra_0.16-1 data.table_1.14.2
[94] scattermore_0.8 reprex_2.0.1 lmtest_0.9-40
[97] RANN_2.6.1 ProtGenerics_1.26.0 fitdistrplus_1.1-8
[100] matrixStats_0.61.0 hms_1.1.1 patchwork_1.1.1
[103] mime_0.12 xtable_1.8-4 XML_3.99-0.8
[106] jpeg_0.1-9 sparsesvd_0.2 readxl_1.3.1
[109] gridExtra_2.3 compiler_4.1.2 biomaRt_2.50.2
[112] KernSmooth_2.23-20 crayon_1.5.0 htmltools_0.5.2
[115] mgcv_1.8-38 later_1.3.0 tzdb_0.2.0
[118] Formula_1.2-4 lubridate_1.8.0 DBI_1.1.2
[121] dbplyr_2.1.1 MASS_7.3-55 rappdirs_0.3.3
[124] Matrix_1.4-0 cli_3.2.0 parallel_4.1.2
[127] igraph_1.3.0 pkgconfig_2.0.3 GenomicAlignments_1.30.0
[130] foreign_0.8-82 plotly_4.10.0 spatstat.sparse_2.1-1
[133] xml2_1.3.3 rvest_1.0.2 VariantAnnotation_1.40.0
[136] digest_0.6.29 sctransform_0.3.3 RcppAnnoy_0.0.19
[139] spatstat.data_2.2-0 cellranger_1.1.0 leiden_0.4.2
[142] fastmatch_1.1-3 htmlTable_2.4.0 uwot_0.1.11
[145] restfulr_0.0.13 curl_4.3.2 shiny_1.7.1
[148] Rsamtools_2.10.0 rjson_0.2.21 lifecycle_1.0.1
[151] nlme_3.1-155 jsonlite_1.7.3 viridisLite_0.4.0
[154] fansi_1.0.2 pillar_1.7.0 lattice_0.20-45
[157] KEGGREST_1.34.0 fastmap_1.1.0 httr_1.4.2
[160] survival_3.2-13 glue_1.6.1 qlcMatrix_0.9.7
[163] png_0.1-7 bit_4.0.4 stringi_1.7.6
[166] blob_1.2.2 latticeExtra_0.6-29 memoise_2.0.1
[169] irlba_2.3.5 future.apply_1.8.1
```
If there is other output that would be useful to include, let me know.
Any advice would be thoroughly appreciated, thanks very much!
|
non_process
|
coverageplot error can t plot annotation multiome vignette apologies if this has been asked before but i haven t been able to find a solution to this error elsewhere i m running into a problem with the coverageplot function as i am working through the no other part of the vignette has given me problems up until this last step i am running the following code r defaultassay vg test atac coverageplot object vg test region features expression assay sct extend upstream extend downstream and get the following error r error in data frame seqnames annotation seqnames start min annotation start arguments imply differing number of rows it seems like the error is produced when trying to generate some sort of dataframe from the granges annotation object i get the above error no matter what gene i select in regions features importantly the error disappears when i set annotation f and the graph can be plotted albeit without a gene track i m using a non reference genome and annotation which i believe may be part of the issue though i can t find error producing differences between it and the human reference so here s a part of my annotation below most columns are not needed by signac but the required type gene id gene name and gene biotype to match ensembl annotations are present i ve set gene name gene id but i don t think this should cause these particular issues r head gtf granges object with ranges and metadata columns seqnames ranges strand source type score phase gene id na gene na na transcript na na exon na na exon na na exon na na five prime utr na transcript id id parent original biotype anolis blast type toxin model toxin model nbis mrna mrna toxin model exon nbis mrna toxin model exon nbis mrna toxin model exon nbis mrna toxin model five prime nbis mrna five prime utr anolis homolog crovir transcript id name python blast type python homolog thamnophis blast type thamnophis homolog x aed x qi x eaed crovir protein id previous transcript id gene biotype gene name protein 
coding protein coding protein coding protein coding protein coding protein coding seqinfo sequences from an unspecified genome no seqlengths my session info r sessioninfo r version platform apple bit running under macos catalina matrix products default blas system library frameworks accelerate framework versions a frameworks veclib framework versions a libblas dylib lapack library frameworks r framework versions resources lib librlapack dylib locale en us utf en us utf en us utf c en us utf en us utf attached base packages stats graphics grdevices utils datasets methods base other attached packages bsgenome hsapiens ucsc ensdb hsapiens ensembldb annotationfilter bsgenome cviridis custom crovir bsgenome rtracklayer biostrings xvector forcats stringr dplyr purrr readr tidyr tibble tidyverse genomicfeatures annotationdbi biobase genomicranges genomeinfodb iranges biocgenerics sp seuratobject seurat signac loaded via a namespace and not attached reticulate tidyselect rsqlite htmlwidgets docopt grid biocparallel rtsne munsell codetools ica future miniui withr spatstat random colorspace progressr filelock knitr rstudioapi rocr tensor listenv labeling matrixgenerics slam genomeinfodbdata polyclip farver parallelly vctrs generics xfun biovizbase biocfilecache bitops spatstat utils cachem delayedarray assertthat promises biocio scales nnet rgeos gtable globals goftest rlang rcpproll splines rgdal lazyeval dichromat checkmate spatstat geom broom yaml abind modelr backports httpuv hmisc tools ellipsis spatstat core rcolorbrewer ggridges rcpp plyr progress zlibbioc rcurl prettyunits rpart deldir pbapply cowplot zoo summarizedexperiment haven ggrepel cluster fs magrittr rspectra data table scattermore reprex lmtest rann protgenerics fitdistrplus matrixstats hms patchwork mime xtable xml jpeg sparsesvd readxl gridextra compiler biomart kernsmooth crayon htmltools mgcv later tzdb formula lubridate dbi dbplyr mass rappdirs matrix cli parallel igraph pkgconfig genomicalignments 
foreign plotly spatstat sparse rvest variantannotation digest sctransform rcppannoy spatstat data cellranger leiden fastmatch htmltable uwot restfulr curl shiny rsamtools rjson lifecycle nlme jsonlite viridislite fansi pillar lattice keggrest fastmap httr survival glue qlcmatrix png bit stringi blob latticeextra memoise irlba future apply if there is other output that would be useful to include let me know any advice would be thoroughly appreciated thanks very much
| 0
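R's `arguments imply differing number of rows: 1, 0` in the record above typically means one column of the `data.frame()` call received a length-1 value while another received length 0 — here, a gene whose annotation has no rows of the type the plot expects. A minimal Python sketch of the same failure mode, with a guard (field names are hypothetical; the real fix belongs in the R annotation object):

```python
def summarize_annotation(rows):
    """Build a one-row summary (seqname, start, end) from one gene's
    annotation rows -- mirrors the data.frame() call that fails in R."""
    if not rows:
        # A gene with zero matching rows is what yields
        # "arguments imply differing number of rows: 1, 0":
        # one column has a value while another is empty.
        raise ValueError("gene has no annotation rows of the required type")
    return {
        "seqname": rows[0]["seqname"],
        "start": min(r["start"] for r in rows),
        "end": max(r["end"] for r in rows),
    }


rows = [
    {"seqname": "ma1", "start": 1, "end": 746},
    {"seqname": "ma1", "start": 1640, "end": 1765},
]
summary = summarize_annotation(rows)
# summary -> {'seqname': 'ma1', 'start': 1, 'end': 1765}
```

Checking each gene for an empty row set before building the summary is what `annotation = FALSE` sidesteps, which is consistent with the workaround reported in the issue.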
|
11,637
| 14,494,470,497
|
IssuesEvent
|
2020-12-11 09:52:04
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Cloud Security events get processed by panther-log-processor
|
p1 story team:data processing
|
### Description
Cloud Security resources are stored in the Data lake and queryable through Athena
### Related Services
panther-log-processor
### Designs
None. This is a backend task
### Acceptance Criteria
- Cloud security resources are queryable in the `panther_cloudsecurity` database.
|
1.0
|
Cloud Security events get processed by panther-log-processor - ### Description
Cloud Security resources are stored in the Data lake and queryable through Athena
### Related Services
panther-log-processor
### Designs
None. This is a backend task
### Acceptance Criteria
- Cloud security resources are queryable in the `panther_cloudsecurity` database.
|
process
|
cloud security events get processed by panther log processor description cloud security resources are stored in the data lake and queryable through athena related services panther log processor designs none this is a backend task acceptance criteria cloud security resources are queryable in the panther cloudsecurity database
| 1
|
42,575
| 9,255,365,297
|
IssuesEvent
|
2019-03-16 09:17:34
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
.modal class conflict in terms.php with Bootstrap
|
No Code Attached Yet
|
> If you are submitting an issue for the Joomla! CMS, please submit it at https://github.com/joomla/joomla-cms/issues/new instead. You may remove this line from the issue template.
#### Steps to reproduce the issue
Use any template or otherwise incorporate Bootstrap into Joomla.
Enable the User - Terms and Conditions plugin.
Link an article to the Terms and Conditions plugin.
Attempt to register a new User via the frontend.
"Terms and Conditions" link will not be visible, only the "*" for required fields.
When a new user is registering, the link for the Terms & Conditions will not be visible because of the "display:none" rule and other issues with the .modal class.
#### Expected result
This line in the source uses a "modal" class on the link.
`<a href="/index.php/component/content/article/12-club-docs/2-constitution-and-bylaws?tmpl=component&Itemid=101" class="modal" rel="{handler: 'iframe', size: {x:800, y:500}}">Terms & Conditions</a>`
Bootstrap contains its own definition for .modal:
`.modal {
position:fixed;
top:0;
right:0;
bottom:0;
left:0;
z-index:1050;
display:none;
overflow:hidden;
outline:0
}`
#### Actual result
The line only renders a "*" with no link to the complete terms and conditions.
#### System information (as much as possible)
Joomla 3.9.1
Helix Ultimate Template v 1.0.5
Bootstrap v4.1.3
#### Additional comments
Source of class definition is in /plugins/user/terms/field/terms.php:
line 106:
`$attribs['class'] = 'modal';`
|
1.0
|
.modal class conflict in terms.php with Bootstrap - > If you are submitting an issue for the Joomla! CMS, please submit it at https://github.com/joomla/joomla-cms/issues/new instead. You may remove this line from the issue template.
#### Steps to reproduce the issue
Use any template or otherwise incorporate Bootstrap into Joomla.
Enable the User - Terms and Conditions plugin.
Link an article to the Terms and Conditions plugin.
Attempt to register a new User via the frontend.
"Terms and Conditions" link will not be visible, only the "*" for required fields.
, and when a new user is registering, the link for the Terms & Conditions will not be visible because of the "display:none" and other issues with the .modal class.
#### Expected result
This line in the source uses a "modal" class on the link.
`<a href="/index.php/component/content/article/12-club-docs/2-constitution-and-bylaws?tmpl=component&Itemid=101" class="modal" rel="{handler: 'iframe', size: {x:800, y:500}}">Terms & Conditions</a>`
Bootstrap contains its own definition for .modal:
`.modal {
position:fixed;
top:0;
right:0;
bottom:0;
left:0;
z-index:1050;
display:none;
overflow:hidden;
outline:0
}`
#### Actual result
The line only renders a "*" with no link to the complete terms and conditions.
#### System information (as much as possible)
Joomla 3.9.1
Helix Ultimate Template v 1.0.5
Bootstrap v4.1.3
#### Additional comments
Source of class definition is in /plugins/user/terms/field/terms.php:
line 106:
`$attribs['class'] = 'modal';`
|
non_process
|
modal class conflict in terms php with bootstrap if you are submitting an issue for the joomla cms please submit it at instead you may remove this line from the issue template steps to reproduce the issue use any template or otherwise incorporate bootstrap into joomla enable the user terms and conditions plugin link an article to the terms and conditions plugin attempt to register a new user via the frontend terms and conditions link will not be visible only the for required fields and when a new user is registering the link for the terms conditions will not be visible because of the display none and other issues with the modal class expected result this line in the source uses a modal class on the link terms amp conditions bootstrap contains its own definition for modal modal position fixed top right bottom left z index display none overflow hidden outline actual result the line only renders a with no link to the complete terms and conditions system information as much as possible joomla helix ultimate template v bootstrap additional comments source of class definition is in plugins user terms field terms php line attribs modal
| 0
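The class collision in the record above (Joomla's legacy MooTools `class="modal"` link being hidden by Bootstrap's `.modal { display: none; }` rule) can be sidestepped by renaming the class on the generated link. A minimal illustrative sketch; the `joomla-modal` replacement name is an assumption, not Joomla's actual fix:

```python
# Illustrative sketch: rename the conflicting "modal" class on a generated
# Terms & Conditions link so Bootstrap's .modal { display: none; } rule no
# longer hides it. The replacement class name "joomla-modal" is hypothetical.
import re

def rename_modal_class(html: str, new_class: str = "joomla-modal") -> str:
    # Replace the exact attribute value class="modal" wherever it appears.
    return re.sub(r'class="modal"', f'class="{new_class}"', html)

link = ('<a href="/index.php?tmpl=component" class="modal" '
        'rel="{handler: \'iframe\'}">Terms &amp; Conditions</a>')
fixed = rename_modal_class(link)
```

The corresponding CSS rules would then target `.joomla-modal` instead, leaving Bootstrap's `.modal` untouched.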
|
316,533
| 9,648,848,079
|
IssuesEvent
|
2019-05-17 17:25:10
|
minio/mc
|
https://api.github.com/repos/minio/mc
|
closed
|
mc operations are slow some times
|
community priority: medium triage
|
## Expected behaviour
When I'm using mc client on mac or linux sometimes the operation is very fast 1 second some times is very slow 1 minute for the same operation. I have observed this behaviour with `ls` and `cp` commands
## Actual behavior
Fast
```
time mc --debug ls minio/client-tools/ci/
mc: <DEBUG> GET /client-tools/?location= HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131052Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:10:52 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A781FD05EDC
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 297.526134ms
mc: <DEBUG> GET /client-tools/?delimiter=%2F&max-keys=1000&prefix=ci HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131052Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:10:52 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A7828CB8633
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 147.284384ms
mc: <DEBUG> GET /client-tools/?delimiter=%2F&max-keys=1000&prefix=ci%2F HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131052Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:10:52 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A78319F9AA4
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 146.8402ms
[2019-05-17 14:10:52 BST] 0B build-3.0.0-320/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-321/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-322/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-334/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-335/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-336/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-337/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-338/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-339/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-340/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-400/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-401/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-402/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-403/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-404/
mc --debug ls minio/client-tools/ci/ 0.02s user 0.02s system 5% cpu 0.616 total
```
Slow
```
time mc --debug ls minio/client-tools/ci/
mc: <DEBUG> GET /client-tools/?location= HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131057Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:10:57 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A795C4013CA
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 293.226618ms
mc: <DEBUG> GET /client-tools/?delimiter=%2F&max-keys=1000&prefix=ci HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131157Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:11:58 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A8767EF4781
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 298.434012ms
mc: <DEBUG> GET /client-tools/?delimiter=%2F&max-keys=1000&prefix=ci%2F HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131158Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:11:58 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A877189345C
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 156.632665ms
[2019-05-17 14:11:58 BST] 0B build-3.0.0-320/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-321/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-322/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-334/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-335/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-336/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-337/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-338/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-339/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-340/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-400/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-401/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-402/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-403/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-404/
mc --debug ls minio/client-tools/ci/ 0.02s user 0.02s system 0% cpu 1:00.81 total
```
## Steps to reproduce the behaviour
## mc version
mc version
```
Version: 2019-05-01T23:27:44Z
Release-tag: RELEASE.2019-05-01T23-27-44Z
Commit-id: b1aa2232ef3babc5f06bacb1b0044022185457ca
```
## System information
minio runs on ubuntu 18.04 VM
|
1.0
|
mc operations are slow some times - ## Expected behaviour
When I'm using mc client on mac or linux sometimes the operation is very fast 1 second some times is very slow 1 minute for the same operation. I have observed this behaviour with `ls` and `cp` commands
## Actual behavior
Fast
```
time mc --debug ls minio/client-tools/ci/
mc: <DEBUG> GET /client-tools/?location= HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131052Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:10:52 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A781FD05EDC
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 297.526134ms
mc: <DEBUG> GET /client-tools/?delimiter=%2F&max-keys=1000&prefix=ci HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131052Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:10:52 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A7828CB8633
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 147.284384ms
mc: <DEBUG> GET /client-tools/?delimiter=%2F&max-keys=1000&prefix=ci%2F HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131052Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:10:52 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A78319F9AA4
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 146.8402ms
[2019-05-17 14:10:52 BST] 0B build-3.0.0-320/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-321/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-322/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-334/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-335/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-336/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-337/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-338/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-339/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-340/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-400/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-401/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-402/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-403/
[2019-05-17 14:10:52 BST] 0B build-3.0.0-404/
mc --debug ls minio/client-tools/ci/ 0.02s user 0.02s system 5% cpu 0.616 total
```
Slow
```
time mc --debug ls minio/client-tools/ci/
mc: <DEBUG> GET /client-tools/?location= HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131057Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:10:57 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A795C4013CA
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 293.226618ms
mc: <DEBUG> GET /client-tools/?delimiter=%2F&max-keys=1000&prefix=ci HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131157Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:11:58 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A8767EF4781
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 298.434012ms
mc: <DEBUG> GET /client-tools/?delimiter=%2F&max-keys=1000&prefix=ci%2F HTTP/1.1
Host: 192.168.200.100
User-Agent: Minio (darwin; amd64) minio-go/v6.0.21 mc/2019-05-01T23:27:44Z
Authorization: AWS4-HMAC-SHA256 Credential=admin/20190517/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20190517T131158Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Accept-Ranges: bytes
Connection: keep-alive
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 17 May 2019 13:11:58 GMT
Server: nginx/1.14.0 (Ubuntu)
Vary: Origin
X-Amz-Request-Id: 159F7A877189345C
X-Xss-Protection: 1; mode=block
mc: <DEBUG> Response Time: 156.632665ms
[2019-05-17 14:11:58 BST] 0B build-3.0.0-320/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-321/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-322/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-334/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-335/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-336/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-337/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-338/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-339/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-340/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-400/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-401/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-402/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-403/
[2019-05-17 14:11:58 BST] 0B build-3.0.0-404/
mc --debug ls minio/client-tools/ci/ 0.02s user 0.02s system 0% cpu 1:00.81 total
```
## Steps to reproduce the behaviour
## mc version
mc version
```
Version: 2019-05-01T23:27:44Z
Release-tag: RELEASE.2019-05-01T23-27-44Z
Commit-id: b1aa2232ef3babc5f06bacb1b0044022185457ca
```
## System information
minio runs on ubuntu 18.04 VM
|
non_process
|
mc operations are slow some times expected behaviour when i m using mc client on mac or linux sometimes the operation is very fast second some times is very slow minute for the same operation i have observed this behaviour with ls and cp commands actual behavior fast time mc debug ls minio client tools ci mc get client tools location http host user agent minio darwin minio go mc authorization hmac credential admin us east request signedheaders host x amz content x amz date signature redacted x amz content x amz date accept encoding gzip mc http ok transfer encoding chunked accept ranges bytes connection keep alive content security policy block all mixed content content type application xml date fri may gmt server nginx ubuntu vary origin x amz request id x xss protection mode block mc response time mc get client tools delimiter max keys prefix ci http host user agent minio darwin minio go mc authorization hmac credential admin us east request signedheaders host x amz content x amz date signature redacted x amz content x amz date accept encoding gzip mc http ok transfer encoding chunked accept ranges bytes connection keep alive content security policy block all mixed content content type application xml date fri may gmt server nginx ubuntu vary origin x amz request id x xss protection mode block mc response time mc get client tools delimiter max keys prefix ci http host user agent minio darwin minio go mc authorization hmac credential admin us east request signedheaders host x amz content x amz date signature redacted x amz content x amz date accept encoding gzip mc http ok transfer encoding chunked accept ranges bytes connection keep alive content security policy block all mixed content content type application xml date fri may gmt server nginx ubuntu vary origin x amz request id x xss protection mode block mc response time build build build build build build build build build build build build build build build mc debug ls minio client tools ci user system cpu total slow time mc debug ls minio client tools ci mc get client tools location http host user agent minio darwin minio go mc authorization hmac credential admin us east request signedheaders host x amz content x amz date signature redacted x amz content x amz date accept encoding gzip mc http ok transfer encoding chunked accept ranges bytes connection keep alive content security policy block all mixed content content type application xml date fri may gmt server nginx ubuntu vary origin x amz request id x xss protection mode block mc response time mc get client tools delimiter max keys prefix ci http host user agent minio darwin minio go mc authorization hmac credential admin us east request signedheaders host x amz content x amz date signature redacted x amz content x amz date accept encoding gzip mc http ok transfer encoding chunked accept ranges bytes connection keep alive content security policy block all mixed content content type application xml date fri may gmt server nginx ubuntu vary origin x amz request id x xss protection mode block mc response time mc get client tools delimiter max keys prefix ci http host user agent minio darwin minio go mc authorization hmac credential admin us east request signedheaders host x amz content x amz date signature redacted x amz content x amz date accept encoding gzip mc http ok transfer encoding chunked accept ranges bytes connection keep alive content security policy block all mixed content content type application xml date fri may gmt server nginx ubuntu vary origin x amz request id x xss protection mode block mc response time build build build build build build build build build build build build build build build mc debug ls minio client tools ci user system cpu total steps to reproduce the behaviour mc version mc version version release tag release commit id system information minio runs on ubuntu vm
| 0
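Intermittent latency like the record above (the same `ls` sometimes under a second, sometimes a full minute) can be narrowed down by sampling the operation repeatedly and flagging outlier runs. A minimal, library-free sketch of such a sampling helper; the 10x-median cutoff is an arbitrary assumption, and in practice `op` would wrap the listing call under investigation:

```python
import statistics
import time

def sample_latency(op, runs=5):
    """Time `op` several times; return all samples and slow outliers."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        samples.append(time.perf_counter() - start)
    median = statistics.median(samples)
    # Flag any run more than 10x slower than the median (arbitrary cutoff).
    outliers = [s for s in samples if median > 0 and s > 10 * median]
    return samples, outliers

# Stand-in operation; replace with the real client call being measured.
samples, outliers = sample_latency(lambda: sum(range(1000)))
```

A bimodal distribution here (most runs fast, occasional runs ~60 s) would point at something between client and server, e.g. a proxy or DNS timeout, rather than steady-state server load.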
|
188,604
| 6,778,147,387
|
IssuesEvent
|
2017-10-28 06:30:32
|
dhowe/AdNauseam
|
https://api.github.com/repos/dhowe/AdNauseam
|
closed
|
Change 'AdNauseam Filter' update button function
|
PRIORITY: Medium Question
|
It is not always clear why the AdNauseam list 'update' button is active and or not; also, why one sometimes has to to 'purge-caches' sometimes; and finally, whether the list has actually updated or not
Assuming everything is in fact working, we need some better visual clues as to what is happening and why (possibly a FAQ entry as well)
Please verify that:
-- button functions correctly
-- it is clear to user when the update starts [design]
-- it is clear to user when the update completes [design]
-- it is clear to user if the update fails (disable internet during test) [design]
-- it is clear whether the list has actually changed [design]
|
1.0
|
Change 'AdNauseam Filter' update button function - It is not always clear why the AdNauseam list 'update' button is active and or not; also, why one sometimes has to to 'purge-caches' sometimes; and finally, whether the list has actually updated or not
Assuming everything is in fact working, we need some better visual clues as to what is happening and why (possibly a FAQ entry as well)
Please verify that:
-- button functions correctly
-- it is clear to user when the update starts [design]
-- it is clear to user when the update completes [design]
-- it is clear to user if the update fails (disable internet during test) [design]
-- it is clear whether the list has actually changed [design]
|
non_process
|
change adnauseam filter update button function it is not always clear why the adnauseam list update button is active and or not also why one sometimes has to to purge caches sometimes and finally whether the list has actually updated or not assuming everything is in fact working we need some better visual clues as to what is happening and why possibly a faq entry as well please verify that button functions correctly it is clear to user when the update starts it is clear to user when the update completes it is clear to user if the update fails disable internet during test it is clear whether the list has actually changed
| 0
|
15,376
| 19,561,888,165
|
IssuesEvent
|
2022-01-03 17:20:10
|
corona-warn-app/cwa-wishlist
|
https://api.github.com/repos/corona-warn-app/cwa-wishlist
|
closed
|
Check network status before scanning QR code or asking for TAN
|
enhancement mirrored-to-jira Test/Share process
|
## Avoid duplicates
* [X] This enhancement request has not already been raised before
Related to https://github.com/corona-warn-app/cwa-app-android/issues/2062#issuecomment-757548295
* [ ] Enhancement request is specific for Android only, for general issues / questions that apply to iOS and Android please raise them in [CWA-Wishlist](https://github.com/corona-warn-app/cwa-wishlist)
* [ ] If you are proposing a new feature, please do so in [CWA-Wishlist](https://github.com/corona-warn-app/cwa-wishlist)
## Current Implementation
First reported on Android app version: 1.10.1
Reviewed for accuracy on Android app version: release/2.12.x
If there is no Internet connection after scanning a test QR code from a test or after inputting a TAN the error message:
"**Error**
Your Internet connection may have been lost. Please ensure that you are connected to the Internet"
is displayed. The connection is however not checked **before** the scan is started or the TAN is input.
## Suggested Enhancement
A check for an active network connection should be made **before** requesting the user to scan a test QR code or **before** requesting the input of a TAN. If there is no network connection, then the user should be informed that a network connection is necessary to register a QR code or TAN. There is no store-and-forward mechanism which allows either a QR code or TAN to be input and later transferred to a server for registration. (See also documentation issue "Privacy notice incorrect description scanning QR code with no internet" https://github.com/corona-warn-app/cwa-documentation/issues/510.)
A network check should be done:
- after tapping on "Scan QR Code", followed by ACCEPT and before "Position the QR code in the frame" and
- immediately after tapping on "Enter TAN".
~~In PR "Improved popup text for state "non internet" (EXPOSUREAPP-4569)" https://github.com/corona-warn-app/cwa-app-android/pull/2177 related to https://github.com/corona-warn-app/cwa-app-android/issues/2062 there is a proposal to change the error message to "Möglicherweise wurde Ihre Internet-Verbindung unterbrochen. Bitte stellen Sie sicher, dass Sie mit dem Internet verbunden sind." in the [release/1.12.x](https://github.com/corona-warn-app/cwa-app-android/tree/release/1.12.x) branch. (Presumably also pending translation to other languages).~~ https://github.com/corona-warn-app/cwa-app-android/pull/2177 was merged and the change released in 1.12.0.
## Expected Benefits
If the action of scanning a QR code or forwarding a TAN to a server is not going to be possible because there is no active Internet connection, the user should be told this as soon as possible in the sequence. Warning early saves the disappointment of going through a scan sequence or inputting 10 digits into the UI only to be told that the action was not possible due to something that could have been foreseen earlier.
---
Internal Tracking ID: [EXPOSUREAPP-4719](https://jira-ibs.wbs.net.sap/browse/EXPOSUREAPP-4719) obsolete
Internal Tracking ID: [EXPOSUREAPP-4406](https://jira-ibs.wbs.net.sap/browse/EXPOSUREAPP-4406)
|
1.0
|
Check network status before scanning QR code or asking for TAN - ## Avoid duplicates
* [X] This enhancement request has not already been raised before
Related to https://github.com/corona-warn-app/cwa-app-android/issues/2062#issuecomment-757548295
* [ ] Enhancement request is specific for Android only, for general issues / questions that apply to iOS and Android please raise them in [CWA-Wishlist](https://github.com/corona-warn-app/cwa-wishlist)
* [ ] If you are proposing a new feature, please do so in [CWA-Wishlist](https://github.com/corona-warn-app/cwa-wishlist)
## Current Implementation
First reported on Android app version: 1.10.1
Reviewed for accuracy on Android app version: release/2.12.x
If there is no Internet connection after scanning a test QR code from a test or after inputting a TAN the error message:
"**Error**
Your Internet connection may have been lost. Please ensure that you are connected to the Internet"
is displayed. The connection is however not checked **before** the scan is started or the TAN is input.
## Suggested Enhancement
A check for an active network connection should be made **before** requesting the user to scan a test QR code or **before** requesting the input of a TAN. If there is no network connection, then the user should be informed that a network connection is necessary to register a QR code or TAN. There is no store-and-forward mechanism which allows either a QR code or TAN to be input and later transferred to a server for registration. (See also documentation issue "Privacy notice incorrect description scanning QR code with no internet" https://github.com/corona-warn-app/cwa-documentation/issues/510.)
A network check should be done:
- after tapping on "Scan QR Code", followed by ACCEPT and before "Position the QR code in the frame" and
- immediately after tapping on "Enter TAN".
~~In PR "Improved popup text for state "non internet" (EXPOSUREAPP-4569)" https://github.com/corona-warn-app/cwa-app-android/pull/2177 related to https://github.com/corona-warn-app/cwa-app-android/issues/2062 there is a proposal to change the error message to "Möglicherweise wurde Ihre Internet-Verbindung unterbrochen. Bitte stellen Sie sicher, dass Sie mit dem Internet verbunden sind." in the [release/1.12.x](https://github.com/corona-warn-app/cwa-app-android/tree/release/1.12.x) branch. (Presumably also pending translation to other languages).~~ https://github.com/corona-warn-app/cwa-app-android/pull/2177 was merged and the change released in 1.12.0.
## Expected Benefits
If the action of scanning a QR code or forwarding a TAN to a server is not going to be possible because there is no active Internet connection, the user should be told this as soon as possible in the sequence. Warning early saves the disappointment of going through a scan sequence or inputting 10 digits into the UI only to be told that the action was not possible due to something that could have been foreseen earlier.
---
Internal Tracking ID: [EXPOSUREAPP-4719](https://jira-ibs.wbs.net.sap/browse/EXPOSUREAPP-4719) obsolete
Internal Tracking ID: [EXPOSUREAPP-4406](https://jira-ibs.wbs.net.sap/browse/EXPOSUREAPP-4406)
|
process
|
check network status before scanning qr code or asking for tan avoid duplicates this enhancement request has not already been raised before related to enhancement request is specific for android only for general issues questions that apply to ios and android please raise them in if you are proposing a new feature please do so in current implementation first reported on android app version reviewed for accuracy on android app version release x if there is no internet connection after scanning a test qr code from a test or after inputting a tan the error message error your internet connection may have been lost please ensure that you are connected to the internet is displayed the connection is however not checked before the scan is started or the tan is input suggested enhancement a check for an active network connection should be made before requesting the user to scan a test qr code or before requesting the input of a tan if there is no network connection then the user should be informed that a network connection is necessary to register a qr code or tan there is no store and forward mechanism which allows either a qr code or tan to be input and later transferred to a server for registration see also documentation issue privacy notice incorrect description scanning qr code with no internet a network check should be done after tapping on scan qr code followed by accept and before position the qr code in the frame and immediately after tapping on enter tan in pr improved popup text for state non internet exposureapp related to there is a proposal to change the error message to möglicherweise wurde ihre internet verbindung unterbrochen bitte stellen sie sicher dass sie mit dem internet verbunden sind in the branch presumably also pending translation to other languages was merged and the change released in expected benefits if the action of scanning a qr code or forwarding a tan to a server is not going to be possible because there is no active internet connection the user should be told this as soon as possible in the sequence warning early saves the disappointment of going through a scan sequence or inputting digits into the ui only to be told that the action was not possible due to something that could have been foreseen earlier internal tracking id obsolete internal tracking id
| 1
|
78,177
| 14,962,795,169
|
IssuesEvent
|
2021-01-27 09:44:26
|
OSIPI/DCE-DSC-MRI_CodeCollection
|
https://api.github.com/repos/OSIPI/DCE-DSC-MRI_CodeCollection
|
opened
|
Call open for code contributions regarding 'AIF selection'
|
Code Contributions
|
This is a request for Python code contributions for the selection of the AIF. Below are some details about this functionality.
Please go to the documentation to follow the guidelines for code contributions.
**AIF selection**
We assume initially that an AIF ROI mask will be provided as input or that a population AIF will be used. In the future, the library may include the capability to generate one or more AIFs from the data automatically. In the case of direct measurement, the postprocessing box could involve steps such as fitting a model to the concentration-time curve.

Fig. AIF selection flow chart
The library will also accommodate the reference region (RR) method. In this case, the input of the model is not Ca(t) but CRR(t). CRR(t) will be determined in a similar way to the direct measurement of the AIF, where instead of an AIF mask, a mask of the reference region is provided (either manual or automatic).
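The masking step described above (averaging the signal over an AIF or reference-region ROI to obtain a curve such as Ca(t) or CRR(t)) can be sketched as follows. This is a hypothetical illustration, not part of the OSIPI library; the function name and plain-list data layout are assumptions.

```python
# Hypothetical sketch: average a signal over an ROI mask, one value per time
# frame, to obtain a concentration-time curve such as Ca(t) or CRR(t).
# 'volume' is a list of time frames; each frame is a 2-D list of voxel values.
# 'mask' is a same-shaped 2-D list of 0/1 flags marking the ROI (AIF or RR).

def roi_mean_curve(volume, mask):
    """Return the mean value inside the mask for every time frame."""
    curve = []
    for frame in volume:
        vals = [frame[i][j]
                for i in range(len(mask))
                for j in range(len(mask[i]))
                if mask[i][j]]
        curve.append(sum(vals) / len(vals))
    return curve
```

The same helper serves both cases: pass an AIF mask for direct measurement, or a reference-region mask for the RR method.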
|
1.0
|
Call open for code contributions regarding 'AIF selection' -
non_process
|
call open for code contributions regarding aif selection this is a request for python code contributions for the selection of the aif below some details about this functionality please go to the documentation to follow the guidelines for code contributions aif selection we assume initially that an aif roi mask will be provided as input or that a population aif will be used in future the library may include the capability to generate one or more aifs from the data automatically the postprocessing box in case of direct measurement could involve steps as fitting a model to the concentration time curve fig aif selection flow chart the library will also accommodate the reference region rr method in this case the input of the model is not ca t but crr t crr t will be determined in a similar way as the direct measurement of the aif where instead of an aif mask a mask of the reference region is provided either manual or automatic
| 0
|
21,258
| 28,407,289,814
|
IssuesEvent
|
2023-04-14 00:02:41
|
metallb/metallb
|
https://api.github.com/repos/metallb/metallb
|
closed
|
Review issue and PR templates
|
process lifecycle-stale
|
I think we should review our issue and PR templates (https://github.com/metallb/metallb/tree/main/.github) to ensure they match the way we want to handle issues and PRs.
Things to verify:
- We clearly mention important requirements for common issue and PR types (bug reports, feature requests).
- We don't bother users with unnecessary bureaucracy.
Patterns I've seen which come to mind:
- Users opening issues for suspected bugs without providing reproduction steps.
- Developers opening PRs with new features without getting a general "go ahead" about the design first.
|
1.0
|
Review issue and PR templates -
process
|
review issue and pr templates i think we should review our issue and pr templates to ensure they match the way we want to handle issues and prs things to verify we clearly mention important requirements for common issue and pr types bug reports feature requests we don t bother users with unnecessary bureaucracy patterns i ve seen which come to mind users opening issues for suspected bugs without providing reproduction steps developers opening prs with new features without getting a general go ahead about the design first
| 1
|
8,793
| 11,908,196,862
|
IssuesEvent
|
2020-03-31 00:15:04
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Add Specific Geopackage layer as an input to an algorithm in graphical modeler
|
Feature Request Processing
|
**Feature description.**
<!-- A clear and concise description of what you want to happen. Ex. QGIS would rock even more if [...] -->
Thanks everyone for this amazing program. I found a small gap that I think is worth closing as GeoPackage becomes the default file type. I have a GeoPackage layer that I want to always be used in an algorithm in the QGIS 3.8 graphical modeler. When I use the triple dots to navigate to the GeoPackage, it doesn't give me the option to choose a layer within it. Ideally this would work as it does elsewhere in QGIS, where it asks for the layer after I choose a GeoPackage and the file path then includes the layer name, or the graphical modeler would use the QGIS browser that allows me to choose GeoPackage layers. In the example screenshot I want to use the layer AOI_Poly2 from the GeoPackage AOI as the mask layer. It does work, but it chooses the first(?) polygon layer in the GeoPackage (AOI_poly). I have tried appending "|layername=AOI_poly2" to the file path, but the dialog won't let me save. Allowing the layer name to be appended manually would be a stop-gap solution. Thanks
**Additional context**
<!-- Add any other context or screenshots about the feature request here. Open source is community driven, please consider a way to support this work either by hiring developers, supporting the QGIS project, find someone to submit a pull request.
If the change required is important, you should consider writing a [QGIS Enhancement Proposal](https://github.com/qgis/QGIS-Enhancement-Proposals/issues) (QEP) or hiring someone to, and announce your work on the lists. -->
See my stack exchange question here: https://gis.stackexchange.com/questions/329294/adding-a-geopackage-layer-as-a-hardwired-input-to-an-algorithm-in-the-qgis-graph
May be related to https://github.com/qgis/QGIS/issues/28770

|
1.0
|
Add Specific Geopackage layer as an input to an algorithm in graphical modeler -
|
process
|
add specific geopackage layer as a input to an algorithm in graphical modeler feature description thanks everyone for this amazing program i found a small gap that i think would be useful as geopackage becomes the default file type i have a geopackage layer that i want to always be used in an algorithm in the qgis graphical modeler when i use the triple dots to navigate to the geopackage it doesn t give me the option to choose a layer in the geopackage ideally this would work like it does in other qgis situations where it asks for the layer after i choose a geopackage and then the file path would include the layer name or the graphical modeler would use the qgis browser that allows be to choose gpkg layers in the example screenshot i want to use the layer aoi from the geopackage aoi as the mask layer it does work but it chooses the first polygon layer in the gpkg aoi poly i have tried appending layername aoi to the file path but the dialog won t let me save allowing to manual append the layer would be a stop gap solution thanks additional context add any other context or screenshots about the feature request here open source is community driven please consider a way to support this work either by hiring developers supporting the qgis project find someone to submit a pull request if the change required is important you should consider writing a qep or hiring someone to and announce your work on the lists see my stack exchange question here may be related to
| 1
|
1,888
| 4,714,309,494
|
IssuesEvent
|
2016-10-14 23:39:14
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Dashboard relative date filter widget doesn't work with SQL card field filter variables
|
Bug Query Processor
|
For a (simplified) example query as follows:
```sql
SELECT CAST("public"."user_account"."created_at" AS date) AS "created_at", count(*) AS "count"
FROM "public"."user_account"
GROUP BY CAST("public"."user_account"."created_at" AS date)
ORDER BY CAST("public"."user_account"."created_at" AS date) ASC
```
If the question is constructed via the query builder, then when I add it to a dashboard and add a relative time filter, the widget is able to recognize created_at as a compatible field. But if it's a custom SQL question, created_at is not shown.
- Your browser and the version: Chrome Version 53.0.2785.143 (64-bit)
- Your operating system: Ubuntu 16.04
- Your databases: Postgres
- Metabase version: 0.20.0 and 0.19.3
- Metabase hosting environment: Docker
- Metabase internal database: H2
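Metabase maps dashboard filters onto template variables written as `{{name}}` in the SQL text; a hard-coded column in raw SQL has no such slot, which is why the widget offers nothing to connect to. A minimal sketch of that substitution idea (assumed behaviour for illustration, not Metabase's actual query processor):

```python
import re

# Minimal sketch: replace {{name}} placeholders in a SQL template with a
# generated predicate, or TRUE when the dashboard filter is unset. This
# mimics the field-filter idea; it is not Metabase internals.

def render_field_filter(sql, bindings):
    """Substitute {{name}} slots with SQL predicates from 'bindings'."""
    def sub(match):
        return bindings.get(match.group(1), "TRUE")
    return re.sub(r"\{\{(\w+)\}\}", sub, sql)
```

For example, a relative-date widget bound to `{{created_at}}` would inject a date predicate there, while an unbound filter degrades to `WHERE TRUE`.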
|
1.0
|
Dashboard relative date filter widget doesn't work with SQL card field filter variables -
|
process
|
dashboard relative date filter widget doesn t work with sql card field filter variables for a simplified example query as follows sql select cast public user account created at as date as created at count as count from public user account group by cast public user account created at as date order by cast public user account created at as date asc if it s constructed via query builder when i add the question to dashboard and add a relative time filter it would be able to recognize created at as a compatible field but if it s a custom sql question created at would not be shown your browser and the version chrome version bit your operating system ubuntu your databases postgres metabase version and metabase hosting environment docker metabase internal database
| 1
|
131,885
| 10,720,715,533
|
IssuesEvent
|
2019-10-26 19:43:09
|
petdance/html-tidy5
|
https://api.github.com/repos/petdance/html-tidy5
|
closed
|
Build fails on macOS
|
bug tests
|
```
cpanm (App::cpanminus) 1.7044 on perl 5.030000 built for darwin-2level
Work directory is /Users/andy/.cpanm/work/1572113799.93019
You have make /usr/bin/make
You have /usr/local/bin/wget
You have /usr/bin/tar: bsdtar 3.3.2 - libarchive 3.3.2 zlib/1.2.11 liblzma/5.0.5 bz2lib/1.0.6
You have /usr/bin/unzip
Searching HTML::Tidy5 () on cpanmetadb ...
--> Working on HTML::Tidy5
Fetching http://www.cpan.org/authors/id/P/PE/PETDANCE/HTML-Tidy5-1.04.tar.gz
-> OK
Unpacking HTML-Tidy5-1.04.tar.gz
Entering HTML-Tidy5-1.04
Checking configure dependencies from META.json
Checking if you have ExtUtils::MakeMaker 6.58 ... Yes (7.34)
Configuring HTML-Tidy5-1.04
Running Makefile.PL
NOTE: It seems that you don't have LWP::Simple installed.
The webtidy program will not be able to retrieve web pages.
Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for HTML::Tidy5
Writing MYMETA.yml and MYMETA.json
-> OK
Checking dependencies from MYMETA.json ...
Checking if you have Test::Builder 0 ... Yes (1.302162)
Checking if you have constant 0 ... Yes (1.33)
Checking if you have Exporter 0 ... Yes (5.73)
Checking if you have Getopt::Long 0 ... Yes (2.5)
Checking if you have Test::More 0.98 ... Yes (1.302162)
Checking if you have Encode 0 ... Yes (3.01)
Checking if you have Test::Exception 0 ... Yes (0.43)
Checking if you have ExtUtils::MakeMaker 0 ... Yes (7.34)
Checking if you have Carp 0 ... Yes (1.50)
Building and testing HTML-Tidy5-1.04
cp lib/HTML/Tidy5/Message.pm blib/lib/HTML/Tidy5/Message.pm
cp lib/HTML/Tidy5.pm blib/lib/HTML/Tidy5.pm
cp lib/Test/HTML/Tidy5.pm blib/lib/Test/HTML/Tidy5.pm
Running Mkbootstrap for Tidy5 ()
chmod 644 "Tidy5.bs"
"/Users/andy/perl5/perlbrew/perls/perl-5.30.0/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- Tidy5.bs blib/arch/auto/HTML/Tidy5/Tidy5.bs 644
"/Users/andy/perl5/perlbrew/perls/perl-5.30.0/bin/perl" "/Users/andy/perl5/perlbrew/perls/perl-5.30.0/lib/5.30.0/ExtUtils/xsubpp" -typemap '/Users/andy/perl5/perlbrew/perls/perl-5.30.0/lib/5.30.0/ExtUtils/typemap' Tidy5.xs > Tidy5.xsc
mv Tidy5.xsc Tidy5.c
cc -c -I. -I/usr/include/tidy -I/usr/local/include/tidy -I/usr/include/tidy -fno-common -DPERL_DARWIN -mmacosx-version-min=10.14 -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -DPERL_USE_SAFE_PUTENV -O3 -DVERSION=\"1.04\" -DXS_VERSION=\"1.04\" "-I/Users/andy/perl5/perlbrew/perls/perl-5.30.0/lib/5.30.0/darwin-2level/CORE" Tidy5.c
rm -f blib/arch/auto/HTML/Tidy5/Tidy5.bundle
LD_RUN_PATH="/usr/local/lib" cc -mmacosx-version-min=10.14 -bundle -undefined dynamic_lookup -L/usr/local/lib -fstack-protector-strong Tidy5.o -o blib/arch/auto/HTML/Tidy5/Tidy5.bundle \
-ltidy \
chmod 755 blib/arch/auto/HTML/Tidy5/Tidy5.bundle
cp bin/webtidy5 blib/script/webtidy5
"/Users/andy/perl5/perlbrew/perls/perl-5.30.0/bin/perl" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/webtidy5
Manifying 3 pod documents
"/Users/andy/perl5/perlbrew/perls/perl-5.30.0/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- Tidy5.bs blib/arch/auto/HTML/Tidy5/Tidy5.bs 644
PERL_DL_NONLAZY=1 "/Users/andy/perl5/perlbrew/perls/perl-5.30.0/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
# Testing HTML::Tidy5 1.04, tidy 5.6.0, Perl 5.030000, /Users/andy/perl5/perlbrew/perls/perl-5.30.0/bin/perl
[13:16:42] t/00-load.t ................ ok 328 ms ( 0.00 usr 0.00 sys + 0.07 cusr 0.02 csys = 0.09 CPU)
[13:16:42] t/cfg-for-parse.t .......... ok 68 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:43] t/clean-crash.t ............ ok 67 ms ( 0.01 usr 0.00 sys + 0.05 cusr 0.02 csys = 0.08 CPU)
# Failed test '$tidy->clean("") returns empty HTML document'
# at t/clean.t line 41.
# got: '<!DOCTYPE html>
# <html>
# <head>
# <meta name="generator" content="HTML Tidy for HTML5 for Apple macOS version 5.6.0">
# <title></title>
# </head>
# <body>
# </body>
# </html>
# '
# expected: '<!DOCTYPE html>
# <html>
# <head>
# <meta name="generator" content="TIDY">
# <title></title>
# </head>
# <body>
# </body>
# </html>
# '
# Looks like you failed 1 test of 3.
[13:16:43] t/clean.t ..................
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/3 subtests
[13:16:43] t/extra-quote.t ............ ok 77 ms ( 0.00 usr 0.00 sys + 0.07 cusr 0.01 csys = 0.08 CPU)
[13:16:43] t/html_fragment_tidy_ok.t .. ok 93 ms ( 0.00 usr 0.00 sys + 0.07 cusr 0.02 csys = 0.09 CPU)
[13:16:43] t/html_tidy_ok.t ........... ok 85 ms ( 0.00 usr 0.00 sys + 0.07 cusr 0.01 csys = 0.08 CPU)
[13:16:43] t/ignore-text.t ............ ok 69 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:43] t/ignore.t ................. ok 69 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:43] t/illegal-options.t ........ ok 84 ms ( 0.01 usr 0.00 sys + 0.07 cusr 0.02 csys = 0.10 CPU)
[13:16:43] t/levels.t ................. ok 74 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:43] t/message.t ................ ok 76 ms ( 0.01 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.08 CPU)
[13:16:43] t/opt-00.t ................. ok 77 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:43] t/parse-errors.t ........... ok 73 ms ( 0.00 usr 0.00 sys + 0.07 cusr 0.02 csys = 0.09 CPU)
[13:16:43] t/parse.t .................. ok 71 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:44] t/perfect.t ................ ok 83 ms ( 0.00 usr 0.01 sys + 0.06 cusr 0.01 csys = 0.08 CPU)
# Failed test 'Cleaned up properly'
# at t/roundtrip.t line 36.
# got: '<!DOCTYPE html>
# <html>
# <head>
# <meta name="generator" content="HTML Tidy for HTML5 for Apple macOS version 5.6.0">
# <title></title>
# </head>
# <body>
# <a href="http://www.example.com/"><em>This is a test.</em></a>
# </body>
# </html>
# '
# expected: '<!DOCTYPE html>
# <html>
# <head>
# <meta name="generator" content="TIDY">
# <title></title>
# </head>
# <body>
# <a href="http://www.example.com/"><em>This is a test.</em></a>
# </body>
# </html>
# '
# Looks like you failed 1 test of 3.
[13:16:44] t/roundtrip.t ..............
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/3 subtests
[13:16:44] t/segfault-form.t .......... ok 68 ms ( 0.00 usr 0.00 sys + 0.05 cusr 0.01 csys = 0.06 CPU)
[13:16:44] t/simple.t ................. ok 81 ms ( 0.01 usr 0.00 sys + 0.07 cusr 0.01 csys = 0.09 CPU)
[13:16:44] t/too-many-titles.t ........ ok 70 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:44] t/unicode-nbsp.t ........... ok 77 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.02 csys = 0.08 CPU)
# Failed test 'Cleanup didn't break anything'
# at t/unicode.t line 41.
Wide character in print at /Users/andy/perl5/perlbrew/perls/perl-5.30.0/lib/5.30.0/Test2/Formatter/TAP.pm line 124, <DATA> line 10.
# got: '<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
# <html>
# <head>
# <meta name="generator" content="HTML Tidy for HTML5 for Apple macOS version 5.6.0">
# <title>日本語のホムページ</title>
# </head>
# <body>
# <p>Unicodeが好きですか?</p>
# </body>
# </html>
# '
# expected: '<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
# <html>
# <head>
# <meta name="generator" content="TIDY">
# <title>日本語のホムページ</title>
# </head>
# <body>
# <p>Unicodeが好きですか?</p>
# </body>
# </html>
# '
# Looks like you failed 1 test of 8.
# Failed test 'utf8 testing'
# at t/unicode.t line 52.
# Failed test 'Cleanup didn't break anything'
# at t/unicode.t line 66.
Wide character in print at /Users/andy/perl5/perlbrew/perls/perl-5.30.0/lib/5.30.0/Test2/Formatter/TAP.pm line 124, <DATA> line 10.
# got: '<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
# <html>
# <head>
# <meta name="generator" content="HTML Tidy for HTML5 for Apple macOS version 5.6.0">
# <title>日本語のホムページ</title>
# </head>
# <body>
# <p>Unicodeが好きですか?</p>
# </body>
# </html>
# '
# expected: '<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
# <html>
# <head>
# <meta name="generator" content="TIDY">
# <title>日本語のホムページ</title>
# </head>
# <body>
# <p>Unicodeが好きですか?</p>
# </body>
# </html>
# '
# Looks like you failed 1 test of 3.
# Failed test 'Try send bytes to clean method.'
# at t/unicode.t line 67.
# Looks like you failed 2 tests of 2.
[13:16:44] t/unicode.t ................
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/2 subtests
[13:16:44] t/venus.t .................. ok 71 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:44] t/version.t ................ ok 68 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.02 csys = 0.08 CPU)
[13:16:44] t/wordwrap.t ............... ok 70 ms ( 0.00 usr 0.00 sys + 0.06 cusr 0.01 csys = 0.07 CPU)
[13:16:44]
Test Summary Report
-------------------
t/clean.t (Wstat: 256 Tests: 3 Failed: 1)
Failed test: 3
Non-zero exit status: 1
t/roundtrip.t (Wstat: 256 Tests: 3 Failed: 1)
Failed test: 3
Non-zero exit status: 1
t/unicode.t (Wstat: 512 Tests: 2 Failed: 2)
Failed tests: 1-2
Non-zero exit status: 2
Files=25, Tests=85, 2 wallclock secs ( 0.09 usr 0.05 sys + 1.57 cusr 0.33 csys = 2.04 CPU)
Result: FAIL
Failed 3/25 test programs. 4/85 subtests failed.
make: *** [test_dynamic] Error 255
-> FAIL Installing HTML::Tidy5 failed. See /Users/andy/.cpanm/work/1572113799.93019/build.log for details. Retry with --force to force install it.
```
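Every failure in the log above is the same diff: the real output carries `content="HTML Tidy for HTML5 for Apple macOS version 5.6.0"` where the expected document has the placeholder `content="TIDY"`. The tests evidently normalise the version-specific generator line before comparing; a sketch of that normalisation (assumed for illustration, not the module's actual code):

```python
import re

# Sketch: collapse the platform/version-specific generator <meta> line that
# HTML Tidy emits to a fixed placeholder, so documents can be compared
# regardless of which tidy build produced them.

def normalize_generator(html):
    """Replace any 'HTML Tidy ...' generator string with "TIDY"."""
    return re.sub(r'content="HTML Tidy[^"]*"', 'content="TIDY"', html)
```

If the test suite's pattern does not match the macOS generator string, the raw line survives and the comparisons fail exactly as shown.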
|
1.0
|
Build fails on macOS -
|
non_process
|
build fails on macos cpanm app cpanminus on perl built for darwin work directory is users andy cpanm work you have make usr bin make you have usr local bin wget you have usr bin tar bsdtar libarchive zlib liblzma you have usr bin unzip searching html on cpanmetadb working on html fetching ok unpacking html tar gz entering html checking configure dependencies from meta json checking if you have extutils makemaker yes configuring html running makefile pl note it seems that you don t have lwp simple installed the webtidy program will not be able to retrieve web pages checking if your kit is complete looks good generating a unix style makefile writing makefile for html writing mymeta yml and mymeta json ok checking dependencies from mymeta json checking if you have test builder yes checking if you have constant yes checking if you have exporter yes checking if you have getopt long yes checking if you have test more yes checking if you have encode yes checking if you have test exception yes checking if you have extutils makemaker yes checking if you have carp yes building and testing html cp lib html message pm blib lib html message pm cp lib html pm blib lib html pm cp lib test html pm blib lib test html pm running mkbootstrap for chmod bs users andy perlbrew perls perl bin perl mextutils command mm e cp nonempty bs blib arch auto html bs users andy perlbrew perls perl bin perl users andy perlbrew perls perl lib extutils xsubpp typemap users andy perlbrew perls perl lib extutils typemap xs xsc mv xsc c cc c i i usr include tidy i usr local include tidy i usr include tidy fno common dperl darwin mmacosx version min fno strict aliasing pipe fstack protector strong i usr local include dperl use safe putenv dversion dxs version i users andy perlbrew perls perl lib darwin core c rm f blib arch auto html bundle ld run path usr local lib cc mmacosx version min bundle undefined dynamic lookup l usr local lib fstack protector strong o o blib arch auto html bundle ltidy chmod 
blib arch auto html bundle cp bin blib script users andy perlbrew perls perl bin perl mextutils my e my fixin shift blib script manifying pod documents users andy perlbrew perls perl bin perl mextutils command mm e cp nonempty bs blib arch auto html bs perl dl nonlazy users andy perlbrew perls perl bin perl mextutils command mm mtest harness e undef test harness switches test harness blib lib blib arch t t testing html tidy perl users andy perlbrew perls perl bin perl t load t ok ms usr sys cusr csys cpu t cfg for parse t ok ms usr sys cusr csys cpu t clean crash t ok ms usr sys cusr csys cpu failed test tidy clean returns empty html document at t clean t line got expected looks like you failed test of t clean t dubious test returned wstat failed subtests t extra quote t ok ms usr sys cusr csys cpu t html fragment tidy ok t ok ms usr sys cusr csys cpu t html tidy ok t ok ms usr sys cusr csys cpu t ignore text t ok ms usr sys cusr csys cpu t ignore t ok ms usr sys cusr csys cpu t illegal options t ok ms usr sys cusr csys cpu t levels t ok ms usr sys cusr csys cpu t message t ok ms usr sys cusr csys cpu t opt t ok ms usr sys cusr csys cpu t parse errors t ok ms usr sys cusr csys cpu t parse t ok ms usr sys cusr csys cpu t perfect t ok ms usr sys cusr csys cpu failed test cleaned up properly at t roundtrip t line got expected looks like you failed test of t roundtrip t dubious test returned wstat failed subtests t segfault form t ok ms usr sys cusr csys cpu t simple t ok ms usr sys cusr csys cpu t too many titles t ok ms usr sys cusr csys cpu t unicode nbsp t ok ms usr sys cusr csys cpu failed test cleanup didn t break anything at t unicode t line wide character in print at users andy perlbrew perls perl lib formatter tap pm line line got 日本語のホムページ unicodeが好きですか expected 日本語のホムページ unicodeが好きですか looks like you failed test of failed test testing at t unicode t line failed test cleanup didn t break anything at t unicode t line wide character in print at users andy 
perlbrew perls perl lib formatter tap pm line line got 日本語のホムページ unicodeが好きですか expected 日本語のホムページ unicodeが好きですか looks like you failed test of failed test try send bytes to clean method at t unicode t line looks like you failed tests of t unicode t dubious test returned wstat failed subtests t venus t ok ms usr sys cusr csys cpu t version t ok ms usr sys cusr csys cpu t wordwrap t ok ms usr sys cusr csys cpu test summary report t clean t wstat tests failed failed test non zero exit status t roundtrip t wstat tests failed failed test non zero exit status t unicode t wstat tests failed failed tests non zero exit status files tests wallclock secs usr sys cusr csys cpu result fail failed test programs subtests failed make error fail installing html failed see users andy cpanm work build log for details retry with force to force install it
| 0
|
10,386
| 13,195,612,737
|
IssuesEvent
|
2020-08-13 19:00:45
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Decide whether to eliminate the prelude_bazel file
|
P2 team-Starlark type: process
|
(See also previous discussion in #1674 and #3835.)
Separate from the issue of how the prelude file is technically loaded (#11940), and whether it should be restricted to only contain `load()` statements, we need to decide whether we want to eventually eliminate it altogether. That is, we need to decide if it's a good idea to allow users to customize the set of predeclared symbols in their BUILD files.
I recently learned that my strongest objection to the prelude doesn't apply anymore: I was worried about the prelude file leading to repos not being composable, but it turns out that so was everyone else, so now the prelude only applies within the repo where it's defined (#3991).
So the remaining arguments against the prelude are:
- It's harder to read and understand BUILD files without any context on the repo you're looking at (spooky action at a distance).
- Changes to the prelude, or its transitive dependencies, trigger reevaluation of every package. That is, it encourages unnecessary (implicit) `load()` dependencies.
The argument in favor of keeping it is that it's convenient, and allows you to put Starlark rules on the same footing as native rules in terms of their implicit availability in the BUILD environment.
|
1.0
|
Decide whether to eliminate the prelude_bazel file - (See also previous discussion in #1674 and #3835.)
Separate from the issue of how the prelude file is technically loaded (#11940), and whether it should be restricted to only contain `load()` statements, we need to decide whether we want to eventually eliminate it altogether. That is, we need to decide if it's a good idea to allow users to customize the set of predeclared symbols in their BUILD files.
I recently learned that my strongest objection to the prelude doesn't apply anymore: I was worried about the prelude file leading to repos not being composable, but it turns out that so was everyone else, so now the prelude only applies within the repo where it's defined (#3991).
So the remaining arguments against the prelude are:
- It's harder to read and understand BUILD files without any context on the repo you're looking at (spooky action at a distance).
- Changes to the prelude, or its transitive dependencies, trigger reevaluation of every package. That is, it encourages unnecessary (implicit) `load()` dependencies.
The argument in favor of keeping it is that it's convenient, and allows you to put Starlark rules on the same footing as native rules in terms of their implicit availability in the BUILD environment.
|
process
|
decide whether to eliminate the prelude bazel file see also previous discussion in and separate from the issue of how the prelude file is technically loaded and whether it should be restricted to only contain load statements we need to decide whether we want to eventually eliminate it altogether that is we need to decide if it s a good idea to allow users to customize the set of predeclared symbols in their build files i recently learned that my strongest objection to the prelude doesn t apply anymore i was worried about the prelude file leading to repos not being composable but it turns out that so was everyone else so now the prelude only applies within the repo where it s defined so the remaining arguments against the prelude are it s harder to read and understand build files without any context on the repo you re looking at spooky action at a distance changes to the prelude or its transitive dependencies trigger reevaluation of every package that is it encourages unnecessary implicit load dependencies the argument in favor of keeping it is that it s convenient and allows you to put starlark rules on the same footing as native rules in terms of their implicit availability in the build environment
| 1
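The "spooky action at a distance" concern in the record above — BUILD files silently using symbols that were never loaded in that file — can be sketched with a small Python model. This is a hypothetical illustration of the concept, not Bazel's actual loading machinery; the names `eval_build_file`, `prelude`, and `record` are invented for the sketch.

```python
# Conceptual model of a Bazel-style prelude (hypothetical, not Bazel's
# real implementation): symbols from a "prelude" dict are predeclared in
# every BUILD file's environment, so a file can use them without an
# explicit load() statement.

def eval_build_file(source: str, prelude: dict) -> dict:
    """Evaluate a BUILD-like snippet with prelude symbols predeclared."""
    env = dict(prelude)            # implicit, repo-wide symbols
    recorded = []
    env["record"] = recorded.append
    exec(source, {"__builtins__": {}}, env)
    return {"env": env, "recorded": recorded}

# The prelude makes `my_rule` available everywhere in the repo.
prelude = {"my_rule": lambda name: f"rule:{name}"}

build_src = "record(my_rule('lib'))"   # no load() in sight
result = eval_build_file(build_src, prelude)
print(result["recorded"])              # -> ['rule:lib']
```

A reader of `build_src` alone cannot tell where `my_rule` comes from — which is exactly the readability objection, and why a change to `prelude` invalidates every evaluated file.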
|
2,853
| 5,823,869,434
|
IssuesEvent
|
2017-05-07 06:58:42
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Unselect all behavior is not consistent with select all when the data is filtered
|
enhancement inprocess
|
The table is configured to have the selection enabled (`selectRow.mode = 'checkbox'`) and column filtering enabled as well.
Suppose the user filters some rows using column filters and presses the "select all" checkbox (at the selection column header).
As expected, only the visible rows are selected.
Now suppose the user changes the filter to display a different set of rows (possibly seeing some selected rows).
Now pressing the "unselect all" checkbox again unselects **all** rows of the table, both those that are visible and those that are not.
This behavior is surprising and not consistent with "select all" which selects only the visible rows.
I couldn't find a way to configure this behavior. Would it be possible to add a configuration option for this (I guess you couldn't just change it for compatibility reasons)? Thank you!
|
1.0
|
Unselect all behavior is not consistent with select all when the data is filtered - The table is configured to have the selection enabled (`selectRow.mode = 'checkbox'`) and column filtering enabled as well.
Suppose the user filters some rows using column filters and presses the "select all" checkbox (at the selection column header).
As expected, only the visible rows are selected.
Now suppose the user changes the filter to display a different set of rows (possibly seeing some selected rows).
Now pressing the "unselect all" checkbox again unselects **all** rows of the table, both those that are visible and those that are not.
This behavior is surprising and not consistent with "select all" which selects only the visible rows.
I couldn't find a way to configure this behavior. Would it be possible to add a configuration option for this (I guess you couldn't just change it for compatibility reasons)? Thank you!
|
process
|
unselect all behavior is not consistent with select all when the data is filtered the table is configured to have the selection enabled selectrow mode checkbox and column filtering enabled as well suppose the user filters some rows using column filters and presses the select all checkbox at the selection column header as expected only the visible rows are selected now suppose the user changes the filter to display a different set of rows possibly seeing some selected rows now pressing the unselect all checkbox again unselects all rows of the table both those that are visible and those that are not this behavior is surprising and not consistent with select all which selects only the visible rows i couldn t find a way to configure this behavior would it be possible to add a configuration option for this i guess you couldn t just change it for compatibility reasons thank you
| 1
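The asymmetry described in the issue above can be modeled in a few lines of Python. This is a minimal sketch of the reported behavior (hypothetical `Table` class, not react-bootstrap-table's actual source): "select all" operates on the currently filtered rows, while "unselect all" clears the entire selection.

```python
# Minimal model of the inconsistency: select_all touches only visible
# rows, but unselect_all clears ALL rows, including hidden ones.

class Table:
    def __init__(self, rows):
        self.rows = set(rows)
        self.visible = set(rows)
        self.selected = set()

    def filter(self, predicate):
        self.visible = {r for r in self.rows if predicate(r)}

    def select_all(self):
        self.selected |= self.visible     # only the visible rows

    def unselect_all(self):
        self.selected.clear()             # every row — the surprise

t = Table(range(10))
t.filter(lambda r: r < 5)
t.select_all()            # selects rows 0-4
t.filter(lambda r: r >= 3)
t.unselect_all()          # also clears hidden rows 0-2
print(sorted(t.selected))  # -> []
```

The symmetric behavior the reporter expects would be `self.selected -= self.visible`, which unselects only the rows currently shown.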
|
55,870
| 14,072,606,083
|
IssuesEvent
|
2020-11-04 02:19:38
|
junghanlee/juice-shop
|
https://api.github.com/repos/junghanlee/juice-shop
|
opened
|
CVE-2020-7656 (Medium) detected in jquery-1.4.4.min.js
|
security vulnerability
|
## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.4.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js</a></p>
<p>Path to dependency file: juice-shop/node_modules/selenium-webdriver/lib/test/data/mousePositionTracker.html</p>
<p>Path to vulnerable library: juice-shop/node_modules/selenium-webdriver/lib/test/data/js/jquery-1.4.4.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.4.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/junghanlee/juice-shop/commit/765fa79934265f7322f82b9e91a33ec80e92457d">765fa79934265f7322f82b9e91a33ec80e92457d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568">https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: jquery-rails - 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7656 (Medium) detected in jquery-1.4.4.min.js - ## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.4.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js</a></p>
<p>Path to dependency file: juice-shop/node_modules/selenium-webdriver/lib/test/data/mousePositionTracker.html</p>
<p>Path to vulnerable library: juice-shop/node_modules/selenium-webdriver/lib/test/data/js/jquery-1.4.4.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.4.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/junghanlee/juice-shop/commit/765fa79934265f7322f82b9e91a33ec80e92457d">765fa79934265f7322f82b9e91a33ec80e92457d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568">https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: jquery-rails - 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file juice shop node modules selenium webdriver lib test data mousepositiontracker html path to vulnerable library juice shop node modules selenium webdriver lib test data js jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e which results in the enclosed script logic to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery rails step up your open source security game with whitesource
| 0
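The class of bug described in the CVE record above — a closing script tag with whitespace before `>` slipping past a matcher that expects the literal `</script>` — can be demonstrated with a naive regex. This is an illustration of the flaw's shape only, not jQuery's actual code; `naive_strip` and `NAIVE` are invented names.

```python
import re

# Naive sanitizer: strips <script>...</script> pairs, but its closing-tag
# pattern allows no whitespace before '>', mirroring the class of bug
# described in CVE-2020-7656 (illustrative only).
NAIVE = re.compile(r"<script[^>]*>.*?</script>", re.DOTALL | re.IGNORECASE)

def naive_strip(html: str) -> str:
    return NAIVE.sub("", html)

blocked = '<div><script>alert(1)</script></div>'
bypass  = '<div><script>alert(1)</script ></div>'   # note the space

print(naive_strip(blocked))   # -> <div></div>
print(naive_strip(bypass))    # unchanged: the payload survives
```

A tolerant pattern such as `r"<script[^>]*>.*?</script\s*>"` would catch both variants, which is the general shape of the fix.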
|
7,368
| 10,511,763,217
|
IssuesEvent
|
2019-09-27 16:10:12
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
Init flow "printSchema is not a function"
|
bug/2-confirmed kind/regression process/candidate
|
On `alpha.216`
To reproduce:
1. `prisma2 init`
2. `blank project -> sqlite -> JS -> just schema`
3. Error with the debug flag:
```
$ [prisma]$ prisma2 init p2-photon
Processing the blank project
prisma TypeError: templates_1.printSchema is not a function
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58976:136
prisma at step (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58888:23)
prisma at Object.next (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58869:53)
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58863:71
prisma at new Promise (<anonymous>)
prisma at module.exports.1903.__awaiter (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58859:12)
prisma at run (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58939:20)
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58984:9
prisma at commitHookEffectList (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:351366:26)
prisma at commitPassiveHookEffects (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:351390:3) +0ms
prisma TypeError: templates_1.printSchema is not a function
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58976:136
prisma at step (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58888:23)
prisma at Object.next (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58869:53)
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58863:71
prisma at new Promise (<anonymous>)
prisma at module.exports.1903.__awaiter (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58859:12)
prisma at run (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58939:20)
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58984:9
prisma at commitHookEffectList (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:351366:26)
prisma at commitPassiveHookEffects (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:351390:3) +1ms
```
|
1.0
|
Init flow "printSchema is not a function" - On `alpha.216`
To reproduce:
1. `prisma2 init`
2. `blank project -> sqlite -> JS -> just schema`
3. Error with the debug flag:
```
$ [prisma]$ prisma2 init p2-photon
Processing the blank project
prisma TypeError: templates_1.printSchema is not a function
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58976:136
prisma at step (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58888:23)
prisma at Object.next (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58869:53)
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58863:71
prisma at new Promise (<anonymous>)
prisma at module.exports.1903.__awaiter (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58859:12)
prisma at run (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58939:20)
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58984:9
prisma at commitHookEffectList (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:351366:26)
prisma at commitPassiveHookEffects (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:351390:3) +0ms
prisma TypeError: templates_1.printSchema is not a function
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58976:136
prisma at step (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58888:23)
prisma at Object.next (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58869:53)
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58863:71
prisma at new Promise (<anonymous>)
prisma at module.exports.1903.__awaiter (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58859:12)
prisma at run (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58939:20)
prisma at /Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:58984:9
prisma at commitHookEffectList (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:351366:26)
prisma at commitPassiveHookEffects (/Users/divyendusingh/.nvm/versions/node/v10.16.3/lib/node_modules/prisma2/build/index.js:351390:3) +1ms
```
|
process
|
init flow printschema is not a function on alpha to reproduce init blank project sqlite js just schema error with debug flag init photon processing the blank project prisma typeerror templates printschema is not a function prisma at users divyendusingh nvm versions node lib node modules build index js prisma at step users divyendusingh nvm versions node lib node modules build index js prisma at object next users divyendusingh nvm versions node lib node modules build index js prisma at users divyendusingh nvm versions node lib node modules build index js prisma at new promise prisma at module exports awaiter users divyendusingh nvm versions node lib node modules build index js prisma at run users divyendusingh nvm versions node lib node modules build index js prisma at users divyendusingh nvm versions node lib node modules build index js prisma at commithookeffectlist users divyendusingh nvm versions node lib node modules build index js prisma at commitpassivehookeffects users divyendusingh nvm versions node lib node modules build index js prisma typeerror templates printschema is not a function prisma at users divyendusingh nvm versions node lib node modules build index js prisma at step users divyendusingh nvm versions node lib node modules build index js prisma at object next users divyendusingh nvm versions node lib node modules build index js prisma at users divyendusingh nvm versions node lib node modules build index js prisma at new promise prisma at module exports awaiter users divyendusingh nvm versions node lib node modules build index js prisma at run users divyendusingh nvm versions node lib node modules build index js prisma at users divyendusingh nvm versions node lib node modules build index js prisma at commithookeffectlist users divyendusingh nvm versions node lib node modules build index js prisma at commitpassivehookeffects users divyendusingh nvm versions node lib node modules build index js
| 1
|
7,837
| 2,935,495,184
|
IssuesEvent
|
2015-06-30 14:45:34
|
rancherio/rancher
|
https://api.github.com/repos/rancherio/rancher
|
closed
|
Instance stuck in "post-network" state when activating service that is linked to dns service.
|
area/networking area/server bug status/to-test
|
Server version - V0.25.0-rc2
I was doing scenarios relating to a service linked to a dns service which is linked to multiple services, scenarios like scaling services and stopping/starting instances.
After a few scenarios, I see that one of the instances gets stuck in "post-network" state when activating a service that is linked to the dns service. It stays in "starting" state forever.
I activate services and add servicelinks between services in parallel:
service.activate()
consumed_service.activate()
consumed_service1.activate()
dns.activate()
service.addservicelink(serviceId=dns.id)
dns.addservicelink(serviceId=consumed_service.id)
dns.addservicelink(serviceId=consumed_service1.id)
After this , I am not able to ping between containers from other host and the host where I see the container stuck on "Starting" state at "Post Network" state.
Is the instance that is stuck in "post-network" causing the network connectivity issues to the host ?
|
1.0
|
Instance stuck in "post-network" state when activating service that is linked to dns service. - Server version - V0.25.0-rc2
I was doing scenarios relating to a service linked to a dns service which is linked to multiple services, scenarios like scaling services and stopping/starting instances.
After a few scenarios, I see that one of the instances gets stuck in "post-network" state when activating a service that is linked to the dns service. It stays in "starting" state forever.
I activate services and add servicelinks between services in parallel:
service.activate()
consumed_service.activate()
consumed_service1.activate()
dns.activate()
service.addservicelink(serviceId=dns.id)
dns.addservicelink(serviceId=consumed_service.id)
dns.addservicelink(serviceId=consumed_service1.id)
After this , I am not able to ping between containers from other host and the host where I see the container stuck on "Starting" state at "Post Network" state.
Is the instance that is stuck in "post-network" causing the network connectivity issues to the host ?
|
non_process
|
instance stuck in post network state when activating service that is linked to dns service server version i was doing scenarios relating to a service linked to a dns service which is linked to multiple services scenarios like scaling services and stopping starting instances after a few scenarios i see that one of the instances gets stuck in post network state when activating a service that is linked to the dns service it stays in starting state forever i activate services and add servicelinks between services in parallel service activate consumed service activate consumed activate dns activate service addservicelink serviceid dns id dns addservicelink serviceid consumed service id dns addservicelink serviceid consumed id after this i am not able to ping between containers from other host and the host where i see the container stuck on starting state at post network state is the instance that is stuck in post network causing the network connectivity issues to the host
| 0
|
22,647
| 31,895,827,065
|
IssuesEvent
|
2023-09-18 01:31:55
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - earliestEraOrLowestErathem
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_earliestEraOrLowestErathem
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): earliestEraOrLowestErathem
* Term label (English, not normative): Earliest Era Or Lowest Erathem
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the earliest possible geochronologic era or lowest chronostratigraphic erathem attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Cenozoic, Mesozoic
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - earliestEraOrLowestErathem - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_earliestEraOrLowestErathem
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): earliestEraOrLowestErathem
* Term label (English, not normative): Earliest Era Or Lowest Erathem
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the earliest possible geochronologic era or lowest chronostratigraphic erathem attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Cenozoic, Mesozoic
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term earliesteraorlowesterathem term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes earliesteraorlowesterathem term label english not normative earliest era or lowest erathem organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the earliest possible geochronologic era or lowest chronostratigraphic erathem attributable to the stratigraphic horizon from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative cenozoic mesozoic refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
10,238
| 13,098,666,571
|
IssuesEvent
|
2020-08-03 19:57:38
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Migrate google-cloud-pubsub to the microgenerator
|
api: pubsub type: process
|
Migrate google-cloud-pubsub to the microgenerator. This involves the following steps:
* [ ] Write synth file and generate `google-cloud-pubsub-v1`
* [ ] Make sure the new libraries are configured in kokoro
* [ ] Release `google-cloud-pubsub-v1`
* [ ] Switch `google-cloud-pubsub` backend to the versioned gems. That is:
* Rip out synth and all the generated code
* Add `google-cloud-pubsub-v1` as a dependency
* Update the veneer code to the microgenerator usage
* [ ] Release `google-cloud-pubsub` update
I do not believe samples need to be updated, unless they invoke the low-level interface directly.
|
1.0
|
Migrate google-cloud-pubsub to the microgenerator - Migrate google-cloud-pubsub to the microgenerator. This involves the following steps:
* [ ] Write synth file and generate `google-cloud-pubsub-v1`
* [ ] Make sure the new libraries are configured in kokoro
* [ ] Release `google-cloud-pubsub-v1`
* [ ] Switch `google-cloud-pubsub` backend to the versioned gems. That is:
* Rip out synth and all the generated code
* Add `google-cloud-pubsub-v1` as a dependency
* Update the veneer code to the microgenerator usage
* [ ] Release `google-cloud-pubsub` update
I do not believe samples need to be updated, unless they invoke the low-level interface directly.
|
process
|
migrate google cloud pubsub to the microgenerator migrate google cloud pubsub to the microgenerator this involves the following steps write synth file and generate google cloud pubsub make sure the new libraries are configured in kokoro release google cloud pubsub switch google cloud pubsub backend to the versioned gems that is rip out synth and all the generated code add google cloud pubsub as a dependency update the veneer code to the microgenerator usage release google cloud pubsub update i do not believe samples need to be updated unless they invoke the low level interface directly
| 1
|
10,349
| 13,174,861,548
|
IssuesEvent
|
2020-08-11 23:43:53
|
GoogleCloudPlatform/stackdriver-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/stackdriver-sandbox
|
closed
|
Create tests for Custom Cloud Image
|
lang: yaml priority: p2 type: process
|
Tests should be added to CI to ensure nothing breaks whenever changes are made to the custom cloud image.
The Custom Cloud Image is created from a Dockerfile stored in the project repo (`cloud-shell/Dockerfile`).
|
1.0
|
Create tests for Custom Cloud Image - Tests should be added to CI to ensure nothing breaks whenever changes are made to the custom cloud image.
The Custom Cloud Image is created from a Dockerfile stored in the project repo (`cloud-shell/Dockerfile`).
|
process
|
create tests for custom cloud image tests should be added to ci to ensure nothing breaks whenever changes are made to the custom cloud image the custom cloud image is created from a dockerfile stored in the project repo cloud shell dockerfile
| 1
|
2,630
| 3,789,759,444
|
IssuesEvent
|
2016-03-21 19:02:00
|
ilri/DSpace
|
https://api.github.com/repos/ilri/DSpace
|
closed
|
Migrate to Let's Encrypt for TLS certificates
|
infrastructure
|
We need to migrate to Let's Encrypt's certificate authority for TLS certificates for the following domains:
- cgspace.cgiar.org
- mahider.ilri.org
- dspace.ilri.org
First these need to be activated with Let's Encrypt, then we need to add support to the [infrastructure playbooks](https://github.com/ilri/rmg-ansible-public), both for the nginx vhost templates as well as a cron job to do the certificate renewals. This first phase will happen during the next deploy window for CGSpace, since we'll have a window of downtime anyways.
|
1.0
|
Migrate to Let's Encrypt for TLS certificates - We need to migrate to Let's Encrypt's certificate authority for TLS certificates for the following domains:
- cgspace.cgiar.org
- mahider.ilri.org
- dspace.ilri.org
First these need to be activated with Let's Encrypt, then we need to add support to the [infrastructure playbooks](https://github.com/ilri/rmg-ansible-public), both for the nginx vhost templates as well as a cron job to do the certificate renewals. This first phase will happen during the next deploy window for CGSpace, since we'll have a window of downtime anyways.
|
non_process
|
migrate to let s encrypt for tls certificates we need to migrate to let s encrypt s certificate authority for tls certificates for the following domains cgspace cgiar org mahider ilri org dspace ilri org first these need to be activated with let s encrypt then we need to add support to the both for the nginx vhost templates as well as a cron job to do the certificate renewals this first phase will happen during the next deploy window for cgspace since we ll have a window of downtime anyways
| 0
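The renewal cron job described in the Let's Encrypt record above can be sketched as a crontab fragment. This is a hypothetical illustration only: it assumes the certbot client and an nginx reload hook, and the actual task names and paths used in the ilri/rmg-ansible-public playbooks are not given in the record.

```shell
# Hypothetical renewal schedule: attempt renewal twice daily (certbot only
# renews certificates close to expiry) and reload nginx after a successful renewal.
0 3,15 * * * /usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

In an Ansible-managed deployment such an entry would typically be created with the `cron` module rather than edited by hand.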
|
1,461
| 4,043,792,530
|
IssuesEvent
|
2016-05-21 00:00:24
|
meteor/meteor
|
https://api.github.com/repos/meteor/meteor
|
closed
|
Eliminate phantomjs from dev bundle
|
feature fixed Impact:some Project:Release Process Severity:has-workaround
|
The `phantomjs` package is one of the largest npm packages in the dev bundle, at 45MB. It is currently only used in [Velocity tests](https://github.com/meteor/meteor/blob/d6288aceed339e01c120cc19e69308d0a3d8d80e/tools/runners/run-velocity.js#L4) and [self-tests](https://github.com/meteor/meteor/blob/d6288aceed339e01c120cc19e69308d0a3d8d80e/tools/tool-testing/selftest.js#L6), neither of which are commonly run by most Meteor developers. We should consider eliminating this dependency for the 1.3.3 release.
|
1.0
|
Eliminate phantomjs from dev bundle - The `phantomjs` package is one of the largest npm packages in the dev bundle, at 45MB. It is currently only used in [Velocity tests](https://github.com/meteor/meteor/blob/d6288aceed339e01c120cc19e69308d0a3d8d80e/tools/runners/run-velocity.js#L4) and [self-tests](https://github.com/meteor/meteor/blob/d6288aceed339e01c120cc19e69308d0a3d8d80e/tools/tool-testing/selftest.js#L6), neither of which are commonly run by most Meteor developers. We should consider eliminating this dependency for the 1.3.3 release.
|
process
|
eliminate phantomjs from dev bundle the phantomjs package is one of the largest npm packages in the dev bundle at it is currently only used in and neither of which are commonly run by most meteor developers we should consider eliminating this dependency for the release
| 1
|
75
| 2,524,965,682
|
IssuesEvent
|
2015-01-20 21:17:42
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Immediately accessing properties on Process objects after Start() can throw NullReferenceException
|
bug System.Diagnostics.Process
|
By calling Process.Start(), then immediately (as in the next line of code) accessing a property on it such as PeakVirtualMemorySize64 can throw a very strange NullReferenceException in the property accessors. (strange because the text is "error CS0648: '' is a type not supported by the language")

This is further exacerbated by the lack of (obvious) spin mechanisms on Process class such as the WaitForInputIdle method. Workarounds are possible but not ideal. Stepping through in a debugger, the problem will not reproduce.
Noted on : NonpagedSystemMemorySize64
PagedMemorySize64
PagedSystemMemorySize64
PeakPagedMemorySize64
PeakWorkingSet64
|
1.0
|
Immediately accessing properties on Process objects after Start() can throw NullReferenceException - By calling Process.Start(), then immediately (as in the next line of code) accessing a property on it such as PeakVirtualMemorySize64 can throw a very strange NullReferenceException in the property accessors. (strange because the text is "error CS0648: '' is a type not supported by the language")

This is further exacerbated by the lack of (obvious) spin mechanisms on Process class such as the WaitForInputIdle method. Workarounds are possible but not ideal. Stepping through in a debugger, the problem will not reproduce.
Noted on : NonpagedSystemMemorySize64
PagedMemorySize64
PagedSystemMemorySize64
PeakPagedMemorySize64
PeakWorkingSet64
|
process
|
immediately accessing properties on process objects after start can throw nullreferenceexception by calling process start then immediately as in the next line of code accessing a property on it such as can throw a very strange nullreferenceexception in the property accessors strange because the text is error is a type not supported by the language this is further exacerbated by the lack of obvious spin mechanisms on process class such as the waitforinputidle method workarounds are possible but not ideal stepping through in a debugger the problem will not reproduce noted on
| 1
|
11,813
| 14,630,295,401
|
IssuesEvent
|
2020-12-23 17:25:57
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
3botdeployer on testnet: no date accepted
|
process_wontfix type_bug
|
Whatever date/hour I introduce to deploy a 3bot on testnet, I always get an error that date must be at least 19 minutes in the future.

|
1.0
|
3botdeployer on testnet: no date accepted - Whatever date/hour I introduce to deploy a 3bot on testnet, I always get an error that date must be at least 19 minutes in the future.

|
process
|
on testnet no date accepted whatever date hour i introduce to deploy a on testnet i always get an error that date must be at least minutes in the future
| 1
|
19,725
| 26,073,842,326
|
IssuesEvent
|
2022-12-24 07:09:25
|
pyanodon/pybugreports
|
https://api.github.com/repos/pyanodon/pybugreports
|
closed
|
Incompatible with Train Construction Site - missing crafting category
|
mod:pypostprocessing postprocess-fail compatibility
|
### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other
### What is the problem?
34.316 Script @__pypostprocessing__/data-final-fixes.lua:142: AUTOTECH START
40.580 Error ModManager.cpp:1558: Failed to load mod "pypostprocessing":
ERROR: Missing crafting category: trainassembling (ingredients: 1, fluids in: 0, fluids out:1), for fluid-wagon / fluid-wagon-fluid[fluid-wagon]
stack traceback:
[C]: in function 'error'
__pypostprocessing__/prototypes/functions/data_parser.lua:239: in function 'parse_recipe'
__pypostprocessing__/prototypes/functions/data_parser.lua:342: in function 'parse_tech'
__pypostprocessing__/prototypes/functions/data_parser.lua:134: in function 'run'
__pypostprocessing__/prototypes/functions/auto_tech.lua:34: in function 'run'
__pypostprocessing__/data-final-fixes.lua:144: in main chunk
40.582 Loading mod core 0.0.0 (data.lua)
40.738 Checksum for core: 1476961332
40.773 Error ModManager.cpp:1558: Error in assignID: recipe-category with name 'crafting' does not exist.
### Steps to reproduce
Install : https://mods.factorio.com/mod/trainConstructionSite
### Additional context
_No response_
### Log file
_No response_
|
2.0
|
Incompatible with Train Construction Site - missing crafting category - ### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other
### What is the problem?
34.316 Script @__pypostprocessing__/data-final-fixes.lua:142: AUTOTECH START
40.580 Error ModManager.cpp:1558: Failed to load mod "pypostprocessing":
ERROR: Missing crafting category: trainassembling (ingredients: 1, fluids in: 0, fluids out:1), for fluid-wagon / fluid-wagon-fluid[fluid-wagon]
stack traceback:
[C]: in function 'error'
__pypostprocessing__/prototypes/functions/data_parser.lua:239: in function 'parse_recipe'
__pypostprocessing__/prototypes/functions/data_parser.lua:342: in function 'parse_tech'
__pypostprocessing__/prototypes/functions/data_parser.lua:134: in function 'run'
__pypostprocessing__/prototypes/functions/auto_tech.lua:34: in function 'run'
__pypostprocessing__/data-final-fixes.lua:144: in main chunk
40.582 Loading mod core 0.0.0 (data.lua)
40.738 Checksum for core: 1476961332
40.773 Error ModManager.cpp:1558: Error in assignID: recipe-category with name 'crafting' does not exist.
### Steps to reproduce
Install : https://mods.factorio.com/mod/trainConstructionSite
### Additional context
_No response_
### Log file
_No response_
|
process
|
incompatible with train construction site missing crafting category mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem script pypostprocessing data final fixes lua autotech start error modmanager cpp failed to load mod pypostprocessing error missing crafting category trainassembling ingredients fluids in fluids out for fluid wagon fluid wagon fluid stack traceback in function error pypostprocessing prototypes functions data parser lua in function parse recipe pypostprocessing prototypes functions data parser lua in function parse tech pypostprocessing prototypes functions data parser lua in function run pypostprocessing prototypes functions auto tech lua in function run pypostprocessing data final fixes lua in main chunk loading mod core data lua checksum for core error modmanager cpp error in assignid recipe category with name crafting does not exist steps to reproduce install additional context no response log file no response
| 1
|
4,290
| 7,191,245,539
|
IssuesEvent
|
2018-02-02 20:14:58
|
openthread/openthread
|
https://api.github.com/repos/openthread/openthread
|
closed
|
Switching from `astyle` to `uncrustify`.
|
enhancement process
|
The OpenThread project currently uses [astyle](http://astyle.sourceforge.net/) to automatically format code and provide baseline enforcement of the OpenThread code style. In general, I've found astyle to be fairly limited in its flexibility.
This is a proposal to switch to [uncrustify](https://github.com/uncrustify/uncrustify). For better (or worse), uncrustify supports many more options than astyle. In short, uncrustify allows automated formatting and enforcement of more code style aspects, including:
- Left-aligned variable declarations.
- Aligned assignment operators.
- Left-aligned end-of-line comments.
- ... and many more.
|
1.0
|
Switching from `astyle` to `uncrustify`. - The OpenThread project currently uses [astyle](http://astyle.sourceforge.net/) to automatically format code and provide baseline enforcement of the OpenThread code style. In general, I've found astyle to be fairly limited in its flexibility.
This is a proposal to switch to [uncrustify](https://github.com/uncrustify/uncrustify). For better (or worse), uncrustify supports many more options than astyle. In short, uncrustify allows automated formatting and enforcement of more code style aspects, including:
- Left-aligned variable declarations.
- Aligned assignment operators.
- Left-aligned end-of-line comments.
- ... and many more.
|
process
|
switching from astyle to uncrustify the openthread project currently uses to automatically format code and provide baseline enforcement of the openthread code style in general i ve found astyle to be fairly limited in its flexibility this is a proposal to switch to for better or worse uncrustify supports many more options than astyle in short uncrustify allows automated formatting and enforcement of more code style aspects including left aligned variable declarations aligned assignment operators left aligned end of line comments and many more
| 1
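The alignment features listed in the astyle-to-uncrustify record above can be sketched as a small `uncrustify.cfg` fragment. The option names below are real uncrustify options, but the values are hypothetical and are not necessarily the settings OpenThread adopted.

```ini
# Align consecutive variable definitions (left-aligned declarations).
align_var_def_span   = 2
# Align assignment operators across adjacent lines.
align_assign_span    = 2
# Align trailing (end-of-line) comments.
align_right_cmt_span = 2
```

Each `*_span` value roughly controls how many lines uncrustify scans when grouping statements into one alignment block; `0` disables that alignment.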
|
18,977
| 24,964,413,776
|
IssuesEvent
|
2022-11-01 18:07:47
|
keras-team/keras-cv
|
https://api.github.com/repos/keras-team/keras-cv
|
closed
|
Is mixed precision, mixed_float16, not supported for certain augmentations
|
contribution-welcome preprocessing
|
I found that certain augmentations fail when the mixed precision policy is set to mixed_float16 but work with float32. So far I've found this to be true for RandomColorDegeneration and Solarization - both run individually and within the RandAugment layer.
Is there a work around? I did try specifying the dtype for the RandAugment layer but that didn't change the outcome.
|
1.0
|
Is mixed precision, mixed_float16, not supported for certain augmentations - I found that certain augmentations fail when the mixed precision policy is set to mixed_float16 but work with float32. So far I've found this to be true for RandomColorDegeneration and Solarization - both run individually and within the RandAugment layer.
Is there a work around? I did try specifying the dtype for the RandAugment layer but that didn't change the outcome.
|
process
|
is mixed precision mixed not supported for certain augmentations i found that certain augmentations fail when the mixed precision policy is set to mixed but work with so far i ve found this to be true for randomcolordegeneration and solarization both run individually and within the randaugment layer is there a work around i did try specifying the dtype for the randaugment layer but that didn t change the outcome
| 1
|
72,985
| 15,252,069,811
|
IssuesEvent
|
2021-02-20 01:24:50
|
rsoreq/grafana
|
https://api.github.com/repos/rsoreq/grafana
|
closed
|
CVE-2020-8203 (High) detected in multiple libraries - autoclosed
|
security vulnerability
|
## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-2.4.2.tgz</b>, <b>lodash-4.13.1.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-4.17.15.tgz</b></p></summary>
<p>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: grafana/emails/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: grafana/emails/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-assemble-0.6.3.tgz (Root Library)
- resolve-dep-0.5.4.tgz
- cwd-0.3.7.tgz
- findup-sync-0.1.3.tgz
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.13.1.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.13.1.tgz">https://registry.npmjs.org/lodash/-/lodash-4.13.1.tgz</a></p>
<p>Path to dependency file: grafana/emails/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: grafana/emails/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-premailer-1.1.0.tgz (Root Library)
- :x: **lodash-4.13.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-usemin-3.1.1.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: grafana/emails/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: grafana/emails/node_modules/lodash/package.json,grafana/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- sass-lint-1.12.1.tgz (Root Library)
- eslint-2.13.1.tgz
- inquirer-0.12.0.tgz
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"2.4.2","isTransitiveDependency":true,"dependencyTree":"grunt-assemble:0.6.3;resolve-dep:0.5.4;cwd:0.3.7;findup-sync:0.1.3;lodash:2.4.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.13.1","isTransitiveDependency":true,"dependencyTree":"grunt-premailer:1.1.0;lodash:4.13.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","isTransitiveDependency":true,"dependencyTree":"grunt-usemin:3.1.1;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","isTransitiveDependency":true,"dependencyTree":"sass-lint:1.12.1;eslint:2.13.1;inquirer:0.12.0;lodash:4.17.15","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-8203 (High) detected in multiple libraries - autoclosed - ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-2.4.2.tgz</b>, <b>lodash-4.13.1.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-4.17.15.tgz</b></p></summary>
<p>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: grafana/emails/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: grafana/emails/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-assemble-0.6.3.tgz (Root Library)
- resolve-dep-0.5.4.tgz
- cwd-0.3.7.tgz
- findup-sync-0.1.3.tgz
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.13.1.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.13.1.tgz">https://registry.npmjs.org/lodash/-/lodash-4.13.1.tgz</a></p>
<p>Path to dependency file: grafana/emails/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: grafana/emails/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-premailer-1.1.0.tgz (Root Library)
- :x: **lodash-4.13.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-usemin-3.1.1.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: grafana/emails/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: grafana/emails/node_modules/lodash/package.json,grafana/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- sass-lint-1.12.1.tgz (Root Library)
- eslint-2.13.1.tgz
- inquirer-0.12.0.tgz
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"2.4.2","isTransitiveDependency":true,"dependencyTree":"grunt-assemble:0.6.3;resolve-dep:0.5.4;cwd:0.3.7;findup-sync:0.1.3;lodash:2.4.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.13.1","isTransitiveDependency":true,"dependencyTree":"grunt-premailer:1.1.0;lodash:4.13.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","isTransitiveDependency":true,"dependencyTree":"grunt-usemin:3.1.1;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","isTransitiveDependency":true,"dependencyTree":"sass-lint:1.12.1;eslint:2.13.1;inquirer:0.12.0;lodash:4.17.15","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file grafana emails node modules lodash package json path to vulnerable library grafana emails node modules lodash package json dependency hierarchy grunt assemble tgz root library resolve dep tgz cwd tgz findup sync tgz x lodash tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file grafana emails node modules lodash package json path to vulnerable library grafana emails node modules lodash package json dependency hierarchy grunt premailer tgz root library x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file grafana node modules lodash package json path to vulnerable library grafana node modules lodash package json dependency hierarchy grunt usemin tgz root library x lodash tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file grafana emails node modules lodash package json path to vulnerable library grafana emails node modules lodash package json grafana node modules lodash package json dependency hierarchy sass lint tgz root library eslint tgz inquirer tgz x lodash tgz vulnerable library found in base branch master vulnerability details prototype pollution attack when using zipobjectdeep in lodash before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution 
lodash isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails prototype pollution attack when using zipobjectdeep in lodash before vulnerabilityurl
| 0
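The lodash CVE record above concerns prototype pollution via `_.zipObjectDeep`. A minimal vanilla-JavaScript sketch of the underlying pattern (not lodash's actual implementation) shows how a deep property setter that does not guard `__proto__` path segments ends up writing onto `Object.prototype`:

```javascript
// Hypothetical deep setter in the spirit of _.zipObjectDeep's path handling.
// It walks a path, creating intermediate objects as needed, with no guard
// against "__proto__" (or "constructor"/"prototype") path segments.
function unsafeSetPath(obj, path, value) {
  let cur = obj;
  for (let i = 0; i < path.length - 1; i++) {
    if (typeof cur[path[i]] !== "object" || cur[path[i]] === null) {
      cur[path[i]] = {};
    }
    cur = cur[path[i]]; // for "__proto__" this steps onto Object.prototype
  }
  cur[path[path.length - 1]] = value;
  return obj;
}

unsafeSetPath({}, ["__proto__", "polluted"], true);
console.log(({}).polluted); // → true: every object now inherits the property
```

The upgrade the record recommends (lodash >= 4.17.19) addresses this by refusing to traverse such keys; the same guard is the standard mitigation for hand-rolled deep setters.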
|
14,414
| 17,465,593,353
|
IssuesEvent
|
2021-08-06 16:19:32
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Typo
|
automation/svc triaged cxp doc-bug process-automation/subsvc Pri2
|
[Enter feedback here]
I believe the reference to 'Get-AutomationConnection' should be 'Get-AzAutomationConnection'
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 23c183d0-5012-e2e1-5562-69135b3f6509
* Version Independent ID: 7f36ff87-e24a-7442-8d42-f621f5391814
* Content: [Create modular runbooks in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks)
* Content Source: [articles/automation/automation-child-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-child-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Typo -
[Enter feedback here]
I believe the reference to 'Get-AutomationConnection' should be 'Get-AzAutomationConnection'
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 23c183d0-5012-e2e1-5562-69135b3f6509
* Version Independent ID: 7f36ff87-e24a-7442-8d42-f621f5391814
* Content: [Create modular runbooks in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks)
* Content Source: [articles/automation/automation-child-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-child-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
typo i believe the reference to get automationconnection should be get azautomationconnection document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
94,382
| 3,925,136,406
|
IssuesEvent
|
2016-04-22 17:48:07
|
ualbertalib/HydraNorth
|
https://api.github.com/repos/ualbertalib/HydraNorth
|
opened
|
File download link change
|
priority:high size:medium
|
Create a route for file download: instead of
```https://era.library.ualberta.ca/downloads/j098zd881```
use the file label (i.e. the original file name) and place it under ```/files/``` instead of ```/downloads/```, to produce:
```https://era.library.ualberta.ca/files/j098zd881/120715_parliamentnow.pdf```
This meets Google's and LAC's requirements, for the downloadable file to be in the same "directory" as the landing page for the item, and for the download link to end in ".pdf" for thesis pdfs.
The old url should issue a permanent redirect to the new one: it will have to do a lookup to determine the file label, in the way we do with old ERA uuids: #651
Closes #848, #849
|
1.0
|
File download link change - Create a route for file download: instead of
```https://era.library.ualberta.ca/downloads/j098zd881```
use the file label (i.e. the original file name) and place it under ```/files/``` instead of ```/downloads/```, to produce:
```https://era.library.ualberta.ca/files/j098zd881/120715_parliamentnow.pdf```
This meets Google's and LAC's requirements, for the downloadable file to be in the same "directory" as the landing page for the item, and for the download link to end in ".pdf" for thesis pdfs.
The old url should issue a permanent redirect to the new one: it will have to do a lookup to determine the file label, in the way we do with old ERA uuids: #651
Closes #848, #849
|
non_process
|
file download link change create a route for file download instead of use the file label i e the original file name and place it under files instead of downloads to produce this meets google s and lac s requirements for the downloadable file to be in the same directory as the landing page for the item and for the download link to end in pdf for thesis pdfs the old url should issue a permanent redirect to the new one it will have to do a lookup to determine the file label in the way we do with old era uuids closes
| 0
|
64,510
| 15,896,497,500
|
IssuesEvent
|
2021-04-11 17:38:32
|
haskell/text
|
https://api.github.com/repos/haskell/text
|
closed
|
Can't build benchmarks
|
build failure
|
When trying to build benchmarks, I get a linker error, as the `cbits.c` from the `text` i have installed clash with the `cbits.c` from the `text` source code i'm currently working on and which i'm trying to benchmark.
```
Linking dist/build/text-benchmarks/text-benchmarks ...
/home/kuko/.cabal/lib/x86_64-linux-ghc-8.0.2/text-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh/libHStext-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh.a(cbits.o): In function `_hs_text_memcmp':
(.text+0x20): multiple definition of `_hs_text_memcmp'
dist/build/text-benchmarks/text-benchmarks-tmp/../cbits/cbits.o:cbits.c:(.text+0x0): first defined here
/home/kuko/.cabal/lib/x86_64-linux-ghc-8.0.2/text-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh/libHStext-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh.a(cbits.o): In function `_hs_text_decode_utf8_state':
(.text+0xe0): multiple definition of `_hs_text_decode_utf8_state'
dist/build/text-benchmarks/text-benchmarks-tmp/../cbits/cbits.o:cbits.c:(.text+0x20): first defined here
/home/kuko/.cabal/lib/x86_64-linux-ghc-8.0.2/text-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh/libHStext-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh.a(cbits.o): In function `_hs_text_decode_utf8':
(.text+0x2a0): multiple definition of `_hs_text_decode_utf8'
dist/build/text-benchmarks/text-benchmarks-tmp/../cbits/cbits.o:cbits.c:(.text+0x150): first defined here
collect2: error: ld returned 1 exit status
```
The problem is, that `criterion` has `text` as a dependency.
Currently, as a workaround, I just rename the functions in `cbits.c`, then `cabal build` just works.
* Is there a better solution?
* If not, I can send a PR adding a compilation flag and do the renaming, so `cabal build` just works.
* Is this related to https://github.com/haskell/cabal/issues/1575 ?
|
1.0
|
Can't build benchmarks - When trying to build benchmarks, I get a linker error, as the `cbits.c` from the `text` i have installed clash with the `cbits.c` from the `text` source code i'm currently working on and which i'm trying to benchmark.
```
Linking dist/build/text-benchmarks/text-benchmarks ...
/home/kuko/.cabal/lib/x86_64-linux-ghc-8.0.2/text-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh/libHStext-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh.a(cbits.o): In function `_hs_text_memcmp':
(.text+0x20): multiple definition of `_hs_text_memcmp'
dist/build/text-benchmarks/text-benchmarks-tmp/../cbits/cbits.o:cbits.c:(.text+0x0): first defined here
/home/kuko/.cabal/lib/x86_64-linux-ghc-8.0.2/text-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh/libHStext-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh.a(cbits.o): In function `_hs_text_decode_utf8_state':
(.text+0xe0): multiple definition of `_hs_text_decode_utf8_state'
dist/build/text-benchmarks/text-benchmarks-tmp/../cbits/cbits.o:cbits.c:(.text+0x20): first defined here
/home/kuko/.cabal/lib/x86_64-linux-ghc-8.0.2/text-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh/libHStext-1.2.2.2-KC7dWoG09dA1F6jKj5GSqh.a(cbits.o): In function `_hs_text_decode_utf8':
(.text+0x2a0): multiple definition of `_hs_text_decode_utf8'
dist/build/text-benchmarks/text-benchmarks-tmp/../cbits/cbits.o:cbits.c:(.text+0x150): first defined here
collect2: error: ld returned 1 exit status
```
The problem is, that `criterion` has `text` as a dependency.
Currently, as a workaround, I just rename the functions in `cbits.c`, then `cabal build` just works.
* Is there a better solution?
* If not, I can send a PR adding a compilation flag and do the renaming, so `cabal build` just works.
* Is this related to https://github.com/haskell/cabal/issues/1575 ?
|
non_process
|
can t build benchmarks when trying to build benchmarks i get a linker error as the cbits c from the text i have installed clash with the cbits c from the text source code i m currently working on and which i m trying to benchmark linking dist build text benchmarks text benchmarks home kuko cabal lib linux ghc text libhstext a cbits o in function hs text memcmp text multiple definition of hs text memcmp dist build text benchmarks text benchmarks tmp cbits cbits o cbits c text first defined here home kuko cabal lib linux ghc text libhstext a cbits o in function hs text decode state text multiple definition of hs text decode state dist build text benchmarks text benchmarks tmp cbits cbits o cbits c text first defined here home kuko cabal lib linux ghc text libhstext a cbits o in function hs text decode text multiple definition of hs text decode dist build text benchmarks text benchmarks tmp cbits cbits o cbits c text first defined here error ld returned exit status the problem is that criterion has text as a dependency currently as a workaround i just rename the functions in cbits c then cabal build just works is there a better solution if not i can send a pr adding a compilation flag and do the renaming so cabal build just works is this related to
| 0
|
60,388
| 12,102,228,361
|
IssuesEvent
|
2020-04-20 16:21:47
|
LorenzoMei/iNeed
|
https://api.github.com/repos/LorenzoMei/iNeed
|
closed
|
Remove duplicated LOC
|
Code Smell
|
- src/.../view/components/ViewUserController.java 32-52, src/logicviewcomponentsViewWalletController.java 24-43
|
1.0
|
Remove duplicated LOC - - src/.../view/components/ViewUserController.java 32-52, src/logicviewcomponentsViewWalletController.java 24-43
|
non_process
|
remove duplicated loc src view components viewusercontroller java src logicviewcomponentsviewwalletcontroller java
| 0
|
8,952
| 12,059,212,914
|
IssuesEvent
|
2020-04-15 18:51:54
|
fablabbcn/fablabs.io
|
https://api.github.com/repos/fablabbcn/fablabs.io
|
closed
|
New Referee system after FAB14
|
Approval Process
|
At FAB14 several options and strategies were discussed for improving the approval process. The first steps to be taken are:
- _Referees_ are now `Users`, not `Labs`
- _Superadmins_ should be able to select who is a _Referee_ in the _Backstage_, as it is now with `Labs`
- There is only one _Referee_ per approval process, and it is randomly selected
- There should be a maximum number of `Labs` to approve that a _Referee_ can be assigned to each year
|
1.0
|
New Referee system after FAB14 - At FAB14 several options and strategies were discussed for improving the approval process. The first steps to be taken are:
- _Referees_ are now `Users`, not `Labs`
- _Superadmins_ should be able to select who is a _Referee_ in the _Backstage_, as it is now with `Labs`
- There is only one _Referee_ per approval process, and it is randomly selected
- There should be a maximum number of `Labs` to approve that a _Referee_ can be assigned to each year
|
process
|
new referee system after at several options and strategies were discussed for improving the approval process the first steps to be taken are referees are now users not labs superadmins should be able to select who is a referee in the backstage as it is now with labs there is only one referee per approval process and it is randomly selected there should be a maximum number of labs to approve that a referee can be assigned to each year
| 1
|
89,961
| 11,306,360,015
|
IssuesEvent
|
2020-01-18 13:33:25
|
godaddy-wordpress/go
|
https://api.github.com/repos/godaddy-wordpress/go
|
opened
|
`entry-content` class missing
|
bug design
|
**Describe the bug:**
`entry-content` class is missing. Is this intentional? Thanks!
|
1.0
|
`entry-content` class missing - **Describe the bug:**
`entry-content` class is missing. Is this intentional? Thanks!
|
non_process
|
entry content class missing describe the bug entry content class is missing is this intentional thanks
| 0
|
67,320
| 3,268,886,114
|
IssuesEvent
|
2015-10-23 13:57:47
|
bosik/diacomp
|
https://api.github.com/repos/bosik/diacomp
|
opened
|
Deleted foods / dishes are available to add into meal
|
app:win priority:critical type:bug
|
**STR:**
1. In the food base:
* Create a food in the food base, remember the exact name
* Delete it
2. In the diary:
* Create meal
* Type the deleted food's name into text field
* Press Enter
**Expected result:**
* Nothing happened, as food is deleted
**Actual result:**
* The focus is switched to the Mass field, so user can input mass and add the deleted food
|
1.0
|
Deleted foods / dishes are available to add into meal - **STR:**
1. In the food base:
* Create a food in the food base, remember the exact name
* Delete it
2. In the diary:
* Create meal
* Type the deleted food's name into text field
* Press Enter
**Expected result:**
* Nothing happened, as food is deleted
**Actual result:**
* The focus is switched to the Mass field, so user can input mass and add the deleted food
|
non_process
|
deleted foods dishes are available to add into meal str in the food base create a food in the food base remember the exact name delete it in the diary create meal type the deleted food s name into text field press enter expected result nothing happened as food is deleted actual result the focus is switched to the mass field so user can input mass and add the deleted food
| 0
|
14,917
| 18,354,359,894
|
IssuesEvent
|
2021-10-08 16:04:39
|
oasis-tcs/csaf
|
https://api.github.com/repos/oasis-tcs/csaf
|
closed
|
Correct links in status section - these point to OpenC2 instead of CSAF TC
|
csaf 2.0 editorial oasis_tc_process CSDPR01_feedback non_material
|
# Situation / Comment
> Status
>
> The invitation for public comments on the website
>
> https://docs.oasis-open.org/csaf/csaf/v2.0/csd01/csaf-v2.0-csd01.html
>
> directs to the TC page for OpenC2
> (https://www.oasis-open.org/committees/openc2/)
# Proposal (from review comment)
correct the links in the section status for the next iteration / progression of the draft:
Instead of
> ### Status:
>
> This document was last revised or approved by the OASIS Common Security Advisory Framework (CSAF) TC on the above date. The level of approval is also listed above. Check the "Latest stage" location noted above for possible later revisions of this document. Any other numbered Versions and other technical work produced by the Technical Committee (TC) are listed at https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=openc2#technical.
>
> TC members should send comments on this specification to the TC's email list. Others should send comments to the TC's public comment list, after subscribing to it by following the instructions at the "Send A Comment" button on the TC's web page at https://www.oasis-open.org/committees/openc2/.
>
> This specification is provided under the Non-Assertion Mode of the OASIS IPR Policy, the mode chosen when the Technical Committee was established. For information on whether any patents have been disclosed that may be essential to implementing this specification, and any offers of patent licensing terms, please refer to the Intellectual Property Rights section of the TC's web page (https://www.oasis-open.org/committees/openc2/ipr.php).
>
> Note that any machine-readable content (Computer Language Definitions) declared Normative for this Work Product is provided in separate plain text files. In the event of a discrepancy between any such plain text file and display content in the Work Product's prose narrative document(s), the content in the separate plain text file prevails.
write:
> ### Status
>
> This document was last revised or approved by the OASIS Common Security Advisory Framework (CSAF) TC on the above date. The level of approval is also listed above. Check the "Latest stage" location noted above for possible later revisions of this document. Any other numbered Versions and other technical work produced by the Technical Committee (TC) are listed at https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=csaf#technical.
>
> TC members should send comments on this specification to the TC's email list. Others should send comments to the TC's public comment list, after subscribing to it by following the instructions at the "Send A Comment" button on the TC's web page at https://www.oasis-open.org/committees/csaf/.
>
> This specification is provided under the Non-Assertion Mode of the OASIS IPR Policy, the mode chosen when the Technical Committee was established. For information on whether any patents have been disclosed that may be essential to implementing this specification, and any offers of patent licensing terms, please refer to the Intellectual Property Rights section of the TC's web page (https://www.oasis-open.org/committees/csaf/ipr.php).
>
> Note that any machine-readable content (Computer Language Definitions) declared Normative for this Work Product is provided in separate plain text files. In the event of a discrepancy between any such plain text file and display content in the Work Product's prose narrative document(s), the content in the separate plain text file prevails.
## Scope CSDPR01 Public Review comment
Received from Christian Keil (DFN-CERT) per email to the public csaf comments mailing list as part of https://lists.oasis-open.org/archives/csaf-comment/202109/msg00000.html @santosomar, @tschmidtb51
|
1.0
|
Correct links in status section - these point to OpenC2 instead of CSAF TC - # Situation / Comment
> Status
>
> The invitation for public comments on the website
>
> https://docs.oasis-open.org/csaf/csaf/v2.0/csd01/csaf-v2.0-csd01.html
>
> directs to the TC page for OpenC2
> (https://www.oasis-open.org/committees/openc2/)
# Proposal (from review comment)
correct the links in the section status for the next iteration / progression of the draft:
Instead of
> ### Status:
>
> This document was last revised or approved by the OASIS Common Security Advisory Framework (CSAF) TC on the above date. The level of approval is also listed above. Check the "Latest stage" location noted above for possible later revisions of this document. Any other numbered Versions and other technical work produced by the Technical Committee (TC) are listed at https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=openc2#technical.
>
> TC members should send comments on this specification to the TC's email list. Others should send comments to the TC's public comment list, after subscribing to it by following the instructions at the "Send A Comment" button on the TC's web page at https://www.oasis-open.org/committees/openc2/.
>
> This specification is provided under the Non-Assertion Mode of the OASIS IPR Policy, the mode chosen when the Technical Committee was established. For information on whether any patents have been disclosed that may be essential to implementing this specification, and any offers of patent licensing terms, please refer to the Intellectual Property Rights section of the TC's web page (https://www.oasis-open.org/committees/openc2/ipr.php).
>
> Note that any machine-readable content (Computer Language Definitions) declared Normative for this Work Product is provided in separate plain text files. In the event of a discrepancy between any such plain text file and display content in the Work Product's prose narrative document(s), the content in the separate plain text file prevails.
write:
> ### Status
>
> This document was last revised or approved by the OASIS Common Security Advisory Framework (CSAF) TC on the above date. The level of approval is also listed above. Check the "Latest stage" location noted above for possible later revisions of this document. Any other numbered Versions and other technical work produced by the Technical Committee (TC) are listed at https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=csaf#technical.
>
> TC members should send comments on this specification to the TC's email list. Others should send comments to the TC's public comment list, after subscribing to it by following the instructions at the "Send A Comment" button on the TC's web page at https://www.oasis-open.org/committees/csaf/.
>
> This specification is provided under the Non-Assertion Mode of the OASIS IPR Policy, the mode chosen when the Technical Committee was established. For information on whether any patents have been disclosed that may be essential to implementing this specification, and any offers of patent licensing terms, please refer to the Intellectual Property Rights section of the TC's web page (https://www.oasis-open.org/committees/csaf/ipr.php).
>
> Note that any machine-readable content (Computer Language Definitions) declared Normative for this Work Product is provided in separate plain text files. In the event of a discrepancy between any such plain text file and display content in the Work Product's prose narrative document(s), the content in the separate plain text file prevails.
## Scope CSDPR01 Public Review comment
Received from Christian Keil (DFN-CERT) per email to the public csaf comments mailing list as part of https://lists.oasis-open.org/archives/csaf-comment/202109/msg00000.html @santosomar, @tschmidtb51
|
process
|
correct links in status section these point to instead of csaf tc situation comment status the invitation for public comments on the website directs to the tc page for proposal from review comment correct the links in the section status for the next iteration progression of the draft instead of status this document was last revised or approved by the oasis common security advisory framework csaf tc on the above date the level of approval is also listed above check the latest stage location noted above for possible later revisions of this document any other numbered versions and other technical work produced by the technical committee tc are listed at tc members should send comments on this specification to the tc s email list others should send comments to the tc s public comment list after subscribing to it by following the instructions at the send a comment button on the tc s web page at this specification is provided under the non assertion mode of the oasis ipr policy the mode chosen when the technical committee was established for information on whether any patents have been disclosed that may be essential to implementing this specification and any offers of patent licensing terms please refer to the intellectual property rights section of the tc s web page note that any machine readable content computer language definitions declared normative for this work product is provided in separate plain text files in the event of a discrepancy between any such plain text file and display content in the work product s prose narrative document s the content in the separate plain text file prevails write status this document was last revised or approved by the oasis common security advisory framework csaf tc on the above date the level of approval is also listed above check the latest stage location noted above for possible later revisions of this document any other numbered versions and other technical work produced by the technical committee tc are listed at tc members should send comments on this specification to the tc s email list others should send comments to the tc s public comment list after subscribing to it by following the instructions at the send a comment button on the tc s web page at this specification is provided under the non assertion mode of the oasis ipr policy the mode chosen when the technical committee was established for information on whether any patents have been disclosed that may be essential to implementing this specification and any offers of patent licensing terms please refer to the intellectual property rights section of the tc s web page note that any machine readable content computer language definitions declared normative for this work product is provided in separate plain text files in the event of a discrepancy between any such plain text file and display content in the work product s prose narrative document s the content in the separate plain text file prevails scope public review comment received from christian keil dfn cert per email to the public csaf comments mailing list as part of santosomar
| 1
|
3,763
| 6,735,255,141
|
IssuesEvent
|
2017-10-18 21:03:48
|
EPFLMachineLearningTeam01/Project1
|
https://api.github.com/repos/EPFLMachineLearningTeam01/Project1
|
closed
|
Do histograms on features
|
data processing
|
:clock1: 3 hours
- [x] Add basic histograms
- [x] Add scatter plots feature vs feature
- [x] Plot feature vs **y** (answer)
- [x] Color classes on histograms and scatter plots
- [x] Boxplot
|
1.0
|
Do histograms on features - :clock1: 3 hours
- [x] Add basic histograms
- [x] Add scatter plots feature vs feature
- [x] Plot feature vs **y** (answer)
- [x] Color classes on histograms and scatter plots
- [x] Boxplot
|
process
|
do histograms on features hours add basic histograms add scatter plots feature vs feature plot feature vs y answer color classes on histograms and scatter plots boxplot
| 1
|
7,452
| 10,560,607,372
|
IssuesEvent
|
2019-10-04 14:13:46
|
endlessm/azafea
|
https://api.github.com/repos/endlessm/azafea
|
closed
|
Allow ignoring some metrics events
|
endless event processors enhancement
|
Some events have changed UUID over time, and the old UUIDs have been deprecated.
Currently those old events end up in the "unknown" tables, where they accumulate for no purpose.
We should have an ignore list for those old events.
|
1.0
|
Allow ignoring some metrics events - Some events have changed UUID over time, and the old UUIDs have been deprecated.
Currently those old events end up in the "unknown" tables, where they accumulate for no purpose.
We should have an ignore list for those old events.
|
process
|
allow ignoring some metrics events some events have changed uuid over time and the old uuids have been deprecated currently those old events end up in the unknown tables where they accumulate for no purpose we should have an ignore list for those old events
| 1
|
7,142
| 10,282,515,186
|
IssuesEvent
|
2019-08-26 11:22:29
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Windows builds are broken, cannot find GTest::gmock.
|
priority: p1 type: bug type: process
|
The Windows builds are broken. My guess is that the CI build installed a new version of vcpkg. We are trying to link against `GTest::gmock`, but vcpkg provides `GMock::gmock`.
That name change seems to have been introduced by vcpkg, I will be filing a bug with vcpkg or with googletest, depending on where the name was changed. Either way, we need to fix our builds.
|
1.0
|
Windows builds are broken, cannot find GTest::gmock. - The Windows builds are broken. My guess is that the CI build installed a new version of vcpkg. We are trying to link against `GTest::gmock`, but vcpkg provides `GMock::gmock`.
That name change seems to have been introduced by vcpkg, I will be filing a bug with vcpkg or with googletest, depending on where the name was changed. Either way, we need to fix our builds.
|
process
|
windows builds are broken cannot find gtest gmock the windows builds are broken my guess is that the ci build installed a new version of vcpkg we are trying to link against gtest gmock but vcpkg provides gmock gmock that name change seems to have been introduced by vcpkg i will be filing a bug with vcpkg or with googletest depending on where the name was changed either way we need to fix our builds
| 1
|
13,927
| 16,685,114,767
|
IssuesEvent
|
2021-06-08 07:11:10
|
SeisNN/SeisNN
|
https://api.github.com/repos/SeisNN/SeisNN
|
closed
|
generate_tfrecord replace by zeros
|
bug preprocessing
|
**Describe the bug**
CWB資料產生出來的TFRecord會常常有突然斷掉剩下都被0補起來的現象
CWB資料為不連續資料
1.假設資料長度從01:15:33_01:16:12長39秒
2.若依照P波挑速度並且隨機位移選擇到的時間比起時時間更早的話,取到的資料會不足30秒,舉例來說若隨機到的時間為01:15:23_01:15:53,從CWB_mseed取到的波型會只有01:15:33~01:15:53長20秒,剩下空的10秒鐘便會被0取代,變成我們看到的從中間斷掉的樣子
**Screenshots**


此兩張圖應該為接起來的,但是會因為上述原因斷掉,並且形成假的S波label.
|
1.0
|
generate_tfrecord replace by zeros - **Describe the bug**
CWB資料產生出來的TFRecord會常常有突然斷掉剩下都被0補起來的現象
CWB資料為不連續資料
1.假設資料長度從01:15:33_01:16:12長39秒
2.若依照P波挑速度並且隨機位移選擇到的時間比起時時間更早的話,取到的資料會不足30秒,舉例來說若隨機到的時間為01:15:23_01:15:53,從CWB_mseed取到的波型會只有01:15:33~01:15:53長20秒,剩下空的10秒鐘便會被0取代,變成我們看到的從中間斷掉的樣子
**Screenshots**


此兩張圖應該為接起來的,但是會因為上述原因斷掉,並且形成假的S波label.
|
process
|
generate tfrecord replace by zeros describe the bug cwb資料為不連續資料 若依照p波挑速度並且隨機位移選擇到的時間比起時時間更早的話, , ,從cwb , ,變成我們看到的從中間斷掉的樣子 screenshots 此兩張圖應該為接起來的,但是會因為上述原因斷掉,並且形成假的s波label
| 1
|
10,516
| 13,285,782,225
|
IssuesEvent
|
2020-08-24 08:41:44
|
googleapis/google-cloudevents-dotnet
|
https://api.github.com/repos/googleapis/google-cloudevents-dotnet
|
closed
|
Move to GitHub Actions
|
api: events type: process
|
Now that google-cloudevents and functions-framework-dotnet are on GH Actions, it makes sense for this repo to do so too.
|
1.0
|
Move to GitHub Actions - Now that google-cloudevents and functions-framework-dotnet are on GH Actions, it makes sense for this repo to do so too.
|
process
|
move to github actions now that google cloudevents and functions framework dotnet are on gh actions it makes sense for this repo to do so too
| 1
|
14,989
| 18,535,807,465
|
IssuesEvent
|
2021-10-21 11:22:10
|
opensafely-core/job-server
|
https://api.github.com/repos/opensafely-core/job-server
|
opened
|
Add question to confirm which backend data a user requires access to
|
application-process
|
## Question
Which backend data environment are you requesting access to?
## Hint text
OpenSAFELY is under rapid iteration and development and as a consequence the features and data available in the underlying data environments differ. Currently all approvals are granted to OpenSAFELY-TPP by default and requests for OpenSAFELY-EMIS are considered by exception. OpenSAFELY-EMIS approvals for external users will be considered for
1. Studies where it is advantageous to have a complete national picture or a sample size greater than contained in OpenSAFELY-TPP
1. Studies that currently only involve GP data, or where no new bespoke feature/variable development is required.
1. Studies where the team have advanced skills and can contribute to active development of the shared OpenSAFELY codebase for all subsequent users.
## Follow-up question
If requesting EMIS please specify the justification
|
1.0
|
Add question to confirm which backend data a user requires access to - ## Question
Which backend data environment are you requesting access to?
## Hint text
OpenSAFELY is under rapid iteration and development and as a consequence the features and data available in the underlying data environments differ. Currently all approvals are granted to OpenSAFELY-TPP by default and requests for OpenSAFELY-EMIS are considered by exception. OpenSAFELY-EMIS approvals for external users will be considered for
1. Studies where it is advantageous to have a complete national picture or a sample size greater than contained in OpenSAFELY-TPP
1. Studies that currently only involve GP data, or where no new bespoke feature/variable development is required.
1. Studies where the team have advanced skills and can contribute to active development of the shared OpenSAFELY codebase for all subsequent users.
## Follow-up question
If requesting EMIS please specify the justification
|
process
|
add question to confirm which backend data a user requires access to question which backend data environment are you requesting access to hint text opensafely is under rapid iteration and development and as a consequence the features and data available in the underlying data environments differ currently all approvals are granted to opensafely tpp by default and requests for opensafely emis are considered by exception opensafely emis approvals for external users will be considered for studies where it is advantageous to have a complete national picture or a sample size greater than contained in opensafely tpp studies that currently only involve gp data or where no new bespoke feature variable development is required studies where the team have advanced skills and can contribute to active development of the shared opensafely codebase for all subsequent users follow up question if requesting emis please specify the justification
| 1
|
371,246
| 10,963,635,475
|
IssuesEvent
|
2019-11-27 20:12:27
|
gitblit/gitblit
|
https://api.github.com/repos/gitblit/gitblit
|
closed
|
very slow repository access over web and very slow startup
|
Priority-Medium Status-Fixed Type-Performance
|
Hi,
we have around 60 repositories which weigh around 12GB in total.
Server startup: 2016-06-23 22:23:53 [INFO ] Started @174304ms
accessing a repository with 8,083 changes and 671 tags from the past 2 years takes around 20 seconds,
by accessing I mean going to the summary page. This repository weighs around 700MB.
We've tried to move tickets into redis, but it didn't help so we switched back to file based.
What can we change to make it faster? Is it possible at all in gitblit?
Box is CentOS 6 with 2 CPUs and 4GB of RAM. Current version is 1.8.0
a few of the settings from gitblit.properties are:
git.cacheRepositoryList = true
git.enableGarbageCollection = false
git.garbageCollectionHour = 0
git.defaultGarbageCollectionThreshold = 500k
git.defaultGarbageCollectionPeriod = 7
git.mirrorPeriod = 30 mins
git.packedGitWindowSize = 8k
git.packedGitLimit = 1500m
git.deltaBaseCacheLimit = 10m
git.packedGitOpenFiles = 256
git.streamFileThreshold = 50m
git.maxObjectSizeLimit = 0
git.maxPackSizeLimit = -1
tickets.service = com.gitblit.tickets.FileTicketService
tickets.perPage = 25
|
1.0
|
very slow repository access over web and very slow startup - Hi,
we have around 60 repositories which weigh around 12GB in total.
Server startup: 2016-06-23 22:23:53 [INFO ] Started @174304ms
accessing a repository with 8,083 changes and 671 tags from the past 2 years takes around 20 seconds,
by accessing I mean going to the summary page. This repository weighs around 700MB.
We've tried to move tickets into redis, but it didn't help so we switched back to file based.
What can we change to make it faster? Is it possible at all in gitblit?
Box is CentOS 6 with 2 CPUs and 4GB of RAM. Current version is 1.8.0
a few of the settings from gitblit.properties are:
git.cacheRepositoryList = true
git.enableGarbageCollection = false
git.garbageCollectionHour = 0
git.defaultGarbageCollectionThreshold = 500k
git.defaultGarbageCollectionPeriod = 7
git.mirrorPeriod = 30 mins
git.packedGitWindowSize = 8k
git.packedGitLimit = 1500m
git.deltaBaseCacheLimit = 10m
git.packedGitOpenFiles = 256
git.streamFileThreshold = 50m
git.maxObjectSizeLimit = 0
git.maxPackSizeLimit = -1
tickets.service = com.gitblit.tickets.FileTicketService
tickets.perPage = 25
|
non_process
|
very slow repository access over web and very slow startup hi we have around repositories whom weight around server startup started accessing repository with changes tags in past years takes around seconds by accessing i mean going to summary page this repository weights around we ve tried to move tickets into redis but it didn t help so we switched back to file based what can we change to make it faster is it possible at all in gitblit box is centos with cpus and of ram current version is few of settings from gitblit properties are git cacherepositorylist true git enablegarbagecollection false git garbagecollectionhour git defaultgarbagecollectionthreshold git defaultgarbagecollectionperiod git mirrorperiod mins git packedgitwindowsize git packedgitlimit git deltabasecachelimit git packedgitopenfiles git streamfilethreshold git maxobjectsizelimit git maxpacksizelimit tickets service com gitblit tickets fileticketservice tickets perpage
| 0
|
15,393
| 19,578,934,565
|
IssuesEvent
|
2022-01-04 18:33:22
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release checklist 0.47
|
enhancement process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] PRs and issues that are merged/closed have milestone populated
- [x] PRs and issues targeting release are merged, closed, or re-targeted
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
- [x] Deploy Rosetta API
- [x] Rosetta tests
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
- [x] Deploy Rosetta API
- [x] Rosetta tests
## Testnet
- [x] Deploy to VM
- [x] Deploy Rosetta API
- [x] Rosetta tests
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Rosetta tests
### Alternatives
_No response_
|
1.0
|
Release checklist 0.47 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] PRs and issues that are merged/closed have milestone populated
- [x] PRs and issues targeting release are merged, closed, or re-targeted
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
- [x] Deploy Rosetta API
- [x] Rosetta tests
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
- [x] Deploy Rosetta API
- [x] Rosetta tests
## Testnet
- [x] Deploy to VM
- [x] Deploy Rosetta API
- [x] Rosetta tests
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Rosetta tests
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation prs and issues that are merged closed have milestone populated prs and issues targeting release are merged closed or re targeted github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm deploy rosetta api rosetta tests performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm deploy rosetta api rosetta tests testnet deploy to vm deploy rosetta api rosetta tests mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm rosetta tests alternatives no response
| 1
|
15,260
| 19,190,521,407
|
IssuesEvent
|
2021-12-05 22:38:58
|
km4ack/pi-build
|
https://api.github.com/repos/km4ack/pi-build
|
closed
|
Conky Temp INOP under Raspian Bullseye (SOLVED)
|
in process Bug-Minor
|
Raspbian Bullseye does not contain a _/opt/vc/bin_ directory for the _vcgencmd_ utility.
After Pi-Build install, Conky configuration files check the temperature via _{exec /opt/vc/bin/vcgencmd measure_temp ... }_ line.
To fix, remove the full path from each Conky config file. (e.g. _{exec vcgencmd measure_temp ...}_)
|
1.0
|
Conky Temp INOP under Raspian Bullseye (SOLVED) - Raspbian Bullseye does not contain a _/opt/vc/bin_ directory for the _vcgencmd_ utility.
After Pi-Build install, Conky configuration files check the temperature via _{exec /opt/vc/bin/vcgencmd measure_temp ... }_ line.
To fix, remove the full path from each Conky config file. (e.g. _{exec vcgencmd measure_temp ...}_)
|
process
|
conky temp inop under raspian bullseye solved raspbian bullseye does not contain a opt vc bin directory for the vcgencmd utility after pi build install conky configuration files check the temperature via exec opt vc bin vcgencmd measure temp line to fix remove the full path from each conky config file e g exec vcgencmd measure temp
| 1
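The fix in the row above can be sketched on a throwaway config line; the demo file name is made up, and on a real system the same replacement would be looped over each Conky config file that Pi-Build installed:

```python
from pathlib import Path

# Demo of the fix on a sample config file: strip the hard-coded
# /opt/vc/bin/ prefix so the bare vcgencmd found on PATH is used.
cfg = Path("conky_demo.conf")
cfg.write_text("${exec /opt/vc/bin/vcgencmd measure_temp}\n")
cfg.write_text(cfg.read_text().replace("/opt/vc/bin/vcgencmd", "vcgencmd"))
print(cfg.read_text().strip())  # ${exec vcgencmd measure_temp}
```

The same one-line replacement can equally be done with sed or any editor; only the path prefix changes.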
|
17,251
| 23,033,496,475
|
IssuesEvent
|
2022-07-22 16:04:07
|
ROCmSoftwarePlatform/MITunaX
|
https://api.github.com/repos/ROCmSoftwarePlatform/MITunaX
|
closed
|
Per config online tuning verification
|
enhancement process changes large task priority_medium
|
While tuning, Tuna should check the previous results of the tuning (if any) and make sure there are no:
1. Regressions at a reasonable tolerance level
2. Warnings or errors emanating from corrupt or obsolete entries.
|
1.0
|
Per config online tuning verification - While tuning, Tuna should check the previous results of the tuning (if any) and make sure there are no:
1. Regressions at a reasonable tolerance level
2. Warnings or errors emanating from corrupt or obsolete entries.
|
process
|
per config online tuning verification while tuning tuna should check the previous results of the tuning if any and make sure there are no regressions at a reasonable tolerance level warnings or errors emanating from corrupt or obsolete entrees
| 1
|
16,838
| 22,087,336,700
|
IssuesEvent
|
2022-06-01 01:04:28
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
opened
|
Add acceptance test support for eth_getTransactionCount
|
enhancement P2 process
|
### Problem
The current acceptance tests implemented in https://github.com/hashgraph/hedera-json-rpc-relay/pull/119 were not able to include eth_getTransactionCount
### Solution
Investigate why the call produces the error below and resolve it
```
err: {
"type": "PrecheckStatusError",
"message": "transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID",
"stack":
StatusError: transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID
at new PrecheckStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/PrecheckStatusError.cjs:43:5)
at AccountInfoQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:431:12)
at CostQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/CostQuery.cjs:155:24)
at CostQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:519:22)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at AccountInfoQuery.getCost (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/account/AccountInfoQuery.cjs:144:16)
at AccountInfoQuery._beforeExecute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:267:28)
at AccountInfoQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:411:5)
"name": "StatusError",
"status": {
"_code": 15
},
```
### Alternatives
_No response_
|
1.0
|
Add acceptance test support for eth_getTransactionCount - ### Problem
The current acceptance tests implemented in https://github.com/hashgraph/hedera-json-rpc-relay/pull/119 were not able to include eth_getTransactionCount
### Solution
Investigate why the call produces the error below and resolve it
```
err: {
"type": "PrecheckStatusError",
"message": "transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID",
"stack":
StatusError: transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID
at new PrecheckStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/PrecheckStatusError.cjs:43:5)
at AccountInfoQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:431:12)
at CostQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/CostQuery.cjs:155:24)
at CostQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:519:22)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at AccountInfoQuery.getCost (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/account/AccountInfoQuery.cjs:144:16)
at AccountInfoQuery._beforeExecute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:267:28)
at AccountInfoQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:411:5)
"name": "StatusError",
"status": {
"_code": 15
},
```
### Alternatives
_No response_
|
process
|
add acceptance test support for eth gettransactioncount problem the current acceptance tests implemented in was not able to include eth gettransactioncount solution investigate why call produces and resolve err type precheckstatuserror message transaction failed precheck with status invalid account id stack statuserror transaction failed precheck with status invalid account id at new precheckstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib precheckstatuserror cjs at accountinfoquery mapstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib query query cjs at costquery mapstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib query costquery cjs at costquery execute hedera json rpc relay packages relay node modules hashgraph sdk lib executable cjs at processticksandrejections node internal process task queues at accountinfoquery getcost hedera json rpc relay packages relay node modules hashgraph sdk lib account accountinfoquery cjs at accountinfoquery beforeexecute hedera json rpc relay packages relay node modules hashgraph sdk lib query query cjs at accountinfoquery execute hedera json rpc relay packages relay node modules hashgraph sdk lib executable cjs name statuserror status code alternatives no response
| 1
|
98,921
| 8,685,918,351
|
IssuesEvent
|
2018-12-03 09:22:20
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
closed
|
FX Testing 3 : ApiV1OrgsFindByNameNameGetQueryParamPagesizeDdos
|
FX Testing 3
|
Project : FX Testing 3
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NzFhZjk0NjUtMzg0NC00ZDVlLWI2ZWQtYzBjOWE2MTBmNGQ0; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Dec 2018 04:47:04 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/orgs/find-by-name/KMNHfoen?pageSize=1001
Request :
Response :
{
"timestamp" : "2018-12-03T04:47:04.499+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/orgs/find-by-name/KMNHfoen"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot ---
|
1.0
|
FX Testing 3 : ApiV1OrgsFindByNameNameGetQueryParamPagesizeDdos - Project : FX Testing 3
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NzFhZjk0NjUtMzg0NC00ZDVlLWI2ZWQtYzBjOWE2MTBmNGQ0; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Dec 2018 04:47:04 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/orgs/find-by-name/KMNHfoen?pageSize=1001
Request :
Response :
{
"timestamp" : "2018-12-03T04:47:04.499+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/orgs/find-by-name/KMNHfoen"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot ---
|
non_process
|
fx testing project fx testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api orgs find by name kmnhfoen logs assertion resolved to result assertion resolved to result fx bot
| 0
|
32,837
| 7,606,261,511
|
IssuesEvent
|
2018-04-30 12:43:21
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
[4.0] Installation step Sample Data fails
|
J4 Issue No Code Attached Yet
|
### Steps to reproduce the issue
- 4.0.0-dev of today.
- Started installation.
- Selected MySQLi as database type.
- Database was created.
- configuration.php was written with
`public $dbtype = 'mysqli';`
- Then I skipped "Install Language packages" and
- continued with "Install sample data"
There I have an endless rotating Joomla logo and a message "Warning! Please select the database type"

I tried several times to install sample data in the past, since the first alpha release of Joomla 4.
Never successful.
|
1.0
|
[4.0] Installation step Sample Data fails - ### Steps to reproduce the issue
- 4.0.0-dev of today.
- Started installation.
- Selected MySQLi as database type.
- Database was created.
- configuration.php was written with
`public $dbtype = 'mysqli';`
- Then I skipped "Install Language packages" and
- continued with "Install sample data"
There I have an endless rotating Joomla logo and a message "Warning! Please select the database type"

I tried several times to install sample data in the past, since the first alpha release of Joomla 4.
Never successful.
|
non_process
|
installation step sample data fails steps to reproduce the issue dev of today started installation selected mysqli as database type database was created configuration php was written with public dbtype mysqli then i skipped install language packages and continued with install sample data there i have an endless rotating joomla logo and a message warning please select the database type i tried several times to install sample datas in the past since the first alpha release of joomla never successfull
| 0
|
15,554
| 19,703,503,146
|
IssuesEvent
|
2022-01-12 19:07:59
|
googleapis/java-orchestration-airflow
|
https://api.github.com/repos/googleapis/java-orchestration-airflow
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'orchestration-airflow' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'orchestration-airflow' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname orchestration airflow invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
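The two scan findings in the row above boil down to simple field checks; a minimal sketch follows, where the set of allowed release levels and the sample metadata values are assumptions for illustration, not the linter's actual schema:

```python
# Illustrative version of the two checks from the scan output above.
# The allowed release levels and the metadata values are made up
# for the demo; the real linter validates against its own schema.
meta = {"api_shortname": "orchestration-airflow", "release_level": "ga"}
allowed_levels = {"preview", "stable"}

problems = []
if meta.get("release_level") not in allowed_levels:
    problems.append("release_level must be one of %s" % sorted(allowed_levels))
if not meta.get("api_shortname"):
    problems.append("api_shortname is missing or empty")

for p in problems:
    print("*", p)
```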
|
30,172
| 4,566,162,827
|
IssuesEvent
|
2016-09-15 05:21:57
|
NishantUpadhyay-BTC/BLISS-Issue-Tracking
|
https://api.github.com/repos/NishantUpadhyay-BTC/BLISS-Issue-Tracking
|
closed
|
#1428 - Office UI: User Record: Phone Number format validation too strict
|
Change Request Deployed to Test
|
I think we've been here before on the Guest UI -
If the phone number entered in the field does not conform to validation format, then the radio button for "preferred" cannot be selected. I think the problem is with validation on these fields. Apparently the Office Team has been using them to also enter extension numbers, e.g.: 503-854-3320 x138 - which the UI will not accept. Other than the occasional mis-type, what is the risk of just removing all validation other than a length limit (say, 20 characters) for these fields?
|
1.0
|
#1428 - Office UI: User Record: Phone Number format validation too strict - I think we've been here before on the Guest UI -
If the phone number entered in the field does not conform to validation format, then the radio button for "preferred" cannot be selected. I think the problem is with validation on these fields. Apparently the Office Team has been using them to also enter extension numbers, e.g.: 503-854-3320 x138 - which the UI will not accept. Other than the occasional mis-type, what is the risk of just removing all validation other than a length limit (say, 20 characters) for these fields?
|
non_process
|
office ui user record phone number format validation too strict i think we ve been here before on the guest ui if the phone number entered in the field does not conform to validation format then the radio button for preferred cannot be selected i think the problem is with validation on these fields apparently the office team has been using them to also enter extension numbers e g which the ui will not accept other than the occasional mis type what is the risk of just removing all validation other than a length limit say characters for these fields
| 0
|
104,858
| 11,424,501,610
|
IssuesEvent
|
2020-02-03 17:53:10
|
nimona/go-nimona
|
https://api.github.com/repos/nimona/go-nimona
|
opened
|
feat(router): add connection manager for exchange
|
Status: In Progress Type: Documentation Type: Enhancement
|
* Split the connection management from `pkg/exchange` into a separate package. (ie `pkg/router`).
* Add bounded connection pooling so that incoming/outgoing connections are capped at a fixed number.
* Add exponential backoff for attempting to talk to unavailable peers.
* Consider abstracting the relay inside the router.
|
1.0
|
feat(router): add connection manager for exchange - * Split the connection management from `pkg/exchange` into a separate package. (ie `pkg/router`).
* Add bounded connection pooling so that incoming/outgoing connections are capped at a fixed number.
* Add exponential backoff for attempting to talk to unavailable peers.
* Consider abstracting the relay inside the router.
|
non_process
|
feat router add connection manager for exchange split the connection management from pkg exchange into a separate package ie pkg router add bounded connection pooling so that connections are capped to a fix amount of inc out add exponential backoff for attempting to talk to unavailable peers consider abstracting the relay inside the router
| 0
|
8,498
| 11,660,552,112
|
IssuesEvent
|
2020-03-03 03:44:06
|
kubeflow/testing
|
https://api.github.com/repos/kubeflow/testing
|
closed
|
Setup 0.7 periodic tests
|
area/engprod kind/process lifecycle/stale priority/p0
|
We need to set up a periodic test for the 0.7 release.
There's a couple pieces to this.
1. We need to define a prow job for the periodic job
1. We need to figure out which repo to use for the prow_config.yaml that will define the test.
The obvious choice would be either kubeflow/kubeflow, kubeflow/kfctl, or kubeflow/manifests.
Our primary E2E tests right now are defined with kubeflow/kubeflow.
But we haven't cut a v0.7-branch yet for kfctl. When we do this will be on kubeflow/kfctl.
But we do have a v0.7-branch for kubeflow/manifests.
Here's what we can do.
In kubeflow/kubeflow we can add periodic jobs to prow_config.yaml that pull the kfdef files from kubeflow/manifests v0.7-branch.
When we move kfctl over to kubeflow/kfctl and cut a release branch we should move the test definitions there.
|
1.0
|
Setup 0.7 periodic tests - We need to set up a periodic test for the 0.7 release.
There's a couple pieces to this.
1. We need to define a prow job for the periodic job
1. We need to figure out which repo to use for the prow_config.yaml that will define the test.
The obvious choice would be either kubeflow/kubeflow, kubeflow/kfctl, or kubeflow/manifests.
Our primary E2E tests right now are defined with kubeflow/kubeflow.
But we haven't cut a v0.7-branch yet for kfctl. When we do this will be on kubeflow/kfctl.
But we do have a v0.7-branch for kubeflow/manifests.
Here's what we can do.
In kubeflow/kubeflow we can add periodic jobs to prow_config.yaml that pull the kfdef files from kubeflow/manifests v0.7-branch.
When we move kfctl over to kubeflow/kfctl and cut a release branch we should move the test definitions there.
|
process
|
setup periodic tests we need to setup a periodic test for the release there s a couple pieces to this we need to define a prow job for the periodic job we need to figure out which repo to use for the prow config yaml that will define the test the obvious choice would be either kubeflow kubeflow kubeflow kfctl or kubeflow manifests our primary tests right now are defined with kubeflow kubeflow but we haven t cut a branch yet for kfctl when we do this will be on kubeflow kfctl but we do have a branch for kubeflow manifests here s what we can do in kubeflow kubeflow we can add periodic jobs to prow config yaml that pull the kfdef files from kubeflow manifests branch when we move kfctl over to kubeflow kfctl and cut a release branch we should move the test definitions there
| 1
|
94,426
| 15,962,373,403
|
IssuesEvent
|
2021-04-16 01:10:33
|
KaterinaOrg/my-bag-of-holding
|
https://api.github.com/repos/KaterinaOrg/my-bag-of-holding
|
opened
|
CVE-2021-23358 (High) detected in underscore-1.6.0.tgz
|
security vulnerability
|
## CVE-2021-23358 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.6.0.tgz</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz">https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz</a></p>
<p>
Dependency Hierarchy:
- grunt-jscs-3.0.1.tgz (Root Library)
- jscs-3.0.7.tgz
- jsonlint-1.6.3.tgz
- nomnom-1.8.1.tgz
- :x: **underscore-1.6.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore","packageVersion":"1.6.0","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"grunt-jscs:3.0.1;jscs:3.0.7;jsonlint:1.6.3;nomnom:1.8.1;underscore:1.6.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"underscore - 1.12.1,1.13.0-2"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-23358","vulnerabilityDetails":"The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23358 (High) detected in underscore-1.6.0.tgz - ## CVE-2021-23358 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.6.0.tgz</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz">https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz</a></p>
<p>
Dependency Hierarchy:
- grunt-jscs-3.0.1.tgz (Root Library)
- jscs-3.0.7.tgz
- jsonlint-1.6.3.tgz
- nomnom-1.8.1.tgz
- :x: **underscore-1.6.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore","packageVersion":"1.6.0","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"grunt-jscs:3.0.1;jscs:3.0.7;jsonlint:1.6.3;nomnom:1.8.1;underscore:1.6.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"underscore - 1.12.1,1.13.0-2"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-23358","vulnerabilityDetails":"The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
| 0
|
19,819
| 26,208,462,840
|
IssuesEvent
|
2023-01-04 02:28:47
|
TeamAidemy/ds-paper-summaries
|
https://api.github.com/repos/TeamAidemy/ds-paper-summaries
|
opened
|
Training language models to follow instructions with human feedback
|
Natural language processing Transformer
|
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe. 2022. “Training language models to follow instructions with human feedback.” arXiv [cs.CL]. https://arxiv.org/abs/2203.02155
- Proposes **InstructGPT**, a model obtained by fine-tuning GPT-3 with reinforcement learning from human feedback, introduced by OpenAI (the creators of GPT-3) themselves
- By incorporating human feedback into the training process (both the learning algorithm and the dataset), the authors confirm that a model with 1/100 the parameters of plain GPT-3 can produce outputs that humans prefer
- The same technique underlies [ChatGPT](https://openai.com/blog/chatgpt/), released at the end of 2022
- A substantial paper: 20 pages of main text, 68 pages including the appendix
## Abstract
> Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.
## Code
None available at the time of writing this summary
## Problems addressed / comparison with prior work
- Large language models (LMs) such as GPT-3 often behave in ways humans do not intend
→ the training objective of large LMs was never "behave as humans intend" in the first place
- Examples of unintended behavior
    - Fabricating facts in plausible-sounding language
    - Answering simple questions at excessive length
    - Inappropriate output such as social bias and discrimination ([reference](https://ainow.ai/2020/07/20/224713/))
    - Leaking personal information ([reference](https://ai.googleblog.com/2020/12/privacy-considerations-in-large.html))
    - Simply ignoring the user's instructions
- To better capture user intent, the model needs "alignment"
- This paper therefore applies **reinforcement learning from human feedback** (RLHF; Paul F. Christiano, et al. 2017) when fine-tuning GPT-3
    - Not a method newly devised by these authors; it was originally used for tasks such as robot behavior learning
    - It had, however, been applied to fine-tuning text-summarization models since around 2019
- Compared with plain GPT-3, InstructGPT with 1/100 the parameters produced text that humans found more natural (details under Evaluation metrics below)
## Key points of the technique
The method consists of three steps.

The base model is GPT-3: the starting point is a model trained on diverse web data, i.e. one that exhibits the "unintended behavior" described above.
### Step 1. Supervised fine-tuning (SFT)
- Despite the fancy name, this is just fine-tuning the pretrained GPT-3 on a somewhat large set of demonstrations (input prompts paired with desired output text, shown to the model)
- Uses a dataset of "input prompt and desired output text" pairs written by hand by selected humans (hereafter, annotators); roughly 13,000 examples
- Choosing who the annotators are (their characteristics and ways of thinking) is critical to this study; the authors screen candidates for aptitude and verify fairness through comparison experiments against other annotator groups (see Appendix B)
- The resulting model is called the SFT model
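The SFT step above boils down to standard maximum-likelihood training on the labeler demonstrations. A minimal sketch in plain Python (the function name and toy numbers are illustrative, not from the paper's codebase):

```python
def sft_loss(token_logprobs):
    """Supervised fine-tuning objective for one demonstration.

    Given the model's log-probability of each token of the
    labeler-written demonstration (conditioned on the prompt and the
    preceding tokens), SFT minimizes the average negative
    log-likelihood, i.e. ordinary next-token cross-entropy.
    """
    return -sum(token_logprobs) / len(token_logprobs)

# toy example: log-probs of a 4-token demonstration under the model
loss = sft_loss([-0.2, -1.5, -0.7, -0.1])  # → 0.625
```

Maximizing the likelihood of the demonstrations is exactly what drives this loss toward zero.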
### Step 2. Training the reward model
- For each input prompt, $K$ output texts from the SFT model (in the figure, $K$ = 4) are presented as candidates, and annotators rank them in order of preference
- These rankings are used to train a model whose input is a pairwise (input prompt, output text) datum and whose output is a "preference score" for the text
- Because annotators cannot reliably assign an absolute, universal preference score to each text quantitatively, relative rankings are used instead, casting the problem in the learning-to-rank (LTR) framework
- Since this model's output is used as the reward in the next step's reinforcement learning, it is called the reward model (RM)
- The reward model is based on the SFT model, with only the final layer's architecture changed so that it outputs a scalar reward value
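Concretely, a $K$-way ranking yields $K(K-1)/2$ preference pairs, and the RM is trained with a pairwise log-sigmoid loss over each pair. A minimal sketch of that loss in plain Python (illustrative, not the authors' code):

```python
import math

def rm_pairwise_loss(r_preferred, r_rejected):
    """Reward-model loss for a batch of preference pairs.

    For each pair (y_w preferred over y_l) the loss is
    -log(sigmoid(r(x, y_w) - r(x, y_l))); it shrinks as the reward
    margin between the preferred and rejected output grows.
    """
    losses = [math.log(1.0 + math.exp(-(rw - rl)))  # = -log(sigmoid(rw - rl))
              for rw, rl in zip(r_preferred, r_rejected)]
    return sum(losses) / len(losses)

# toy example: scalar RM outputs for three preference pairs
loss = rm_pairwise_loss([1.2, 0.7, 2.0], [0.3, 0.9, 1.1])
```

When the two rewards tie, the loss is log 2; a larger positive margin drives it toward zero, which is what pushes the RM to score preferred outputs higher.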
### Step 3. Reinforcement learning of the SFT model using the reward model (RLHF)
- The problem is cast as reinforcement learning: the SFT model is fine-tuned so as to maximize the RM's output
- The learning algorithm used here is PPO (Proximal Policy Optimization; Schulman, et al., 2017)
    - Roughly speaking, PPO's distinguishing feature is stable learning while preventing the policy (here, the SFT model) from being updated too aggressively
- To keep the SFT model from overfitting the training data, a term for the likelihood of the pretraining data is also added (details omitted here)
- **The model after all of these steps is what is called InstructGPT**
- [This article](https://qiita.com/omiita/items/c355bc4c26eca2817324#33-renfiorcement-learning-from-human-feedback) (in Japanese) explains the details well
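Put together, the quantity maximized in Step 3 can be sketched per sample as follows. This scalar form glosses over PPO's clipping machinery, and the coefficient values are placeholders, not the paper's tuned hyperparameters:

```python
def ppo_ptx_objective(reward, logp_policy, logp_sft, logp_pretrain,
                      beta=0.02, gamma=1.0):
    """Per-sample RLHF objective (to be maximized).

    reward        : reward model score r(x, y)
    logp_policy   : log pi_RL(y | x) under the policy being trained
    logp_sft      : log pi_SFT(y | x) under the frozen SFT model
    logp_pretrain : log-likelihood of a pretraining sample under pi_RL
                    (the extra PPO-ptx term; gamma=0 recovers plain PPO)
    """
    kl_penalty = beta * (logp_policy - logp_sft)  # keeps pi_RL near pi_SFT
    return reward - kl_penalty + gamma * logp_pretrain
```

The per-token KL penalty is what restrains the policy from drifting far from the SFT model, and the `gamma` term is the pretraining-likelihood addition that distinguishes PPO-ptx from plain PPO.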
## Evaluation metrics
Evaluation was carried out from three angles. Terms used in the figures:
- GPT: GPT-3
- GPT (prompted): GPT-3 fine-tuned with few-shot prompts
- SFT: the SFT model
- PPO: model trained with RLHF using PPO
- PPO-ptx: model trained with RLHF using PPO plus the pretraining-data likelihood term (InstructGPT)
### Evaluation via the API published on the OpenAI Playground
(see Appendix A.2 for API details)
- <img width="635" alt="スクリーンショット 2022-12-21 13 50 38" src="https://user-images.githubusercontent.com/68265677/208833516-c0f9eb4b-7755-4695-a57d-207f4f978f29.png">
    - Vertical axis: the percentage of cases in which annotators, comparing each model's output against that of the SFT model based on GPT-3 175B, preferred the former
    - **Result: the RLHF models (PPO, PPO-ptx) dominate the rest**
    - InstructGPT 1.3B, at 1/100 the model size, produces preferable output more often than GPT-3 175B
    - (Not shown in this figure) repeating the experiment with a group of annotators who were not involved in dataset creation showed the same trend, i.e. the result generalizes (no overfitting to the annotators' characteristics)
- <img width="423" alt="スクリーンショット 2022-12-21 15 15 34" src="https://user-images.githubusercontent.com/68265677/208834822-d200b7be-3a9a-4cf7-b670-8ad20436a58c.png">
    - Likert-scale comparison of each model's output by annotators (MIN: 1 to MAX: 7)
    - FLAN: GPT-3 (175B) fine-tuned on the training data of [FLAN (Jason Wei, et al., 2021)](https://arxiv.org/abs/2109.01652), a large zero-shot LM
    - T0: GPT-3 (175B) fine-tuned on the training data of [T0 (Victor Sanh, et al., 2021)](https://arxiv.org/abs/2110.08207), another large zero-shot LM
    - **Result: the RLHF models dominate**
    - FLAN and T0 are built around general-purpose tasks that are easy to evaluate automatically (classification, question answering, summarization, translation, etc.), whereas actual GPT-3 use cases lean toward more open-ended text generation (57% of all usage falls into this category)
    - In other words, the difference in what the original datasets target may be contributing to the result
<!-- <img width="776" alt="スクリーンショット 2022-12-21 15 12 51" src="https://user-images.githubusercontent.com/68265677/208834411-80e6d997-b972-4d74-99d1-ed2d388fdd00.png"> -->
### Evaluation on public datasets
- Truthfulness
    - Uses the [TruthfulQA](https://github.com/sylinrl/TruthfulQA) dataset
    - <img width="750" alt="スクリーンショット 2022-12-21 13 51 25" src="https://user-images.githubusercontent.com/68265677/208827560-0eee121b-b181-47ed-a00f-8dfe7ecab4b3.png">
    - Gray bars show the fraction of truthful outputs; colored bars show the fraction that is both truthful and informative
    - **Result: InstructGPT improves slightly over GPT-3**
- Toxicity
    - Uses the [Real Toxicity Prompts Dataset](https://toxicdegeneration.allenai.org/)
    - <img width="786" alt="スクリーンショット 2022-12-21 13 51 34" src="https://user-images.githubusercontent.com/68265677/208831593-93186ee6-c06b-4ff9-b685-d4010477441a.png">
    - The left panel shows manual evaluation by annotators; the right panel shows automatic evaluation via [PerspectiveAPI](https://www.perspectiveapi.com/)
    - Results are shown separately for prompts with and without the instruction to be "respectful"
    - **Result: overall, ranked by low toxicity, GPT-3 < SFT <= InstructGPT (InstructGPT's output is the least toxic)**
    - (Not shown in the figures) interestingly, **when explicitly instructed to generate toxic output, InstructGPT produces more toxic output than GPT-3** (details under Remaining issues / discussion below)
- Change in general NLP task performance under alignment
    - As a trade-off for pursuing alignment, performance on general NLP tasks degrades
    - The paper describes this as paying an "alignment tax"
    - Evaluated on DROP, HellaSwag, SQuADv2, BLEU (French → English), and others (see Tab. 14 for the full list)
    - 
    - Results
        - The RLHF-trained models underperform the SFT model and GPT-3 on almost every task (the effect of the alignment tax)
        - However, **InstructGPT's performance drop is smaller than that of the plain PPO model**
### Qualitative evaluation
An interesting finding: although the data used to fine-tune InstructGPT was predominantly English text, with very little of anything else, the model can also summarize and answer questions about non-English languages and programming code.
<img width="836" alt="スクリーンショット 2022-12-21 13 51 57" src="https://user-images.githubusercontent.com/68265677/208825065-9a741864-5c06-4446-a3b3-11163d0006c3.png">
↑ An example where GPT-3 with the same parameter count (175B) fails completely at the text generation task while InstructGPT succeeds
## Remaining issues / discussion
- Still makes simple mistakes
    - Example 1: given an instruction with a false premise, it forcibly assumes the premise is true and generates text accordingly
    - Example 2: the model hedges excessively and answers in vague language
    - Example 3: struggles when the instruction has strict constraints (e.g. a limit on the number of sentences)
    - The figure below shows real instances of examples 1 and 2
    - <img width="857" alt="スクリーンショット 2022-12-21 13 52 57" src="https://user-images.githubusercontent.com/68265677/208824918-03693cce-c069-47f3-8174-69699a9ff682.png">
    - A striking instance of example 2 (the result of asking ChatGPT to summarize this paper)
    - 
- Who the model is aligned to matters enormously
    - If a malicious actor trained an InstructGPT that "faithfully follows human instructions," it could generate text with more harmful bias than plain GPT-3
    - To reduce such text-generation risks, combinations with other approaches are conceivable
        - Methods that filter the pretraining data
        - Methods that improve model truthfulness, such as WebGPT
    - More fundamentally, the authors believe such generative models should not be used at all in high-stakes domains (medical diagnosis, classifying people by protected characteristics, determining eligibility for credit, employment, or housing, generating political advertising, law enforcement, etc.)
## Key references
- Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” 34th Conference on Neural Information Processing Systems. https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.
  - The original GPT-3 paper
  - [Our paper summary](https://github.com/TeamAidemy/ds-paper-summaries/issues/6)
- Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement
learning from human preferences. In *Proc. NIPS 2017*. https://papers.nips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html.
  - The paper that proposed RLHF
  - [Official OpenAI blog post](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)
- John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal Policy Optimization Algorithms. arXiv:1707.06347 [cs.CL]. https://arxiv.org/abs/1707.06347.
  - The paper that proposed PPO
  - Also from OpenAI
<!-- - Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. 2021. A General Language Assistant as a Laboratory for Alignment. arXiv:2112.00861 [cs.CL]. https://arxiv.org/abs/2112.00861
  - Proposes HHH (Helpful, Honest, Harmless), the concept that large-LM alignment should aim for -->
## Further reading
- [Official OpenAI blog post](https://openai.com/blog/instruction-following/)
- [話題爆発中のAI「ChatGPT」の仕組みにせまる!](https://qiita.com/omiita/items/c355bc4c26eca2817324) (Qiita, in Japanese)
- [ChatGPT 人間のフィードバックから強化学習した対話AI](https://www.slideshare.net/ShotaImai3/chatgpt-254863623) (slideshare, mainly from page 24 onward; in Japanese)
- [ChatGPTのコア技術RLHF(人間フィードバックによる強化学習)を解説](https://ja.stateofaiguides.com/20221214-reinforcement-learning-from-human-feedback/) (State of AI Guides, in Japanese)
|
1.0
|
|
process
|
| 1
|
60,571
| 7,359,690,356
|
IssuesEvent
|
2018-03-10 10:02:23
|
SzFMV2018-Tavasz/AutomatedCar
|
https://api.github.com/repos/SzFMV2018-Tavasz/AutomatedCar
|
closed
|
"D" fokozat logikai implementálása
|
design
|
1. If the car's current speed is >= 0 km/h:
- [x] pressing the gas pedal accelerates the car forward with an intensity matching the pedal position
- [x] pressing the brake pedal decelerates the car to 0 km/h with an intensity matching the pedal position
2. If the car's current speed is 0 km/h:
- [x] and neither pedal is pressed, the car accelerates forward minimally, up to a speed of 5 km/h
|
1.0
|
"D" fokozat logikai implementálása - 1. ha az autó aktuális sebessége >= 0km/h:
- [x] gázpedál lenyomására gyorsulás előremenetben a gázpedál állásának megfelelő intenzitással
- [x] fékpedál lenyomására lassulás 0 km/h-ra a fékpedál állásának megfelelő intenzitással
2. ha az autó aktuális sebessége = 0km/h:
- [x] és nincs lenyomva egyik pedál sem, akkor az autó minimálisan gyorsul előremenetben 5km/h sebességig
|
non_process
|
| 0
|
18,921
| 24,872,160,257
|
IssuesEvent
|
2022-10-27 15:59:22
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsoletion notice: GO:0051710 regulation of cytolysis in another organism & related
|
multi-species process
|
GO:0001898 regulation of cytolysis by symbiont of host cells
GO:0001899 negative regulation of cytolysis by symbiont of host cells
GO:0001900 positive regulation of cytolysis by symbiont of host cells
2 EXP
====
GO:0051839 regulation by host of cytolysis of symbiont cells
GO:0051840 negative regulation by host of cytolysis of symbiont cells
GO:0051841 positive regulation by host of cytolysis of symbiont cells
No annotations
=====
GO:0051710 regulation of cytolysis in another organism
GO:0051713 negative regulation of cytolysis in another organism
GO:0051714 positive regulation of cytolysis in another organism
2 EXP from 1 paper
|
1.0
|
Obsoletion notice: GO:0051710 regulation of cytolysis in another organism & related - GO:0001898 regulation of cytolysis by symbiont of host cells
GO:0001899 negative regulation of cytolysis by symbiont of host cells
GO:0001900 positive regulation of cytolysis by symbiont of host cells
2 EXP
====
GO:0051839 regulation by host of cytolysis of symbiont cells
GO:0051840 negative regulation by host of cytolysis of symbiont cells
GO:0051841 positive regulation by host of cytolysis of symbiont cells
No annotations
=====
GO:0051710 regulation of cytolysis in another organism
GO:0051713 negative regulation of cytolysis in another organism
GO:0051714 positive regulation of cytolysis in another organism
2 EXP from 1 paper
|
process
|
obsoletion notice go regulation of cytolysis in another organism related go regulation of cytolysis by symbiont of host cells go negative regulation of cytolysis by symbiont of host cells go positive regulation of cytolysis by symbiont of host cells exp go regulation by host of cytolysis of symbiont cells go negative regulation by host of cytolysis of symbiont cells go positive regulation by host of cytolysis of symbiont cells no annotations go regulation of cytolysis in another organism go negative regulation of cytolysis in another organism go positive regulation of cytolysis in another organism exp from paper
| 1
|
12,125
| 14,740,800,231
|
IssuesEvent
|
2021-01-07 09:38:55
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Keener - unable to process cc payments
|
anc-process anp-urgent ant-bug
|
In GitLab by @kdjstudios on Dec 4, 2018, 08:49
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6203790
**Server:** External
**Client/Site:** Keener
**Account:** NA
**Issue:**
Each time I try to do a cc payment from SA billing, I get the error, We’re sorry. Something went wrong.
However, if I go directly into authorize.net and process it, it goes through right away and then I just have to post the payment to SA billing.
I am scheduled to process ALL cc payments on Wednesday, so if we could get this to work by then, that would be greatly appreciated.
|
1.0
|
Keener - unable to process cc payments - In GitLab by @kdjstudios on Dec 4, 2018, 08:49
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6203790
**Server:** External
**Client/Site:** Keener
**Account:** NA
**Issue:**
Each time I try to do a cc payment from SA billing, I get the error, We’re sorry. Something went wrong.
However, if I go directly into authorize.net and process it, it goes through right away and then I just have to post the payment to SA billing.
I am scheduled to process ALL cc payments on Wednesday, so if we could get this to work by then, that would be greatly appreciated.
|
process
|
keener unable to process cc payments in gitlab by kdjstudios on dec submitted by gaylan garrett helpdesk server external client site keener account na issue each time i try to do a cc payment from sa billing i get the error we’re sorry something went wrong however if i go directly into authorize net and process it it goes through right away and then i just have to post the payment to sa billing i am scheduled to process all cc payments on wednesday so if we could get this to work by then that would be greatly appreciated
| 1
|