Column summary for the issue-event table (dtype and observed range or number of distinct values):

| Column | Dtype | Range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value |
| created_at | string | length 19 (fixed) |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 distinct values |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 distinct values |
| text_combine | string | length 96 to 211k |
| label | string | 2 distinct values |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |
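A minimal sketch of loading and inspecting a table with this schema using pandas. The file name, the CSV storage format, and the use of pandas at all are assumptions for illustration; the preview itself does not say how the data is stored.

```python
import pandas as pd

# Hypothetical file name; point this at wherever the table actually lives.
df = pd.read_csv("github_issue_events.csv")

# Dtypes and non-null counts, mirroring the column summary above.
df.info()

# Class balance: `label` holds "process" / "non_process",
# and `binary_label` is its 0/1 encoding.
print(df["label"].value_counts())
print(df["binary_label"].value_counts())

# Peek at one record, truncating the long text fields.
row = df.iloc[0]
print(row["repo"], row["action"], row["created_at"])
print(row["title"])
print(row["body"][:200])
```

The sample rows below are shown field by field.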
Row 483,909 | id: 13,931,415,226 | type: IssuesEvent | created_at: 2020-10-22 05:12:23
repo: webcompat/web-bugs | repo_url: https://api.github.com/repos/webcompat/web-bugs | action: closed
title: www.canva.com - see bug description
labels: browser-focus-geckoview engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
body:
<!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/60226 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.canva.com/
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: can't log in or sign up
**Steps to Reproduce**:
It was a 405 error.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
index: 1.0
text_combine:
www.canva.com - see bug description - <!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/60226 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.canva.com/
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: can't log in or sign up
**Steps to Reproduce**:
It was a 405 error.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
label: non_process
text:
see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description can t log in or sign up steps to reproduce it was a error browser configuration none from with ❤️
binary_label: 0
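Comparing `text_combine` with `text` in the row above, `text` reads like a lowercased copy with HTML comments, URLs, markup, digits, and most punctuation stripped and whitespace collapsed. The sketch below is one plausible way to produce that kind of normalization; it is an assumption about the preprocessing, not the code actually used to build the dataset (for instance, it drops emoji that the real `text` column keeps).

```python
import re

def normalize(text: str) -> str:
    """Roughly reproduce the text_combine -> text cleaning seen above."""
    text = text.lower()
    text = re.sub(r"<!--.*?-->", " ", text, flags=re.DOTALL)  # HTML comments
    text = re.sub(r"https?://\S+", " ", text)                 # bare URLs
    text = re.sub(r"<[^>]+>", " ", text)                      # leftover tags
    text = re.sub(r"[^a-z\s]", " ", text)                     # digits and punctuation
    return re.sub(r"\s+", " ", text).strip()                  # collapse whitespace

print(normalize("**URL**: https://www.canva.com/ can't log in (405 error)"))
# -> "url can t log in error"
```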
Row 343,418 | id: 24,770,736,840 | type: IssuesEvent | created_at: 2022-10-23 04:59:30
repo: Ingressive-for-Good/I4G-OPENSOURCE-FRONTEND-PROJECT-2022 | repo_url: https://api.github.com/repos/Ingressive-for-Good/I4G-OPENSOURCE-FRONTEND-PROJECT-2022 | action: closed
title: Authentication: Forget page Notification UI
labels: documentation enhancement hacktoberfest-accepted hacktoberfest
body:
new page alert on the Authentication page
### Develop the forget password notification page UI appropriately
### Ensure they are responsive and follow the responsive design on the Figma
### Implement images properly
### Follow the color guide and the appropriate colors
index: 1.0
text_combine:
Authentication: Forget page Notification UI - new page alert on the Authentication page
### Develop the forget password notification page UI appropriately
### Ensure they are responsive and follow the responsive design on the Figma
### Implement images properly
### Follow the color guide and the appropriate colors
label: non_process
text:
authentication forget page notification ui new page alert on the authentication page develop the forget password notification page ui appropriately ensure they are responsive and follow the responsive design on the figma implement images properly follow the color guide and the appropriate colors
binary_label: 0

Row 3,051 | id: 2,732,580,077 | type: IssuesEvent | created_at: 2015-04-17 07:38:00
repo: nilearn/nilearn | repo_url: https://api.github.com/repos/nilearn/nilearn | action: closed
title: make doc: Tell gen_rst.py which examples to build
labels: Discussion Documentation Easy Enhancement
body:
OK, at times you just want to test the build of a particular example script , see how it integrates with the doc pages, and then move on with your life. This is not possible at present ...
I've made a one-line patch to the `gen_rst.py` which offers this feature. The default behaviour is as before.
Use like so:
`DEMO_REGEXP="plot_(?:haxby_simple|haxby_space_net|nilearn_101|poldrack_space_net)" make doc`
If this is useful for someone else, then I'll PR it.
index: 1.0
text_combine:
make doc: Tell gen_rst.py which examples to build - OK, at times you just want to test the build of a particular example script , see how it integrates with the doc pages, and then move on with your life. This is not possible at present ...
I've made a one-line patch to the `gen_rst.py` which offers this feature. The default behaviour is as before.
Use like so:
`DEMO_REGEXP="plot_(?:haxby_simple|haxby_space_net|nilearn_101|poldrack_space_net)" make doc`
If this is useful for someone else, then I'll PR it.
label: non_process
text:
make doc tell gen rst py which examples to build ok at times you just want to test the build of a particular example script see how it integrates with the doc pages and then move on with your life this is not possible at present i ve made a one line patch to the gen rst py which offers this feature the default behaviour is as before use like so demo regexp plot haxby simple haxby space net nilearn poldrack space net make doc if this is useful for someone else then i ll pr it
binary_label: 0

Row 4,845 | id: 6,849,919,733 | type: IssuesEvent | created_at: 2017-11-14 00:15:45
repo: dotnet/corefx | repo_url: https://api.github.com/repos/dotnet/corefx | action: closed
title: Library author wants to target netstandard and work with packages.config with LTS
labels: area-System.Runtime.InteropServices bug
body:
This bug needs to be back ported to NETStandard.Library -- 1.6.0.1 -- and remain in the LTS that works everywhere.
The issue was fixed here https://github.com/dotnet/corefx/issues/10445, but only is available in FTS.
This means that 1.6.0 is broken here.
@ericstj
index: 1.0
text_combine:
Library author wants to target netstandard and work with packages.config with LTS - This bug needs to be back ported to NETStandard.Library -- 1.6.0.1 -- and remain in the LTS that works everywhere.
The issue was fixed here https://github.com/dotnet/corefx/issues/10445, but only is available in FTS.
This means that 1.6.0 is broken here.
@ericstj
label: non_process
text:
library author wants to target netstandard and work with packages config with lts this bug needs to be back ported to netstandard library and remain in the lts that works everywhere the issue was fixed here but only is available in fts this means that is broken here ericstj
binary_label: 0

Row 600,344 | id: 18,293,907,985 | type: IssuesEvent | created_at: 2021-10-05 18:17:30
repo: wp-media/wp-rocket | repo_url: https://api.github.com/repos/wp-media/wp-rocket | action: opened
title: RUCSS warmup process running during the update might not be finished
labels: type: bug priority: low severity: moderate module: remove unused css
body:
**Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version
- Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
When RUCSS process is running during the update, it's possible that it'll be interrupted and not finished after th update. Data can be malformed:
`"warmup_status":{"total":1,"warmed_count":0,"notwarmed_resources":[],"duration":70,"completed":false}.`
**To Reproduce**
Steps to reproduce the behavior:
1. On previous WP Rocket version (3.10.1+) enable RUCSS
2. Update the plugin while it's running
3. Observe the RUCSS warming up process
4. If it's okay, repeat the steps.
**Expected behavior**
We should be able to restore the process after the update
**Additional context**
Found during the QA here:
https://github.com/wp-media/wp-rocket/pull/4392#issuecomment-934553906
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
index: 1.0
text_combine:
RUCSS warmup process running during the update might not be finished - **Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version
- Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
When RUCSS process is running during the update, it's possible that it'll be interrupted and not finished after th update. Data can be malformed:
`"warmup_status":{"total":1,"warmed_count":0,"notwarmed_resources":[],"duration":70,"completed":false}.`
**To Reproduce**
Steps to reproduce the behavior:
1. On previous WP Rocket version (3.10.1+) enable RUCSS
2. Update the plugin while it's running
3. Observe the RUCSS warming up process
4. If it's okay, repeat the steps.
**Expected behavior**
We should be able to restore the process after the update
**Additional context**
Found during the QA here:
https://github.com/wp-media/wp-rocket/pull/4392#issuecomment-934553906
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
label: non_process
text:
rucss warmup process running during the update might not be finished before submitting an issue please check that you’ve completed the following steps made sure you’re on the latest version used the search feature to ensure that the bug hasn’t been reported before describe the bug when rucss process is running during the update it s possible that it ll be interrupted and not finished after th update data can be malformed warmup status total warmed count notwarmed resources duration completed false to reproduce steps to reproduce the behavior on previous wp rocket version enable rucss update the plugin while it s running observe the rucss warming up process if it s okay repeat the steps expected behavior we should be able to restore the process after the update additional context found during the qa here backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort
binary_label: 0

Row 3,841 | id: 6,808,533,808 | type: IssuesEvent | created_at: 2017-11-04 04:12:03
repo: Great-Hill-Corporation/quickBlocks | repo_url: https://api.github.com/repos/Great-Hill-Corporation/quickBlocks | action: reopened
title: strikePrice and currentPrice
labels: libs-utillib status-inprocess type-enhancement
body:
If the user sends any command --dollars, we can do this:
UNHIDE_FIELD(CTransaction, "strikePrice");
UNHIDE_FIELD(CTransaction, "currentPrice");
UNHIDE_FIELD(CTrace, "strikePrice");
UNHIDE_FIELD(CTrace, "currentPrice");
UNHIDE_FIELD(CBlock, "strikePrice");
UNHIDE_FIELD(CBlock, "currentPrice");
so that any fields that are 'value', will also report strikePrice and currentPrice
Related to xxx.
index: 1.0
text_combine:
strikePrice and currentPrice - If the user sends any command --dollars, we can do this:
UNHIDE_FIELD(CTransaction, "strikePrice");
UNHIDE_FIELD(CTransaction, "currentPrice");
UNHIDE_FIELD(CTrace, "strikePrice");
UNHIDE_FIELD(CTrace, "currentPrice");
UNHIDE_FIELD(CBlock, "strikePrice");
UNHIDE_FIELD(CBlock, "currentPrice");
so that any fields that are 'value', will also report strikePrice and currentPrice
Related to xxx.
label: process
text:
strikeprice and currentprice if the user sends any command dollars we can do this unhide field ctransaction strikeprice unhide field ctransaction currentprice unhide field ctrace strikeprice unhide field ctrace currentprice unhide field cblock strikeprice unhide field cblock currentprice so that any fields that are value will also report strikeprice and currentprice related to xxx
binary_label: 1

Row 7,241 | id: 10,410,228,407 | type: IssuesEvent | created_at: 2019-09-13 10:48:17
repo: Open-EO/openeo-processes | repo_url: https://api.github.com/repos/Open-EO/openeo-processes | action: opened
title: load_user_data (or similar)
labels: new process
body:
A new process was proposed on the 3rd year planning: load_user_data (or similar).
Should load user-uploaded data and convert it into a data cube, similar to load_collection and load_result.
We need to check how to communicate to a user what file formats are allowed to be uploaded. (Change /output_formats to /file_formats and add a list of supported formats for uploading?)
index: 1.0
text_combine:
load_user_data (or similar) - A new process was proposed on the 3rd year planning: load_user_data (or similar).
Should load user-uploaded data and convert it into a data cube, similar to load_collection and load_result.
We need to check how to communicate to a user what file formats are allowed to be uploaded. (Change /output_formats to /file_formats and add a list of supported formats for uploading?)
label: process
text:
load user data or similar a new process was proposed on the year planning load user data or similar should load user uploaded data and convert it into a data cube similar to load collection and load result we need to check how to communicate to a user what file formats are allowed to be uploaded change output formats to file formats and add a list of supported formats for uploading
binary_label: 1

Row 221,332 | id: 17,333,450,278 | type: IssuesEvent | created_at: 2021-07-28 07:13:00
repo: 18F/fedramp-automation | repo_url: https://api.github.com/repos/18F/fedramp-automation | action: opened
title: Update or Accept Risk of Overmind Dev Tools Vulnerabilities
labels: backlog item g: fedramp integration testing
body:
**Extended Description**
As a 10x ASAP developer, in order to maintain the security of the dependencies of our development tools, I would like to determine a best course of action for moderate impact vulnerabilities for the `electron` and `ws` packages from NPM.
<img width="1046" alt="Screen Shot 2021-07-28 at 2 37 17 AM" src="https://user-images.githubusercontent.com/61464190/127278842-ad421c4e-7bb6-4c29-a5a6-c7e6f9a3f519.png">
NOTE: Github has indicated through Dependabot for the last week it is not automatically patchable because it is a transitive dependency, so we can safely presume we're on our own.
**Preconditions**
- Preconditions...
**Acceptance Critera**
- One of the following:
- [ ] A documented explanation of the risk we can ask our project manager to accept.
- [ ] Evidence of an updated patched version of this vulnerability.
**Story Tasks**
- [ ] Tasks...
**Definition of Done**
- [ ] Acceptance criteria met - Each user story should meet the acceptance criteria in the description
- [ ] Unit test coverage of our code > 90% (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*) this may be fuzzy and hard to prove
- [ ] Code quality checks passed - Enable html tidy with XML code standards as part of the build (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*)
- [ ] Accessibility: (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*) as we create guidance or documentation and reports (semantic tagging including aria tags): demonstrate with 0 errors reported for WCAG 2.1 AA standards using an automated scanner and 0 errors reported in manual testing
- [ ] Code reviewed - Code reviewed by at least one other team members (or developed by a pair)
- [ ] Source code merged - Code that’s demoed must be in source control and merged
- [ ] Code must successfully build and deploy into staging environment (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*): this may evolve from xslt sh pipline into something more
- [ ] Security reviewed and reported - Conduct vulnerability and compliance scanning. threat modeling?
- [ ] Code submitted must be free of medium- and high-level static and dynamic security vulnerabilities (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*)
- [ ] Usability tests passed - Each user story should be easy to use by target users (development community? FedRAMP FART team)
- [ ] Usability testing and other user research methods must be conducted at regular intervals throughout the development process (not just at the beginning or end). (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*)
- [ ] Code refactored for clarity - Code must be clean, self-documenting
- [ ] No local design debt
- [ ] Load/performance tests passed - test data needed - saxon instrumentation
- [ ] Documentation generated - update readme or contributing markdown as necessary.
- [ ] Architectural Decision Record completed as necessary for significant design choices
index: 1.0
text_combine:
Update or Accept Risk of Overmind Dev Tools Vulnerabilities - **Extended Description**
As a 10x ASAP developer, in order to maintain the security of the dependencies of our development tools, I would like to determine a best course of action for moderate impact vulnerabilities for the `electron` and `ws` packages from NPM.
<img width="1046" alt="Screen Shot 2021-07-28 at 2 37 17 AM" src="https://user-images.githubusercontent.com/61464190/127278842-ad421c4e-7bb6-4c29-a5a6-c7e6f9a3f519.png">
NOTE: Github has indicated through Dependabot for the last week it is not automatically patchable because it is a transitive dependency, so we can safely presume we're on our own.
**Preconditions**
- Preconditions...
**Acceptance Critera**
- One of the following:
- [ ] A documented explanation of the risk we can ask our project manager to accept.
- [ ] Evidence of an updated patched version of this vulnerability.
**Story Tasks**
- [ ] Tasks...
**Definition of Done**
- [ ] Acceptance criteria met - Each user story should meet the acceptance criteria in the description
- [ ] Unit test coverage of our code > 90% (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*) this may be fuzzy and hard to prove
- [ ] Code quality checks passed - Enable html tidy with XML code standards as part of the build (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*)
- [ ] Accessibility: (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*) as we create guidance or documentation and reports (semantic tagging including aria tags): demonstrate with 0 errors reported for WCAG 2.1 AA standards using an automated scanner and 0 errors reported in manual testing
- [ ] Code reviewed - Code reviewed by at least one other team members (or developed by a pair)
- [ ] Source code merged - Code that’s demoed must be in source control and merged
- [ ] Code must successfully build and deploy into staging environment (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*): this may evolve from xslt sh pipline into something more
- [ ] Security reviewed and reported - Conduct vulnerability and compliance scanning. threat modeling?
- [ ] Code submitted must be free of medium- and high-level static and dynamic security vulnerabilities (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*)
- [ ] Usability tests passed - Each user story should be easy to use by target users (development community? FedRAMP FART team)
- [ ] Usability testing and other user research methods must be conducted at regular intervals throughout the development process (not just at the beginning or end). (*from [QASP](https://github.com/flexion/TTS-10x/wiki/TTS-QASP)*)
- [ ] Code refactored for clarity - Code must be clean, self-documenting
- [ ] No local design debt
- [ ] Load/performance tests passed - test data needed - saxon instrumentation
- [ ] Documentation generated - update readme or contributing markdown as necessary.
- [ ] Architectural Decision Record completed as necessary for significant design choices
label: non_process
text:
update or accept risk of overmind dev tools vulnerabilities extended description as a asap developer in order to maintain the security of the dependencies of our development tools i would like to determine a best course of action for moderate impact vulnerabilities for the electron and ws packages from npm img width alt screen shot at am src note github has indicated through dependabot for the last week it is not automatically patchable because it is a transitive dependency so we can safely presume we re on our own preconditions preconditions acceptance critera one of the following a documented explanation of the risk we can ask our project manager to accept evidence of an updated patched version of this vulnerability story tasks tasks definition of done acceptance criteria met each user story should meet the acceptance criteria in the description unit test coverage of our code from this may be fuzzy and hard to prove code quality checks passed enable html tidy with xml code standards as part of the build from accessibility from as we create guidance or documentation and reports semantic tagging including aria tags demonstrate with errors reported for wcag aa standards using an automated scanner and errors reported in manual testing code reviewed code reviewed by at least one other team members or developed by a pair source code merged code that’s demoed must be in source control and merged code must successfully build and deploy into staging environment from this may evolve from xslt sh pipline into something more security reviewed and reported conduct vulnerability and compliance scanning threat modeling code submitted must be free of medium and high level static and dynamic security vulnerabilities from usability tests passed each user story should be easy to use by target users development community fedramp fart team usability testing and other user research methods must be conducted at regular intervals throughout the development process not just at the beginning or end from code refactored for clarity code must be clean self documenting no local design debt load performance tests passed test data needed saxon instrumentation documentation generated update readme or contributing markdown as necessary architectural decision record completed as necessary for significant design choices
binary_label: 0

Row 17,921 | id: 23,908,365,777 | type: IssuesEvent | created_at: 2022-09-09 05:01:23
repo: googleapis/repo-automation-bots | repo_url: https://api.github.com/repos/googleapis/repo-automation-bots | action: closed
title: Pinned dependencies
labels: type: process priority: p2
body:
Pinning some dependencies to older major versions.
gcf-utils:
- [ ] google-auth-library: v7
- [ ] yargs: v16
- [ ] @types/yargs: v16
- [ ] into-stream: v6
cron-utils:
- [ ] google-auth-library: v7
policy:
- [ ] meow: v9
secret-rotator:
- [ ] google-auth-library: v7
- [ ] @google-cloud/secret-manager: v3
index: 1.0
text_combine:
Pinned dependencies - Pinning some dependencies to older major versions.
gcf-utils:
- [ ] google-auth-library: v7
- [ ] yargs: v16
- [ ] @types/yargs: v16
- [ ] into-stream: v6
cron-utils:
- [ ] google-auth-library: v7
policy:
- [ ] meow: v9
secret-rotator:
- [ ] google-auth-library: v7
- [ ] @google-cloud/secret-manager: v3
label: process
text:
pinned dependencies pinning some dependencies to older major versions gcf utils google auth library yargs types yargs into stream cron utils google auth library policy meow secret rotator google auth library google cloud secret manager
binary_label: 1

Row 8,100 | id: 11,275,922,144 | type: IssuesEvent | created_at: 2020-01-14 21:52:51
repo: hashicorp/packer | repo_url: https://api.github.com/repos/hashicorp/packer | action: closed
title: vagrant-cloud post-processor error
labels: bug post-processor/vagrant-cloud
body:
I'm getting this error:
" virtualbox-iso (vagrant-cloud): error decoding error response: json: cannot unmarshal array into Go struct field VagrantCloudErrors.errors of type map[string][]string"
#### Reproduction Steps
I'm assuming running any packer build script using the vagrant-cloud post-processor.
It's taking me a while to test this as I don't know a way of running the post-processor in isolation with the previously built virtualbox-iso asset.
(I'm going to try again setting the env variable PACKER_LOG=1.)
### Packer version
v1.5.1
### Operating system and Environment details
Arch Linux 5.4.8-arch1-1 x86_64 GNU/Linux
index: 1.0
text_combine:
vagrant-cloud post-processor error - I'm getting this error:
" virtualbox-iso (vagrant-cloud): error decoding error response: json: cannot unmarshal array into Go struct field VagrantCloudErrors.errors of type map[string][]string"
#### Reproduction Steps
I'm assuming running any packer build script using the vagrant-cloud post-processor.
It's taking me a while to test this as I don't know a way of running the post-processor in isolation with the previously built virtualbox-iso asset.
(I'm going to try again setting the env variable PACKER_LOG=1.)
### Packer version
v1.5.1
### Operating system and Environment details
Arch Linux 5.4.8-arch1-1 x86_64 GNU/Linux
label: process
text:
vagrant cloud post processor error i m getting this error virtualbox iso vagrant cloud error decoding error response json cannot unmarshal array into go struct field vagrantclouderrors errors of type map string reproduction steps i m assuming running any packer build script using the vagrant cloud post processor it s taking me a while to test this as i don t know a way of running the post processor in isolation with the previously built virtualbox iso asset i m going to try again setting the env variable packer log packer version operating system and environment details arch linux gnu linux
binary_label: 1

Row 328,633 | id: 24,193,088,704 | type: IssuesEvent | created_at: 2022-09-23 19:48:58
repo: nextgenhealthcare/connect | repo_url: https://api.github.com/repos/nextgenhealthcare/connect | action: opened
title: com.mirth.connect.connectors.file.FileReceiver: Error polling in channel
labels: documentation
body:
I have the following problem with the Mirth Connect application, in which this channel has communication with an SSH server.
Do you know what exactly it refers to?
ERROR 2022-09-14 22:18:21,274 [Timer-1] com.mirth.connect.connectors.file.FileReceiver: Error polling in channel: 0c708f73-8567-454a-84ff-3d11bfd70cd4
com.jcraft.jsch.JSchException: java.net.ConnectException: Connection timed out: connect
at com.jcraft.jsch.Util.createSocket(Util.java:389)
at com.jcraft.jsch.Session.connect(Session.java:215)
at com.mirth.connect.connectors.file.filesystems.SftpConnection.<init>(SftpConnection.java:115)
at com.mirth.connect.connectors.file.filesystems.FileSystemConnectionFactory.makeObject(FileSystemConnectionFactory.java:97)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1188)
at com.mirth.connect.connectors.file.FileConnector.getConnection(FileConnector.java:197)
at com.mirth.connect.connectors.file.FileReceiver.listFiles(FileReceiver.java:538)
at com.mirth.connect.connectors.file.FileReceiver.poll(FileReceiver.java:192)
at com.mirth.connect.donkey.server.channel.PollConnector$PollConnectorTask.run(PollConnector.java:141)
at java.util.TimerThread.mainLoop(Unknown Source)
at java.util.TimerThread.run(Unknown Source)
index: 1.0
text_combine:
com.mirth.connect.connectors.file.FileReceiver: Error polling in channel - I have the following problem with the Mirth Connect application, in which this channel has communication with an SSH server.
Do you know what exactly it refers to?
ERROR 2022-09-14 22:18:21,274 [Timer-1] com.mirth.connect.connectors.file.FileReceiver: Error polling in channel: 0c708f73-8567-454a-84ff-3d11bfd70cd4
com.jcraft.jsch.JSchException: java.net.ConnectException: Connection timed out: connect
at com.jcraft.jsch.Util.createSocket(Util.java:389)
at com.jcraft.jsch.Session.connect(Session.java:215)
at com.mirth.connect.connectors.file.filesystems.SftpConnection.<init>(SftpConnection.java:115)
at com.mirth.connect.connectors.file.filesystems.FileSystemConnectionFactory.makeObject(FileSystemConnectionFactory.java:97)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1188)
at com.mirth.connect.connectors.file.FileConnector.getConnection(FileConnector.java:197)
at com.mirth.connect.connectors.file.FileReceiver.listFiles(FileReceiver.java:538)
at com.mirth.connect.connectors.file.FileReceiver.poll(FileReceiver.java:192)
at com.mirth.connect.donkey.server.channel.PollConnector$PollConnectorTask.run(PollConnector.java:141)
at java.util.TimerThread.mainLoop(Unknown Source)
at java.util.TimerThread.run(Unknown Source)
label: non_process
text:
com mirth connect connectors file filereceiver error polling in channel i have the following problem with the mirth connect application in which this channel has communication with an ssh server do you know what exactly it refers to error com mirth connect connectors file filereceiver error polling in channel com jcraft jsch jschexception java net connectexception connection timed out connect at com jcraft jsch util createsocket util java at com jcraft jsch session connect session java at com mirth connect connectors file filesystems sftpconnection sftpconnection java at com mirth connect connectors file filesystems filesystemconnectionfactory makeobject filesystemconnectionfactory java at org apache commons pool impl genericobjectpool borrowobject genericobjectpool java at com mirth connect connectors file fileconnector getconnection fileconnector java at com mirth connect connectors file filereceiver listfiles filereceiver java at com mirth connect connectors file filereceiver poll filereceiver java at com mirth connect donkey server channel pollconnector pollconnectortask run pollconnector java at java util timerthread mainloop unknown source at java util timerthread run unknown source
binary_label: 0

Row 40,420 | id: 9,985,474,079 | type: IssuesEvent | created_at: 2019-07-10 16:38:41
repo: openanthem/nimbus-core | repo_url: https://api.github.com/repos/openanthem/nimbus-core | action: opened
title: @CardDetail child component's text wrapping in @Grid expanded row is incorrect
labels: Defect
body:
# Issue Details
When configuring `@Grid` > `@GridRowBody` > `@CardDetail` > `@CardDetail.Body` > `@FieldValue`, component, if the string length of the param state of any child configured in the `@CardDetail **component is small, the text is wrapping incorrectly**.

When the **string length of the param state long, the text is also wrapping better, but still incorrectly.**

**Type of Issue** (check one with "X")
```
[X] Bug Report => Please search GitHub for a similar issue or PR before submitting
[ ] Feature Request => Please ensure feature is not already in progress
[ ] Support Request => Please do not submit support requests here, instead see: https://discourse.oss.antheminc.com/
```
## Expected Behavior
The text field should wrap according to the column width configured for the `@CardDetail.Body`.
## How to Reproduce the Issue
### Steps to Reproduce
1. Configure using the code snippet below
2. Set the state of `description` to something long, then something small. Observe the difference.
### Code Snippet
```java
@GridRowBody
private ExpandedDetails expandedDetails;
public static class ExpandedDetails {
@CardDetail
private CardDetails cardDetails;
}
@Model @Getter @Setter
public static class CardDetails {
@CardDetail.Body
private CardBody cardBody;
}
@Model @Getter @Setter
@MapsTo.Type(SampleCoreEntity.class)
public static class CardBody {
@Path
@FieldValue
@Label("Description")
private String description;
}
```
# Environment Details
* **Nimbus Version:**
1.3.1.M2
* **Browser:**
Google Chrome
|
1.0
|
@CardDetail child component's text wrapping in @Grid expanded row is incorrect - # Issue Details
When configuring `@Grid` > `@GridRowBody` > `@CardDetail` > `@CardDetail.Body` > `@FieldValue`, component, if the string length of the param state of any child configured in the `@CardDetail **component is small, the text is wrapping incorrectly**.

When the **string length of the param state long, the text is also wrapping better, but still incorrectly.**

**Type of Issue** (check one with "X")
```
[X] Bug Report => Please search GitHub for a similar issue or PR before submitting
[ ] Feature Request => Please ensure feature is not already in progress
[ ] Support Request => Please do not submit support requests here, instead see: https://discourse.oss.antheminc.com/
```
## Expected Behavior
The text field should wrap according to the column width configured for the `@CardDetail.Body`.
## How to Reproduce the Issue
### Steps to Reproduce
1. Configure using the code snippet below
2. Set the state of `description` to something long, then something small. Observe the difference.
### Code Snippet
```java
@GridRowBody
private ExpandedDetails expandedDetails;
public static class ExpandedDetails {
@CardDetail
private CardDetails cardDetails;
}
@Model @Getter @Setter
public static class CardDetails {
@CardDetail.Body
private CardBody cardBody;
}
@Model @Getter @Setter
@MapsTo.Type(SampleCoreEntity.class)
public static class CardBody {
@Path
@FieldValue
@Label("Description")
private String description;
}
```
# Environment Details
* **Nimbus Version:**
1.3.1.M2
* **Browser:**
Google Chrome
label: non_process
text:
carddetail child component s text wrapping in grid expanded row is incorrect issue details when configuring grid gridrowbody carddetail carddetail body fieldvalue component if the string length of the param state of any child configured in the carddetail component is small the text is wrapping incorrectly when the string length of the param state long the text is also wrapping better but still incorrectly type of issue check one with x bug report please search github for a similar issue or pr before submitting feature request please ensure feature is not already in progress support request please do not submit support requests here instead see expected behavior the text field should wrap according to the column width configured for the carddetail body how to reproduce the issue steps to reproduce configure using the code snippet below set the state of description to something long then something small observe the difference code snippet java gridrowbody private expandeddetails expandeddetails public static class expandeddetails carddetail private carddetails carddetails model getter setter public static class carddetails carddetail body private cardbody cardbody model getter setter mapsto type samplecoreentity class public static class cardbody path fieldvalue label description private string description environment details nimbus version browser google chrome
binary_label: 0

Row 2,016 | id: 4,837,202,027 | type: IssuesEvent | created_at: 2016-11-08 21:52:32
repo: PagerNation/PagerNation | repo_url: https://api.github.com/repos/PagerNation/PagerNation | action: opened
title: Create card template
labels: process
body:
Create a template that will act as a guide for what goes into a card, the testing required, etc
index: 1.0
text_combine:
Create card template - Create a template that will act as a guide for what goes into a card, the testing required, etc
label: process
text:
create card template create a template that will act as a guide for what goes into a card the testing required etc
binary_label: 1

Row 148,612 | id: 5,693,545,842 | type: IssuesEvent | created_at: 2017-04-15 02:41:02
repo: OperationCode/operationcode | repo_url: https://api.github.com/repos/OperationCode/operationcode | action: closed
title: Create Deploy Subdomain
labels: beginner friendly Priority: Medium Status: Available Type: Maintenance
body:
As a VISITOR
When I VISIT DEPLOY
It should maybe BE A SUBDOMAIN instead of a standalone page so it won't get caught up in rails and would be easier to update, etc. Just a thought to make life easier.
Bret (and David)
index: 1.0
text_combine:
Create Deploy Subdomain - As a VISITOR
When I VISIT DEPLOY
It should maybe BE A SUBDOMAIN instead of a standalone page so it won't get caught up in rails and would be easier to update, etc. Just a thought to make life easier.
Bret (and David)
label: non_process
text:
create deploy subdomain as a visitor when i visit deploy it should maybe be a subdomain instead of a standalone page so it won t get caught up in rails and would be easier to update etc just a thought to make life easier bret and david
binary_label: 0

Row 428,942 | id: 12,418,469,775 | type: IssuesEvent | created_at: 2020-05-23 00:27:47
repo: eclipse-ee4j/glassfish | repo_url: https://api.github.com/repos/eclipse-ee4j/glassfish | action: closed
title: [UB]Document the copying of jdbc drivers to appropriate locations for different modes of JMS
labels: Component: docs ERR: Assignee Priority: Major Stale Type: Bug
body:
Please document that
For EMBEDDED mode, the jdbc driver needs to be copied to the directory glassfish3/glassfish/lib/install/applications/jmsra/
For LOCAL/REMOTE modes, the jdbc driver needs to be copied to the directory glassfish3/mq/lib/ext/
-Sarada.
#### Affected Versions
[3.1.2_dev]
index: 1.0
text_combine:
[UB]Document the copying of jdbc drivers to appropriate locations for different modes of JMS - Please document that
For EMBEDDED mode, the jdbc driver needs to be copied to the directory glassfish3/glassfish/lib/install/applications/jmsra/
For LOCAL/REMOTE modes, the jdbc driver needs to be copied to the directory glassfish3/mq/lib/ext/
-Sarada.
#### Affected Versions
[3.1.2_dev]
label: non_process
text:
document the copying of jdbc drivers to appropriate locations for different modes of jms please document that for embedded mode the jdbc driver needs to be copied to the directory glassfish lib install applications jmsra for local remote modes the jdbc driver needs to be copied to the directory mq lib ext sarada affected versions
binary_label: 0

Row 3,068 | id: 6,051,751,256 | type: IssuesEvent | created_at: 2017-06-13 01:26:12
repo: nodejs/node | repo_url: https://api.github.com/repos/nodejs/node | action: closed
title: Child process remains alive after parent terminates while being debugged
labels: child_process inspector
body:
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: 8.0.0
* **Platform**: macOS Sierra
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
**Steps:**
1. Type the following code below into a module.
1. Run it in a terminal with no arguments.
1. Attach Chrome's Devtools debugger to the child process.
1. Kill the parent.
```js
if (process.execArgv.includes("--inspect-brk") return
child_process.spawn(process.argv[0], [__filename, "--inspect-brk"], {
stdio: "inherit",
})
.on("exit", () => { console.log("exit") })
.on("error", () => { console.log("error") })
```
**Expected:**
The child should've been killed along with the parent, with the debugger disconnected.
**Observed:**
The child remains alive with the debugger at the first line waiting.
index: 1.0
text_combine:
Child process remains alive after parent terminates while being debugged - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: 8.0.0
* **Platform**: macOS Sierra
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
**Steps:**
1. Type the following code below into a module.
1. Run it in a terminal with no arguments.
1. Attach Chrome's Devtools debugger to the child process.
1. Kill the parent.
```js
if (process.execArgv.includes("--inspect-brk") return
child_process.spawn(process.argv[0], [__filename, "--inspect-brk"], {
stdio: "inherit",
})
.on("exit", () => { console.log("exit") })
.on("error", () => { console.log("error") })
```
**Expected:**
The child should've been killed along with the parent, with the debugger disconnected.
**Observed:**
The child remains alive with the debugger at the first line waiting.
label: process
text:
child process remains alive after parent terminates while being debugged thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform macos sierra subsystem child process steps type the following code below into a module run it in a terminal with no arguments attach chrome s devtools debugger to the child process kill the parent js if process execargv includes inspect brk return child process spawn process argv stdio inherit on exit console log exit on error console log error expected the child should ve been killed along with the parent with the debugger disconnected observed the child remains alive with the debugger at the first line waiting
binary_label: 1

Row 4,517 | id: 7,360,188,346 | type: IssuesEvent | created_at: 2018-03-10 16:02:03
repo: ODiogoSilva/assemblerflow | repo_url: https://api.github.com/repos/ODiogoSilva/assemblerflow | action: opened
title: Add dynamic compute resources to assemblers
labels: enhancement process
body:
Assembler processes may demand high RAM resources, but are highly variable. They depend on the genome size, coverage, etc. For these processes, we could take advantage of nextflow's [dynamic computing resources](https://www.nextflow.io/docs/latest/process.html#dynamic-computing-resources) to make the RAM attribute adaptive.
index: 1.0
text_combine:
Add dynamic compute resources to assemblers - Assembler processes may demand high RAM resources, but are highly variable. They depend on the genome size, coverage, etc. For these processes, we could take advantage of nextflow's [dynamic computing resources](https://www.nextflow.io/docs/latest/process.html#dynamic-computing-resources) to make the RAM attribute adaptive.
label: process
text:
add dynamic compute resources to assemblers assembler processes may demand high ram resources but are highly variable they depend on the genome size coverage etc for these processes we could take advantage of nextflow s to make the ram attribute adaptive
binary_label: 1

Row 22,631 | id: 31,880,004,870 | type: IssuesEvent | created_at: 2023-09-16 08:56:42
repo: darktable-org/darktable | repo_url: https://api.github.com/repos/darktable-org/darktable | action: closed
title: Big white zone on exported file
labels: duplicate scope: image processing
body:
### Describe the bug
Hi,
I from time to time get problem on export. It's quit strange because it doesn't happen always, just on some pictures and not at every export size. I for instance have a picture which is good exported at reduced size, but get big white zones on full size export.


### Steps to reproduce
I can't give you here the raw and XMP files because the raw is too heavy for github...
But if there is another way I have no problem to share them.
Once you have these file, just export the image full size in jpeg to reproduce the bug.
### Expected behavior
Export the proper image
### Logfile | Screenshot | Screencast
I don't get any message, just a bug on the resulting picture after export.
### Commit
No idea
### Where did you install darktable from?
darktable.org
### darktable version
4.4.2
### What OS are you using?
Linux
### What is the version of your OS?
fedora 37
### Describe your system?
I7
8Gb RAM
NV137 / Mesa Intel® HD Graphics 630 (KBL GT2)
### Are you using OpenCL GPU in darktable?
None
### If yes, what is the GPU card and driver?
NV137 / Mesa Intel® HD Graphics 630 (KBL GT2)
### Please provide additional context if applicable. You can attach files too, but might need to rename to .txt or .zip
Are the steps above reproducible with a fresh edit (i.e. after discarding history)? -> yes
No idea about the others questions...
index: 1.0
text_combine:
Big white zone on exported file - ### Describe the bug
Hi,
I from time to time get problem on export. It's quit strange because it doesn't happen always, just on some pictures and not at every export size. I for instance have a picture which is good exported at reduced size, but get big white zones on full size export.


### Steps to reproduce
I can't give you here the raw and XMP files because the raw is too heavy for github...
But if there is another way I have no problem to share them.
Once you have these file, just export the image full size in jpeg to reproduce the bug.
### Expected behavior
Export the proper image
### Logfile | Screenshot | Screencast
I don't get any message, just a bug on the resulting picture after export.
### Commit
No idea
### Where did you install darktable from?
darktable.org
### darktable version
4.4.2
### What OS are you using?
Linux
### What is the version of your OS?
fedora 37
### Describe your system?
I7
8Gb RAM
NV137 / Mesa Intel® HD Graphics 630 (KBL GT2)
### Are you using OpenCL GPU in darktable?
None
### If yes, what is the GPU card and driver?
NV137 / Mesa Intel® HD Graphics 630 (KBL GT2)
### Please provide additional context if applicable. You can attach files too, but might need to rename to .txt or .zip
Are the steps above reproducible with a fresh edit (i.e. after discarding history)? -> yes
No idea about the others questions...
label: process
text:
big white zone on exported file describe the bug hi i from time to time get problem on export it s quit strange because it doesn t happen always just on some pictures and not at every export size i for instance have a picture which is good exported at reduced size but get big white zones on full size export steps to reproduce i can t give you here the raw and xmp files because the raw is too heavy for github but if there is another way i have no problem to share them once you have these file just export the image full size in jpeg to reproduce the bug expected behavior export the proper image logfile screenshot screencast i don t get any message just a bug on the resulting picture after export commit no idea where did you install darktable from darktable org darktable version what os are you using linux what is the version of your os fedora describe your system ram mesa intel® hd graphics kbl are you using opencl gpu in darktable none if yes what is the gpu card and driver mesa intel® hd graphics kbl please provide additional context if applicable you can attach files too but might need to rename to txt or zip are the steps above reproducible with a fresh edit i e after discarding history yes no idea about the others questions
binary_label: 1

Row 375,463 | id: 26,162,895,602 | type: IssuesEvent | created_at: 2022-12-31 21:33:48
repo: stratosphererl/stratosphere | repo_url: https://api.github.com/repos/stratosphererl/stratosphere | action: closed
title: [SPIKE] - Proposal: Split Agenda Board into Many Boards (One for each developer/sprint)
labels: type: documentation priority: low
body:
# Context
The agenda board is getting pretty cluttered and it is a bit difficult to keep myself organized about which stories I am completing and whether they are on the board or not yet
# Timebox
N/A
# Tasks
- [ ] Organize agenda board stories into multiple boards: either per developer or per sprint
index: 1.0
text_combine:
[SPIKE] - Proposal: Split Agenda Board into Many Boards (One for each developer/sprint) - # Context
The agenda board is getting pretty cluttered and it is a bit difficult to keep myself organized about which stories I am completing and whether they are on the board or not yet
# Timebox
N/A
# Tasks
- [ ] Organize agenda board stories into multiple boards: either per developer or per sprint
label: non_process
text:
proposal split agenda board into many boards one for each developer sprint context the agenda board is getting pretty cluttered and it is a bit difficult to keep myself organized about which stories i am completing and whether they are on the board or not yet timebox n a tasks organize agenda board stories into multiple boards either per developer or per sprint
binary_label: 0

Row 5,555 | id: 8,394,588,028 | type: IssuesEvent | created_at: 2018-10-10 01:34:54
repo: nion-software/nionswift | repo_url: https://api.github.com/repos/nion-software/nionswift | action: opened
title: Snapshot of 16GB HDF5 data fails
labels: f - processing level - normal p2 - high type - bug w4 - ready
body:
This seems to be a low level bug. 16GB HDF5 backed data copied using numpy.copy(data) returns all zeros.
index: 1.0
text_combine:
Snapshot of 16GB HDF5 data fails - This seems to be a low level bug. 16GB HDF5 backed data copied using numpy.copy(data) returns all zeros.
label: process
text:
snapshot of data fails this seems to be a low level bug backed data copied using numpy copy data returns all zeros
binary_label: 1

Row 18,216 | id: 12,836,638,111 | type: IssuesEvent | created_at: 2020-07-07 14:38:23
repo: enarx/enarx | repo_url: https://api.github.com/repos/enarx/enarx | action: closed
title: Intel SGX revision 33 patches
labels: infrastructure intel sgx
body:
[Revision 33](https://patchwork.kernel.org/cover/11610831/) has been sent to the linux-sgx mailing list.
These are based on 5.8.
index: 1.0
text_combine:
Intel SGX revision 33 patches - [Revision 33](https://patchwork.kernel.org/cover/11610831/) has been sent to the linux-sgx mailing list.
These are based on 5.8.
label: non_process
text:
intel sgx revision patches has been sent to the linux sgx mailing list these are based on
binary_label: 0

Row 1,854 | id: 4,651,571,565 | type: IssuesEvent | created_at: 2016-10-03 10:40:14
repo: opentrials/opentrials | repo_url: https://api.github.com/repos/opentrials/opentrials | action: opened
title: Validate URLs before adding them to the database
labels: 0. Ready for Analysis Processors
body:
We use URLs in:
* documents.url
* files.url
* publications.source_url
* records.source_url
* sources.url
* sources.terms_and_conditions_url
To avoid we inserting invalid URLs into these fields (e.g. `foobar`), we should validate them on the processors' writers. They must all be fully qualified (i.e. `https://example.com` instead of `example.com`), so you could validate them with something like:
```python
import urlparse
def is_fully_qualified_remote_url(url):
parsed_url = urlparse.urlparse(url)
if parsed_url.scheme and parsed_url.netloc:
return parsed_url.scheme != 'file'
```
This check should be done in the `processors/base/writers/*.py` level, so every processor that uses them will implement the validation.
index: 1.0
text_combine:
Validate URLs before adding them to the database - We use URLs in:
* documents.url
* files.url
* publications.source_url
* records.source_url
* sources.url
* sources.terms_and_conditions_url
To avoid we inserting invalid URLs into these fields (e.g. `foobar`), we should validate them on the processors' writers. They must all be fully qualified (i.e. `https://example.com` instead of `example.com`), so you could validate them with something like:
```python
import urlparse
def is_fully_qualified_remote_url(url):
parsed_url = urlparse.urlparse(url)
if parsed_url.scheme and parsed_url.netloc:
return parsed_url.scheme != 'file'
```
This check should be done in the `processors/base/writers/*.py` level, so every processor that uses them will implement the validation.
label: process
text:
validate urls before adding them to the database we use urls in documents url files url publications source url records source url sources url sources terms and conditions url to avoid we inserting invalid urls into these fields e g foobar we should validate them on the processors writers they must all be fully qualified i e instead of example com so you could validate them with something like python import urlparse def is fully qualified remote url url parsed url urlparse urlparse url if parsed url scheme and parsed url netloc return parsed url scheme file this check should be done in the processors base writers py level so every processor that uses them will implement the validation
binary_label: 1

Row 1,151 | id: 3,633,939,081 | type: IssuesEvent | created_at: 2016-02-11 16:18:19
repo: geneontology/go-ontology | repo_url: https://api.github.com/repos/geneontology/go-ontology | action: closed
title: NTR: cellular response to pulsatile (and: oscillatory) fluid shear stress
labels: BHF-UCL miRNA New term request pending RNA processes
body:
Dear Biocurators,
I am writing to request a new GO term, which arose whilst annotating paper PMID: 21768538 (Wu et al., 2011).
It is demonstrated in Figures 3 and 4 in this paper that the type of fluid shear stress, which endothelial cells are exposed to, affects the expression of miR-92a and, consequently, its target: the transcription factor Kru¨ppel-like factor 2 (KLF2).
I was able to capture the data presented in Figure 3 using term GO:0071499: ‘cellular response to laminar fluid shear stress’.
I order to capture the information from Figure 4, I wish to request two sibling terms:
1) ‘cellular response to pulsatile fluid shear stress’
2) ‘cellular response to oscillatory fluid shear stress’
Like their existing sibling, these terms would be is_a child terms to two parents:
GO:0071498: ‘cellular response to fluid shear stress’; and
GO:0034616: ‘response to laminar fluid shear stress’.
DbxREFs: GOC:BHF, GOC:BHF_miRNA, GOC:bc
I will look forward to hearing from you with regard to my request.
Thank you,
Barbara
cc: @RLovering
cc: @rachhuntley
index: 1.0
text_combine:
NTR: cellular response to pulsatile (and: oscillatory) fluid shear stress - Dear Biocurators,
I am writing to request a new GO term, which arose whilst annotating paper PMID: 21768538 (Wu et al., 2011).
It is demonstrated in Figures 3 and 4 in this paper that the type of fluid shear stress, which endothelial cells are exposed to, affects the expression of miR-92a and, consequently, its target: the transcription factor Kru¨ppel-like factor 2 (KLF2).
I was able to capture the data presented in Figure 3 using term GO:0071499: ‘cellular response to laminar fluid shear stress’.
I order to capture the information from Figure 4, I wish to request two sibling terms:
1) ‘cellular response to pulsatile fluid shear stress’
2) ‘cellular response to oscillatory fluid shear stress’
Like their existing sibling, these terms would be is_a child terms to two parents:
GO:0071498: ‘cellular response to fluid shear stress’; and
GO:0034616: ‘response to laminar fluid shear stress’.
DbxREFs: GOC:BHF, GOC:BHF_miRNA, GOC:bc
I will look forward to hearing from you with regard to my request.
Thank you,
Barbara
cc: @RLovering
cc: @rachhuntley
label: process
text:
ntr cellular response to pulsatile and oscillatory fluid shear stress dear biocurators i am writing to request a new go term which arose whilst annotating paper pmid wu et al it is demonstrated in figures and in this paper that the type of fluid shear stress which endothelial cells are exposed to affects the expression of mir and consequently its target the transcription factor kru¨ppel like factor i was able to capture the data presented in figure using term go ‘cellular response to laminar fluid shear stress’ i order to capture the information from figure i wish to request two sibling terms ‘cellular response to pulsatile fluid shear stress’ ‘cellular response to oscillatory fluid shear stress’ like their existing sibling these terms would be is a child terms to two parents go ‘cellular response to fluid shear stress’ and go ‘response to laminar fluid shear stress’ dbxrefs goc bhf goc bhf mirna goc bc i will look forward to hearing from you with regard to my request thank you barbara cc rlovering cc rachhuntley
binary_label: 1

Row 21,704 | id: 30,201,863,799 | type: IssuesEvent | created_at: 2023-07-05 06:39:30
repo: juspay/hyperswitch | repo_url: https://api.github.com/repos/juspay/hyperswitch | action: closed
title: [FEATURE] Send email to merchant notifying api_key expiry
labels: A-core A-process-tracker C-feature C-tracking-issue
body:
### Feature Description
Email Service
Subissues :-
- [x] #1094
- [x] #1095
### Possible Implementation
Included in each of the subissues.
### Have you spent some time to check if this feature request has been raised before?
- [X] I checked and didn't find similar issue
### Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/juspay/hyperswitch/blob/main/docs/CONTRIBUTING.md)
### Are you willing to submit a PR?
None
index: 1.0
text_combine:
[FEATURE] Send email to merchant notifying api_key expiry - ### Feature Description
Email Service
Subissues :-
- [x] #1094
- [x] #1095
### Possible Implementation
Included in each of the subissues.
### Have you spent some time to check if this feature request has been raised before?
- [X] I checked and didn't find similar issue
### Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/juspay/hyperswitch/blob/main/docs/CONTRIBUTING.md)
### Are you willing to submit a PR?
None
label: process
text:
send email to merchant notifying api key expiry feature description email service subissues possible implementation included in each of the subissues have you spent some time to check if this feature request has been raised before i checked and didn t find similar issue have you read the contributing guidelines i have read the are you willing to submit a pr none
binary_label: 1

Row 86,589 | id: 24,898,972,520 | type: IssuesEvent | created_at: 2022-10-28 18:42:21
repo: opensearch-project/OpenSearch-Dashboards | repo_url: https://api.github.com/repos/opensearch-project/OpenSearch-Dashboards | action: closed
title: [D&D] Replace wizard naming convention
labels: vis builder v2.4.0
body:
Renaming "Wizard" to "Visualization Builder"
Tasks:
In code reference of 'wizard' string:
- [x] Rename `wizard` in code to `visBuilder` in function name, type name and class name
- [x] Replace class name prefixes `wiz` with `vb`
- [x] Replace file name and paths of 'wizard' to 'vis_builder'
- [x] Rename 'wizard' saved object type
UI/UX reference of 'wizard' string: https://github.com/opensearch-project/ux/issues/49
- [x] wizard editing page
- [x] dashboard page
- [x] visualization page
- [x] saved object page
|
1.0
|
[D&D] Replace wizard naming convention - Renaming "Wizard" to "Visualization Builder"
Tasks:
In code reference of 'wizard' string:
- [x] Rename `wizard` in code to `visBuilder` in function name, type name and class name
- [x] Replace class name prefixes `wiz` with `vb`
- [x] Replace file name and paths of 'wizard' to 'vis_builder'
- [x] Rename 'wizard' saved object type
UI/UX reference of 'wizard' string: https://github.com/opensearch-project/ux/issues/49
- [x] wizard editing page
- [x] dashboard page
- [x] visualization page
- [x] saved object page
|
non_process
|
replace wizard naming convention renaming wizard to visualization builder tasks in code reference of wizard string rename wizard in code to visbuilder in function name type name and class name replace class name prefixes wiz with vb replace file name and paths of wizard to vis builder rename wizard saved object type ui ux reference of wizard string wizard editing page dashboard page visualization page saved object page
| 0
|
91,152
| 18,354,245,543
|
IssuesEvent
|
2021-10-08 15:56:50
|
google/web-stories-wp
|
https://api.github.com/repos/google/web-stories-wp
|
closed
|
Story Auto Analytics: Remove feature flag code
|
Type: Enhancement P2 Type: Code Quality Pod: WP & Infra Group: Analytics
|
<!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Feature Description
<!-- A clear and concise description of what the problem is and what you want to happen. -->
After +1 release we want to completely remove the feature flag code.
## Alternatives Considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional Context
<!-- Add any other context or screenshots about the feature request. -->
|
1.0
|
Story Auto Analytics: Remove feature flag code - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Feature Description
<!-- A clear and concise description of what the problem is and what you want to happen. -->
After +1 release we want to completely remove the feature flag code.
## Alternatives Considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional Context
<!-- Add any other context or screenshots about the feature request. -->
|
non_process
|
story auto analytics remove feature flag code feature description after release we want to completely remove the feature flag code alternatives considered additional context
| 0
|
17,270
| 23,051,022,661
|
IssuesEvent
|
2022-07-24 16:24:56
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Not Without My Dolphin from The Boys
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Not Without My Dolphin
Type (film/tv show): Film
Film or show in which it appears: The Boys
Is the parent film/show streaming anywhere? Amazon Prime
About when in the parent film/show does it appear? Season 3 Episode 2. You can also just find the fake trailer on youtube.
Actual footage of the film/show can be seen (yes/no)? Yes
|
1.0
|
Add Not Without My Dolphin from The Boys - Please add as much of the following info as you can:
Title: Not Without My Dolphin
Type (film/tv show): Film
Film or show in which it appears: The Boys
Is the parent film/show streaming anywhere? Amazon Prime
About when in the parent film/show does it appear? Season 3 Episode 2. You can also just find the fake trailer on youtube.
Actual footage of the film/show can be seen (yes/no)? Yes
|
process
|
add not without my dolphin from the boys please add as much of the following info as you can title not without my dolphin type film tv show film film or show in which it appears the boys is the parent film show streaming anywhere amazon prime about when in the parent film show does it appear season episode you can also just find the fake trailer on youtube actual footage of the film show can be seen yes no yes
| 1
|
6,617
| 9,698,944,256
|
IssuesEvent
|
2019-05-26 09:50:12
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
zonal histogram wrong
|
Bug Processing
|
Author Name: **yonghyun kim** (yonghyun kim)
Original Redmine Issue: [22104](https://issues.qgis.org/issues/22104)
Affected QGIS version: 3.4.8
Redmine category:processing/qgis
---
The zonal histogram is not generated by the raster band setting and the raster value is listed as the field value of the output.
For example,
the raster band type is singleband pseudocolor,
raster value range = 0 ~ 100
the band is classified into 5 classes (20, 40, 60, 80, 100)
But the output is strange, like below
----------------------
HISTO_10 int8 -1 0
HISTO_10.0033 int8 -1 0
HISTO_10.004 int8 -1 0
HISTO_10.0041 int8 -1 0
........
HISTO_100.0 int8 -1 0
----------------------------------------
Is this normal output?
I expect this function to behave the same as the zonal histogram in ArcGIS
|
1.0
|
zonal histogram wrong - Author Name: **yonghyun kim** (yonghyun kim)
Original Redmine Issue: [22104](https://issues.qgis.org/issues/22104)
Affected QGIS version: 3.4.8
Redmine category:processing/qgis
---
The zonal histogram is not generated by the raster band setting and the raster value is listed as the field value of the output.
For example,
the raster band type is singleband pseudocolor,
raster value range = 0 ~ 100
the band is classified into 5 classes (20, 40, 60, 80, 100)
But the output is strange, like below
----------------------
HISTO_10 int8 -1 0
HISTO_10.0033 int8 -1 0
HISTO_10.004 int8 -1 0
HISTO_10.0041 int8 -1 0
........
HISTO_100.0 int8 -1 0
----------------------------------------
Is this normal output?
I expect this function to behave the same as the zonal histogram in ArcGIS
|
process
|
zonal histogram wrong author name yonghyun kim yonghyun kim original redmine issue affected qgis version redmine category processing qgis the zonal histogram is not generated by the raster band setting and the raster value is listed as the field value of the output for example the raster band type is singleband pseudocolor rast value range band is classified by but oustput is stragne like below histo histo histo histo histo is it normal output i expect this function is same zonal histtogram in arcgis
| 1
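The behaviour reported in the QGIS record above is easier to see with a small stand-alone sketch. This is a hedged illustration in plain NumPy, not the QGIS Zonal Histogram implementation: the raster values, the zone layout and the 20/40/60/80/100 class edges are invented for the example. It shows why histogramming the raw float values yields one HISTO_<value> field per distinct pixel value, while histogramming the reclassified bands yields the five columns the reporter expected.
```python
# Illustrative sketch only -- plain NumPy, not the QGIS algorithm itself.
# The raster values, zones and class edges below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
raster = rng.uniform(0, 100, size=(4, 4))   # continuous values in 0..100
zones = np.array([[1, 1, 2, 2]] * 4)        # two zone polygons

# Histogram over the *raw* float values: every distinct pixel value becomes
# its own "HISTO_<value>" column, which matches the output shown in the issue.
raw_columns = {f"HISTO_{v:.4f}": int((raster == v).sum()) for v in np.unique(raster)}
print(len(raw_columns), "columns when raw values are used")

# Histogram after reclassifying into the 5 bands (20, 40, 60, 80, 100):
edges = [0, 20, 40, 60, 80, 100]
classed = np.digitize(raster, edges[1:], right=True)   # class index 0..4
for zone in np.unique(zones):
    counts = np.bincount(classed[zones == zone], minlength=5)
    print(f"zone {zone}:", {f"HISTO_{edges[i + 1]}": int(c) for i, c in enumerate(counts)})
```
Read this way, the output in the report is "normal" for a continuous raster; reclassifying or rounding the band before running the tool gives the ArcGIS-style five-bucket result.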
|
206,665
| 7,114,742,946
|
IssuesEvent
|
2018-01-18 02:28:49
|
lidarr/Lidarr
|
https://api.github.com/repos/lidarr/Lidarr
|
closed
|
Allow Select Wanted Release(s) for a Given Release Group/Album
|
db-migration enhancement metadata change priority:high ui
|
Allow user to select which release(s) they want for a given album instead of just the default returned from metadata.
Metadata currently returns all releases for a given album during initial artist info pull, however we may have to add another route to allow retrieving additional info for a release. Track data will need to be removed and replaced with selected release track data
We also need to consider that we must ensure we refresh correct release during Artist refresh and not override track data with default release track info.
|
1.0
|
Allow Select Wanted Release(s) for a Given Release Group/Album - Allow user to select which release(s) they want for a given album instead of just the default returned from metadata.
Metadata currently returns all releases for a given album during initial artist info pull, however we may have to add another route to allow retrieving additional info for a release. Track data will need to be removed and replaced with selected release track data
We also need to consider that we must ensure we refresh correct release during Artist refresh and not override track data with default release track info.
|
non_process
|
allow select wanted release s for a given release group album allow user to select which release s they want for a given album instead of just the default returned from metadata metadata currently returns all releases for a given album during initial artist info pull however we may have to add another route to allow retrieving additional info for a release track data will need to be removed and replaced with selected release track data we also need to consider that we must ensure we refresh correct release during artist refresh and not override track data with default release track info
| 0
|
334,382
| 24,416,186,326
|
IssuesEvent
|
2022-10-05 16:04:41
|
ProfessionalWiki/WikibaseRDF
|
https://api.github.com/repos/ProfessionalWiki/WikibaseRDF
|
closed
|
Create extension's page on MediaWiki.org
|
documentation
|
... at [Extension:Wikibase_RDF](https://www.mediawiki.org/wiki/Extension:Wikibase_RDF) and provide basic information about it.
<!-- - [ ] https://www.wikidata.org/wiki/Q112194307 -->
|
1.0
|
Create extension's page on MediaWiki.org - ... at [Extension:Wikibase_RDF](https://www.mediawiki.org/wiki/Extension:Wikibase_RDF) and provide basic information about it.
<!-- - [ ] https://www.wikidata.org/wiki/Q112194307 -->
|
non_process
|
create extension s page on mediawiki org at and provide basic information about it
| 0
|
18,619
| 24,579,426,688
|
IssuesEvent
|
2022-10-13 14:36:28
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Consent API] [Android] Share option is not available for the unsigned consent document in the mobile app
|
Bug P0 Android Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Install the app
2. Sign in / Sign up
3. Click on any study
4. Click on 'View Consent' button in welcome screen and Verify
**AR:** Share option is not available for the unsigned consent document in the mobile app
**ER:** Share option should be available for the unsigned consent document in the mobile app

|
3.0
|
[Consent API] [Android] Share option is not available for the unsigned consent document in the mobile app - **Steps:**
1. Install the app
2. Sign in / Sign up
3. Click on any study
4. Click on 'View Consent' button in welcome screen and Verify
**AR:** Share option is not available for the unsigned consent document in the mobile app
**ER:** Share option should be available for the unsigned consent document in the mobile app

|
process
|
share option is not available for the unsigned consent document in the mobile app steps install the app sign in sign up click on any study click on view consent button in welcome screen and verify ar share option is not available for the unsigned consent document in the mobile app er share option should be available for the unsigned consent document in the mobile app
| 1
|
3,860
| 6,808,627,938
|
IssuesEvent
|
2017-11-04 05:49:35
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
Reminder note
|
libs-etherlib status-inprocess type-enhancement
|
From a TODO in the source code
// TODO(tjayrush): The fields in CBlock class are the least stuff QuickBlocks needs, I should create a CMegaBlock that contains all fields from the node just in case someone needs everything for some reason.
|
1.0
|
Reminder note - From a TODO in the source code
// TODO(tjayrush): The fields in CBlock class are the least stuff QuickBlocks needs, I should create a CMegaBlock that contains all fields from the node just in case someone needs everything for some reason.
|
process
|
reminder note from a todo in the source code todo tjayrush the fields in cblock class are the least stuff quickblocks needs i should create a cmegablock that contains all fields from the node just in case someone needs everything for some reason
| 1
|
697,471
| 23,940,609,500
|
IssuesEvent
|
2022-09-11 21:07:10
|
philhawksworth/the-united-effort-orginization
|
https://api.github.com/repos/philhawksworth/the-united-effort-orginization
|
closed
|
Remove Income Bracket column from rent tables with a single row
|
enhancement Priority 3 starter
|
The Income Bracket column in a property's [rent tables](https://www.theunitedeffort.org/housing/affordable-housing/902) is meant to explain to the user why there are multiple rows in the rent table. That column often shows the AMI % associated with a given rent offering, but sometimes no AMI % is available. In that case, placeholder text, e.g. "Bracket 1" is rendered instead. For rent tables with multiple rows, the AMI % or placeholder text is helpful. For rent tables with a single row, the placeholder text is less useful and can even be confusing.
Thus, we should delete the Income Bracket column from the table altogether if:
- the table has a single row
- no AMI percentage is associated with that row
|
1.0
|
Remove Income Bracket column from rent tables with a single row - The Income Bracket column in a property's [rent tables](https://www.theunitedeffort.org/housing/affordable-housing/902) is meant to explain to the user why there are multiple rows in the rent table. That column often shows the AMI % associated with a given rent offering, but sometimes no AMI % is available. In that case, placeholder text, e.g. "Bracket 1" is rendered instead. For rent tables with multiple rows, the AMI % or placeholder text is helpful. For rent tables with a single row, the placeholder text is less useful and can even be confusing.
Thus, we should delete the Income Bracket column from the table altogether if:
- the table has a single row
- no AMI percentage is associated with that row
|
non_process
|
remove income bracket column from rent tables with a single row the income bracket column in a property s is meant to explain to the user why there are multiple rows in the rent table that column often shows the ami associated with a given rent offering but sometimes no ami is available in that case placeholder text e g bracket is rendered instead for rent tables with multiple rows the ami or placeholder text is helpful for rent tables with a single row the placeholder text is less useful and can even be confusing thus we should delete the income bracket column from the table altogether if the table has a single row no ami percentage is associated with that row
| 0
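The display rule described in the record above (drop the Income Bracket column only when there is a single row and no AMI percentage) reduces to a simple predicate. A hedged Python sketch follows, with an invented row structure that is not the project's actual data model:
```python
# Hedged sketch of the column-visibility rule described above.
# The dict-based row structure is invented for illustration only.
def show_income_bracket_column(rows: list[dict]) -> bool:
    """Hide the Income Bracket column when the table has a single row
    and that row has no AMI percentage to display."""
    return not (len(rows) == 1 and rows[0].get("ami_percent") is None)

print(show_income_bracket_column([{"rent": 1200, "ami_percent": None}]))   # False -> hide
print(show_income_bracket_column([{"rent": 1200, "ami_percent": 60},
                                  {"rent": 1500, "ami_percent": 80}]))     # True  -> show
```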
|
139,417
| 18,852,039,576
|
IssuesEvent
|
2021-11-11 22:21:21
|
DemoEnv/Java-Demo
|
https://api.github.com/repos/DemoEnv/Java-Demo
|
opened
|
CVE-2016-3092 (High) detected in commons-fileupload-1.3.1.jar
|
security vulnerability
|
## CVE-2016-3092 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.3.1.jar</b></p></summary>
<p>The Apache Commons FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Library home page: <a href="http://commons.apache.org/proper/commons-fileupload/">http://commons.apache.org/proper/commons-fileupload/</a></p>
<p>Path to dependency file: Java-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.1/commons-fileupload-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **commons-fileupload-1.3.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DemoEnv/Java-Demo/commit/43308dc67d60bc98113872a647b47a4971a2ff2a">43308dc67d60bc98113872a647b47a4971a2ff2a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause a denial of service (CPU consumption) via a long boundary string.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3092>CVE-2016-3092</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092</a></p>
<p>Release Date: 2016-07-04</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M8,8.5.3,8.0.36,7.0.70,org.apache.tomcat:tomcat-coyote:9.0.0.M8,8.5.3,8.0.36,7.0.70,commons-fileupload:commons-fileupload:1.3.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-fileupload","packageName":"commons-fileupload","packageVersion":"1.3.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;commons-fileupload:commons-fileupload:1.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M8,8.5.3,8.0.36,7.0.70,org.apache.tomcat:tomcat-coyote:9.0.0.M8,8.5.3,8.0.36,7.0.70,commons-fileupload:commons-fileupload:1.3.2"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-3092","vulnerabilityDetails":"The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause a denial of service (CPU consumption) via a long boundary string.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3092","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2016-3092 (High) detected in commons-fileupload-1.3.1.jar - ## CVE-2016-3092 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.3.1.jar</b></p></summary>
<p>The Apache Commons FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Library home page: <a href="http://commons.apache.org/proper/commons-fileupload/">http://commons.apache.org/proper/commons-fileupload/</a></p>
<p>Path to dependency file: Java-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.1/commons-fileupload-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **commons-fileupload-1.3.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DemoEnv/Java-Demo/commit/43308dc67d60bc98113872a647b47a4971a2ff2a">43308dc67d60bc98113872a647b47a4971a2ff2a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause a denial of service (CPU consumption) via a long boundary string.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3092>CVE-2016-3092</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092</a></p>
<p>Release Date: 2016-07-04</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M8,8.5.3,8.0.36,7.0.70,org.apache.tomcat:tomcat-coyote:9.0.0.M8,8.5.3,8.0.36,7.0.70,commons-fileupload:commons-fileupload:1.3.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-fileupload","packageName":"commons-fileupload","packageVersion":"1.3.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;commons-fileupload:commons-fileupload:1.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M8,8.5.3,8.0.36,7.0.70,org.apache.tomcat:tomcat-coyote:9.0.0.M8,8.5.3,8.0.36,7.0.70,commons-fileupload:commons-fileupload:1.3.2"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-3092","vulnerabilityDetails":"The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause a denial of service (CPU consumption) via a long boundary string.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3092","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in commons fileupload jar cve high severity vulnerability vulnerable library commons fileupload jar the apache commons fileupload component provides a simple yet flexible means of adding support for multipart file upload functionality to servlets and web applications library home page a href path to dependency file java demo pom xml path to vulnerable library home wss scanner repository commons fileupload commons fileupload commons fileupload jar dependency hierarchy esapi jar root library x commons fileupload jar vulnerable library found in head commit a href found in base branch main vulnerability details the multipartstream class in apache commons fileupload before as used in apache tomcat x before x before x before and x before and other products allows remote attackers to cause a denial of service cpu consumption via a long boundary string publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote commons fileupload commons fileupload isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org owasp esapi esapi commons fileupload commons fileupload isminimumfixversionavailable true minimumfixversion org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote commons fileupload commons fileupload basebranches vulnerabilityidentifier cve vulnerabilitydetails the multipartstream class in apache commons fileupload before as used in apache tomcat x before x before x before and x before and other products allows remote attackers to cause a denial of service cpu consumption via a long boundary string vulnerabilityurl
| 0
|
68,088
| 7,087,507,660
|
IssuesEvent
|
2018-01-11 18:03:49
|
publiclab/plots2
|
https://api.github.com/repos/publiclab/plots2
|
closed
|
Create new inline grid for wiki pages, with format [wikis:_tagname_]
|
Ruby enhancement help-wanted testing
|
## New feature
Our inline power tags let us make grids for activities, like in https://publiclab.org/wiki/activities
Let's make a new one to display lists of wiki pages! It could have the syntax `[wikis:_tagname_]` where `_tagname_` is something like "balloon-mapping" -- so it would show wiki pages tagged with "balloon-mapping", as currently shown on these pages:
* https://publiclab.org/wiki/tag/balloon-mapping
* https://publiclab.org/wiki/tag/hydrogen-sulfide
## Changes
For how we currently set up a grid using notes, see `def notes_grid`:
https://github.com/publiclab/plots2/blob/master/app/models/concerns/node_shared.rb#L31-L52
For the template we'll have to copy into `_wikis.html.erb`, see:
https://github.com/publiclab/plots2/blob/master/app/views/grids/_notes.html.erb
I believe this is it -- but there may be one more step to get this working. Let's start with this and we can ensure it works by adding a test to this file, similarly to how we test notes grids;
https://github.com/publiclab/plots2/blob/master/test/unit/node_shared_test.rb#L4-L12
The end result is that we should be able to put `[wikis:test]` onto a wiki page, and it should show all wiki pages that have been tagged with the word `test`.
Make sense? This is a big one, with several moving parts, but it should be easy to write a test for and make progress on. And I'm happy to help! Thanks!
|
1.0
|
Create new inline grid for wiki pages, with format [wikis:_tagname_] - ## New feature
Our inline power tags let us make grids for activities, like in https://publiclab.org/wiki/activities
Let's make a new one to display lists of wiki pages! It could have the syntax `[wikis:_tagname_]` where `_tagname_` is something like "balloon-mapping" -- so it would show wiki pages tagged with "balloon-mapping", as currently shown on these pages:
* https://publiclab.org/wiki/tag/balloon-mapping
* https://publiclab.org/wiki/tag/hydrogen-sulfide
## Changes
For how we currently set up a grid using notes, see `def notes_grid`:
https://github.com/publiclab/plots2/blob/master/app/models/concerns/node_shared.rb#L31-L52
For the template we'll have to copy into `_wikis.html.erb`, see:
https://github.com/publiclab/plots2/blob/master/app/views/grids/_notes.html.erb
I believe this is it -- but there may be one more step to get this working. Let's start with this and we can ensure it works by adding a test to this file, similarly to how we test notes grids;
https://github.com/publiclab/plots2/blob/master/test/unit/node_shared_test.rb#L4-L12
The end result is that we should be able to put `[wikis:test]` onto a wiki page, and it should show all wiki pages that have been tagged with the word `test`.
Make sense? This is a big one, with several moving parts, but it should be easy to write a test for and make progress on. And I'm happy to help! Thanks!
|
non_process
|
create new inline grid for wiki pages with format new feature our inline power tags let us make grids for activities like in let s make a new one to display lists of wiki pages it could have the syntax where tagname is something like balloon mapping so it would show wiki pages tagged with balloon mapping as currently shown on these pages changes for how we currently set up a grid using notes see def notes grid for the template we ll have to copy into wikis html erb see i believe this is it but there may be one more step to get this working let s start with this and we can ensure it works by adding a test to this file similarly to how we test notes grids the end result is that we should be able to put onto a wiki page and it should show all wiki pages that have been tagged with the word test make sense this is a big one with several moving parts but it should be easy to write a test for and make progress on and i m happy to help thanks
| 0
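The feature in the record above hinges on recognising a `[wikis:_tagname_]` marker in wiki text and swapping it for a grid of pages carrying that tag. The plots2 code is Ruby/ERB; as a language-neutral illustration, here is a hedged Python sketch of only the marker-parsing step -- the regex and sample text are assumptions, not the project's implementation:
```python
# Hedged sketch: find [wikis:<tagname>] power tags in a body of wiki text.
# The regex and the sample body are illustrative assumptions only.
import re

POWER_TAG = re.compile(r"\[wikis:([\w-]+)\]")

body = "Intro text [wikis:balloon-mapping] and later [wikis:test]."
for tag in POWER_TAG.findall(body):
    print(f"would render a grid of wiki pages tagged '{tag}'")
```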
|
13,567
| 16,105,621,930
|
IssuesEvent
|
2021-04-27 14:37:51
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Close specified fd may cause a double free
|
fs process
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in PowerShell console (Windows)
Subsystem: if known, please specify affected core module name
-->
* **Version**: v16.0.0-pre
* **Platform**: Linux 5.8.0-38-generic #43~20.04.1-Ubuntu SMP Tue Jan 12 16:39:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: fs.close/fs.closeSync
### What steps will reproduce the bug?
Setup a node instance,
```
» node
```
and run the following javascript code.
```
fs.closeSync(8);
// fs.close(8, (err)=>{}) works too.
process.exit();
```
Then the node instance aborts.
If `fs.closeSync(8)` is invoked twice, a "bad file descriptor" error message is shown instead.
However, when exiting the process with `process.exit()`, an abort occurs. I'm not sure if there is any other way to trigger this problem.
This issue is almost the same as #37874, but since a double-free may have been triggered, the security risk should perhaps be considered.
Feel free to close this issue if you think it is not important.
<!--
Enter details about your bug, preferably a simple code snippet that can be
run using `node` directly without installing third-party dependencies.
-->
### How often does it reproduce? Is there a required condition?
This abort can always be triggered following the steps above.
### What is the expected behavior?
If any error occurs, an exception or other similar error-reporting stuff should be thrown. There is no reason to abort the whole node process.
<!--
If possible please provide textual output instead of screenshots.
-->
### What do you see instead?
```
» node
Welcome to Node.js v16.0.0-pre.
Type ".help" for more information.
> fs.closeSync(8);
undefined
> process.exit()
[1] 95516 abort (core dumped) /home/zys/Toolchains/node/node
```
<!--
If possible please provide textual output instead of screenshots.
-->
### Additional information
<!--
Tell us anything else you think we should know.
-->
|
1.0
|
Close specified fd may cause a double free - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in PowerShell console (Windows)
Subsystem: if known, please specify affected core module name
-->
* **Version**: v16.0.0-pre
* **Platform**: Linux 5.8.0-38-generic #43~20.04.1-Ubuntu SMP Tue Jan 12 16:39:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: fs.close/fs.closeSync
### What steps will reproduce the bug?
Setup a node instance,
```
» node
```
and run the following javascript code.
```
fs.closeSync(8);
// fs.close(8, (err)=>{}) works too.
process.exit();
```
Then the node instance aborts.
If `fs.closeSync(8)` is invoked twice, a "bad file descriptor" error message is shown instead.
However, when exiting the process with `process.exit()`, an abort occurs. I'm not sure if there is any other way to trigger this problem.
This issue is almost the same as #37874, but since a double-free may have been triggered, the security risk should perhaps be considered.
Feel free to close this issue if you think it is not important.
<!--
Enter details about your bug, preferably a simple code snippet that can be
run using `node` directly without installing third-party dependencies.
-->
### How often does it reproduce? Is there a required condition?
This abort can always be triggered following the steps above.
### What is the expected behavior?
If any error occurs, an exception or other similar error-reporting stuff should be thrown. There is no reason to abort the whole node process.
<!--
If possible please provide textual output instead of screenshots.
-->
### What do you see instead?
```
» node
Welcome to Node.js v16.0.0-pre.
Type ".help" for more information.
> fs.closeSync(8);
undefined
> process.exit()
[1] 95516 abort (core dumped) /home/zys/Toolchains/node/node
```
<!--
If possible please provide textual output instead of screenshots.
-->
### Additional information
<!--
Tell us anything else you think we should know.
-->
|
process
|
close specified fd may cause a double free thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or output of osversion foreach object versionstring if else in powershell console windows subsystem if known please specify affected core module name version pre platform linux generic ubuntu smp tue jan utc gnu linux subsystem fs close fs closesync what steps will reproduce the bug setup a node instance » node and run the following javascript code fs closesync fs close err works too process exit then the node instance occurs an abort if invoking fs closesync twice then a bad file descriptor error message would be alert however when exiting the process with process exit an abort occurs i m not sure if any other way to trigger this problem this issue is almost the same as but since a double free may have been triggered maybe security risk should be considered feel free to close this issue if you think nothing important enter details about your bug preferably a simple code snippet that can be run using node directly without installing third party dependencies how often does it reproduce is there a required condition this abort can always be triggered following the steps above what is the expected behavior if any error occurs an exception or other similar error reporting stuff should be thrown there is no reason to abort the whole node process if possible please provide textual output instead of screenshots what do you see instead » node welcome to node js pre type help for more information fs closesync undefined process exit abort core dumped home zys toolchains node node if possible please provide textual output instead of screenshots additional information tell us anything else you think we should know
| 1
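The hazard described in the Node.js record above -- user code closing a file descriptor the runtime still owns, so a later cleanup closes it a second time -- is not specific to Node. A hedged Python sketch of the same pattern on a POSIX system (the descriptors come from `os.pipe()` here purely for illustration, not Node's fd 8):
```python
# Minimal sketch of the double-close hazard discussed above; assumes POSIX.
# The descriptor numbers are whatever os.pipe() hands back, not Node's fd 8.
import os

r, w = os.pipe()      # a descriptor pair that "someone else" still owns
os.close(r)           # first close succeeds, like fs.closeSync(fd)

try:
    os.close(r)       # second close: the descriptor is already gone
except OSError as e:
    print("second close failed:", e)   # EBADF: Bad file descriptor

# The nastier case is when the number has been re-used in the meantime:
# a second close then silently closes an unrelated resource instead of
# failing with EBADF, which is why double frees are treated as a risk.
os.close(w)
```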
|
14,843
| 18,237,327,017
|
IssuesEvent
|
2021-10-01 08:37:46
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Error: [libs/datamodel/connectors/dml/src/model.rs:222:92] called `Option::unwrap()` on a `None` value
|
bug/1-repro-available kind/bug process/candidate topic: error reporting team/migrations
|
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `3.0.2`
Binary Version: `2452cc6313d52b8b9a96999ac0e974d0aedf88db`
Report: https://prisma-errors.netlify.app/report/13530
OS: `x64 linux 5.11.0-36-generic`
JS Stacktrace:
```
Error: [libs/datamodel/connectors/dml/src/model.rs:222:92] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (/<censored>/node_modules/prisma/build/index.js:41925:28)
at ChildProcess.emit (events.js:400:28)
at ChildProcess.emit (domain.js:470:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
```
Rust Stacktrace:
```
0: user_facing_errors::Error::new_in_panic_hook
1: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
2: std::panicking::rust_panic_with_hook
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:595:17
3: std::panicking::begin_panic_handler::{{closure}}
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:495:13
4: std::sys_common::backtrace::__rust_end_short_backtrace
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:141:18
5: rust_begin_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:493:5
6: core::panicking::panic_fmt
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:92:14
7: core::panicking::panic
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:50:5
8: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
9: dml::model::Model::unique_criterias
10: dml::model::Model::strict_unique_criterias_disregarding_unsupported
11: sql_introspection_connector::commenting_out_guardrails::commenting_out_guardrails
12: sql_introspection_connector::calculate_datamodel::calculate_datamodel
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
16: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
17: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
18: introspection_engine::main::{{closure}}
19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
20: introspection_engine::main
21: std::sys_common::backtrace::__rust_begin_short_backtrace
22: std::rt::lang_start::{{closure}}
23: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/ops/function.rs:259:13
std::panicking::try::do_call
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:379:40
std::panicking::try
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:343:19
std::panic::catch_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panic.rs:431:14
std::rt::lang_start_internal
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/rt.rs:34:21
24: std::rt::lang_start
25: __libc_start_main
26: <unknown>
```
|
1.0
|
Error: [libs/datamodel/connectors/dml/src/model.rs:222:92] called `Option::unwrap()` on a `None` value - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `3.0.2`
Binary Version: `2452cc6313d52b8b9a96999ac0e974d0aedf88db`
Report: https://prisma-errors.netlify.app/report/13530
OS: `x64 linux 5.11.0-36-generic`
JS Stacktrace:
```
Error: [libs/datamodel/connectors/dml/src/model.rs:222:92] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (/<censored>/node_modules/prisma/build/index.js:41925:28)
at ChildProcess.emit (events.js:400:28)
at ChildProcess.emit (domain.js:470:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
```
Rust Stacktrace:
```
0: user_facing_errors::Error::new_in_panic_hook
1: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
2: std::panicking::rust_panic_with_hook
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:595:17
3: std::panicking::begin_panic_handler::{{closure}}
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:495:13
4: std::sys_common::backtrace::__rust_end_short_backtrace
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:141:18
5: rust_begin_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:493:5
6: core::panicking::panic_fmt
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:92:14
7: core::panicking::panic
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:50:5
8: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
9: dml::model::Model::unique_criterias
10: dml::model::Model::strict_unique_criterias_disregarding_unsupported
11: sql_introspection_connector::commenting_out_guardrails::commenting_out_guardrails
12: sql_introspection_connector::calculate_datamodel::calculate_datamodel
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
16: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
17: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
18: introspection_engine::main::{{closure}}
19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
20: introspection_engine::main
21: std::sys_common::backtrace::__rust_begin_short_backtrace
22: std::rt::lang_start::{{closure}}
23: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/ops/function.rs:259:13
std::panicking::try::do_call
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:379:40
std::panicking::try
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:343:19
std::panic::catch_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panic.rs:431:14
std::rt::lang_start_internal
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/rt.rs:34:21
24: std::rt::lang_start
25: __libc_start_main
26: <unknown>
```
|
process
|
error called option unwrap on a none value command prisma db pull version binary version report os linux generic js stacktrace error called option unwrap on a none value at childprocess node modules prisma build index js at childprocess emit events js at childprocess emit domain js at process childprocess handle onexit internal child process js rust stacktrace user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook at rustc library std src panicking rs std panicking begin panic handler closure at rustc library std src panicking rs std sys common backtrace rust end short backtrace at rustc library std src sys common backtrace rs rust begin unwind at rustc library std src panicking rs core panicking panic fmt at rustc library core src panicking rs core panicking panic at rustc library core src panicking rs as alloc vec spec from iter specfromiter from iter dml model model unique criterias dml model model strict unique criterias disregarding unsupported sql introspection connector commenting out guardrails commenting out guardrails sql introspection connector calculate datamodel calculate datamodel as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll introspection engine main closure as core future future future poll introspection engine main std sys common backtrace rust begin short backtrace std rt lang start closure core ops function impls for f call once at rustc library core src ops function rs std panicking try do call at rustc library std src panicking rs std panicking try at rustc library std src panicking rs std panic catch unwind at rustc library std src panic rs std rt lang start internal at rustc library std src rt rs std rt lang start libc start main
| 1
|
68,678
| 17,377,912,519
|
IssuesEvent
|
2021-07-31 04:17:26
|
apache/shardingsphere-elasticjob
|
https://api.github.com/repos/apache/shardingsphere-elasticjob
|
closed
|
Project build failed using JDK11
|
build
|
## Bug Report
### Which version of ElasticJob did you use?
3.0.0
### Actual behavior
Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.10.3:jar (attach-javadocs) on project elasticjob-api: MavenReportException: Error while generating Javadoc: Unable to find javadoc command: The javadoc executable 'C:\Program Files\Java\jdk-11.0.2\..\bin\javadoc.exe' doesn't exist or is not a file. Verify the <javadocExecutable/> parameter.
|
1.0
|
Project build failed using JDK11 - ## Bug Report
### Which version of ElasticJob did you use?
3.0.0
### Actual behavior
Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.10.3:jar (attach-javadocs) on project elasticjob-api: MavenReportException: Error while generating Javadoc: Unable to find javadoc command: The javadoc executable 'C:\Program Files\Java\jdk-11.0.2\..\bin\javadoc.exe' doesn't exist or is not a file. Verify the <javadocExecutable/> parameter.
|
non_process
|
project build failed using bug report which version of elasticjob did you use actual behavior failed to execute goal org apache maven plugins maven javadoc plugin jar attach javadocs on project elasticjob api mavenreportexception error while generating javadoc unable to find javadoc command the javadoc executable c program files java jdk bin javadoc exe doesn t exist or is not a file verify the parameter
| 0
|
698,493
| 23,982,521,316
|
IssuesEvent
|
2022-09-13 16:07:13
|
Su386yt/partly-sane-skies
|
https://api.github.com/repos/Su386yt/partly-sane-skies
|
closed
|
Cookie Buff Warning
|
enhancement low priority
|
u/RGBeleuchtang, u/RLV1gaming, u/romin0
A warning when my cookie buff is low or has run out
|
1.0
|
Cookie Buff Warning - u/RGBeleuchtang, u/RLV1gaming, u/romin0
A warning when my cookie buff is low or has run out
|
non_process
|
cookie buff warning u rgbeleuchtang u u a warning when my cookie buff is low or ran out
| 0
|
1,485
| 4,059,009,639
|
IssuesEvent
|
2016-05-25 07:55:44
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Нетішин Хмельницька обл - Погодження розміщення на об'єктах благоустрою пересувних об'єктів сезонної торгівлі та проведення ярмарків
|
In process of testing in work
|
Опрацьовувати їх буде та ж людина - начальник ЦНАПу Кушта Галина
Галина Кушта - начальник ЦНАП (їхня робоча пошта)
cnap_netishyn@ukr.net
neteshin_user1 - логин и пароль
Олена Матросова (менеджер від iGov)
olena.boichuk@gmail.com
neteshin_user2 - логин и пароль
Руслан Рудомський (менеджер від iGov)
nebajduzhyj@gmail.com
neteshin_user3 - логин и пароль
|
1.0
|
Нетішин Хмельницька обл - Погодження розміщення на об'єктах благоустрою пересувних об'єктів сезонної торгівлі та проведення ярмарків - Опрацьовувати їх буде та ж людина - начальник ЦНАПу Кушта Галина
Галина Кушта - начальник ЦНАП (їхня робоча пошта)
cnap_netishyn@ukr.net
neteshin_user1 - логин и пароль
Олена Матросова (менеджер від iGov)
olena.boichuk@gmail.com
neteshin_user2 - логин и пароль
Руслан Рудомський (менеджер від iGov)
nebajduzhyj@gmail.com
neteshin_user3 - логин и пароль
|
process
|
нетішин хмельницька обл погодження розміщення на об єктах благоустрою пересувних об єктів сезонної торгівлі та проведення ярмарків опрацьовувати їх буде та ж людина начальник цнапу кушта галина галина кушта начальник цнап їхня робоча пошта cnap netishyn ukr net neteshin логин и пароль олена матросова менеджер від igov olena boichuk gmail com neteshin логин и пароль руслан рудомський менеджер від igov nebajduzhyj gmail com neteshin логин и пароль
| 1
|
918
| 3,377,770,813
|
IssuesEvent
|
2015-11-25 06:43:09
|
onyx-platform/onyx
|
https://api.github.com/repos/onyx-platform/onyx
|
closed
|
Fix the plugin release process
|
release-process
|
Currently the plugin releases do not update the READMEs which breaks the X.x.x.x releases.
The version should be set back to a snapshot release.
I'd prefer we always release 4 digit releases of plugins.
For example, if we release 0.8.0, then onyx-metrics 0.8.0.0 should be released, then master should be set to 0.8.0.1-SNAPSHOT with the README and any other files reflecting this.
|
1.0
|
Fix the plugin release process - Currently the plugin releases do not update the READMEs which breaks the X.x.x.x releases.
The version should be set back to a snapshot release.
I'd prefer we always release 4 digit releases of plugins.
For example, if we release 0.8.0, then onyx-metrics 0.8.0.0 should be released, then master should be set to 0.8.0.1-SNAPSHOT with the README and any other files reflecting this.
|
process
|
fix the plugin release process currently the plugin releases do not update the readmes which breaks the x x x x releases the version should be set back to a snapshot release i d prefer we always release digit releases of plugins for example if we release then onyx metrics should be released then master should be set to snapshot with the readme and any other files reflecting this
| 1
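The four-digit plugin versioning spelled out in the record above can be captured as a tiny helper. This is a hedged Python sketch, where the function name and the example version strings are invented for illustration and do not come from the Onyx tooling:
```python
# Hedged sketch of the 4-digit plugin release scheme described above.
# Function name and version strings are illustrative assumptions.
def next_plugin_versions(core_version: str, patch: int = 0) -> tuple[str, str]:
    """Return (plugin release to publish, snapshot to set master back to)."""
    release = f"{core_version}.{patch}"                 # e.g. 0.8.0.0
    snapshot = f"{core_version}.{patch + 1}-SNAPSHOT"   # e.g. 0.8.0.1-SNAPSHOT
    return release, snapshot

print(next_plugin_versions("0.8.0"))   # ('0.8.0.0', '0.8.0.1-SNAPSHOT')
```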
|
13,784
| 16,542,868,099
|
IssuesEvent
|
2021-05-27 19:14:32
|
scieloorg/search-journals
|
https://api.github.com/repos/scieloorg/search-journals
|
closed
|
Criar cluster "Periódicos - Status"
|
Processamento
|
Criar um cluster/filtro de nome "Periódicos - Status" que permite filtrar pelo status do periódico na Coleção:
Todos
Títulos correntes
Títulos não-correntes
Por padrão, os resultados de busca devem trazer todos.
@deandr se for necessário envolver o designer nessa atividade, favor designá-lo a esse ticket.
|
1.0
|
Criar cluster "Periódicos - Status" - Criar um cluster/filtro de nome "Periódicos - Status" que permite filtrar pelo status do periódico na Coleção:
Todos
Títulos correntes
Títulos não-correntes
Por padrão, os resultados de busca devem trazer todos.
@deandr se for necessário envolver o designer nessa atividade, favor designá-lo a esse ticket.
|
process
|
criar cluster periódicos status criar um cluster filtro de nome periódicos status que permite filtrar pelo status do periódico na coleção todos títulos correntes títulos não correntes por padrão os resultados de busca devem trazer todos deandr se for necessário envolver o designer nessa atividade favor designá lo a esse ticket
| 1
|
23,015
| 2,651,958,310
|
IssuesEvent
|
2015-03-16 14:54:31
|
nlbdev/nordic-epub3-dtbook-migrator
|
https://api.github.com/repos/nlbdev/nordic-epub3-dtbook-migrator
|
closed
|
Make sure that directory structure is validated
|
2 - High priority dtbook-validator epub3-validator
|
<!---
@huboard:{"order":181.875,"milestone_order":162.0,"custom_state":""}
-->
|
1.0
|
Make sure that directory structure is validated -
<!---
@huboard:{"order":181.875,"milestone_order":162.0,"custom_state":""}
-->
|
non_process
|
make sure that directory structure is validated huboard order milestone order custom state
| 0
|
4,065
| 6,995,959,547
|
IssuesEvent
|
2017-12-15 21:41:08
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
MSBuild Exec task broken with .NET Core 2.1
|
area-System.Diagnostics.Process bug
|
I'm running into an issue [updating the .NET CLI to use .NET Core 2.1](https://github.com/dotnet/cli/pull/7478). My best guess currently is that this is a regression in .NET Core, because it is surfacing when I update to .NET Core 2.1 but worked with .NET Core 2.0.
## Repro steps
- On a non-Windows OS
- With a version of the .NET CLI that uses .NET Core 2.1
- Run `dotnet msbuild` with the following project
```xml
<Project>
<Target Name="Build">
<Exec Command="echo Hello World" />
</Target>
</Project>
```
## Expected
Build succeeds
## Actual
Build fails with the following:
> Microsoft (R) Build Engine version 15.3.409.57025 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
>
> /bin/sh: 1: export LANG=en_US.UTF-8; export LC_ALL=en_US.UTF-8; . /tmp/tmp365c3d61708c464b9358f5adcdafd024.exec.cmd: not found
/mnt/c/git/dotnet-cli-linux/artifacts/testexec/testexec.proj(6,9): error MSB3073: The command "echo Hello World" exited with code 127.
## Details
The MSBuild Exec task [creates a script file in the temporary folder](https://github.com/Microsoft/msbuild/blob/080ef976a428f6ff7bf53ca5dd4ee637b3fe949c/src/Tasks/Exec.cs#L216) in order to [set encodings](https://github.com/Microsoft/msbuild/blob/080ef976a428f6ff7bf53ca5dd4ee637b3fe949c/src/Tasks/Exec.cs#L608-L612) for the launched process. After the command is executed, the temporary file [is deleted](https://github.com/Microsoft/msbuild/blob/080ef976a428f6ff7bf53ca5dd4ee637b3fe949c/src/Tasks/Exec.cs#L340).
It appears that with .NET Core 2.1, the temporary script file is not visible to the launched process, resulting in a "not found" error.
## Building a CLI with .NET Core 2.1
Currently there isn't a version of the CLI available with .NET Core 2.1. To build one yourself in order to repro the issue:
- Clone https://github.com/dsplaisted/cli
- Check out the `update-runtime-to-2.1` branch
- Set the `CLIBUILD_SKIP_TESTS` environment variable to `true`
- Run `build.sh` from the repo root
- To use the newly built CLI, run `dotnet` from the `artifacts/linux-x64/stage2` folder
|
1.0
|
MSBuild Exec task broken with .NET Core 2.1 - I'm running into an issue [updating the .NET CLI to use .NET Core 2.1](https://github.com/dotnet/cli/pull/7478). My best guess currently is that this is a regression in .NET Core, because it is surfacing when I update to .NET Core 2.1 but worked with .NET Core 2.0.
## Repro steps
- On a non-Windows OS
- With a version of the .NET CLI that uses .NET Core 2.1
- Run `dotnet msbuild` with the following project
```xml
<Project>
<Target Name="Build">
<Exec Command="echo Hello World" />
</Target>
</Project>
```
## Expected
Build succeeds
## Actual
Build fails with the following:
> Microsoft (R) Build Engine version 15.3.409.57025 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
>
> /bin/sh: 1: export LANG=en_US.UTF-8; export LC_ALL=en_US.UTF-8; . /tmp/tmp365c3d61708c464b9358f5adcdafd024.exec.cmd: not found
/mnt/c/git/dotnet-cli-linux/artifacts/testexec/testexec.proj(6,9): error MSB3073: The command "echo Hello World" exited with code 127.
## Details
The MSBuild Exec task [creates a script file in the temporary folder](https://github.com/Microsoft/msbuild/blob/080ef976a428f6ff7bf53ca5dd4ee637b3fe949c/src/Tasks/Exec.cs#L216) in order to [set encodings](https://github.com/Microsoft/msbuild/blob/080ef976a428f6ff7bf53ca5dd4ee637b3fe949c/src/Tasks/Exec.cs#L608-L612) for the launched process. After the command is executed, the temporary file [is deleted](https://github.com/Microsoft/msbuild/blob/080ef976a428f6ff7bf53ca5dd4ee637b3fe949c/src/Tasks/Exec.cs#L340).
It appears that with .NET Core 2.1, the temporary script file is not visible to the launched process, resulting in a "not found" error.
## Building a CLI with .NET Core 2.1
Currently there isn't a version of the CLI available with .NET Core 2.1. To build one yourself in order to repro the issue:
- Clone https://github.com/dsplaisted/cli
- Check out the `update-runtime-to-2.1` branch
- Set the `CLIBUILD_SKIP_TESTS` environment variable to `true`
- Run `build.sh` from the repo root
- To use the newly built CLI, run `dotnet` from the `artifacts/linux-x64/stage2` folder
|
process
|
msbuild exec task broken with net core i m running into an issue my best guess currently is that this is a regression in net core because it is surfacing when i update to net core but worked with net core repro steps on a non windows os with a version of the net cli that uses net core run dotnet msbuild with the following project xml expected build succeeds actual build fails with the following microsoft r build engine version for net core copyright c microsoft corporation all rights reserved bin sh export lang en us utf export lc all en us utf tmp exec cmd not found mnt c git dotnet cli linux artifacts testexec testexec proj error the command echo hello world exited with code details the msbuild exec task in order to for the launched process after the command is executed the temporary file it appears that with net core the temporary script file is not visible to the launched process resulting in a not found error building a cli with net core currently there isn t a version of the cli available with net core to build one yourself in order to repro the issue clone check out the update runtime to branch set the clibuild skip tests environment variable to true run build sh from the repo root to use the newly built cli run dotnet from the artifacts linux folder
| 1
|
76,588
| 14,645,692,303
|
IssuesEvent
|
2020-12-26 09:30:51
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
closed
|
IDE0054 Use compound assignment is offered for "with" initialization expression
|
Area-IDE Bug IDE-CodeStyle New Language Feature - Records help wanted
|
**Version Used**:
Visual Studio Community 2019 16.8.3
.NET 5.0.101
**Steps to Reproduce**:
Write the following code:
```cs
public record MyRecord
{
public int MyValue { get; init; }
public MyRecord IncrementValue()
=> this with
{
MyValue = MyValue + 1
};
}
```
**Expected Behavior**:
Suggestion IDE0054 should not be offered on line 8, since the result is not actually valid syntax.
**Actual Behavior**:
Suggestion IDE0054 is offered on line 8.

This is similar to #49303, but applies to `with` expressions instead of `new()` expressions.
|
1.0
|
IDE0054 Use compound assignment is offered for "with" initialization expression - **Version Used**:
Visual Studio Community 2019 16.8.3
.NET 5.0.101
**Steps to Reproduce**:
Write the following code:
```cs
public record MyRecord
{
public int MyValue { get; init; }
public MyRecord IncrementValue()
=> this with
{
MyValue = MyValue + 1
};
}
```
**Expected Behavior**:
Suggestion IDE0054 should not be offered on line 8, since the result is not actually valid syntax.
**Actual Behavior**:
Suggestion IDE0054 is offered on line 8.

This is similar to #49303, but applies to `with` expressions instead of `new()` expressions.
|
non_process
|
use compound assignment is offered for with initialization expression version used visual studio community net steps to reproduce write the following code cs public record myrecord public int myvalue get init public myrecord incrementvalue this with myvalue myvalue expected behavior suggestion should not be offered on line since the result is not actually valid syntax actual behavior suggestion is offered on line this is similar to but applies to with expressions instead of new expressions
| 0
|
9,254
| 12,291,892,542
|
IssuesEvent
|
2020-05-10 12:15:57
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Please note that HTTP requests containing the **same IP**, the **same date**, and the **same user agent** are considered a unique visitor. In your case, I'm assuming you are only taking unique IPs.
|
log-processing question
|
Please note that HTTP requests containing the **same IP**, the **same date**, and the **same user agent** are considered a unique visitor. In your case, I'm assuming you are only taking unique IPs.
The unique IPs count should be displayed right above the data column in the Visitors Panel.
_Originally posted by @allinurl in https://github.com/allinurl/goaccess/issues/1205#issuecomment-414336124_
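For illustration, a small Python sketch (not goaccess code) showing the difference between the two counts on some hypothetical parsed log entries:
```python
# Each hit is a (client IP, date, user agent) tuple parsed from the access log.
hits = [
    ("10.0.0.1", "2020-05-10", "Mozilla/5.0"),
    ("10.0.0.1", "2020-05-10", "curl/7.68.0"),
    ("10.0.0.1", "2020-05-11", "Mozilla/5.0"),
]

unique_visitors = len({(ip, date, ua) for ip, date, ua in hits})  # 3 visitors
unique_ips = len({ip for ip, _, _ in hits})                       # 1 IP
print(unique_visitors, unique_ips)
```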
|
1.0
|
Please note that HTTP requests containing the **same IP**, the **same date**, and the **same user agent** are considered a unique visitor. In your case, I'm assuming you are only taking unique IPs. - Please note that HTTP requests containing the **same IP**, the **same date**, and the **same user agent** are considered a unique visitor. In your case, I'm assuming you are only taking unique IPs.
The unique IPs count should be displayed right above the data column in the Visitors Panel.
_Originally posted by @allinurl in https://github.com/allinurl/goaccess/issues/1205#issuecomment-414336124_
|
process
|
please note that http requests containing the same ip the same date and the same user agent are considered a unique visitor in your case i m assuming you are only taking unique ips please note that http requests containing the same ip the same date and the same user agent are considered a unique visitor in your case i m assuming you are only taking unique ips the unique ips count should be displayed right above the data column in the visitors panel originally posted by allinurl in
| 1
|
5,995
| 8,805,375,383
|
IssuesEvent
|
2018-12-26 19:14:16
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
When <navtitle> is used in a map, move-meta copies that navtitle to following entries
|
bug preprocess priority/medium stale
|
Noticed this with 1.7.4 but verified in 2.0M3
I have a map that has a few navtitle elements and a few navtitle attributes, using @locktitle in a few instances to lock a local title. After the move-meta step, the first navtitle in my map is copied into all following topicref elements. This is usually not noticed, but when I have a locked title later in the map, that locked title is rewritten, causing the wrong title to come out in my navigation.
I'm copying in a map here that reproduces the problem. Any topic named "P024117.dita" can be used to verify the issue. I narrowed it down by creating a copy of the map before/after move-meta runs, but the effect is easily visible by creating a navigation output such as eclipsehelp.
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN"
"map.dtd">
<map xml:lang="en-us">
<title>Bats</title>
<topicref href="P024117.dita"/>
<topicref href="P024117.dita" navtitle="NAVTITLE ATTRIBUTE IN MAP"/>
<topicref href="P024117.dita">
<topicmeta>
<navtitle>1 navtitle element in map, no @navtitle</navtitle>
</topicmeta>
</topicref>
<topicref href="P024117.dita" navtitle="(ignored) NAVTITLE ATTRIBUTE IN MAP, also have element">
<topicmeta>
<navtitle>2 navtitle element in map, also have @navtitle</navtitle>
</topicmeta>
</topicref>
<topicref href="P024117.dita" navtitle="LOCKED NAVTITLE ATTRIBUTE IN MAP" locktitle="yes"/>
<topicref href="P024117.dita" locktitle="yes">
<topicmeta>
<navtitle>3 LOCKED navtitle element in map, no @navtitle</navtitle>
</topicmeta>
</topicref>
<topicref href="P024117.dita" navtitle="LOCKED but ignored NAVTITLE ATTRIBUTE IN MAP, also have element"
locktitle="yes">
<topicmeta>
<navtitle>4 LOCKED navtitle element in map, also have @navtitle</navtitle>
</topicmeta>
</topicref>
</map>
```
|
1.0
|
When <navtitle> is used in a map, move-meta copies that navtitle to following entries - Noticed this with 1.7.4 but verified in 2.0M3
I have a map that has a few navtitle elements and a few navtitle attributes, using @locktitle in a few instances to lock a local title. After the move-meta step, the first navtitle in my map is copied into all following topicref elements. This is usually not noticed, but when I have a locked title later in the map, that locked title is rewritten, causing the wrong title to come out in my navigation.
I'm copying in a map here reproduces the problem. Any topic named "P024117.dita" can be used to verify the issue. I narrowed it down by creating a copy of the map before/after move-meta runs, but the effect is easily visible by creating a navigation output such as eclipsehelp.
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN"
"map.dtd">
<map xml:lang="en-us">
<title>Bats</title>
<topicref href="P024117.dita"/>
<topicref href="P024117.dita" navtitle="NAVTITLE ATTRIBUTE IN MAP"/>
<topicref href="P024117.dita">
<topicmeta>
<navtitle>1 navtitle element in map, no @navtitle</navtitle>
</topicmeta>
</topicref>
<topicref href="P024117.dita" navtitle="(ignored) NAVTITLE ATTRIBUTE IN MAP, also have element">
<topicmeta>
<navtitle>2 navtitle element in map, also have @navtitle</navtitle>
</topicmeta>
</topicref>
<topicref href="P024117.dita" navtitle="LOCKED NAVTITLE ATTRIBUTE IN MAP" locktitle="yes"/>
<topicref href="P024117.dita" locktitle="yes">
<topicmeta>
<navtitle>3 LOCKED navtitle element in map, no @navtitle</navtitle>
</topicmeta>
</topicref>
<topicref href="P024117.dita" navtitle="LOCKED but ignored NAVTITLE ATTRIBUTE IN MAP, also have element"
locktitle="yes">
<topicmeta>
<navtitle>4 LOCKED navtitle element in map, also have @navtitle</navtitle>
</topicmeta>
</topicref>
</map>
```
|
process
|
when is used in a map move meta copies that navtitle to following entries noticed this with but verified in i have a map that has a few navtitle elements and a few navtitle attributes using locktitle in a few instances to lock a local title after the move meta step the first navtitle in my map is copied into all following topicref elements this is usually not noticed but when i have a locked title later in the map that locked title is rewritten causing the wrong title to come out in my navigation i m copying in a map here reproduces the problem any topic named dita can be used to verify the issue i narrowed it down by creating a copy of the map before after move meta runs but the effect is easily visible by creating a navigation output such as eclipsehelp xml doctype map public oasis dtd dita map en map dtd bats navtitle element in map no navtitle navtitle element in map also have navtitle locked navtitle element in map no navtitle topicref href dita navtitle locked but ignored navtitle attribute in map also have element locktitle yes locked navtitle element in map also have navtitle
| 1
|
15,476
| 19,685,501,728
|
IssuesEvent
|
2022-01-11 21:37:13
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
M68K addx doesn't handle carry properly
|
Feature: Processor/68000
|
The current definition is
`{ local tmp =zext(XF)+Ty; addflags(tmp,Tx); Tx=Tx+tmp; extendedResultFlags(Tx); }`
If `Ty = 0xffff_ffff`, `Tx = 0`, and `XF = 1`, then `zext(XF)+Ty` will overflow, but the carry is discarded, causing the operation to fail to set CF/XF properly.
It looks like the `ADDC` in ia.sinc handles this properly.
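To make the overflow case concrete, here is a small Python model of the two orderings; this is only a sketch of the arithmetic, not the SLEIGH pcode itself, and the 32-bit width and function names are assumptions for illustration.
```python
MASK = 0xFFFF_FFFF  # model a 32-bit register

def addx_current(tx, ty, xf):
    # Mirrors the current definition: the carry out of (XF + Ty) is lost
    # before the flags are derived from the second addition.
    tmp = (xf + ty) & MASK
    total = tx + tmp
    return total & MASK, total > MASK  # (result, carry)

def addx_expected(tx, ty, xf):
    # Compute the full sum first and derive the carry from it.
    total = tx + ty + xf
    return total & MASK, total > MASK

print(addx_current(0, 0xFFFF_FFFF, 1))   # (0, False) -> CF/XF wrongly clear
print(addx_expected(0, 0xFFFF_FFFF, 1))  # (0, True)  -> carry set as expected
```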
|
1.0
|
M68K addx doesn't handle carry properly - The current definition is
`{ local tmp =zext(XF)+Ty; addflags(tmp,Tx); Tx=Tx+tmp; extendedResultFlags(Tx); }`
If `Ty = 0xffff_ffff, Tx = 0 and XF = 1` `zext(XF)+Ty` will overflow but the carry will be discarded causing the operation to fail to properly set the CF/XF.
It looks like the `ADDC` in ia.sinc handles this properly.
|
process
|
addx doesn t handle carry properly the current definition is local tmp zext xf ty addflags tmp tx tx tx tmp extendedresultflags tx if ty ffff tx and xf zext xf ty will overflow but the carry will be discarded causing the operation to fail to properly set the cf xf it looks like the addc in ia sinc handles this properly
| 1
|
179,629
| 14,707,170,770
|
IssuesEvent
|
2021-01-04 21:08:42
|
microsoft/hyperspace
|
https://api.github.com/repos/microsoft/hyperspace
|
closed
|
[DOCUMENTATION]: Update docs (config + behavior) for supporting globbing patterns
|
documentation
|
### Describe the issue
<!--
A clear and concise description of what the issue is.
-->
Documentation needs to be updated for globbing pattern support with supported configs
### To Reproduce
<!--
Steps to reproduce the behavior:
1. Do this first '...'
2. Write this code '....'
3. Execute the code this way '....'
4. See this behavior
5. If applicable, add screenshots to help explain your problem.
-->
### Expected behavior
<!--
A clear and concise description of what you expected to happen.
-->
### Environment
<!--
Please complete the following information if applicable:
- OS: [e.g. Windows, Linux, iOS]
- IDE (and version) [e.g. IntelliJ v10.1, Eclipse v2.3]
- Apache Spark Version [e.g., Apache Spark 2.4.2]
- Platform [e.g. Local execution, Azure Synapse Analytics, Azure/AWS Databricks, HDInsight Spark]
-->
|
1.0
|
[DOCUMENTATION]: Update docs (config + behavior) for supporting globbing patterns - ### Describe the issue
<!--
A clear and concise description of what the issue is.
-->
Documentation needs to be updated for globbing pattern support with supported configs
### To Reproduce
<!--
Steps to reproduce the behavior:
1. Do this first '...'
2. Write this code '....'
3. Execute the code this way '....'
4. See this behavior
5. If applicable, add screenshots to help explain your problem.
-->
### Expected behavior
<!--
A clear and concise description of what you expected to happen.
-->
### Environment
<!--
Please complete the following information if applicable:
- OS: [e.g. Windows, Linux, iOS]
- IDE (and version) [e.g. IntelliJ v10.1, Eclipse v2.3]
- Apache Spark Version [e.g., Apache Spark 2.4.2]
- Platform [e.g. Local execution, Azure Synapse Analytics, Azure/AWS Databricks, HDInsight Spark]
-->
|
non_process
|
update docs config behavior for supporting globbing patterns describe the issue a clear and concise description of what the issue is documentation needs to be updated for globbing pattern support with supported configs to reproduce steps to reproduce the behavior do this first write this code execute the code this way see this behavior if applicable add screenshots to help explain your problem expected behavior a clear and concise description of what you expected to happen environment please complete the following information if applicable os ide and version apache spark version platform
| 0
|
3,680
| 6,713,741,979
|
IssuesEvent
|
2017-10-13 14:29:15
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
[Process] Be able to not inherit ENV var
|
Feature Process
|
| Q | A
| ---------------- | -----
| Bug report? | no
| Feature request? | yes
| BC Break report? | no
| RFC? | no
| Symfony version | 4.0
---
Nowadays, Symfony recommends using env vars to configure the application.
So in the env we can find some API credentials.
If an application is using the Process component, some information can leak via the env variables.
So I would like to be able to NOT inherit from the parent process, to be as safe as possible.
For now the solution to disable the propagation is to use this code:
```php
$process = new Symfony\Component\Process\Process('env');
$env = array_combine(array_keys($_SERVER), array_fill(0, count($_SERVER), false));
$process->run(null, $env);
echo $process->getOutput();
```
It's not really easy :/
---
And if you need a use case: https://twitter.com/o_cee/status/892306836199800836 (yeah, NPM drama inside, but it could happen with Composer too!)
---
Finally, here I'm just asking for a way to **not** inherit env vars. Ideally, Symfony should not inherit env vars by default (but that's another story).
|
1.0
|
[Process] Be able to not inherit ENV var - | Q | A
| ---------------- | -----
| Bug report? | no
| Feature request? | yes
| BC Break report? | no
| RFC? | no
| Symfony version | 4.0
---
Nowadays, Symfony recommends using env vars to configure the application.
So in the env we can find some API credentials.
If an application is using the Process component, some information can leak via the env variables.
So I would like to be able to NOT inherit from the parent process, to be as safe as possible.
For now the solution to disable the propagation is to use this code:
```php
$process = new Symfony\Component\Process\Process('env');
$env = array_combine(array_keys($_SERVER), array_fill(0, count($_SERVER), false));
$process->run(null, $env);
echo $process->getOutput();
```
It's not really easy :/
---
And if you need a use case: https://twitter.com/o_cee/status/892306836199800836 (yeah, NPM drama inside, but it could happen with Composer too!)
---
Finally, here I'm just asking for a way to **not** inherit env vars. Ideally, Symfony should not inherit env vars by default (but that's another story).
|
process
|
be able to not inherit env var q a bug report no feature request yes bc break report no rfc no symfony version nowadays symfony recommend using env var to configure the application so in the env we can find some api credentials if an application is using the process composant some information can leak via the env variables so i would like to be able to not inherit from the parent process to be as safe as possible for now the solution to disable the propagation is to use this code php process new symfony component process process env env array combine array keys server array fill count server false process run null env echo process getoutput it s not really easy and if you need a use case yeah npm drama inside but anyway it could happens with composer too finally here i m just asking for a way to not inherit env var but ideally symfony by default should not inherit env var but it s another story
| 1
|
136,979
| 18,751,525,119
|
IssuesEvent
|
2021-11-05 03:02:21
|
Dima2022/Resiliency-Studio
|
https://api.github.com/repos/Dima2022/Resiliency-Studio
|
closed
|
CVE-2019-0232 (High) detected in tomcat-embed-core-8.5.11.jar - autoclosed
|
security vulnerability
|
## CVE-2019-0232 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.11.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: Resiliency-Studio/resiliency-studio-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.11/tomcat-embed-core-8.5.11.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.11/tomcat-embed-core-8.5.11.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.11/tomcat-embed-core-8.5.11.jar</p>
<p>
Dependency Hierarchy:
- sdk-java-rest-6.2.0.4-oss.jar (Root Library)
- spring-boot-starter-tomcat-1.5.1.RELEASE.jar
- :x: **tomcat-embed-core-8.5.11.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/Resiliency-Studio/commit/9809d9b7bfdc114eafb0a14d86667f3a76a014e8">9809d9b7bfdc114eafb0a14d86667f3a76a014e8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When running on Windows with enableCmdLineArguments enabled, the CGI Servlet in Apache Tomcat 9.0.0.M1 to 9.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 is vulnerable to Remote Code Execution due to a bug in the way the JRE passes command line arguments to Windows. The CGI Servlet is disabled by default. The CGI option enableCmdLineArguments is disable by default in Tomcat 9.0.x (and will be disabled by default in all versions in response to this vulnerability). For a detailed explanation of the JRE behaviour, see Markus Wulftange's blog (https://codewhitesec.blogspot.com/2016/02/java-and-command-line-injections-in-windows.html) and this archived MSDN blog (https://web.archive.org/web/20161228144344/https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/).
<p>Publish Date: 2019-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232>CVE-2019-0232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232</a></p>
<p>Release Date: 2019-04-15</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:7.0.94, 8.5.40, 9.0.18</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.11","packageFilePaths":["/resiliency-studio-service/pom.xml","/resiliency-studio-security/pom.xml","/resiliency-studio-agent/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.att.ajsc:sdk-java-rest:6.2.0.4-oss;org.springframework.boot:spring-boot-starter-tomcat:1.5.1.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:8.5.11","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:7.0.94, 8.5.40, 9.0.18"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-0232","vulnerabilityDetails":"When running on Windows with enableCmdLineArguments enabled, the CGI Servlet in Apache Tomcat 9.0.0.M1 to 9.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 is vulnerable to Remote Code Execution due to a bug in the way the JRE passes command line arguments to Windows. The CGI Servlet is disabled by default. The CGI option enableCmdLineArguments is disable by default in Tomcat 9.0.x (and will be disabled by default in all versions in response to this vulnerability). For a detailed explanation of the JRE behaviour, see Markus Wulftange\u0027s blog (https://codewhitesec.blogspot.com/2016/02/java-and-command-line-injections-in-windows.html) and this archived MSDN blog (https://web.archive.org/web/20161228144344/https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-0232 (High) detected in tomcat-embed-core-8.5.11.jar - autoclosed - ## CVE-2019-0232 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.11.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: Resiliency-Studio/resiliency-studio-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.11/tomcat-embed-core-8.5.11.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.11/tomcat-embed-core-8.5.11.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.11/tomcat-embed-core-8.5.11.jar</p>
<p>
Dependency Hierarchy:
- sdk-java-rest-6.2.0.4-oss.jar (Root Library)
- spring-boot-starter-tomcat-1.5.1.RELEASE.jar
- :x: **tomcat-embed-core-8.5.11.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/Resiliency-Studio/commit/9809d9b7bfdc114eafb0a14d86667f3a76a014e8">9809d9b7bfdc114eafb0a14d86667f3a76a014e8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When running on Windows with enableCmdLineArguments enabled, the CGI Servlet in Apache Tomcat 9.0.0.M1 to 9.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 is vulnerable to Remote Code Execution due to a bug in the way the JRE passes command line arguments to Windows. The CGI Servlet is disabled by default. The CGI option enableCmdLineArguments is disable by default in Tomcat 9.0.x (and will be disabled by default in all versions in response to this vulnerability). For a detailed explanation of the JRE behaviour, see Markus Wulftange's blog (https://codewhitesec.blogspot.com/2016/02/java-and-command-line-injections-in-windows.html) and this archived MSDN blog (https://web.archive.org/web/20161228144344/https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/).
<p>Publish Date: 2019-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232>CVE-2019-0232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232</a></p>
<p>Release Date: 2019-04-15</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:7.0.94, 8.5.40, 9.0.18</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.11","packageFilePaths":["/resiliency-studio-service/pom.xml","/resiliency-studio-security/pom.xml","/resiliency-studio-agent/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.att.ajsc:sdk-java-rest:6.2.0.4-oss;org.springframework.boot:spring-boot-starter-tomcat:1.5.1.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:8.5.11","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:7.0.94, 8.5.40, 9.0.18"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-0232","vulnerabilityDetails":"When running on Windows with enableCmdLineArguments enabled, the CGI Servlet in Apache Tomcat 9.0.0.M1 to 9.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 is vulnerable to Remote Code Execution due to a bug in the way the JRE passes command line arguments to Windows. The CGI Servlet is disabled by default. The CGI option enableCmdLineArguments is disable by default in Tomcat 9.0.x (and will be disabled by default in all versions in response to this vulnerability). For a detailed explanation of the JRE behaviour, see Markus Wulftange\u0027s blog (https://codewhitesec.blogspot.com/2016/02/java-and-command-line-injections-in-windows.html) and this archived MSDN blog (https://web.archive.org/web/20161228144344/https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in tomcat embed core jar autoclosed cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file resiliency studio resiliency studio service pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy sdk java rest oss jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details when running on windows with enablecmdlinearguments enabled the cgi servlet in apache tomcat to to and to is vulnerable to remote code execution due to a bug in the way the jre passes command line arguments to windows the cgi servlet is disabled by default the cgi option enablecmdlinearguments is disable by default in tomcat x and will be disabled by default in all versions in response to this vulnerability for a detailed explanation of the jre behaviour see markus wulftange s blog and this archived msdn blog publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com att ajsc sdk java rest oss org springframework boot spring boot starter tomcat release org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat embed tomcat embed core basebranches vulnerabilityidentifier cve vulnerabilitydetails when running on windows with enablecmdlinearguments enabled the cgi servlet in apache tomcat to to and to is vulnerable to remote code execution due to a bug in the way the jre passes command line arguments to windows the cgi servlet is disabled by default the cgi option enablecmdlinearguments is disable by default in tomcat x and will be disabled by default in all versions in response to this vulnerability for a detailed explanation of the jre behaviour see markus wulftange blog and this archived msdn blog
| 0
|
9,591
| 12,542,015,558
|
IssuesEvent
|
2020-06-05 13:22:35
|
aiidateam/plumpy
|
https://api.github.com/repos/aiidateam/plumpy
|
closed
|
Optional name spaces end up in `Process.inputs` even if not explicitly passed at creation
|
priority/nice to have topic/ports topic/processes type/bug
|
While playing around with the siesta plugin, I noticed this behaviour.
Defining:
```
spec.input_namespace('pseudos', required="a", ..)
```
always creates a key `pseudos` in `self.inputs`, independently of whether `a=false` or `a=true`.
Here, for instance, the explicit string I get when I do print(self.inputs):
```
<AttributesFrozendict {
'code': <Code: Remote code 'Siesta-4.0.2' on kay, pk: 2, uuid: 32a019cd-1ca9-47ac-a3f0-66bdc4d26726>,
'structure': <StructureData: uuid: 93e4dac9-e991-4442-b00c-8e87a2d71f63 (pk: 884)>,
'parameters': <Dict: uuid: b02f5e93-5cc1-4135-bee9-aeca0d5867f8 (pk: 885)>,
'metadata': <AttributesFrozendict {'label': 'TestOnSiliconBulk', 'options': <AttributesFrozendict {'max_wallclock_seconds': 360, 'resources': {'num_machines': 1, 'num_mpiprocs_per_machine': 1, 'default_mpiprocs_per_machine': 20}, 'input_filename': 'aiida.fdf', 'output_filename': 'aiida.out', 'scheduler_stdout': '_scheduler-stdout.txt', 'scheduler_stderr': '_scheduler-stderr.txt', 'custom_scheduler_commands': '', 'withmpi': False, 'mpirun_extra_params': [], 'import_sys_environment': True, 'environment_variables': {}, 'prepend_text': '', 'append_text': '', 'parser_name': 'siesta.parser'}>, 'store_provenance': True, 'call_link_label': 'CALL', 'dry_run': False}>,
'pseudos': <AttributesFrozendict {}>
}>
```
You can see the last entry is `pseudos`, which I haven't specified.
I don't know if this is intended behaviour, but it is certainly confusing for many developers, as I see lines in the plugins like [this one](https://github.com/aiidateam/aiida-quantumespresso/blob/01ee430a550db789ad9c91a323408f35568e40d5/aiida_quantumespresso/utils/pseudopotential.py#L35) that are useless, since `pseudos` is never `None`.
Personally I think that at least with `required="false"`, the `pseudos` key shouldn't be there, but maybe you have arguments against it. Also, I think the role of `required=""` for `input_namespace` is not so clear.
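As a rough workaround for plugin code, here is a hedged Python sketch of a guard that treats the always-present empty namespace like a missing one; the `pseudos` name follows this issue's example, and converting the frozen dict to a plain `dict` is an assumption about its mapping interface rather than a documented API.
```python
def get_pseudos(inputs):
    """Return the pseudos namespace, treating 'missing' and 'empty' alike.

    Sketch only: assumes ``inputs`` behaves like a mapping, as the
    AttributesFrozendict printout above suggests.
    """
    pseudos = dict(inputs).get('pseudos') or {}
    if not pseudos:
        raise ValueError('no pseudopotentials were provided')
    return pseudos
```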
|
1.0
|
Optional name spaces end up in `Process.inputs` even if not explicitly passed at creation - While playing around with the siesta plugin, I noticed this behaviour.
Defining:
```
spec.input_namespace('pseudos', required="a", ..)
```
always creates a key `pseudos` in self.inputs, independently if `a=false` or `a=true`
Here, for instance, the explicit string I get when I do print(self.inputs):
```
<AttributesFrozendict {
'code': <Code: Remote code 'Siesta-4.0.2' on kay, pk: 2, uuid: 32a019cd-1ca9-47ac-a3f0-66bdc4d26726>,
'structure': <StructureData: uuid: 93e4dac9-e991-4442-b00c-8e87a2d71f63 (pk: 884)>,
'parameters': <Dict: uuid: b02f5e93-5cc1-4135-bee9-aeca0d5867f8 (pk: 885)>,
'metadata': <AttributesFrozendict {'label': 'TestOnSiliconBulk', 'options': <AttributesFrozendict {'max_wallclock_seconds': 360, 'resources': {'num_machines': 1, 'num_mpiprocs_per_machine': 1, 'default_mpiprocs_per_machine': 20}, 'input_filename': 'aiida.fdf', 'output_filename': 'aiida.out', 'scheduler_stdout': '_scheduler-stdout.txt', 'scheduler_stderr': '_scheduler-stderr.txt', 'custom_scheduler_commands': '', 'withmpi': False, 'mpirun_extra_params': [], 'import_sys_environment': True, 'environment_variables': {}, 'prepend_text': '', 'append_text': '', 'parser_name': 'siesta.parser'}>, 'store_provenance': True, 'call_link_label': 'CALL', 'dry_run': False}>,
'pseudos': <AttributesFrozendict {}>
}>
```
You see the last entry is `pseudos`, that I haven't specified.
I don't know if this is a wanted feature, but for sure it is confusing for many developers as I see in the plugins lines like [this one](https://github.com/aiidateam/aiida-quantumespresso/blob/01ee430a550db789ad9c91a323408f35568e40d5/aiida_quantumespresso/utils/pseudopotential.py#L35) that are useless since pseudos is never `None`.
Personally I think that at least with `required="false"`, the `pseudos` key shouldn't be there, but maybe you have arguments against it. Also, I think the role of `required=""` for `input_namespace` is not so clear.
|
process
|
optional name spaces end up in process inputs even if not explicitly passed at creation while playing around with the siesta plugin i noticed this behaviour defining spec input namespace pseudos required a always creates a key pseudos in self inputs independently if a false or a true here for instance the explicit string i get when i do print self inputs attributesfrozendict code structure parameters metadata store provenance true call link label call dry run false pseudos you see the last entry is pseudos that i haven t specified i don t know if this is a wanted feature but for sure it is confusing for many developers as i see in the plugins lines like that are useless since pseudos is never none personally i think that at least with required false the pseudos key shouldn t be there but maybe you have arguments against it also i think the role of required for input namespace is not so clear
| 1
|
1,021
| 3,480,869,328
|
IssuesEvent
|
2015-12-29 11:45:37
|
osresearch/vst
|
https://api.github.com/repos/osresearch/vst
|
closed
|
Clipping includes one too many pixels
|
bug processing
|
Clipping goes from 0 to width and 0 to height, not 0 to width-1 and height-1. This caused some lines to not be drawn on the vector display.
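For illustration only (not the actual vst/Processing code), a small Python sketch of the inclusive bound being asked for; the function and variable names are hypothetical.
```python
def point_in_clip(x, y, width, height):
    # Valid pixel coordinates run 0 .. width-1 and 0 .. height-1.
    # Clipping against width/height instead keeps one extra pixel on each
    # axis, which is the off-by-one described in this report.
    return 0 <= x <= width - 1 and 0 <= y <= height - 1

assert point_in_clip(0, 0, 256, 256)
assert not point_in_clip(256, 0, 256, 256)  # rejected once the bound is width-1
```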
|
1.0
|
Clipping includes one too many pixels - Clipping goes from 0 to width and 0 to height, not 0 to width-1 and height-1. This caused some lines to not be drawn on the vector display.
|
process
|
clipping includes one too many pixels clipping goes from to width and to height not to width and height this caused some lines to not be drawn on the vector display
| 1
|
88,010
| 11,017,823,573
|
IssuesEvent
|
2019-12-05 09:17:56
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Tasks inputs is not properly accepting commands
|
*as-designed tasks
|
Hello, I'm running into an issue after executing a task. The task was working correctly until I added the `${input:var}` reference and populated **inputs** as shown below. I tried several options, from executing other commands to running a script in the root folder of the project, and it always produced a "command not found" error. Please see below.
Version: 1.40.2
Commit: f359dd69833dd8800b54d458f6d37ab7c78df520
Date: 2019-11-25T14:54:40.719Z
Electron: 6.1.5
Chrome: 76.0.3809.146
Node.js: 12.4.0
V8: 7.6.303.31-electron.0
OS: Linux x64 5.3.0-24-generic snap
Steps to Reproduce:
1.Create a Task: builds tasks.json
2.Populate tasks.json as follows:
```
{
"label": "Flash Firmware",
"group": "build",
"type": "shell",
"command": "idf.py",
"args": ["-c","${input:setPort}","Flash"],
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared",
"showReuseMessage": true,
"clear": false
}
}
],
"inputs": [
{
"id": "setPort",
"type" : "command",
"command": "ls /dev/tty* | grep USB " }
]
}
```
Expected behavior is that the executed command is `idf.py -c /dev/ttyUSB0 flash`
Error received is ` command '/usr/bin/ls /dev/tty* | grep USB' not found`
I received this message even after adding the explicit path.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
|
1.0
|
Tasks inputs is not properly accepting commands - Hello, I'm running into an issue after executing a task. The task is working correctly, but after adding the `${input:var}` and populating **inputs** as shown below. I tried several options, from executing other commands or a script in the root folder of the project, and it always provided an error that "command not found". Please see bellow.
Version: 1.40.2
Commit: f359dd69833dd8800b54d458f6d37ab7c78df520
Date: 2019-11-25T14:54:40.719Z
Electron: 6.1.5
Chrome: 76.0.3809.146
Node.js: 12.4.0
V8: 7.6.303.31-electron.0
OS: Linux x64 5.3.0-24-generic snap
Steps to Reproduce:
1.Create a Task: builds tasks.json
2.Populate tasks.json as follows:
```
{
"label": "Flash Firmware",
"group": "build",
"type": "shell",
"command": "idf.py",
"args": ["-c","${input:setPort}","Flash"],
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared",
"showReuseMessage": true,
"clear": false
}
}
],
"inputs": [
{
"id": "setPort",
"type" : "command",
"command": "ls /dev/tty* | grep USB " }
]
}
```
Expected behavior is output from execution is `idef.py -c /dev/sttyUSB0 flash`
Error received is ` command '/usr/bin/ls /dev/tty* | grep USB' not found`
I received this message even after adding the explicit path.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
|
non_process
|
tasks inputs is not properly accepting commands hello i m running into an issue after executing a task the task is working correctly but after adding the input var and populating inputs as shown below i tried several options from executing other commands or a script in the root folder of the project and it always provided an error that command not found please see bellow version commit date electron chrome node js electron os linux generic snap steps to reproduce create a task builds tasks json populate tasks json as follows label flash firmware group build type shell command idf py args presentation echo true reveal always focus false panel shared showreusemessage true clear false inputs id setport type command command ls dev tty grep usb expected behavior is output from execution is idef py c dev flash error received is command usr bin ls dev tty grep usb not found i received this message even after adding the explicit path does this issue occur when all extensions are disabled yes
| 0
|
20,860
| 27,644,531,630
|
IssuesEvent
|
2023-03-10 21:23:43
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Merge Layers cause a Reflected composite
|
Processing Bug
|
### What is the bug or the crash?
When I tried to merge 11 layers to make a composite image of the EMIT data from the ISS, the composite was a reflection about the longitude axis. I am using 3.30.0. I hope you get my screenshot.

### Steps to reproduce the issue
Open the raster menu. Then go to merge and click. I next fill out the menu and hit run. The image is returned as a mirror image.
### Versions
QGIS version: 3.30.0-'s-Hertogenbosch
QGIS code revision: f186b8efe0e
Qt version: 5.15.3
Python version: 3.9.5
GDAL/OGR version: 3.6.2
PROJ version: 9.1.1
EPSG Registry database version: v10.076 (2022-08-31)
GEOS version: 3.11.1-CAPI-1.17.1
SQLite version: 3.39.4
PDAL version: 2.4.3
PostgreSQL client version: unknown
SpatiaLite version: 5.0.1
QWT version: 6.1.6
QScintilla2 version: 2.13.1
OS version: Windows 10 Version 2009
Active Python plugins:
catalogpl_plugin: 2.1.1
ClipMultipleLayers: 3.2.0
clipper: 1.2
CloudMasking: 23.1.7
ClusterMap: 1.0
ee_plugin: 0.0.6
enmapboxplugin: 3.11.1
geetimeseriesexplorer: 2.0
geoCore: 0.8.1
geoscience: 1.9
geosys-plugin: 1.1.2
gistools: 0.3
ImportPhotos: 3.0.5
index_calculator: 0.1
JSONEater-master: 0.3
LAStools: 1.4
layer2kmz: 1.5.3
LineDirectionHistogram: 3.1.1
MapsPrinter: 0.9
NITK_RS-GIS_17: 1.2
pca4cd: 23.2a
pcraster_tools: 0.2.0
pdaltools: 0.1.6
PointConnector: 2.0
qgis_gee_data_catalog: 0.4.3
qgSurf: version 3.0.0
QMarxanToolbox: 2.0.2
roof_draw: 1.0.0
SampleByArea: 0.7
sentinel2_download: 3.5.7
SentinelHub: 2.0.0
spatial_query_with_values: 1.0.2
spectral_libraries: 1.1.3
SpreadsheetLayers: 2.1.0-alpha1
SRTM-Downloader: 3.1.17
ThinGreyscale: 3.0
usgs_stream_mapper: 0.3
valuetool: 3.0.15
open_lidar_tools: 2.2.1
wofe_module: 0.1
MetaSearch: 0.3.6
otbprovider: 2.12.99
processing: 2.12.99
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Merge Layers cause a Reflected composite - ### What is the bug or the crash?
When I tried to merge 11 layers to make a composite image of the Emit Data from the ISS, the composite was a reflection about the longitude axis. I am using 3.30.0. I hope you get my screen shot.

### Steps to reproduce the issue
Open the raster menu. Then go to merge and click. I next fill out the menu and hit run. The image is returned as a mirror image.
### Versions
QGIS version
3.30.0-'s-Hertogenbosch
QGIS code revision
f186b8efe0e
Qt version
5.15.3
Python version
3.9.5
GDAL/OGR version
3.6.2
PROJ version
9.1.1
EPSG Registry database version
v10.076 (2022-08-31)
GEOS version
3.11.1-CAPI-1.17.1
SQLite version
3.39.4
PDAL version
2.4.3
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.13.1
OS version
Windows 10 Version 2009
Active Python plugins
catalogpl_plugin
2.1.1
ClipMultipleLayers
3.2.0
clipper
1.2
CloudMasking
23.1.7
ClusterMap
1.0
ee_plugin
0.0.6
enmapboxplugin
3.11.1
geetimeseriesexplorer
2.0
geoCore
0.8.1
geoscience
1.9
geosys-plugin
1.1.2
gistools
0.3
ImportPhotos
3.0.5
index_calculator
0.1
JSONEater-master
0.3
LAStools
1.4
layer2kmz
1.5.3
LineDirectionHistogram
3.1.1
MapsPrinter
0.9
NITK_RS-GIS_17
1.2
pca4cd
23.2a
pcraster_tools
0.2.0
pdaltools
0.1.6
PointConnector
2.0
qgis_gee_data_catalog
0.4.3
qgSurf
version 3.0.0
QMarxanToolbox
2.0.2
roof_draw
1.0.0
SampleByArea
0.7
sentinel2_download
3.5.7
SentinelHub
2.0.0
spatial_query_with_values
1.0.2
spectral_libraries
1.1.3
SpreadsheetLayers
2.1.0-alpha1
SRTM-Downloader
3.1.17
ThinGreyscale
3.0
usgs_stream_mapper
0.3
valuetool
3.0.15
open_lidar_tools
2.2.1
wofe_module
0.1
MetaSearch
0.3.6
otbprovider
2.12.99
processing
2.12.99
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
merge layers cause a reflected composite what is the bug or the crash when i tried to merge layers to make a composite image of the emit data from the iss the composite was a reflection about the longitude axis i am using i hope you get my screen shot steps to reproduce the issue open the raster menu then go to merge and click i next fill out the menu and hit run the image is returned as a mirror image versions qgis version s hertogenbosch qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version windows version active python plugins catalogpl plugin clipmultiplelayers clipper cloudmasking clustermap ee plugin enmapboxplugin geetimeseriesexplorer geocore geoscience geosys plugin gistools importphotos index calculator jsoneater master lastools linedirectionhistogram mapsprinter nitk rs gis pcraster tools pdaltools pointconnector qgis gee data catalog qgsurf version qmarxantoolbox roof draw samplebyarea download sentinelhub spatial query with values spectral libraries spreadsheetlayers srtm downloader thingreyscale usgs stream mapper valuetool open lidar tools wofe module metasearch otbprovider processing supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
347,056
| 31,079,926,746
|
IssuesEvent
|
2023-08-13 00:30:16
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
tests: drivers: uart: uart_mix_fifo_poll: tests drivers.uart.uart_mix_poll_async_api_* fail
|
bug priority: low area: UART platform: nRF area: Tests Stale
|
**Describe the bug**
Test scenarios `drivers.uart.uart_mix_poll_async_api`, `drivers.uart.uart_mix_poll_async_api_const`, `drivers.uart.uart_mix_poll_async_api_low_power`, `drivers.uart.uart_mix_poll_async_api_with_ppi` and `drivers.uart.uart_mix_poll_async_api_with_ppi_low_power` fail.
Observed for `nrf52840dk_nrf52840`.
**To Reproduce**
Steps to reproduce the behavior:
1. on the board nrf52840dk_nrf52840 connect two pairs of the pins following overlay definition for this test scenario.
2. connect `nrf52840dk_nrf52840`
3. go to zephyr dir
4. call `scripts/twister -T tests/drivers/uart/uart_mix_fifo_poll/ -p nrf52840dk_nrf52840 --device-testing --device-serial /dev/ttyACM0 -v -v --inline-logs --fixture gpio_loopback -j 1`
5. See console output with error
**Expected behavior**
Test scenarios should be passed.
**Impact**
Not clear
**Logs and console output**
Report from `twister.json`:
```
{
"environment":{
"os":"posix",
"zephyr_version":"zephyr-v3.2.0-3195-g11aa8454f01d",
"toolchain":"zephyr",
"commit_date":"2023-01-09T19:29:50+09:00",
"run_date":"2023-01-09T11:04:39+00:00"
},
"testsuites":[
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"00a41cb36b0f901d32d973d6882015dc",
"runnable":true,
"retries":0,
"status":"passed",
"execution_time":"24.94",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll.mixed_uart_access",
"execution_time":"24.94",
"status":"passed"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_fifo",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"26f0e5b3da7f4d78d9a50b838a6918ae",
"runnable":true,
"retries":0,
"status":"passed",
"execution_time":"21.74",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_fifo.mixed_uart_access",
"execution_time":"21.74",
"status":"passed"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"4261e39f500b027c3d8f535ca91685cc",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nASSERTION FAIL [!sys_dnode_is_linked(&to->node)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:99\r\nE: r0/a1: 0x00000004 r1/a2: 0x00000063 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000ae9f\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000268 (idle)\r\nE: Halting system\r\n",
"execution_time":"74.83",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api.mixed_uart_access",
"execution_time":"74.83",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api_const",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"e9e60b8998611421c55af8c995c773ce",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nASSERTION FAIL [!sys_dnode_is_linked(&to->node)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:99\r\nE: r0/a1: 0x00000004 r1/a2: 0x00000063 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000ae9f\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000190 (test_mixed_uart_access)\r\nE: Halting system\r\n",
"execution_time":"73.43",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api_const.mixed_uart_access",
"execution_time":"73.43",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api_low_power",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"263eca03163dbd5e45996f514f804861",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nASSERTION FAIL [!sys_dnode_is_linked(&to->node)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:99\r\nE: r0/a1: 0x00000004 r1/a2: 0x00000063 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000ae9f\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000190 (test_mixed_uart_access)\r\nE: Halting system\r\n",
"execution_time":"78.27",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api_low_power.mixed_uart_access",
"execution_time":"78.27",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_with_ppi",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"0dd79a19e8cbb7136533e9ab18d769c5",
"runnable":true,
"retries":0,
"status":"passed",
"execution_time":"24.51",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_with_ppi.mixed_uart_access",
"execution_time":"24.51",
"status":"passed"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_fifo_with_ppi",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"4f77ad49493f6b346f4c293818be2a50",
"runnable":true,
"retries":0,
"status":"passed",
"execution_time":"21.29",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_fifo_with_ppi.mixed_uart_access",
"execution_time":"21.29",
"status":"passed"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api_with_ppi",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"3666ef800a413403c7797b66b0583435",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nASSERTION FAIL [!sys_dnode_is_linked(&to->node)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:99\r\nE: r0/a1: 0x00000004 r1/a2: 0x00000063 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000ae9f\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000268 (idle)\r\nE: Halting system\r\n",
"execution_time":"74.00",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api_with_ppi.mixed_uart_access",
"execution_time":"74.00",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api_with_ppi_low_power",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"cab14c201deaa1ee8c7eed64d5d93e9d",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nAssertion failed at WEST_TOPDIR/zephyr/tests/drivers/uart/uart_mix_fifo_poll/src/main.c:93: process_byte: (ok is false)\r\nUnexpected byte received:0x00, prev:0x09\r\nASSERTION FAIL [0] @ WEST_TOPDIR/zephyr/kernel/sched.c:1731\r\naborting essential thread 0x20000268\r\nE: r0/a1: 0x00000004 r1/a2: 0x000006c3 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000aa03\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000268 (idle)\r\nE: Halting system\r\n",
"execution_time":"79.03",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api_with_ppi_low_power.mixed_uart_access",
"execution_time":"79.03",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
}
]
}
```
**Environment (please complete the following information):**
- OS: Ubuntu 20.04.5 LTS
- Toolchain Zephyr SDK 0.15.1
- Commit SHA or Version used: zephyr-v3.2.0-3195-g11aa8454f01d
**Additional context**
Pin setting:
```
psels = <NRF_PSEL(UART_TX, 1, 12)>,
<NRF_PSEL(UART_RX, 1, 13)>,
<NRF_PSEL(UART_RTS, 1, 14)>,
<NRF_PSEL(UART_CTS, 1, 15)>;
```
|
1.0
|
tests: drivers: uart: uart_mix_fifo_poll: tests drivers.uart.uart_mix_poll_async_api_* fail - **Describe the bug**
Test scenarios `drivers.uart.uart_mix_poll_async_api`, `drivers.uart.uart_mix_poll_async_api_const`, `drivers.uart.uart_mix_poll_async_api_low_power`, `drivers.uart.uart_mix_poll_async_api_with_ppi` and `drivers.uart.uart_mix_poll_async_api_with_ppi_low_power` fail.
Observed for `nrf52840dk_nrf52840`.
**To Reproduce**
Steps to reproduce the behavior:
1. on the board nrf52840dk_nrf52840 connect two pairs of the pins following overlay definition for this test scenario.
2. connect `nrf52840dk_nrf52840`
3. go to zephyr dir
4. call `scripts/twister -T tests/drivers/uart/uart_mix_fifo_poll/ -p nrf52840dk_nrf52840 --device-testing --device-serial /dev/ttyACM0 -v -v --inline-logs --fixture gpio_loopback -j 1`
5. See console output with error
**Expected behavior**
Test scenarios should be passed.
**Impact**
Not clear
**Logs and console output**
Report from `twister.json`:
```
{
"environment":{
"os":"posix",
"zephyr_version":"zephyr-v3.2.0-3195-g11aa8454f01d",
"toolchain":"zephyr",
"commit_date":"2023-01-09T19:29:50+09:00",
"run_date":"2023-01-09T11:04:39+00:00"
},
"testsuites":[
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"00a41cb36b0f901d32d973d6882015dc",
"runnable":true,
"retries":0,
"status":"passed",
"execution_time":"24.94",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll.mixed_uart_access",
"execution_time":"24.94",
"status":"passed"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_fifo",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"26f0e5b3da7f4d78d9a50b838a6918ae",
"runnable":true,
"retries":0,
"status":"passed",
"execution_time":"21.74",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_fifo.mixed_uart_access",
"execution_time":"21.74",
"status":"passed"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"4261e39f500b027c3d8f535ca91685cc",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nASSERTION FAIL [!sys_dnode_is_linked(&to->node)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:99\r\nE: r0/a1: 0x00000004 r1/a2: 0x00000063 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000ae9f\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000268 (idle)\r\nE: Halting system\r\n",
"execution_time":"74.83",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api.mixed_uart_access",
"execution_time":"74.83",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api_const",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"e9e60b8998611421c55af8c995c773ce",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nASSERTION FAIL [!sys_dnode_is_linked(&to->node)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:99\r\nE: r0/a1: 0x00000004 r1/a2: 0x00000063 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000ae9f\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000190 (test_mixed_uart_access)\r\nE: Halting system\r\n",
"execution_time":"73.43",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api_const.mixed_uart_access",
"execution_time":"73.43",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api_low_power",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"263eca03163dbd5e45996f514f804861",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nASSERTION FAIL [!sys_dnode_is_linked(&to->node)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:99\r\nE: r0/a1: 0x00000004 r1/a2: 0x00000063 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000ae9f\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000190 (test_mixed_uart_access)\r\nE: Halting system\r\n",
"execution_time":"78.27",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api_low_power.mixed_uart_access",
"execution_time":"78.27",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_with_ppi",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"0dd79a19e8cbb7136533e9ab18d769c5",
"runnable":true,
"retries":0,
"status":"passed",
"execution_time":"24.51",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_with_ppi.mixed_uart_access",
"execution_time":"24.51",
"status":"passed"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_fifo_with_ppi",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"4f77ad49493f6b346f4c293818be2a50",
"runnable":true,
"retries":0,
"status":"passed",
"execution_time":"21.29",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_fifo_with_ppi.mixed_uart_access",
"execution_time":"21.29",
"status":"passed"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api_with_ppi",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"3666ef800a413403c7797b66b0583435",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nASSERTION FAIL [!sys_dnode_is_linked(&to->node)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:99\r\nE: r0/a1: 0x00000004 r1/a2: 0x00000063 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000ae9f\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000268 (idle)\r\nE: Halting system\r\n",
"execution_time":"74.00",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api_with_ppi.mixed_uart_access",
"execution_time":"74.00",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
},
{
"name":"zephyr/tests/drivers/uart/uart_mix_fifo_poll/drivers.uart.uart_mix_poll_async_api_with_ppi_low_power",
"arch":"arm",
"platform":"nrf52840dk_nrf52840",
"run_id":"cab14c201deaa1ee8c7eed64d5d93e9d",
"runnable":true,
"retries":0,
"status":"error",
"reason":"No Console Output(Timeout)",
"log":"*** Booting Zephyr OS build zephyr-v3.2.0-3195-g11aa8454f01d ***\r\nRunning TESTSUITE uart_mix_fifo_poll\r\n===================================================================\r\nSTART - test_mixed_uart_access\r\nAssertion failed at WEST_TOPDIR/zephyr/tests/drivers/uart/uart_mix_fifo_poll/src/main.c:93: process_byte: (ok is false)\r\nUnexpected byte received:0x00, prev:0x09\r\nASSERTION FAIL [0] @ WEST_TOPDIR/zephyr/kernel/sched.c:1731\r\naborting essential thread 0x20000268\r\nE: r0/a1: 0x00000004 r1/a2: 0x000006c3 r2/a3: 0x00000000\r\nE: r3/a4: 0x00000021 r12/ip: 0x00000000 r14/lr: 0x0000aa03\r\nE: xpsr: 0x41000021\r\nE: Faulting instruction address (r15/pc): 0x0000e0b8\r\nE: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0\r\nE: Fault during interrupt handling\r\nE: Current thread: 0x20000268 (idle)\r\nE: Halting system\r\n",
"execution_time":"79.03",
"testcases":[
{
"identifier":"drivers.uart.uart_mix_poll_async_api_with_ppi_low_power.mixed_uart_access",
"execution_time":"79.03",
"status":"blocked",
"reason":"No Console Output(Timeout)"
}
]
}
]
}
```
**Environment (please complete the following information):**
- OS: Ubuntu 20.04.5 LTS
- Toolchain Zephyr SDK 0.15.1
- Commit SHA or Version used: zephyr-v3.2.0-3195-g11aa8454f01d
**Additional context**
Pin setting:
```
psels = <NRF_PSEL(UART_TX, 1, 12)>,
<NRF_PSEL(UART_RX, 1, 13)>,
<NRF_PSEL(UART_RTS, 1, 14)>,
<NRF_PSEL(UART_CTS, 1, 15)>;
```
|
non_process
|
tests drivers uart uart mix fifo poll tests drivers uart uart mix poll async api fail describe the bug test scenarios drivers uart uart mix poll async api drivers uart uart mix poll async api const drivers uart uart mix poll async api low power drivers uart uart mix poll async api with ppi and drivers uart uart mix poll async api with ppi low power fail observed for to reproduce steps to reproduce the behavior on the board connect two pairs of the pins following overlay definition for this test scenario connect go to zephyr dir call scripts twister t tests drivers uart uart mix fifo poll p device testing device serial dev v v inline logs fixture gpio loopback j see console output with error expected behavior test scenarios should be passed impact not clear logs and console output report from twister json environment os posix zephyr version zephyr toolchain zephyr commit date run date testsuites name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll arch arm platform run id runnable true retries status passed execution time testcases identifier drivers uart uart mix poll mixed uart access execution time status passed name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll fifo arch arm platform run id runnable true retries status passed execution time testcases identifier drivers uart uart mix poll fifo mixed uart access execution time status passed name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll async api arch arm platform run id runnable true retries status error reason no console output timeout log booting zephyr os build zephyr r nrunning testsuite uart mix fifo poll r n r nstart test mixed uart access r nassertion fail west topdir zephyr kernel timeout c r ne r ne ip lr r ne xpsr r ne faulting instruction address pc r ne zephyr fatal error kernel panic on cpu r ne fault during interrupt handling r ne current thread idle r ne halting system r n execution time testcases identifier drivers uart uart mix poll async api mixed uart access execution time status blocked reason no console output timeout name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll async api const arch arm platform run id runnable true retries status error reason no console output timeout log booting zephyr os build zephyr r nrunning testsuite uart mix fifo poll r n r nstart test mixed uart access r nassertion fail west topdir zephyr kernel timeout c r ne r ne ip lr r ne xpsr r ne faulting instruction address pc r ne zephyr fatal error kernel panic on cpu r ne fault during interrupt handling r ne current thread test mixed uart access r ne halting system r n execution time testcases identifier drivers uart uart mix poll async api const mixed uart access execution time status blocked reason no console output timeout name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll async api low power arch arm platform run id runnable true retries status error reason no console output timeout log booting zephyr os build zephyr r nrunning testsuite uart mix fifo poll r n r nstart test mixed uart access r nassertion fail west topdir zephyr kernel timeout c r ne r ne ip lr r ne xpsr r ne faulting instruction address pc r ne zephyr fatal error kernel panic on cpu r ne fault during interrupt handling r ne current thread test mixed uart access r ne halting system r n execution time testcases identifier drivers uart uart mix poll async api low power mixed uart access execution time status blocked reason no console output 
timeout name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll with ppi arch arm platform run id runnable true retries status passed execution time testcases identifier drivers uart uart mix poll with ppi mixed uart access execution time status passed name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll fifo with ppi arch arm platform run id runnable true retries status passed execution time testcases identifier drivers uart uart mix poll fifo with ppi mixed uart access execution time status passed name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll async api with ppi arch arm platform run id runnable true retries status error reason no console output timeout log booting zephyr os build zephyr r nrunning testsuite uart mix fifo poll r n r nstart test mixed uart access r nassertion fail west topdir zephyr kernel timeout c r ne r ne ip lr r ne xpsr r ne faulting instruction address pc r ne zephyr fatal error kernel panic on cpu r ne fault during interrupt handling r ne current thread idle r ne halting system r n execution time testcases identifier drivers uart uart mix poll async api with ppi mixed uart access execution time status blocked reason no console output timeout name zephyr tests drivers uart uart mix fifo poll drivers uart uart mix poll async api with ppi low power arch arm platform run id runnable true retries status error reason no console output timeout log booting zephyr os build zephyr r nrunning testsuite uart mix fifo poll r n r nstart test mixed uart access r nassertion failed at west topdir zephyr tests drivers uart uart mix fifo poll src main c process byte ok is false r nunexpected byte received prev r nassertion fail west topdir zephyr kernel sched c r naborting essential thread r ne r ne ip lr r ne xpsr r ne faulting instruction address pc r ne zephyr fatal error kernel panic on cpu r ne fault during interrupt handling r ne current thread idle r ne halting system r n execution time testcases identifier drivers uart uart mix poll async api with ppi low power mixed uart access execution time status blocked reason no console output timeout environment please complete the following information os ubuntu lts toolchain zephyr sdk commit sha or version used zephyr additional context pin setting psels
| 0
|
22,611
| 31,835,427,303
|
IssuesEvent
|
2023-09-14 13:15:40
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
`Process.Start` will `System.ExecutionEngineException:“Exception_WasThrown”` in VS debug.
|
area-System.Diagnostics.Process untriaged
|
When I start debugging code in `Visual Studio` and call
```
Process.Start("explorer");
```
the process crashes:

But when running without the VS debugger attached, it works.


|
1.0
|
`Process.Start` will `System.ExecutionEngineException:“Exception_WasThrown”` in VS debug. - When I start debugging code in `Visual Studio` and call
```
Process.Start("explorer");
```
the process crashes:

But when running without the VS debugger attached, it works.


|
process
|
process start will system executionengineexception “exception wasthrown” in vs debug when use visual studio start debug code and use process start explorer was crash but if not use vs debug that work
| 1
|
218
| 2,648,843,304
|
IssuesEvent
|
2015-03-14 09:42:43
|
sysown/proxysql-0.2
|
https://api.github.com/repos/sysown/proxysql-0.2
|
closed
|
Implement global variable mysql-have_compress
|
ADMIN AUTHENTICATION CONNECTION POOL cxx_pa development MYSQL PROTOCOL QUERY PROCESSOR
|
This variable defines if compression should be enabled on frontends and backends.
This variable should be checked *only* when a new connection is established: active connections should not be affected.
|
1.0
|
Implement global variable mysql-have_compress - This variable defines if compression should be enabled on frontends and backends.
This variable should be checked *only* when a new connection is established: active connections should not be affected.
|
process
|
implement global variable mysql have compress this variable defines if compression should be enabled on frontends and backends this variable should be checked only when a new connection is established active connections should not be affected
| 1
|
14,264
| 17,202,873,147
|
IssuesEvent
|
2021-07-17 16:17:31
|
arunkumar9t2/scabbard
|
https://api.github.com/repos/arunkumar9t2/scabbard
|
closed
|
Build fails resolving transitive dependency
|
dependencies module:processor
|
Would [a newer version of kittinunf/Result](https://mvnrepository.com/artifact/com.github.kittinunf.result/result) in scabbard solve this problem?
```txt
> Task :app:kaptGenerateStubsDevDebugKotlin FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:kaptGenerateStubsDevDebugKotlin'.
> Could not resolve all files for configuration ':app:kapt'.
> Could not find com.github.kittinunf.result:result:3.0.0.
Searched in the following locations:
- https://dl.google.com/dl/android/maven2/com/github/kittinunf/result/result/3.0.0/result-3.0.0.pom
- https://repo.maven.apache.org/maven2/com/github/kittinunf/result/result/3.0.0/result-3.0.0.pom
- https://kotlin.bintray.com/kotlinx/com/github/kittinunf/result/result/3.0.0/result-3.0.0.pom
- https://oss.sonatype.org/content/repositories/snapshots/com/github/kittinunf/result/result/3.0.0/result-3.0.0.pom
Required by:
project :app > dev.arunkumar:scabbard-processor:0.5.0
```
|
1.0
|
Build fails resolving transitive dependency - Would [a newer version of kittinunf/Result](https://mvnrepository.com/artifact/com.github.kittinunf.result/result) in scabbard solve this problem?
```txt
> Task :app:kaptGenerateStubsDevDebugKotlin FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:kaptGenerateStubsDevDebugKotlin'.
> Could not resolve all files for configuration ':app:kapt'.
> Could not find com.github.kittinunf.result:result:3.0.0.
Searched in the following locations:
- https://dl.google.com/dl/android/maven2/com/github/kittinunf/result/result/3.0.0/result-3.0.0.pom
- https://repo.maven.apache.org/maven2/com/github/kittinunf/result/result/3.0.0/result-3.0.0.pom
- https://kotlin.bintray.com/kotlinx/com/github/kittinunf/result/result/3.0.0/result-3.0.0.pom
- https://oss.sonatype.org/content/repositories/snapshots/com/github/kittinunf/result/result/3.0.0/result-3.0.0.pom
Required by:
project :app > dev.arunkumar:scabbard-processor:0.5.0
```
|
process
|
build fails resolving transitive dependency would in scabbard solve this problem txt task app kaptgeneratestubsdevdebugkotlin failed failure build failed with an exception what went wrong execution failed for task app kaptgeneratestubsdevdebugkotlin could not resolve all files for configuration app kapt could not find com github kittinunf result result searched in the following locations required by project app dev arunkumar scabbard processor
| 1
|
641,096
| 20,817,947,614
|
IssuesEvent
|
2022-03-18 12:32:11
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
opened
|
Issues while starting the WSO2 API-M server
|
Type/Bug Priority/Normal
|
### Description:
The following issues occur while I am trying to start the WSO2 API-M server. I get the same issues in both situations below:
01. While trying to start the server with the pack downloaded from GitHub
02. While using the pack obtained after building the product
### Errors
ERROR - [/accountrecoveryendpoint] Exception sending context initialized event to listener instance of class [org.wso2.carbon.identity.mgt.endpoint.util.listener.IdentityManagementEndpointContextListener]
java.lang.NoClassDefFoundError: org/wso2/carbon/identity/core/util/IdentityUtil
at org.wso2.carbon.identity.mgt.endpoint.util.IdentityManagementServiceUtil.init(IdentityManagementServiceUtil.java:130) ~[org.wso2.carbon.identity.mgt.endpoint.util_5.18.246.jar:?]
at org.wso2.carbon.identity.mgt.endpoint.util.listener.IdentityManagementEndpointContextListener.contextInitialized(IdentityManagementEndpointContextListener.java:34) ~[org.wso2.carbon.identity.mgt.endpoint.util_5.18.246.jar:?]
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4768) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5230) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:303) ~[?:?]
at org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:209) ~[?:?]
at org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleWebappDeployment(TomcatGenericWebappsDeployer.java:255) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleExplodedWebappDeployment(TomcatGenericWebappsDeployer.java:243) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleHotDeployment(TomcatGenericWebappsDeployer.java:173) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.deploy(TomcatGenericWebappsDeployer.java:140) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.AbstractWebappDeployer.deployThisWebApp(AbstractWebappDeployer.java:224) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.AbstractWebappDeployer.deploy(AbstractWebappDeployer.java:114) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.deployer.WebappDeployer.deploy(WebappDeployer.java:42) ~[org.wso2.carbon.webapp.deployer_4.11.2.jar:?]
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:153) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.synapse.Axis2SynapseController.deployMediatorExtensions(Axis2SynapseController.java:785) ~[synapse-core_2.1.7.wso2v263.jar:2.1.7-wso2v263]
at org.apache.synapse.Axis2SynapseController.createSynapseEnvironment(Axis2SynapseController.java:403) ~[synapse-core_2.1.7.wso2v263.jar:2.1.7-wso2v263]
at org.apache.synapse.ServerManager.start(ServerManager.java:187) ~[synapse-core_2.1.7.wso2v263.jar:2.1.7-wso2v263]
at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.initESB(ServiceBusInitializer.java:371) ~[org.wso2.carbon.mediation.initializer_4.7.126.jar:?]
at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.activate(ServiceBusInitializer.java:170) ~[org.wso2.carbon.mediation.initializer_4.7.126.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_312]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_312]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_312]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_312]
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:113) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:985) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:151) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:866) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:228) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:525) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:544) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.wso2.carbon.inbound.endpoint.persistence.service.InboundEndpointPersistenceServiceDSComponent.activate(InboundEndpointPersistenceServiceDSComponent.java:50) ~[org.wso2.carbon.inbound.endpoint.persistence_4.7.126.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_312]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_312]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_312]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_312]
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:113) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:985) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:151) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:866) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:228) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:525) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:544) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:529) ~[org.wso2.carbon.core_4.6.3.beta.jar:?]
at org.wso2.carbon.core.init.CarbonServerManager.removePendingItem(CarbonServerManager.java:305) ~[org.wso2.carbon.core_4.6.3.beta.jar:?]
at org.wso2.carbon.core.init.PreAxis2ConfigItemListener.bundleChanged(PreAxis2ConfigItemListener.java:118) ~[org.wso2.carbon.core_4.6.3.beta.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:973) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
Caused by: java.lang.ClassNotFoundException: org.wso2.carbon.identity.core.util.IdentityUtil
at org.wso2.carbon.webapp.mgt.loader.CarbonWebappClassLoader.loadClass(CarbonWebappClassLoader.java:195) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1215) ~[tomcat_9.0.54.wso2v1.jar:?]
... 76 more
|
1.0
|
Issues while starting the WSO2 API-M server - ### Description:
The following issues occur while I am trying to start the WSO2 API-M server. I get the same issues in both situations below:
01. While trying to start the server with the pack downloaded from GitHub
02. While using the pack obtained after building the product
### Errors
ERROR - [/accountrecoveryendpoint] Exception sending context initialized event to listener instance of class [org.wso2.carbon.identity.mgt.endpoint.util.listener.IdentityManagementEndpointContextListener]
java.lang.NoClassDefFoundError: org/wso2/carbon/identity/core/util/IdentityUtil
at org.wso2.carbon.identity.mgt.endpoint.util.IdentityManagementServiceUtil.init(IdentityManagementServiceUtil.java:130) ~[org.wso2.carbon.identity.mgt.endpoint.util_5.18.246.jar:?]
at org.wso2.carbon.identity.mgt.endpoint.util.listener.IdentityManagementEndpointContextListener.contextInitialized(IdentityManagementEndpointContextListener.java:34) ~[org.wso2.carbon.identity.mgt.endpoint.util_5.18.246.jar:?]
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4768) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5230) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:303) ~[?:?]
at org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:209) ~[?:?]
at org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleWebappDeployment(TomcatGenericWebappsDeployer.java:255) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleExplodedWebappDeployment(TomcatGenericWebappsDeployer.java:243) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleHotDeployment(TomcatGenericWebappsDeployer.java:173) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.deploy(TomcatGenericWebappsDeployer.java:140) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.AbstractWebappDeployer.deployThisWebApp(AbstractWebappDeployer.java:224) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.mgt.AbstractWebappDeployer.deploy(AbstractWebappDeployer.java:114) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.wso2.carbon.webapp.deployer.WebappDeployer.deploy(WebappDeployer.java:42) ~[org.wso2.carbon.webapp.deployer_4.11.2.jar:?]
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:153) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254) ~[axis2_1.6.1.wso2v77.jar:?]
at org.apache.synapse.Axis2SynapseController.deployMediatorExtensions(Axis2SynapseController.java:785) ~[synapse-core_2.1.7.wso2v263.jar:2.1.7-wso2v263]
at org.apache.synapse.Axis2SynapseController.createSynapseEnvironment(Axis2SynapseController.java:403) ~[synapse-core_2.1.7.wso2v263.jar:2.1.7-wso2v263]
at org.apache.synapse.ServerManager.start(ServerManager.java:187) ~[synapse-core_2.1.7.wso2v263.jar:2.1.7-wso2v263]
at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.initESB(ServiceBusInitializer.java:371) ~[org.wso2.carbon.mediation.initializer_4.7.126.jar:?]
at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.activate(ServiceBusInitializer.java:170) ~[org.wso2.carbon.mediation.initializer_4.7.126.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_312]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_312]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_312]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_312]
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:113) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:985) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:151) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:866) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:228) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:525) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:544) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.wso2.carbon.inbound.endpoint.persistence.service.InboundEndpointPersistenceServiceDSComponent.activate(InboundEndpointPersistenceServiceDSComponent.java:50) ~[org.wso2.carbon.inbound.endpoint.persistence_4.7.126.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_312]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_312]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_312]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_312]
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222) ~[org.eclipse.equinox.ds_1.4.400.v20160226-2036.jar:?]
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:113) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:985) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:151) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:866) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:228) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:525) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:544) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:529) ~[org.wso2.carbon.core_4.6.3.beta.jar:?]
at org.wso2.carbon.core.init.CarbonServerManager.removePendingItem(CarbonServerManager.java:305) ~[org.wso2.carbon.core_4.6.3.beta.jar:?]
at org.wso2.carbon.core.init.PreAxis2ConfigItemListener.bundleChanged(PreAxis2ConfigItemListener.java:118) ~[org.wso2.carbon.core_4.6.3.beta.jar:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:973) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
Caused by: java.lang.ClassNotFoundException: org.wso2.carbon.identity.core.util.IdentityUtil
at org.wso2.carbon.webapp.mgt.loader.CarbonWebappClassLoader.loadClass(CarbonWebappClassLoader.java:195) ~[org.wso2.carbon.webapp.mgt_4.11.2.jar:?]
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1215) ~[tomcat_9.0.54.wso2v1.jar:?]
... 76 more
|
non_process
|
issues while starting the api m server description the following issues is getting while i am trying to start apim server i get same issues in both situations below while i am trying to start server with the downloaded pack from github while using the pack i got after building the product errors error exception sending context initialized event to listener instance of class java lang noclassdeffounderror org carbon identity core util identityutil at org carbon identity mgt endpoint util identitymanagementserviceutil init identitymanagementserviceutil java at org carbon identity mgt endpoint util listener identitymanagementendpointcontextlistener contextinitialized identitymanagementendpointcontextlistener java at org apache catalina core standardcontext listenerstart standardcontext java at org apache catalina core standardcontext startinternal standardcontext java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core containerbase addchildinternal containerbase java at org apache catalina core containerbase addchild containerbase java at org apache catalina core standardhost addchild standardhost java at org carbon tomcat internal carbontomcat addwebapp carbontomcat java at org carbon tomcat internal carbontomcat addwebapp carbontomcat java at org carbon webapp mgt tomcatgenericwebappsdeployer handlewebappdeployment tomcatgenericwebappsdeployer java at org carbon webapp mgt tomcatgenericwebappsdeployer handleexplodedwebappdeployment tomcatgenericwebappsdeployer java at org carbon webapp mgt tomcatgenericwebappsdeployer handlehotdeployment tomcatgenericwebappsdeployer java at org carbon webapp mgt tomcatgenericwebappsdeployer deploy tomcatgenericwebappsdeployer java at org carbon webapp mgt abstractwebappdeployer deploythiswebapp abstractwebappdeployer java at org carbon webapp mgt abstractwebappdeployer deploy abstractwebappdeployer java at org carbon webapp deployer webappdeployer deploy webappdeployer java at org apache deployment repository util deploymentfiledata deploy deploymentfiledata java at org apache deployment deploymentengine dodeploy deploymentengine java at org apache deployment repository util wsinfolist update wsinfolist java at org apache deployment repositorylistener update repositorylistener java at org apache deployment repositorylistener checkservices repositorylistener java at org apache synapse deploymediatorextensions java at org apache synapse createsynapseenvironment java at org apache synapse servermanager start servermanager java at org carbon mediation initializer servicebusinitializer initesb servicebusinitializer java at org carbon mediation initializer servicebusinitializer activate servicebusinitializer java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org eclipse equinox internal ds model servicecomponent activate servicecomponent java at org eclipse equinox internal ds model servicecomponentprop activate servicecomponentprop java at org eclipse equinox internal ds model servicecomponentprop build servicecomponentprop java at org eclipse equinox internal ds instanceprocess buildcomponent instanceprocess java at org eclipse equinox internal ds instanceprocess buildcomponents instanceprocess java at org eclipse equinox internal ds resolver geteligible resolver java at org eclipse equinox internal ds 
scrmanager servicechanged scrmanager java at org eclipse osgi internal serviceregistry filteredservicelistener servicechanged filteredservicelistener java at org eclipse osgi internal framework bundlecontextimpl dispatchevent bundlecontextimpl java at org eclipse osgi framework eventmgr eventmanager dispatchevent eventmanager java at org eclipse osgi framework eventmgr listenerqueue dispatcheventsynchronous listenerqueue java at org eclipse osgi internal serviceregistry serviceregistry publishserviceeventprivileged serviceregistry java at org eclipse osgi internal serviceregistry serviceregistry publishserviceevent serviceregistry java at org eclipse osgi internal serviceregistry serviceregistrationimpl register serviceregistrationimpl java at org eclipse osgi internal serviceregistry serviceregistry registerservice serviceregistry java at org eclipse osgi internal framework bundlecontextimpl registerservice bundlecontextimpl java at org eclipse osgi internal framework bundlecontextimpl registerservice bundlecontextimpl java at org carbon inbound endpoint persistence service inboundendpointpersistenceservicedscomponent activate inboundendpointpersistenceservicedscomponent java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org eclipse equinox internal ds model servicecomponent activate servicecomponent java at org eclipse equinox internal ds model servicecomponentprop activate servicecomponentprop java at org eclipse equinox internal ds model servicecomponentprop build servicecomponentprop java at org eclipse equinox internal ds instanceprocess buildcomponent instanceprocess java at org eclipse equinox internal ds instanceprocess buildcomponents instanceprocess java at org eclipse equinox internal ds resolver geteligible resolver java at org eclipse equinox internal ds scrmanager servicechanged scrmanager java at org eclipse osgi internal serviceregistry filteredservicelistener servicechanged filteredservicelistener java at org eclipse osgi internal framework bundlecontextimpl dispatchevent bundlecontextimpl java at org eclipse osgi framework eventmgr eventmanager dispatchevent eventmanager java at org eclipse osgi framework eventmgr listenerqueue dispatcheventsynchronous listenerqueue java at org eclipse osgi internal serviceregistry serviceregistry publishserviceeventprivileged serviceregistry java at org eclipse osgi internal serviceregistry serviceregistry publishserviceevent serviceregistry java at org eclipse osgi internal serviceregistry serviceregistrationimpl register serviceregistrationimpl java at org eclipse osgi internal serviceregistry serviceregistry registerservice serviceregistry java at org eclipse osgi internal framework bundlecontextimpl registerservice bundlecontextimpl java at org eclipse osgi internal framework bundlecontextimpl registerservice bundlecontextimpl java at org carbon core init carbonservermanager initializecarbon carbonservermanager java at org carbon core init carbonservermanager removependingitem carbonservermanager java at org carbon core init bundlechanged java at org eclipse osgi internal framework bundlecontextimpl dispatchevent bundlecontextimpl java at org eclipse osgi framework eventmgr eventmanager dispatchevent eventmanager java at org eclipse osgi framework eventmgr eventmanager eventthread run eventmanager java caused by java 
lang classnotfoundexception org carbon identity core util identityutil at org carbon webapp mgt loader carbonwebappclassloader loadclass carbonwebappclassloader java at org apache catalina loader webappclassloaderbase loadclass webappclassloaderbase java more
| 0
|
21,350
| 29,180,958,797
|
IssuesEvent
|
2023-05-19 11:52:33
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] Make temporal bucket names sentence case
|
.metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
For more-than-one-word bucketing options, we use capital case ("Minute of Hour"). OTOH, the notebook clause will use sentence case ("Created At: Minute of hour").
After a double-check with the design team, Maz [said](https://metaboat.slack.com/archives/C01LQQ2UW03/p1684431608938659) we should prefer using sentence case everywhere, so `available-temporal-buckets` should return items with display name being something like "Year", "Minute of hour", etc.
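Purely as an illustration of the desired casing (not Metabase/MLv2 code), a tiny TypeScript helper sketching the transformation the display names would need, i.e. turning capital-case bucket names into sentence case:

```ts
// Hypothetical helper: convert a capital-case bucket name ("Minute of Hour")
// into sentence case ("Minute of hour"). Not part of Metabase's actual API.
function toSentenceCase(displayName: string): string {
  if (displayName.length === 0) {
    return displayName;
  }
  const lowered = displayName.toLowerCase();
  return lowered.charAt(0).toUpperCase() + lowered.slice(1);
}

console.log(toSentenceCase("Minute of Hour")); // "Minute of hour"
console.log(toSentenceCase("Year"));           // "Year"
```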
|
1.0
|
[MLv2] Make temporal bucket names sentence case - For more-than-one-word bucketing options, we use capital case ("Minute of Hour"). OTOH, the notebook clause will use sentence case ("Created At: Minute of hour").
After a double-check with the design team, Maz [said](https://metaboat.slack.com/archives/C01LQQ2UW03/p1684431608938659) we should prefer using sentence case everywhere, so `available-temporal-buckets` should return items with display name being something like "Year", "Minute of hour", etc.
|
process
|
make temporal bucket names sentence case for more than one word bucketing options we use capital case minute of hour otoh the notebook clause will use sentence case created at minute of hour after a double check with the design team maz we should prefer using sentence case everywhere so available temporal buckets should return items with display name being something like year minute of hour etc
| 1
|
17,003
| 22,366,127,519
|
IssuesEvent
|
2022-06-16 04:24:35
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
opened
|
Updated acceptance test configurations to point to remote deployment
|
enhancement P1 process
|
### Problem
The acceptance test logic is missing a few minor items needed to run against remote instances.
### Solution
Add configuration and logic to create clients that point to a remote location instead of spinning up the local server (see the sketch below).
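A minimal TypeScript sketch of the idea, assuming a hypothetical `E2E_RELAY_URL` environment variable and a default local URL; the real configuration keys and port used by the relay's test suite may differ:

```ts
// Hypothetical acceptance-test helper: decide whether the tests talk to a
// remote relay deployment or to a locally started server.
export interface RelayTarget {
  url: string;
  isRemote: boolean;
}

export function resolveRelayTarget(env: NodeJS.ProcessEnv = process.env): RelayTarget {
  const remoteUrl = env.E2E_RELAY_URL;
  if (remoteUrl) {
    // Remote deployment: the suite should skip starting the local server.
    return { url: remoteUrl, isRemote: true };
  }
  // Local run: the suite starts the server itself and targets localhost.
  return { url: "http://localhost:7546", isRemote: false };
}
```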
### Alternatives
_No response_
|
1.0
|
Updated acceptance test configurations to point to remote deployment - ### Problem
The acceptance test logic is missing a few minor items needed to run against remote instances.
### Solution
Add configuration and logic to create clients that point to a remote location instead of spinning up the local server
### Alternatives
_No response_
|
process
|
updated acceptance test configurations to point to remote deployment problem the acceptance test logic is missing a few minor items to point against remote instances solution add configuration and logic to create clients point to a remote location and not spin up the server alternatives no response
| 1
|
206,682
| 23,396,822,181
|
IssuesEvent
|
2022-08-12 01:12:59
|
tom9carthron1/infinite-wish-board
|
https://api.github.com/repos/tom9carthron1/infinite-wish-board
|
opened
|
CVE-2022-24785 (High) detected in moment-2.20.1.js
|
security vulnerability
|
## CVE-2022-24785 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.20.1.js</b></p></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.20.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.20.1/moment.js</a></p>
<p>Path to dependency file: /ui/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: /ui/node_modules/js-base64/.attic/test-moment/./moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-2.20.1.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tom9carthron1/infinite-wish-board/commit/5c98c8171c2f014c3587805f1090b7d16419c286">5c98c8171c2f014c3587805f1090b7d16419c286</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Moment.js is a JavaScript date library for parsing, validating, manipulating, and formatting dates. A path traversal vulnerability impacts npm (server) users of Moment.js between versions 1.0.1 and 2.29.1, especially if a user-provided locale string is directly used to switch moment locale. This problem is patched in 2.29.2, and the patch can be applied to all affected versions. As a workaround, sanitize the user-provided locale name before passing it to Moment.js.
<p>Publish Date: 2022-04-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24785>CVE-2022-24785</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4">https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4</a></p>
<p>Release Date: 2022-04-04</p>
<p>Fix Resolution: moment - 2.29.2</p>
</p>
</details>
<p></p>
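To make the suggested workaround concrete, here is a minimal TypeScript sketch of sanitizing a user-provided locale name before passing it to Moment.js; the allow-list regex and the `"en"` fallback are assumptions of this sketch, not code from the advisory:

```ts
import moment from "moment"; // assumes esModuleInterop

// Only accept locale names made of letters, digits and hyphens (e.g. "en-gb"),
// rejecting anything that could be interpreted as a path ("../evil").
function setSafeLocale(userLocale: string): string {
  const isSafe = /^[a-z]{2,3}(-[a-z0-9]{2,8})*$/i.test(userLocale);
  const locale = isSafe ? userLocale : "en";
  return moment.locale(locale);
}

console.log(setSafeLocale("en-gb"));          // sets and returns "en-gb"
console.log(setSafeLocale("../../malicious")); // falls back to "en"
```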
|
True
|
CVE-2022-24785 (High) detected in moment-2.20.1.js - ## CVE-2022-24785 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.20.1.js</b></p></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.20.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.20.1/moment.js</a></p>
<p>Path to dependency file: /ui/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: /ui/node_modules/js-base64/.attic/test-moment/./moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-2.20.1.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tom9carthron1/infinite-wish-board/commit/5c98c8171c2f014c3587805f1090b7d16419c286">5c98c8171c2f014c3587805f1090b7d16419c286</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Moment.js is a JavaScript date library for parsing, validating, manipulating, and formatting dates. A path traversal vulnerability impacts npm (server) users of Moment.js between versions 1.0.1 and 2.29.1, especially if a user-provided locale string is directly used to switch moment locale. This problem is patched in 2.29.2, and the patch can be applied to all affected versions. As a workaround, sanitize the user-provided locale name before passing it to Moment.js.
<p>Publish Date: 2022-04-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24785>CVE-2022-24785</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4">https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4</a></p>
<p>Release Date: 2022-04-04</p>
<p>Fix Resolution: moment - 2.29.2</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in moment js cve high severity vulnerability vulnerable library moment js parse validate manipulate and display dates library home page a href path to dependency file ui node modules js attic test moment index html path to vulnerable library ui node modules js attic test moment moment js dependency hierarchy x moment js vulnerable library found in head commit a href found in base branch master vulnerability details moment js is a javascript date library for parsing validating manipulating and formatting dates a path traversal vulnerability impacts npm server users of moment js between versions and especially if a user provided locale string is directly used to switch moment locale this problem is patched in and the patch can be applied to all affected versions as a workaround sanitize the user provided locale name before passing it to moment js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment
| 0
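The advisory in the record above recommends sanitizing a user-provided locale name before handing it to Moment.js. A minimal TypeScript sketch of that guard, assuming the application only ever needs plain BCP-47-style tags (the regex and function name are illustrative, not part of the advisory):

```typescript
import moment from "moment";

// Illustrative allowlist pattern: letters, digits and hyphens only, so inputs
// such as "../../../etc/passwd" can never reach Moment's locale loader.
const SAFE_LOCALE = /^[a-z]{2,3}(-[a-z0-9]{2,8})*$/i;

function setLocaleSafely(requested: string): string {
  if (!SAFE_LOCALE.test(requested)) {
    return moment.locale(); // keep the currently active locale
  }
  return moment.locale(requested); // returns the locale Moment actually applied
}
```

Upgrading to moment 2.29.2 or later remains the real fix; a guard like this only matters on older server-side versions.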
|
255,096
| 21,898,620,655
|
IssuesEvent
|
2022-05-20 11:09:56
|
mozilla/addons-frontend
|
https://api.github.com/repos/mozilla/addons-frontend
|
closed
|
Migrate /components/TestAddonBadges.js to react-testing-library
|
component: testing qa: not needed priority: p3
|
Migrate /components/TestAddonBadges.js to react-testing-library
|
1.0
|
Migrate /components/TestAddonBadges.js to react-testing-library - Migrate /components/TestAddonBadges.js to react-testing-library
|
non_process
|
migrate components testaddonbadges js to react testing library migrate components testaddonbadges js to react testing library
| 0
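For context on what such a migration usually looks like, here is a hedged sketch of a react-testing-library test. The component path, props, and badge text are placeholders rather than values taken from the addons-frontend codebase, and it assumes Jest with the @testing-library/jest-dom matchers configured:

```tsx
import * as React from "react";
import { render, screen } from "@testing-library/react";
import AddonBadges from "../../src/amo/components/AddonBadges"; // hypothetical path

test("renders a badge for an experimental add-on", () => {
  const addon = { is_experimental: true } as any; // placeholder fixture
  render(<AddonBadges addon={addon} />);
  expect(screen.getByText(/experimental/i)).toBeInTheDocument();
});
```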
|
12,155
| 14,741,454,663
|
IssuesEvent
|
2021-01-07 10:38:47
|
prisma/prisma-engines
|
https://api.github.com/repos/prisma/prisma-engines
|
opened
|
Query Engine is generating `min` and `max` aggregate for Json types to DMMF
|
bug/2-confirmed engines/query engine kind/bug process/candidate team/client
|
When I then try to query it, I get an error. Probably we should just remove json fields from the `min` and `max` aggregation?
### query
```graphql
query {
aggregateUser {
count {
_all
}
min {
json
}
}
}
```
### error
```bash
Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind:
QueryError(Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error),
code: SqlState("42883"), message: "function min(jsonb) does not exist", detail: None, hint:
Some("No function matches the given name and argument types. You might need to add explicit type casts."),
position: Some(Original(18)), where_: None, schema: None, table: None, column: None, datatype: None,
constraint: None, file: Some("parse_func.c"), line: Some(528), routine: Some("ParseFuncOrColumn") }) }) })
```
|
1.0
|
Query Engine is generating `min` and `max` aggregate for Json types to DMMF - When I then try to query it, I get an error. Probably we should just remove json fields from the `min` and `max` aggregation?
### query
```graphql
query {
aggregateUser {
count {
_all
}
min {
json
}
}
}
```
### error
```bash
Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind:
QueryError(Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error),
code: SqlState("42883"), message: "function min(jsonb) does not exist", detail: None, hint:
Some("No function matches the given name and argument types. You might need to add explicit type casts."),
position: Some(Original(18)), where_: None, schema: None, table: None, column: None, datatype: None,
constraint: None, file: Some("parse_func.c"), line: Some(528), routine: Some("ParseFuncOrColumn") }) }) })
```
|
process
|
query engine is generating min and max aggregate for json types to dmmf and when i want to query it then i get an error probably we should just remove json fields from the min and max aggregation query graphql query aggregateuser count all min json error bash error occurred during query execution connectorerror connectorerror user facing error none kind queryerror error kind db cause some dberror severity error parsed severity some error code sqlstate message function min jsonb does not exist detail none hint some no function matches the given name and argument types you might need to add explicit type casts position some original where none schema none table none column none datatype none constraint none file some parse func c line some routine some parsefuncorcolumn
| 1
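The Postgres error above arises because `min(jsonb)` is not defined, so any aggregate selection that includes a `Json` field fails. A sketch of the two obvious client-side workarounds, assuming a `User` model with a `json` column (only those names come from the example query; everything else is illustrative, and aggregate field names vary between Prisma versions):

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
  // Workaround 1: leave Json fields out of min/max selections entirely.
  const agg = await prisma.user.aggregate({
    _count: { _all: true },
    _min: { id: true }, // only orderable scalar fields here
  });

  // Workaround 2: cast to text yourself if an extreme value is really needed.
  const minJson = await prisma.$queryRaw`SELECT min("json"::text) FROM "User"`;

  console.log(agg, minJson);
}

main().finally(() => prisma.$disconnect());
```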
|
20,927
| 27,772,521,693
|
IssuesEvent
|
2023-03-16 15:16:03
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
reopened
|
Prevent creation of recursive models (or provide solution to break out of loops)
|
Processing Bug
|
Author Name: **Anita Graser** (@anitagraser)
Original Redmine Issue: [9909](https://issues.qgis.org/issues/9909)
Affected QGIS version: 2.4.0
Redmine category:processing/modeller
Assignee: Victor Olaya
---
Currently, it is possible to include the model itself into the model workflow thus creating a model which calls itself. Of course this causes errors.
|
1.0
|
Prevent creation of recursive models (or provide solution to break out of loops) - Author Name: **Anita Graser** (@anitagraser)
Original Redmine Issue: [9909](https://issues.qgis.org/issues/9909)
Affected QGIS version: 2.4.0
Redmine category:processing/modeller
Assignee: Victor Olaya
---
Currently, it is possible to include the model itself into the model workflow thus creating a model which calls itself. Of course this causes errors.
|
process
|
prevent creation of recursive models or provide solution to break out of loops author name anita graser anitagraser original redmine issue affected qgis version redmine category processing modeller assignee victor olaya currently it is possible to include the model itself into the model workflow thus creating a model which calls itself of course this causes errors
| 1
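One way to enforce the requested safeguard is a plain reachability check over the model's child-algorithm references before the model is saved or run. The sketch below is illustrative TypeScript, not QGIS code; `children` maps a model or algorithm id to the ids it invokes:

```typescript
// Returns true when modelId's current children would let the model reach
// itself, i.e. the model graph contains a cycle through modelId.
function createsCycle(modelId: string, children: Map<string, string[]>): boolean {
  const seen = new Set<string>();
  const stack = [...(children.get(modelId) ?? [])];
  while (stack.length > 0) {
    const next = stack.pop()!;
    if (next === modelId) return true;
    if (seen.has(next)) continue;
    seen.add(next);
    stack.push(...(children.get(next) ?? []));
  }
  return false;
}
```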
|
20,676
| 27,343,220,457
|
IssuesEvent
|
2023-02-27 01:05:15
|
TeamAidemy/ds-paper-summaries
|
https://api.github.com/repos/TeamAidemy/ds-paper-summaries
|
opened
|
Improving Language Models by Retrieving from Trillions of Tokens
|
Natural language processing Transformer
|
Borgeaud, Sebastian, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, et al. 2021. “Improving Language Models by Retrieving from Trillions of Tokens.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2112.04426.
- 何兆ものトークンを持つデータベースから検索しながら、任意のテキストをモデル化する "Retrieval-Enhanced Transformer (RETRO)" という手法を提案。
- 記憶を外部のデータソースに頼ることで、25倍規模のパラメータ数を持つモデルと同等の性能を得ることができた。
- 「セミパラメトリック」なアプローチの有用性を示した。
## Abstract
> We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. With a 2 trillion token database, our Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25x fewer parameters. After fine-tuning, Retro performance translates to downstream knowledge-intensive tasks such as question answering. Retro combines a frozen Bert retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training. We typically train Retro from scratch, yet can also rapidly Retrofit pre-trained transformers with retrieval and still achieve good performance. Our work opens up new avenues for improving language models through explicit memory at unprecedented scale.
(DeepL翻訳)
我々は、大規模コーパスから取得した文書チャンクを、先行トークンとの局所的な類似性に基づいて条件付けすることにより、自己回帰型言語モデルを強化する。2兆個のトークンデータベースを用いた我々の検索強化型変換器(Retro)は、25倍少ないパラメータで、Pile上のGPT-3やJurassic-1と同等の性能を得ることができる。Retroの性能は、微調整の後、質問応答のような下流の知識集約的なタスクに反映される。Retroは、凍結バートレトリバー、微分可能エンコーダー、チャンク型クロスアテンションメカニズムを組み合わせ、学習時に消費されるデータよりも一桁多いデータを基にトークンを予測します。私たちは通常、Retroをゼロから学習しますが、事前に学習した変換器を検索に迅速にRetrofitすることも可能であり、それでも良好な性能を達成することができます。私たちの研究は、前例のない規模の明示的記憶によって言語モデルを改善する新しい道を開くものです。
## コード
- [labmlai/annotated_deep_learning_paper_implementations](https://github.com/labmlai/annotated_deep_learning_paper_implementations)
- [lucidrains/RETRO-pytorch](https://github.com/lucidrains/RETRO-pytorch)
## 解決した課題/先行研究との比較
- 近年の大規模自然言語処理モデルの高性能化は学習データの増加・計算能力の向上・モデルサイズの増加によって達成されている。
- 実際、BERT (0.3B) → GPT-2 (1.5B) → T5 (11B) → GPT-3 (175B) → Gopher (280B) と進むにつれどんどん性能向上。
- モデルサイズの増加は「学習・推論の処理能力の増加」と「学習データの記憶力」という2つの利点があると考えられている。
- 本論文は、2つの利点がそれぞれどの程度効いているのかの分離を目指し、特に後者の「学習データの記憶力」という側面に着目。
- **記憶力に相当するものとして外部のデータベースを用いる**ことで、モデル自体の計算量を大幅に増やすことなく、言語モデルを拡張する方法を提案した。
- 過去にも検索を組み合わせる手法は行われてきた (Guu et al., 2020; Khandelwal et al., 2020; Lewis et al., 2020; Yogatama et al., 2021) が、モデルサイズやデータベースが小規模なものであったため、何兆ものトークンからなるデータベースを用いた初の報告。
- 「過去の仕事を大規模にやってみました」の一種?
- Chunked Cross Attentionは本論文で初出のように思われる。検索テキストの取り込み方に新規性!
## 技術・手法のポイント
- インプット文字列をチャンクに分割。現在のチャンクの予測のために、前のチャンクと似たテキストをデータベースから検索。
- 外部データベースからの検索には事前学習済みBERTを使用。近傍探索を行い、インプットに似たテキストを抽出。
- 検索されたテキストをChuncked Cross-Attentionモジュールを用いてRETROに取り込む。
[](https://gyazo.com/aa9ff460ec3644474e33fe79d7b6b85b)
## 評価指標
- 用いたデータセット
- C4
- Wikitext103
- Curation Corpus
- Lambada
- Pile
- マニュアルで選んだWikipediaの記事 (基準:データセットを集めた後に編集された記事)
- 比較した指標
- Bits-per-byte
- Perplexity
- Lambadaデータセットはaccuracy on the last word
- Q&A (The Natural Questions. Kwiatkowski et al., 2019) のAccuracy
- 比較対象のモデル
- Transformer (パラメータサイズが 172M, 425M, 1.5B, 7.5Bのものを用意。Baselineと呼称)
- RETRO (検索なし)
- RETRO (検索あり)
いずれにおいてもBaselineからの改良がみられ、Fine-tuningを行うことでQ&A taskでもstate-of-the-artとのcompetitive performanceを示した。検索なしでもbaselineと同程度の性能が出る。
## 残された課題・議論
- 検索のための外部データベースを工夫すれば、差別や暴力的表現などを前もって除くことができるかもしれないと議論。 (Bender et al. 2021; Weidinger et al., 2021あたりも参照)
- 大規模モデルより軽量化したとはいえど、外部データベースのサイズが 1T トークンくらい必要そうなので、一般人が使うのは難しそう。
## 重要な引用
- 本論文以前の検索を組み合わせる手法。データベースのサイズ的に、一般人が現実的に使えるのはこれらか。
- Guu, Kelvin, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 13--18 Jul 2020. “Retrieval Augmented Language Model Pre-Training.” In Proceedings of the 37th International Conference on Machine Learning, edited by Hal Daumé Iii and Aarti Singh, 119:3929–38. Proceedings of Machine Learning Research. PMLR.
- Khandelwal, Urvashi, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. “Generalization through Memorization: Nearest Neighbor Language Models.” https://openreview.net/pdf?id=HklBjCEKvH.
- Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, et al. 2020. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” In Proceedings of the 34th International Conference on Neural Information Processing Systems, 9459–74. NIPS’20 793. Red Hook, NY, USA: Curran Associates Inc.
- Yogatama, Dani, Cyprien de Masson d’Autume, and Lingpeng Kong. 2021. “Adaptive Semiparametric Language Models.” Transactions of the Association for Computational Linguistics 9: 362–73.
- Gopher. DeepMindが開発した2,800億個のパラメータを持つ言語モデル。モデルのサイズを大きくして性能が良くなる分野と大きく変わらない分野を議論
- Rae, Jack W., Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, et al. 2021. “Scaling Language Models: Methods, Analysis & Insights from Training Gopher.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2112.11446.
- 大規模言語モデルが持つ潜在的弊害について議論
- Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery.
- Weidinger, Laura, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, et al. 2021. “Ethical and Social Risks of Harm from Language Models.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2112.04359.
|
1.0
|
Improving Language Models by Retrieving from Trillions of Tokens - Borgeaud, Sebastian, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, et al. 2021. “Improving Language Models by Retrieving from Trillions of Tokens.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2112.04426.
- 何兆ものトークンを持つデータベースから検索しながら、任意のテキストをモデル化する "Retrieval-Enhanced Transformer (RETRO)" という手法を提案。
- 記憶を外部のデータソースに頼ることで、25倍規模のパラメータ数を持つモデルと同等の性能を得ることができた。
- 「セミパラメトリック」なアプローチの有用性を示した。
## Abstract
> We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. With a 2 trillion token database, our Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25x fewer parameters. After fine-tuning, Retro performance translates to downstream knowledge-intensive tasks such as question answering. Retro combines a frozen Bert retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training. We typically train Retro from scratch, yet can also rapidly Retrofit pre-trained transformers with retrieval and still achieve good performance. Our work opens up new avenues for improving language models through explicit memory at unprecedented scale.
(DeepL翻訳)
我々は、大規模コーパスから取得した文書チャンクを、先行トークンとの局所的な類似性に基づいて条件付けすることにより、自己回帰型言語モデルを強化する。2兆個のトークンデータベースを用いた我々の検索強化型変換器(Retro)は、25倍少ないパラメータで、Pile上のGPT-3やJurassic-1と同等の性能を得ることができる。Retroの性能は、微調整の後、質問応答のような下流の知識集約的なタスクに反映される。Retroは、凍結バートレトリバー、微分可能エンコーダー、チャンク型クロスアテンションメカニズムを組み合わせ、学習時に消費されるデータよりも一桁多いデータを基にトークンを予測します。私たちは通常、Retroをゼロから学習しますが、事前に学習した変換器を検索に迅速にRetrofitすることも可能であり、それでも良好な性能を達成することができます。私たちの研究は、前例のない規模の明示的記憶によって言語モデルを改善する新しい道を開くものです。
## コード
- [labmlai/annotated_deep_learning_paper_implementations](https://github.com/labmlai/annotated_deep_learning_paper_implementations)
- [lucidrains/RETRO-pytorch](https://github.com/lucidrains/RETRO-pytorch)
## 解決した課題/先行研究との比較
- 近年の大規模自然言語処理モデルの高性能化は学習データの増加・計算能力の向上・モデルサイズの増加によって達成されている。
- 実際、BERT (0.3B) → GPT-2 (1.5B) → T5 (11B) → GPT-3 (175B) → Gopher (280B) と進むにつれどんどん性能向上。
- モデルサイズの増加は「学習・推論の処理能力の増加」と「学習データの記憶力」という2つの利点があると考えられている。
- 本論文は、2つの利点がそれぞれどの程度効いているのかの分離を目指し、特に後者の「学習データの記憶力」という側面に着目。
- **記憶力に相当するものとして外部のデータベースを用いる**ことで、モデル自体の計算量を大幅に増やすことなく、言語モデルを拡張する方法を提案した。
- 過去にも検索を組み合わせる手法は行われてきた (Guu et al., 2020; Khandelwal et al., 2020; Lewis et al., 2020; Yogatama et al., 2021) が、モデルサイズやデータベースが小規模なものであったため、何兆ものトークンからなるデータベースを用いた初の報告。
- 「過去の仕事を大規模にやってみました」の一種?
- Chunked Cross Attentionは本論文で初出のように思われる。検索テキストの取り込み方に新規性!
## 技術・手法のポイント
- インプット文字列をチャンクに分割。現在のチャンクの予測のために、前のチャンクと似たテキストをデータベースから検索。
- 外部データベースからの検索には事前学習済みBERTを使用。近傍探索を行い、インプットに似たテキストを抽出。
- 検索されたテキストをChuncked Cross-Attentionモジュールを用いてRETROに取り込む。
[](https://gyazo.com/aa9ff460ec3644474e33fe79d7b6b85b)
## 評価指標
- 用いたデータセット
- C4
- Wikitext103
- Curation Corpus
- Lambada
- Pile
- マニュアルで選んだWikipediaの記事 (基準:データセットを集めた後に編集された記事)
- 比較した指標
- Bits-per-byte
- Perplexity
- Lambadaデータセットはaccuracy on the last word
- Q&A (The Natural Questions. Kwiatkowski et al., 2019) のAccuracy
- 比較対象のモデル
- Transformer (パラメータサイズが 172M, 425M, 1.5B, 7.5Bのものを用意。Baselineと呼称)
- RETRO (検索なし)
- RETRO (検索あり)
いずれにおいてもBaselineからの改良がみられ、Fine-tuningを行うことでQ&A taskでもstate-of-the-artとのcompetitive performanceを示した。検索なしでもbaselineと同程度の性能が出る。
## 残された課題・議論
- 検索のための外部データベースを工夫すれば、差別や暴力的表現などを前もって除くことができるかもしれないと議論。 (Bender et al. 2021; Weidinger et al., 2021あたりも参照)
- 大規模モデルより軽量化したとはいえど、外部データベースのサイズが 1T トークンくらい必要そうなので、一般人が使うのは難しそう。
## 重要な引用
- 本論文以前の検索を組み合わせる手法。データベースのサイズ的に、一般人が現実的に使えるのはこれらか。
- Guu, Kelvin, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 13--18 Jul 2020. “Retrieval Augmented Language Model Pre-Training.” In Proceedings of the 37th International Conference on Machine Learning, edited by Hal Daumé Iii and Aarti Singh, 119:3929–38. Proceedings of Machine Learning Research. PMLR.
- Khandelwal, Urvashi, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. “Generalization through Memorization: Nearest Neighbor Language Models.” https://openreview.net/pdf?id=HklBjCEKvH.
- Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, et al. 2020. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” In Proceedings of the 34th International Conference on Neural Information Processing Systems, 9459–74. NIPS’20 793. Red Hook, NY, USA: Curran Associates Inc.
- Yogatama, Dani, Cyprien de Masson d’Autume, and Lingpeng Kong. 2021. “Adaptive Semiparametric Language Models.” Transactions of the Association for Computational Linguistics 9: 362–73.
- Gopher. DeepMindが開発した2,800億個のパラメータを持つ言語モデル。モデルのサイズを大きくして性能が良くなる分野と大きく変わらない分野を議論
- Rae, Jack W., Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, et al. 2021. “Scaling Language Models: Methods, Analysis & Insights from Training Gopher.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2112.11446.
- 大規模言語モデルが持つ潜在的弊害について議論
- Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery.
- Weidinger, Laura, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, et al. 2021. “Ethical and Social Risks of Harm from Language Models.” arXiv [cs.CL]. arXiv. http://arxiv.org/abs/2112.04359.
|
process
|
improving language models by retrieving from trillions of tokens borgeaud sebastian arthur mensch jordan hoffmann trevor cai eliza rutherford katie millican george van den driessche et al “improving language models by retrieving from trillions of tokens ” arxiv arxiv 何兆ものトークンを持つデータベースから検索しながら、任意のテキストをモデル化する retrieval enhanced transformer retro という手法を提案。 記憶を外部のデータソースに頼ることで、 。 「セミパラメトリック」なアプローチの有用性を示した。 abstract we enhance auto regressive language models by conditioning on document chunks retrieved from a large corpus based on local similarity with preceding tokens with a trillion token database our retrieval enhanced transformer retro obtains comparable performance to gpt and jurassic on the pile despite using fewer parameters after fine tuning retro performance translates to downstream knowledge intensive tasks such as question answering retro combines a frozen bert retriever a differentiable encoder and a chunked cross attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training we typically train retro from scratch yet can also rapidly retrofit pre trained transformers with retrieval and still achieve good performance our work opens up new avenues for improving language models through explicit memory at unprecedented scale deepl翻訳 我々は、大規模コーパスから取得した文書チャンクを、先行トークンとの局所的な類似性に基づいて条件付けすることにより、自己回帰型言語モデルを強化する。 (retro)は、 、pile上のgpt 。retroの性能は、微調整の後、質問応答のような下流の知識集約的なタスクに反映される。retroは、凍結バートレトリバー、微分可能エンコーダー、チャンク型クロスアテンションメカニズムを組み合わせ、学習時に消費されるデータよりも一桁多いデータを基にトークンを予測します。私たちは通常、retroをゼロから学習しますが、事前に学習した変換器を検索に迅速にretrofitすることも可能であり、それでも良好な性能を達成することができます。私たちの研究は、前例のない規模の明示的記憶によって言語モデルを改善する新しい道を開くものです。 コード 解決した課題 先行研究との比較 近年の大規模自然言語処理モデルの高性能化は学習データの増加・計算能力の向上・モデルサイズの増加によって達成されている。 実際、bert → gpt → → gpt → gopher と進むにつれどんどん性能向上。 モデルサイズの増加は「学習・推論の処理能力の増加」と「学習データの記憶力」 。 本論文は、 、特に後者の「学習データの記憶力」という側面に着目。 記憶力に相当するものとして外部のデータベースを用いる ことで、モデル自体の計算量を大幅に増やすことなく、言語モデルを拡張する方法を提案した。 過去にも検索を組み合わせる手法は行われてきた guu et al khandelwal et al lewis et al yogatama et al が、モデルサイズやデータベースが小規模なものであったため、何兆ものトークンからなるデータベースを用いた初の報告。 「過去の仕事を大規模にやってみました」の一種? chunked cross attentionは本論文で初出のように思われる。検索テキストの取り込み方に新規性! 
技術・手法のポイント インプット文字列をチャンクに分割。現在のチャンクの予測のために、前のチャンクと似たテキストをデータベースから検索。 外部データベースからの検索には事前学習済みbertを使用。近傍探索を行い、インプットに似たテキストを抽出。 検索されたテキストをchuncked cross attentionモジュールを用いてretroに取り込む。 評価指標 用いたデータセット curation corpus lambada pile マニュアルで選んだwikipediaの記事 基準:データセットを集めた後に編集された記事 比較した指標 bits per byte perplexity lambadaデータセットはaccuracy on the last word q a the natural questions kwiatkowski et al のaccuracy 比較対象のモデル transformer パラメータサイズが 。baselineと呼称 retro 検索なし retro 検索あり いずれにおいてもbaselineからの改良がみられ、fine tuningを行うことでq a taskでもstate of the artとのcompetitive performanceを示した。検索なしでもbaselineと同程度の性能が出る。 残された課題・議論 検索のための外部データベースを工夫すれば、差別や暴力的表現などを前もって除くことができるかもしれないと議論。 bender et al weidinger et al 大規模モデルより軽量化したとはいえど、外部データベースのサイズが トークンくらい必要そうなので、一般人が使うのは難しそう。 重要な引用 本論文以前の検索を組み合わせる手法。データベースのサイズ的に、一般人が現実的に使えるのはこれらか。 guu kelvin kenton lee zora tung panupong pasupat and mingwei chang jul “retrieval augmented language model pre training ” in proceedings of the international conference on machine learning edited by hal daumé iii and aarti singh – proceedings of machine learning research pmlr khandelwal urvashi omer levy dan jurafsky luke zettlemoyer and mike lewis “generalization through memorization nearest neighbor language models ” lewis patrick ethan perez aleksandra piktus fabio petroni vladimir karpukhin naman goyal heinrich küttler et al “retrieval augmented generation for knowledge intensive nlp tasks ” in proceedings of the international conference on neural information processing systems – nips’ red hook ny usa curran associates inc yogatama dani cyprien de masson d’autume and lingpeng kong “adaptive semiparametric language models ” transactions of the association for computational linguistics – gopher 。モデルのサイズを大きくして性能が良くなる分野と大きく変わらない分野を議論 rae jack w sebastian borgeaud trevor cai katie millican jordan hoffmann francis song john aslanides et al “scaling language models methods analysis insights from training gopher ” arxiv arxiv 大規模言語モデルが持つ潜在的弊害について議論 bender emily m timnit gebru angelina mcmillan major and shmargaret shmitchell “on the dangers of stochastic parrots can language models be too big 🦜 ” in proceedings of the acm conference on fairness accountability and transparency – facct ’ new york ny usa association for computing machinery weidinger laura john mellor maribeth rauh conor griffin jonathan uesato po sen huang myra cheng et al “ethical and social risks of harm from language models ” arxiv arxiv
| 1
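To make the retrieval step described in the summary above concrete, here is a toy TypeScript sketch: split the input into fixed-size chunks, embed each chunk, and pull the most similar stored chunk by cosine similarity. The character-frequency `embed` function is a stand-in for the paper's frozen BERT encoder and the in-memory array is a stand-in for the trillion-token store; none of this reflects the actual RETRO implementation.

```typescript
type StoredChunk = { text: string; vec: number[] };

// Stand-in embedding: a 128-bin character-frequency vector.
function embed(text: string): number[] {
  const v = new Array(128).fill(0);
  for (const ch of text) v[ch.charCodeAt(0) % 128] += 1;
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na * nb) || 1);
}

// For each input chunk, return the most similar chunk from the database.
function retrieveNeighbours(input: string, db: StoredChunk[], chunkLen = 64): string[] {
  const neighbours: string[] = [];
  for (let i = 0; i < input.length; i += chunkLen) {
    const vec = embed(input.slice(i, i + chunkLen));
    let best: StoredChunk | undefined;
    let bestScore = -Infinity;
    for (const candidate of db) {
      const score = cosine(candidate.vec, vec);
      if (score > bestScore) { bestScore = score; best = candidate; }
    }
    if (best) neighbours.push(best.text);
  }
  return neighbours;
}
```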
|
78,333
| 22,196,025,858
|
IssuesEvent
|
2022-06-07 06:59:34
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Bug]: Home page behaviour on deployed app when page is hidden
|
Bug App Viewers Pod Production UI Building Pod Needs Triaging View Mode
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Deploying an app where the home page is marked as `hidden,` hides all the other pages on the app in deployed mode. Launching the app from here on only shows the hidden home page and not the other pages
### Steps To Reproduce
1. Have multiple pages on an app
2. Mark home page as hidden and deploy the app
3. Notice other pages on the app not appear in view mode
https://www.loom.com/share/f3428a10459044b987b76567f181f490
### Public Sample App
_No response_
### Version
Cloud
|
1.0
|
[Bug]: Home page behaviour on deployed app when page is hidden - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Deploying an app where the home page is marked as `hidden,` hides all the other pages on the app in deployed mode. Launching the app from here on only shows the hidden home page and not the other pages
### Steps To Reproduce
1. Have multiple pages on an app
2. Mark home page as hidden and deploy the app
3. Notice other pages on the app not appear in view mode
https://www.loom.com/share/f3428a10459044b987b76567f181f490
### Public Sample App
_No response_
### Version
Cloud
|
non_process
|
home page behaviour on deployed app when page is hidden is there an existing issue for this i have searched the existing issues description deploying an app where the home page is marked as hidden hides all the other pages on the app in deployed mode launching the app from here on only shows the hidden home page and not the other pages steps to reproduce have multiple pages on an app mark home page as hidden and deploy the app notice other pages on the app not appear in view mode public sample app no response version cloud
| 0
|
71,326
| 30,857,769,389
|
IssuesEvent
|
2023-08-02 22:23:41
|
agrc/api.mapserv.utah.gov
|
https://api.github.com/repos/agrc/api.mapserv.utah.gov
|
opened
|
Incorrect address parsing causing bad geocode result
|
bug service - api geocoding
|
`308 EAST 4500 SOUTH, SUITE 100-C Murray` gets parsed as `308 East SOUTH SUITE 4500` and matches in the salt lake avenues.
We recommend stripping sub addresses but we should not foul up this poorly.
|
1.0
|
Incorrect address parsing causing bad geocode result - `308 EAST 4500 SOUTH, SUITE 100-C Murray` gets parsed as `308 East SOUTH SUITE 4500` and matches in the salt lake avenues.
We recommend stripping sub addresses but we should not foul up this poorly.
|
non_process
|
incorrect address parsing causing bad geocode result east south suite c murray gets parsed as east south suite and matches in the salt lake avenues we recommend stripping sub addresses but we should not foul up this poorly
| 0
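A minimal sketch of the recommended pre-processing, assuming a simple regex is acceptable for the common `SUITE`/`STE`/`APT`/`UNIT`/`#` designators (the pattern is illustrative, not the API's actual parser):

```typescript
// Drop a secondary-unit designator and its value so the street address parses cleanly.
const UNIT_RE = /[,\s]+(?:(?:suite|ste|apt|unit)\s+|#\s*)[\w-]+/i;

function stripSubAddress(address: string): string {
  return address.replace(UNIT_RE, " ").replace(/\s{2,}/g, " ").trim();
}

// stripSubAddress("308 EAST 4500 SOUTH, SUITE 100-C Murray")
//   -> "308 EAST 4500 SOUTH Murray"
```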
|
242
| 2,665,081,765
|
IssuesEvent
|
2015-03-20 18:12:54
|
saxifrage/learn
|
https://api.github.com/repos/saxifrage/learn
|
opened
|
publish main project site on a domain
|
Process ★
|
@timothyfcook Can you help me out here? I propose we host our main project site at:
http://learn.saxifrageschool.org/
I've got GitHub pages set up on the `gh-pages` branch. Once you okay the domain and add a `CNAME` for it to `saxifrage.github.io` I can finish the configuration to get our project site on a proper domain.
|
1.0
|
publish main project site on a domain - @timothyfcook Can you help me out here? I propose we host our main project site at:
http://learn.saxifrageschool.org/
I've got GitHub pages set up on the `gh-pages` branch. Once you okay the domain and add a `CNAME` for it to `saxifrage.github.io` I can finish the configuration to get our project site on a proper domain.
|
process
|
publish main project site on a domain timothyfcook can you help me out here i propose we host our main project site at i ve got github pages set up on the gh pages branch once you okay the domain and add a cname for it to saxifrage github io i can finish the configuration to get our project site on a proper domain
| 1
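For reference, the usual shape of that configuration is a DNS CNAME record on the subdomain plus a `CNAME` file in the published branch. The values below are a hedged illustration of the setup described above, not a record of what was actually configured:

```
# DNS record in the saxifrageschool.org zone
learn    CNAME    saxifrage.github.io.

# Contents of the CNAME file on the gh-pages branch
learn.saxifrageschool.org
```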
|
17,966
| 23,976,274,902
|
IssuesEvent
|
2022-09-13 11:52:39
|
JeroenMathon/NeosVR-Research-Initiative
|
https://api.github.com/repos/JeroenMathon/NeosVR-Research-Initiative
|
opened
|
Build profile on Karel Hulec
|
help wanted processing
|
**IMPORTANT**: It is essential that all information documented in this profile is strictly related to the project and the situation; anything unrelated is to be redacted.
Anything related to the project's:
- History
- Development
- Publicized Internal Affairs and relations
Are to be included in this comprehensive profile
|
1.0
|
Build profile on Karel Hulec - **IMPORTANT**: It is essential that all information documented in this profile is strictly related to the project and the situation; anything unrelated is to be redacted.
Anything related to the project's:
- History
- Development
- Publicized Internal Affairs and relations
Are to be included in this comprehensive profile
|
process
|
build profile on karel hulec important it is essential that all information documented in this profile is strictly related to the project and the situation anything unrelated is to be redacted anything related to the project s history development publicized internal affairs and relations are to be included in this comprehensive profile
| 1
|
50,878
| 6,130,830,860
|
IssuesEvent
|
2017-06-24 09:32:31
|
rust-lang/rust
|
https://api.github.com/repos/rust-lang/rust
|
closed
|
Compiler crashes when using feature "associated_type_defaults"
|
E-needstest I-ICE
|
rustc: `rustc 1.14.0-nightly (6e8f92f11 2016-10-07)`
cargo: `cargo 0.13.0-nightly (6534bdd 2016-10-07)`
Short code example to reproduce the issue:
``` rust
#![feature(associated_type_defaults)]
trait State: Sized {
type NextState: State = StateMachineEnded;
fn execute(self) -> Option<Self::NextState>;
}
struct StateMachineEnded;
impl State for StateMachineEnded {
fn execute(self) -> Option<Self::NextState> {
None
}
}
```
Adding `type NextState = StateMachineEnded;` to the implementation - will compile.
`RUST_LOG=trace RUST_BACKTRACE=1 cargo build` log: https://gist.github.com/lilianmoraru/69701f72b626f9872661952190066b37
|
1.0
|
Compiler crashes when using feature "associated_type_defaults" - rustc: `rustc 1.14.0-nightly (6e8f92f11 2016-10-07)`
cargo: `cargo 0.13.0-nightly (6534bdd 2016-10-07)`
Short code example to reproduce the issue:
``` rust
#![feature(associated_type_defaults)]
trait State: Sized {
type NextState: State = StateMachineEnded;
fn execute(self) -> Option<Self::NextState>;
}
struct StateMachineEnded;
impl State for StateMachineEnded {
fn execute(self) -> Option<Self::NextState> {
None
}
}
```
Adding `type NextState = StateMachineEnded;` to the implementation - will compile.
`RUST_LOG=trace RUST_BACKTRACE=1 cargo build` log: https://gist.github.com/lilianmoraru/69701f72b626f9872661952190066b37
|
non_process
|
compiler crashes when using feature associated type defaults rustc rustc nightly cargo cargo nightly short code example to reproduce the issue rust trait state sized type nextstate state statemachineended fn execute self option struct statemachineended impl state for statemachineended fn execute self option none adding type nextstate statemachineended to the implementation will compile rust log trace rust backtrace cargo build log
| 0
|
92,059
| 10,737,490,648
|
IssuesEvent
|
2019-10-29 13:12:29
|
corvus-dotnet/Corvus.Monitoring
|
https://api.github.com/repos/corvus-dotnet/Corvus.Monitoring
|
closed
|
Add missing elements to Readme
|
documentation
|
We need:
- Badges at top showing ADO build status, license, and IMM status
- License section
- Code of Conduct
- IMM detail section
|
1.0
|
Add missing elements to Readme - We need:
- Badges at top showing ADO build status, license, and IMM status
- License section
- Code of Conduct
- IMM detail section
|
non_process
|
add missing elements to readme we need badges at top showing ado build status license and imm status license section code of conduct imm detail section
| 0
|
237
| 4,875,525,937
|
IssuesEvent
|
2016-11-16 09:52:07
|
cf-tm-bot/openstack_cpi
|
https://api.github.com/repos/cf-tm-bot/openstack_cpi
|
closed
|
Apply tf modularization also to e2e and unify folder structure - Story Id: 131870543
|
accepted chore ci env-creation-automation pipeline
|
We have introduced tf modules in #127445791. To be consistent use it also for e2e tf scripts. The folder structure should be unified.
---
Mirrors: [story 131870543](https://www.pivotaltracker.com/story/show/131870543) submitted on Oct 7, 2016 UTC
- **Requester**: Felix Riegger
- **Owners**: Beyhan Veli, Mauro Morales
- **Estimate**: 0.0
|
1.0
|
Apply tf modularization also to e2e and unify folder structure - Story Id: 131870543 - We have introduced tf modules in #127445791. To be consistent use it also for e2e tf scripts. The folder structure should be unified.
---
Mirrors: [story 131870543](https://www.pivotaltracker.com/story/show/131870543) submitted on Oct 7, 2016 UTC
- **Requester**: Felix Riegger
- **Owners**: Beyhan Veli, Mauro Morales
- **Estimate**: 0.0
|
non_process
|
apply tf modularization also to and unify folder structure story id we have introduced tf modules in to be consistent use it also for tf scripts the folder structure should be unified mirrors submitted on oct utc requester felix riegger owners beyhan veli mauro morales estimate
| 0
|
115,926
| 9,817,267,643
|
IssuesEvent
|
2019-06-13 16:19:11
|
hashmapinc/Drillflow
|
https://api.github.com/repos/hashmapinc/Drillflow
|
opened
|
QA: The index is not getting the min max returned (Link #551)
|
Test
|
Create test case to ensure that min and max index is returned in index channel
|
1.0
|
QA: The index is not getting the min max returned (Link #551) - Create test case to ensure that min and max index is returned in index channel
|
non_process
|
qa the index is not getting the min max returned link create test case to ensure that min and max index is returned in index channel
| 0
|
20,653
| 27,328,803,824
|
IssuesEvent
|
2023-02-25 10:57:53
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
closed
|
[C++] Nightly Integration Testing Report
|
type: process nightly-testing
|
Note: This report excludes firestore. Please also check **[the report for firestore](https://github.com/firebase/firebase-cpp-sdk/issues/1178)**
***
<hidden value="integration-test-status-comment"></hidden>
### [build against repo] Integration test with FLAKINESS (succeeded after retry)
Requested by @DellaBitta on commit 1be7d248741115daaf5196413ed0d96428635796
Last updated: Fri Feb 24 03:59 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4260875153)**
| Failures | Configs |
|----------|---------|
| gma | [TEST] [FLAKINESS] [Android] [1/3 os: windows] [1/4 android_device: android_latest]<details><summary>(1 failed tests)</summary> CRASH/TIMEOUT</details>[TEST] [FLAKINESS] [iOS] [macos] [1/6 ios_device: ios_min]<details><summary>(1 failed tests)</summary> FirebaseGmaTest.TestAdView</details> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 1be7d248741115daaf5196413ed0d96428635796
Last updated: Fri Feb 24 15:15 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4262245158)**
<hidden value="integration-test-status-comment"></hidden>
|
1.0
|
[C++] Nightly Integration Testing Report - Note: This report excludes firestore. Please also check **[the report for firestore](https://github.com/firebase/firebase-cpp-sdk/issues/1178)**
***
<hidden value="integration-test-status-comment"></hidden>
### [build against repo] Integration test with FLAKINESS (succeeded after retry)
Requested by @DellaBitta on commit 1be7d248741115daaf5196413ed0d96428635796
Last updated: Fri Feb 24 03:59 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4260875153)**
| Failures | Configs |
|----------|---------|
| gma | [TEST] [FLAKINESS] [Android] [1/3 os: windows] [1/4 android_device: android_latest]<details><summary>(1 failed tests)</summary> CRASH/TIMEOUT</details>[TEST] [FLAKINESS] [iOS] [macos] [1/6 ios_device: ios_min]<details><summary>(1 failed tests)</summary> FirebaseGmaTest.TestAdView</details> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 1be7d248741115daaf5196413ed0d96428635796
Last updated: Fri Feb 24 15:15 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4262245158)**
<hidden value="integration-test-status-comment"></hidden>
|
process
|
nightly integration testing report note this report excludes firestore please also check integration test with flakiness succeeded after retry requested by dellabitta on commit last updated fri feb pst failures configs gma failed tests nbsp nbsp crash timeout failed tests nbsp nbsp firebasegmatest testadview add flaky tests to ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated fri feb pst
| 1
|
14,044
| 2,789,869,058
|
IssuesEvent
|
2015-05-08 22:02:44
|
google/google-visualization-api-issues
|
https://api.github.com/repos/google/google-visualization-api-issues
|
opened
|
ImageSparkLine swapping dimensions
|
Priority-Medium Type-Defect
|
Original [issue 371](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=371) created by orwant on 2010-08-11T16:18:27.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. See http://code.google.com/apis/ajax/playground/?type=visualization#sparkline
2. Add the following config object: {width:25, height:70}
3. Confirm that it will in fact generate an image 70px wide and 25px high
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
ImageSparkLine
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
No - unsure.
<b>What operating system and browser are you using?</b>
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
|
1.0
|
ImageSparkLine swapping dimensions - Original [issue 371](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=371) created by orwant on 2010-08-11T16:18:27.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. See http://code.google.com/apis/ajax/playground/?type=visualization#sparkline
2. Add the following config object: {width:25, height:70}
3. Confirm that it will in fact generate an image 70px wide and 25px high
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
ImageSparkLine
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
No - unsure.
<b>What operating system and browser are you using?</b>
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
|
non_process
|
imagesparkline swapping dimensions original created by orwant on what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code see add the following config object width height confirm that it will in fact generate an image wide and high what component is this issue related to piechart linechart datatable query etc imagesparkline are you using the test environment version if you are not sure answer no no unsure what operating system and browser are you using for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
| 0
|
15,966
| 20,177,281,138
|
IssuesEvent
|
2022-02-10 15:27:54
|
ossf/tac
|
https://api.github.com/repos/ossf/tac
|
closed
|
TAC Election Process: Member makeup
|
ElectionProcess
|
**Proposal:**
Keep TAC size at 7 members
- 4 members are elected by the governing board
- 3 members are elected by Technical Initiative contributors. Each TI determines who is a contributor and may vote. 1 vote per person across all Technical Initiatives
|
1.0
|
TAC Election Process: Member makeup - **Proposal:**
Keep TAC size at 7 members
- 4 members are elected by the governing board
- 3 members are elected by Technical Initiative contributors. Each TI determines who is a contributor and may vote. 1 vote per person across all Technical Initiatives
|
process
|
tac election process member makeup proposal keep tac size at members members are elected by the governing board members are elected by technical initiative contributors each ti determines who is a contributor and may vote vote per person across all technical initiatives
| 1
|
39,811
| 8,688,868,337
|
IssuesEvent
|
2018-12-03 17:07:53
|
CCBlueX/LiquidBounce1.8-Issues
|
https://api.github.com/repos/CCBlueX/LiquidBounce1.8-Issues
|
closed
|
improve aac 1.9.10 fly
|
Bypass Recode
|
aac 1.9.10 fly is relatively weak compared to other clients (alpha centaurie, eazy, etc.).
|
1.0
|
improve aac 1.9.10 fly - aac 1.9.10 fly is relatively weak compared to other clients (alpha centaurie, eazy, etc.).
|
non_process
|
improve aac fly aac fly ist relative schlecht im vergleich zu anderen clients alpha centaurie eazy usw
| 0
|
174,753
| 14,498,147,274
|
IssuesEvent
|
2020-12-11 15:08:41
|
input-output-hk/cardano-rest
|
https://api.github.com/repos/input-output-hk/cardano-rest
|
closed
|
Incorrect JSON Schema Values
|
documentation
|
I have generated JSON schemas based on the wiki example results and found several issues. These could be misinterpreted by the schema generator but I used a tool based on the standard assuming http://json-schema.org/draft-04/schema#.
1. `GetCoin` is an `integer` in the docs but a `string` in the API (multiple endpoints)
2. `ctsRelayedBy` is a `string` in the docs but can be `null`. (`txs/summary/{txid}`)
3. `$schema`, `type`, `properties`, and `required` are all required fields in the docs but can be missing (`blocks/pages`)
4. `type` is an `object` in the docs but should be an `array` (`genesis/address`)
Again, these were autogenerated schemas so it could be a fault with that - however I'd assume we should be conforming to the standard?
|
1.0
|
Incorrect JSON Schema Values - I have generated JSON schemas based on the wiki example results and found several issues. These could be misinterpreted by the schema generator but I used a tool based on the standard assuming http://json-schema.org/draft-04/schema#.
1. `GetCoin` is an `integer` in the docs but a `string` in the API (multiple endpoints)
2. `ctsRelayedBy` is a `string` in the docs but can be `null`. (`txs/summary/{txid}`)
3. `$schema`, `type`, `properties`, and `required` are all required fields in the docs but can be missing (`blocks/pages`)
4. `type` is an `object` in the docs but should be an `array` (`genesis/address`)
Again, these were autogenerated schemas so it could be a fault with that - however I'd assume we should be conforming to the standard?
|
non_process
|
incorrect json schema values i have generated json schemas based on the wiki example results and found several issues these could be misinterpreted by the schema generator but i used a tool based on the standard assuming getcoin is an integer in the docs but a string in the api multiple endpoints ctsrelayedby is a string in the docs but can be null txs summary txid schema type properties and required are all required fields in the docs but can be missing blocks pages type is an object in the docs but should be an array genesis address again these were autogenerated schemas so it could be a fault with that however i d assume we should be conforming to the standard
| 0
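For illustration, corrected fragments for two of the mismatches listed above could look like the following. The schemas are expressed here as TypeScript constants, and the exact keys are assumptions rather than cardano-rest's published schema:

```typescript
// GetCoin arrives as a decimal string, not a JSON integer (issue 1).
const getCoinSchema = {
  type: "string",
  pattern: "^[0-9]+$", // assumption: unsigned integer rendered as a string
} as const;

// ctsRelayedBy may legitimately be null (issue 2), so allow both types.
const ctsRelayedBySchema = {
  type: ["string", "null"],
} as const;
```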
|
19,903
| 26,358,045,244
|
IssuesEvent
|
2023-01-11 11:12:44
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
OTR+NTR: CRISPR biology and GO:0043571 ! maintenance of CRISPR repeat elements
|
New term request Other term-related request mini-project multi-species process
|
GO:0043571 ! maintenance of CRISPR repeat elements is defined as:
"Any process involved in sustaining CRISPR repeat clusters, including capture of new spacer elements, expansion or contraction of clusters, propagation of the leader sequence and repeat clusters within a genome, transfer of repeat clusters and CRISPR-associated (cas) genes to new genomes, transcription of the CRISPR repeat arrays into RNA and processing, and interaction of CRISPR/cas loci with the host genome. CRISPR (clustered regularly interspaced short palindromic repeat) elements are a family of sequence elements containing multiple direct repeats of 24-48 bp with weak dyad symmetry which are separated by regularly sized nonrepetitive spacer sequences."
It's an is_a child of GO:0043570 ! maintenance of DNA repeat elements.
This term strikes me as an overly broad umbrella for all of CRISPR-cas biology. At the same time, it's missing the key activity of the CRISPR systems in host defense. The reference provided in the notes doesn't give a good overview of the biology.
PMID: 23495939 is an Annual Reviews article, which breaks CRISPR biology into three types of potential process terms:
- CRISPR adaptation (i.e., new spacer acquisition), is_a GO:0006259 ! DNA metabolic process. I don't think it should be an is_a of GO:0043570 ! maintenance of DNA repeat elements. The key step is the capture of new unique sequences within the repeat array.
- CRISPR-associ
- crRNA biogenesis, (transcription and RNA processing) is_a GO:0006396 ! RNA processing
- crRNA-guided target nucleic acid inactivation
- crRNA-guided target DNA inactivation
- crRNA-guided target RNA inactivation
IIRC cells that are defective in CRISPR adaptation can still use CRISPRs to defend against invaders that match existing crRNAs
An umbrella term for the overall process of CRISPR-mediated adaptive immunity could be over all of these, the closest appropriate parent would be GO:0044355 ! clearance of foreign intracellular DNA, but this should have a parent term that includes both RNA and DNA. Otherwise, it would be a child of GO:0006952 ! defense response. From the review, some CRISPR mediated host defense systems work against DNA and RNA (see above)
In addition to process terms, there should probably also be component terms:
- CRISPR-associated complex for antiviral defense is_a GO:0030529 ! ribonucleoprotein complex
- CRISPR-associated complex for antiviral defense, Type I (Cascade)
- CRISPR-associated complex for antiviral defense, Type II
- CRISPR-associated complex for antiviral defense, Type III
One or more activity terms may also be needed.
Reported by: jimhu
Original Ticket: [geneontology/ontology-requests/10809](https://sourceforge.net/p/geneontology/ontology-requests/10809)
|
1.0
|
OTR+NTR: CRISPR biology and GO:0043571 ! maintenance of CRISPR repeat elements - GO:0043571 ! maintenance of CRISPR repeat elements is defined as:
"Any process involved in sustaining CRISPR repeat clusters, including capture of new spacer elements, expansion or contraction of clusters, propagation of the leader sequence and repeat clusters within a genome, transfer of repeat clusters and CRISPR-associated (cas) genes to new genomes, transcription of the CRISPR repeat arrays into RNA and processing, and interaction of CRISPR/cas loci with the host genome. CRISPR (clustered regularly interspaced short palindromic repeat) elements are a family of sequence elements containing multiple direct repeats of 24-48 bp with weak dyad symmetry which are separated by regularly sized nonrepetitive spacer sequences."
It's an is_a child of GO:0043570 ! maintenance of DNA repeat elements.
This term strikes me as an overly broad umbrella for all of CRISPR-cas biology. At the same time, it's missing the key activity of the CRISPR systems in host defense. The reference provided in the notes doesn't give a good overview of the biology.
PMID: 23495939 is an Annual Reviews article, which breaks CRISPR biology into three types of potential process terms:
- CRISPR adaptation (i.e., new spacer acquisition), is_a GO:0006259 ! DNA metabolic process. I don't think it should be an is_a of GO:0043570 ! maintenance of DNA repeat elements. The key step is the capture of new unique sequences within the repeat array.
- CRISPR-associ
- crRNA biogenesis, (transcription and RNA processing) is_a GO:0006396 ! RNA processing
- crRNA-guided target nucleic acid inactivation
- crRNA-guided target DNA inactivation
- crRNA-guided target RNA inactivation
IIRC cells that are defective in CRISPR adaptation can still use CRISPRs to defend against invaders that match existing crRNAs
An umbrella term for the overall process of CRISPR-mediated adaptive immunity could be over all of these, the closest appropriate parent would be GO:0044355 ! clearance of foreign intracellular DNA, but this should have a parent term that includes both RNA and DNA. Otherwise, it would be a child of GO:0006952 ! defense response. From the review, some CRISPR mediated host defense systems work against DNA and RNA (see above)
In addition to process terms, there should probably also be component terms:
- CRISPR-associated complex for antiviral defense is_a GO:0030529 ! ribonucleoprotein complex
- CRISPR-associated complex for antiviral defense, Type I (Cascade)
- CRISPR-associated complex for antiviral defense, Type II
- CRISPR-associated complex for antiviral defense, Type III
One or more activity terms may also be needed.
Reported by: jimhu
Original Ticket: [geneontology/ontology-requests/10809](https://sourceforge.net/p/geneontology/ontology-requests/10809)
|
process
|
otr ntr crispr biology and go maintenance of crispr repeat elements go maintenance of crispr repeat elements is defined as any process involved in sustaining crispr repeat clusters including capture of new spacer elements expansion or contraction of clusters propagation of the leader sequence and repeat clusters within a genome transfer of repeat clusters and crispr associated cas genes to new genomes transcription of the crispr repeat arrays into rna and processing and interaction of crispr cas loci with the host genome crispr clustered regularly interspaced short palindromic repeat elements are a family of sequence elements containing multiple direct repeats of bp with weak dyad symmetry which are separated by regularly sized nonrepetitive spacer sequences it s an is a child of go maintenance of dna repeat elements this term strikes me as an overly broad umbrella for all of crispr cas biology at the same time it s missing the key activity of the crispr systems in host defense the reference provided in the notes doesn t give a good overview of the biology pmid is an annual reviews article which breaks crispr biology into three types of potential process terms crispr adaptation i e new spacer acquisition is a go dna metabolic process i don t think it should be an is a of go maintenance of dna repeat elements the key step is the capture of new unique sequences within the repeat array crispr associ crrna biogenesis transcription and rna processing is a go rna processing crrna guided target nucleic acid inactivation crrna guided target dna inactivation crrna guided target rna inactivation iirc cells that are defective in crispr adaptation can still use crisprs to defend against invaders that match existing crrnas an umbrella term for the overall process of crispr mediated adaptive immunity could be over all of these the closest appropriate parent would be go clearance of foreign intracellular dna but this should have a parent term that includes both rna and dna otherwise it would be a child of go defense response from the review some crispr mediated host defense systems work against dna and rna see above in addition to process terms there should probably also be component terms crispr associated complex for antiviral defense is a go ribonucleoprotein complex crispr associated complex for antiviral defense type i cascade crispr associated complex for antiviral defense type ii crispr associated complex for antiviral defense type iii one or more activity terms may also be needed reported by jimhu original ticket
| 1
|
14,551
| 9,302,149,513
|
IssuesEvent
|
2019-03-24 06:27:56
|
fennekki/cdparacord
|
https://api.github.com/repos/fennekki/cdparacord
|
opened
|
Albumartist omitted on multi-artist album where artist matches albumartist
|
bug usability
|
When ripping a CD that has tracks with artist tag matching albumartist tag, and the setting to always tag albumartist is set to false, the generation of albumartist tags is determined *per-track* instead of per-album. Therefore an album by artist A that also has two songs by artist B will only have albumartist tagged on the songs by artist B.
This is not inherently wrong other than being inconsistent, but in practice creates a problem because some software actually treats "albumname with only artist tagged" and "albumname with albumartist tagged" as two separate albums of the same name by the same artist. This is very confusing and I'd imagine not really wanted by anybody.
|
True
|
Albumartist omitted on multi-artist album where artist matches albumartist - When ripping a CD that has tracks with artist tag matching albumartist tag, and the setting to always tag albumartist is set to false, the generation of albumartist tags is determined *per-track* instead of per-album. Therefore an album by artist A that also has two songs by artist B will only have albumartist tagged on the songs by artist B.
This is not inherently wrong other than being inconsistent, but in practice creates a problem because some software actually treats "albumname with only artist tagged" and "albumname with albumartist tagged" as two separate albums of the same name by the same artist. This is very confusing and I'd imagine not really wanted by anybody.
|
non_process
|
albumartist omitted on multi artist album where artist matches albumartist when rippng a cd that has tracks with artist tag matching albumartist tag and the setting to always tag albumartist is set to false the generation of albumartist tags is determined per track instead of per album therefore an album by artist a that also has two songs by artist b will only have albumartist tagged on the songs by artist b this is not inherently wrong other than being inconsistent but in practise creates a problem because some software actually treat albumname with only artist tagged and albumname with albumartist tagged as two separate albums of the same name by the same artist this is very confusing and i d imagine not really wanted by anybody
| 0
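The consistent behaviour asked for above is a per-album decision rather than a per-track one. A small sketch of that rule follows (cdparacord itself is Python; this TypeScript is only to show the shape of the logic):

```typescript
interface Track {
  artist: string;
  albumartist: string;
}

// Tag albumartist on every track of the album as soon as any single track's
// artist differs from the album artist; otherwise omit it everywhere.
function shouldTagAlbumArtist(tracks: Track[]): boolean {
  return tracks.some((t) => t.artist !== t.albumartist);
}
```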
|
14,615
| 17,755,683,897
|
IssuesEvent
|
2021-08-28 18:04:02
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
opened
|
Investigate a pure Ruby/Rack/Roda(?) implementation of live reloading in development
|
help wanted process
|
We're currently using BrowserSync which is a JS/Node tool to provide two services:
* live reloading of the page whenever files change in the repo
* exposing the localhost dev server via proxy to the computer's LAN
I'd like to investigate replacing BrowserSync with a pure Ruby solution, ideally integrated into our new Rack (maybe Roda specifically) based server (see #281).
|
1.0
|
Investigate a pure Ruby/Rack/Roda(?) implementation of live reloading in development - We're currently using BrowserSync which is a JS/Node tool to provide two services:
* live reloading of the page whenever files change in the repo
* exposing the localhost dev server via proxy to the computer's LAN
I'd like to investigate replacing BrowserSync with a pure Ruby solution, ideally integrated into our new Rack (maybe Roda specifically) based server (see #281).
|
process
|
investigate a pure ruby rack roda implementation of live reloading in development we re currently using browsersync which is a js node tool to provide two services live reloading of the page whenever files change in the repo exposing the localhost dev server via proxy to the computer s lan i d like to investigate replacing browsersync with a pure ruby solution ideally integrated into our new rack maybe roda specifically based server see
| 1
|
141,965
| 11,449,220,010
|
IssuesEvent
|
2020-02-06 06:28:36
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Use e2eskipper package
|
kind/cleanup sig/testing
|
This is a part of https://github.com/kubernetes/kubernetes/issues/81245
Please make the following e2e tests use e2eskipper package like https://github.com/kubernetes/kubernetes/pull/87031
- [x] e2e/apps/ (@k-toyoda-pi)
- [x] e2e/auth/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87062)
- [x] e2e/autoscaling/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87173)
- [x] e2e/cloud/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87175)
- [x] e2e/common/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87270)
- [x] e2e/framework/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87317)
- [x] e2e/gke_local_ssd.go and e2e/gke_node_pools.go (@yuxiaobo96 https://github.com/kubernetes/kubernetes/pull/87121)
- [x] e2e/instrumentation (@tanjunchen https://github.com/kubernetes/kubernetes/pull/87101)
- [x] e2e_kubeadm/ and e2e/kubectl/ (@tanjunchen https://github.com/kubernetes/kubernetes/pull/87102)
- [x] e2e/network/ (@tanjunchen https://github.com/kubernetes/kubernetes/pull/87073)
- [x] e2e/node/ (@tanjunchen)
- [x] e2e/scheduling/ and e2e/servicecatalog/ (@s-ito-ts https://github.com/kubernetes/kubernetes/pull/87169)
- [x] e2e/storage/ (@yuxiaobo96 https://github.com/kubernetes/kubernetes/pull/87124)
- [x] e2e/ui/ and e2e/upgrades/ (@yuxiaobo96 https://github.com/kubernetes/kubernetes/pull/87125)
- [x] e2e/windows/ (https://github.com/kubernetes/kubernetes/pull/87103)
|
1.0
|
Use e2eskipper package - This is a part of https://github.com/kubernetes/kubernetes/issues/81245
Please make the following e2e tests use e2eskipper package like https://github.com/kubernetes/kubernetes/pull/87031
- [x] e2e/apps/ (@k-toyoda-pi)
- [x] e2e/auth/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87062)
- [x] e2e/autoscaling/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87173)
- [x] e2e/cloud/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87175)
- [x] e2e/common/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87270)
- [x] e2e/framework/ (@YuikoTakada https://github.com/kubernetes/kubernetes/pull/87317)
- [x] e2e/gke_local_ssd.go and e2e/gke_node_pools.go (@yuxiaobo96 https://github.com/kubernetes/kubernetes/pull/87121)
- [x] e2e/instrumentation (@tanjunchen https://github.com/kubernetes/kubernetes/pull/87101)
- [x] e2e_kubeadm/ and e2e/kubectl/ (@tanjunchen https://github.com/kubernetes/kubernetes/pull/87102)
- [x] e2e/network/ (@tanjunchen https://github.com/kubernetes/kubernetes/pull/87073)
- [x] e2e/node/ (@tanjunchen)
- [x] e2e/scheduling/ and e2e/servicecatalog/ (@s-ito-ts https://github.com/kubernetes/kubernetes/pull/87169)
- [x] e2e/storage/ (@yuxiaobo96 https://github.com/kubernetes/kubernetes/pull/87124)
- [x] e2e/ui/ and e2e/upgrades/ (@yuxiaobo96 https://github.com/kubernetes/kubernetes/pull/87125)
- [x] e2e/windows/ (https://github.com/kubernetes/kubernetes/pull/87103)
|
non_process
|
use package this is a part of please make the following tests use package like apps k toyoda pi auth yuikotakada autoscaling yuikotakada cloud yuikotakada common yuikotakada framework yuikotakada gke local ssd go and gke node pools go instrumentation tanjunchen kubeadm and kubectl tanjunchen network tanjunchen node tanjunchen scheduling and servicecatalog s ito ts storage ui and upgrades windows
| 0
|
231,569
| 25,516,766,392
|
IssuesEvent
|
2022-11-28 16:56:18
|
turkdevops/textpattern
|
https://api.github.com/repos/turkdevops/textpattern
|
closed
|
CVE-2020-11022 (Medium) detected in jquery-1.12.4.js - autoclosed
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.12.4.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js</a></p>
<p>Path to dependency file: /node_modules/jquery-ui-dist/index.html</p>
<p>Path to vulnerable library: /node_modules/jquery-ui-dist/external/jquery/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/textpattern/commit/1baacf5af678fa9fb4fecb3a047220f62418d3c4">1baacf5af678fa9fb4fecb3a047220f62418d3c4</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-11022 (Medium) detected in jquery-1.12.4.js - autoclosed - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.12.4.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js</a></p>
<p>Path to dependency file: /node_modules/jquery-ui-dist/index.html</p>
<p>Path to vulnerable library: /node_modules/jquery-ui-dist/external/jquery/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/textpattern/commit/1baacf5af678fa9fb4fecb3a047220f62418d3c4">1baacf5af678fa9fb4fecb3a047220f62418d3c4</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery js autoclosed cve medium severity vulnerability vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file node modules jquery ui dist index html path to vulnerable library node modules jquery ui dist external jquery jquery js dependency hierarchy x jquery js vulnerable library found in head commit a href found in base branch dev vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with mend
| 0
|
115,368
| 17,313,733,343
|
IssuesEvent
|
2021-07-27 01:03:44
|
TechnoConserve/ml-practice
|
https://api.github.com/repos/TechnoConserve/ml-practice
|
opened
|
CVE-2021-25288 (High) detected in Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl
|
security vulnerability
|
## CVE-2021-25288 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/40/45/cd1000f1c474136236c5105c882d8e1e40bd94ae939b5ca53bf724967514/Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/40/45/cd1000f1c474136236c5105c882d8e1e40bd94ae939b5ca53bf724967514/Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: ml-practice/requirements.txt</p>
<p>Path to vulnerable library: ml-practice/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in Pillow before 8.2.0. There is an out-of-bounds read in J2kDecode, in j2ku_gray_i.
<p>Publish Date: 2021-06-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25288>CVE-2021-25288</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25288">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25288</a></p>
<p>Release Date: 2021-06-02</p>
<p>Fix Resolution: Pillow - 8.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
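Since the suggested fix is a version bump, a quick sanity check of whichever environment installs this wheel can be sketched as below. This is only an illustrative helper, not part of the report; it assumes a Python 3.8+ interpreter so that importlib.metadata is available.
```python
# Hedged sketch: compare the installed Pillow version against the first fixed
# release (8.2.0 per the report above) and flag older installs.
from importlib.metadata import PackageNotFoundError, version

FIXED = (8, 2, 0)

def parse(ver):
    """Turn a version string like '8.2.0' into a comparable tuple of ints."""
    parts = []
    for piece in ver.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

try:
    installed = version("Pillow")
except PackageNotFoundError:
    installed = None

if installed is None:
    print("Pillow is not installed in this environment")
elif parse(installed) < FIXED:
    print(f"Pillow {installed} predates 8.2.0 and is affected per CVE-2021-25288")
else:
    print(f"Pillow {installed} already includes the fix")
```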
|
True
|
CVE-2021-25288 (High) detected in Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-25288 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/40/45/cd1000f1c474136236c5105c882d8e1e40bd94ae939b5ca53bf724967514/Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/40/45/cd1000f1c474136236c5105c882d8e1e40bd94ae939b5ca53bf724967514/Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: ml-practice/requirements.txt</p>
<p>Path to vulnerable library: ml-practice/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-4.3.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in Pillow before 8.2.0. There is an out-of-bounds read in J2kDecode, in j2ku_gray_i.
<p>Publish Date: 2021-06-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25288>CVE-2021-25288</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25288">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25288</a></p>
<p>Release Date: 2021-06-02</p>
<p>Fix Resolution: Pillow - 8.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in pillow whl cve high severity vulnerability vulnerable library pillow whl python imaging library fork library home page a href path to dependency file ml practice requirements txt path to vulnerable library ml practice requirements txt dependency hierarchy x pillow whl vulnerable library vulnerability details an issue was discovered in pillow before there is an out of bounds read in in gray i publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution pillow step up your open source security game with whitesource
| 0
|
21,313
| 28,507,281,156
|
IssuesEvent
|
2023-04-18 22:57:44
|
googleapis/java-storage
|
https://api.github.com/repos/googleapis/java-storage
|
closed
|
com.google.cloud.storage.GapicUnbufferedWritableByteChannelTest#resumableUpload flake?
|
type: process api: storage
|
Inexplicably, this unit test, which has been running successfully for multiple months, failed today during a CI run of `main` with JDK 17.
The error
```
[ERROR] com.google.cloud.storage.GapicUnbufferedWritableByteChannelTest.resumableUpload Time elapsed: 0.043 s <<< ERROR!
com.google.cloud.storage.StorageException: PERMISSION_DENIED
at com.google.cloud.storage.StorageException.asStorageException(StorageException.java:145)
at com.google.cloud.storage.StorageException.coalesce(StorageException.java:117)
at com.google.cloud.storage.Retrying.lambda$run$0(Retrying.java:106)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:103)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.storage.Retrying.run(Retrying.java:94)
at com.google.cloud.storage.WriteFlushStrategy$FsyncEveryFlusher.flush(WriteFlushStrategy.java:136)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannel.write(GapicUnbufferedWritableByteChannel.java:119)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannel.write(GapicUnbufferedWritableByteChannel.java:72)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannel.write(GapicUnbufferedWritableByteChannel.java:67)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannelTest.resumableUpload(GapicUnbufferedWritableByteChannelTest.java:188)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581)
Suppressed: com.google.cloud.storage.StorageException: PERMISSION_DENIED
at com.google.cloud.storage.StorageException.asStorageException(StorageException.java:145)
at com.google.cloud.storage.StorageException.coalesce(StorageException.java:117)
at com.google.cloud.storage.Retrying.lambda$run$0(Retrying.java:106)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:103)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.storage.Retrying.run(Retrying.java:94)
at com.google.cloud.storage.WriteFlushStrategy$FsyncEveryFlusher.flush(WriteFlushStrategy.java:136)
at com.google.cloud.storage.WriteFlushStrategy$FsyncEveryFlusher.close(WriteFlushStrategy.java:163)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannel.close(GapicUnbufferedWritableByteChannel.java:147)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannelTest.resumableUpload(GapicUnbufferedWritableByteChannelTest.java:177)
... 39 more
Caused by: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:98)
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:41)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:86)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionTranslatingStreamObserver.onError(GrpcExceptionTranslatingStreamObserver.java:60)
at com.google.api.gax.grpc.ApiStreamObserverDelegate.onError(ApiStreamObserverDelegate.java:55)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:487)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:563)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:744)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:723)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.grpc.StatusRuntimeException: PERMISSION_DENIED
at io.grpc.Status.asRuntimeException(Status.java:539)
... 14 more
Caused by: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:98)
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:41)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:86)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionTranslatingStreamObserver.onError(GrpcExceptionTranslatingStreamObserver.java:60)
at com.google.api.gax.grpc.ApiStreamObserverDelegate.onError(ApiStreamObserverDelegate.java:55)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:487)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:563)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:744)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:723)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.grpc.StatusRuntimeException: PERMISSION_DENIED
at io.grpc.Status.asRuntimeException(Status.java:539)
... 14 more
```
This is only supposed to happen in the case of an unexpected request making it to the "server".
First observed in https://github.com/googleapis/java-storage/actions/runs/3347056726/jobs/5544620712
|
1.0
|
com.google.cloud.storage.GapicUnbufferedWritableByteChannelTest#resumableUpload flake? - Inexplicably, this unit test, which has been running successfully for multiple months, failed today during a CI run of `main` with JDK 17.
The error
```
[ERROR] com.google.cloud.storage.GapicUnbufferedWritableByteChannelTest.resumableUpload Time elapsed: 0.043 s <<< ERROR!
com.google.cloud.storage.StorageException: PERMISSION_DENIED
at com.google.cloud.storage.StorageException.asStorageException(StorageException.java:145)
at com.google.cloud.storage.StorageException.coalesce(StorageException.java:117)
at com.google.cloud.storage.Retrying.lambda$run$0(Retrying.java:106)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:103)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.storage.Retrying.run(Retrying.java:94)
at com.google.cloud.storage.WriteFlushStrategy$FsyncEveryFlusher.flush(WriteFlushStrategy.java:136)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannel.write(GapicUnbufferedWritableByteChannel.java:119)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannel.write(GapicUnbufferedWritableByteChannel.java:72)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannel.write(GapicUnbufferedWritableByteChannel.java:67)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannelTest.resumableUpload(GapicUnbufferedWritableByteChannelTest.java:188)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581)
Suppressed: com.google.cloud.storage.StorageException: PERMISSION_DENIED
at com.google.cloud.storage.StorageException.asStorageException(StorageException.java:145)
at com.google.cloud.storage.StorageException.coalesce(StorageException.java:117)
at com.google.cloud.storage.Retrying.lambda$run$0(Retrying.java:106)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:103)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.storage.Retrying.run(Retrying.java:94)
at com.google.cloud.storage.WriteFlushStrategy$FsyncEveryFlusher.flush(WriteFlushStrategy.java:136)
at com.google.cloud.storage.WriteFlushStrategy$FsyncEveryFlusher.close(WriteFlushStrategy.java:163)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannel.close(GapicUnbufferedWritableByteChannel.java:147)
at com.google.cloud.storage.GapicUnbufferedWritableByteChannelTest.resumableUpload(GapicUnbufferedWritableByteChannelTest.java:177)
... 39 more
Caused by: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:98)
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:41)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:86)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionTranslatingStreamObserver.onError(GrpcExceptionTranslatingStreamObserver.java:60)
at com.google.api.gax.grpc.ApiStreamObserverDelegate.onError(ApiStreamObserverDelegate.java:55)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:487)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:563)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:744)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:723)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.grpc.StatusRuntimeException: PERMISSION_DENIED
at io.grpc.Status.asRuntimeException(Status.java:539)
... 14 more
Caused by: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:98)
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:41)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:86)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionTranslatingStreamObserver.onError(GrpcExceptionTranslatingStreamObserver.java:60)
at com.google.api.gax.grpc.ApiStreamObserverDelegate.onError(ApiStreamObserverDelegate.java:55)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:487)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:563)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:744)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:723)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.grpc.StatusRuntimeException: PERMISSION_DENIED
at io.grpc.Status.asRuntimeException(Status.java:539)
... 14 more
```
This is only supposed to happen in the case of an unexpected request making it to the "server".
First observed in https://github.com/googleapis/java-storage/actions/runs/3347056726/jobs/5544620712
|
process
|
com google cloud storage gapicunbufferedwritablebytechanneltest resumableupload flake inexplicably this unit test which has been running successfully for multiple months failed today during a ci run of main with jdk the error com google cloud storage gapicunbufferedwritablebytechanneltest resumableupload time elapsed s error com google cloud storage storageexception permission denied at com google cloud storage storageexception asstorageexception storageexception java at com google cloud storage storageexception coalesce storageexception java at com google cloud storage retrying lambda run retrying java at com google api gax retrying directretryingexecutor submit directretryingexecutor java at com google cloud retryhelper run retryhelper java at com google cloud retryhelper runwithretries retryhelper java at com google cloud storage retrying run retrying java at com google cloud storage writeflushstrategy fsynceveryflusher flush writeflushstrategy java at com google cloud storage gapicunbufferedwritablebytechannel write gapicunbufferedwritablebytechannel java at com google cloud storage gapicunbufferedwritablebytechannel write gapicunbufferedwritablebytechannel java at com google cloud storage gapicunbufferedwritablebytechannel write gapicunbufferedwritablebytechannel java at com google cloud storage gapicunbufferedwritablebytechanneltest resumableupload gapicunbufferedwritablebytechanneltest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executeeager junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper 
execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java suppressed com google cloud storage storageexception permission denied at com google cloud storage storageexception asstorageexception storageexception java at com google cloud storage storageexception coalesce storageexception java at com google cloud storage retrying lambda run retrying java at com google api gax retrying directretryingexecutor submit directretryingexecutor java at com google cloud retryhelper run retryhelper java at com google cloud retryhelper runwithretries retryhelper java at com google cloud storage retrying run retrying java at com google cloud storage writeflushstrategy fsynceveryflusher flush writeflushstrategy java at com google cloud storage writeflushstrategy fsynceveryflusher close writeflushstrategy java at com google cloud storage gapicunbufferedwritablebytechannel close gapicunbufferedwritablebytechannel java at com google cloud storage gapicunbufferedwritablebytechanneltest resumableupload gapicunbufferedwritablebytechanneltest java more caused by com google api gax rpc permissiondeniedexception io grpc statusruntimeexception permission denied at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptiontranslatingstreamobserver onerror grpcexceptiontranslatingstreamobserver java at com google api gax grpc apistreamobserverdelegate onerror apistreamobserverdelegate java at io grpc stub clientcalls streamobservertocalllisteneradapter onclose clientcalls java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at com google api gax grpc channelpool releasingclientcall onclose channelpool java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by io grpc statusruntimeexception permission denied at io grpc status asruntimeexception status java more caused by com google api gax rpc permissiondeniedexception io grpc statusruntimeexception permission denied at com google api gax rpc 
apiexceptionfactory createexception apiexceptionfactory java at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptiontranslatingstreamobserver onerror grpcexceptiontranslatingstreamobserver java at com google api gax grpc apistreamobserverdelegate onerror apistreamobserverdelegate java at io grpc stub clientcalls streamobservertocalllisteneradapter onclose clientcalls java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at com google api gax grpc channelpool releasingclientcall onclose channelpool java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by io grpc statusruntimeexception permission denied at io grpc status asruntimeexception status java more i only supposed to happen in the case of an unexpected request making it to the server first observed in
| 1
|
16,398
| 21,181,791,416
|
IssuesEvent
|
2022-04-08 08:41:19
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Error: [/rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/core/src/str/mod.rs:584:13] byte index 47 is not a char boundary; it is inside '点' (bytes 46..49) of `AirbyteRawYuriPagesAirbyteDataProperties推荐点RichText`
|
bug/2-confirmed kind/bug process/candidate topic: error reporting team/schema topic: mongodb
|
This is string slicing inside a unicode glyph. We have the error location, so it should be easy to fix. This appears to be inside a library, which we may want to yeet.
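The arithmetic behind the panic is easy to check: the 40 ASCII characters of the prefix occupy bytes 0..39 of the UTF-8 encoding, and each CJK character takes 3 bytes, so '点' spans bytes 46..49, exactly as the message reports. The snippet below reproduces the boundary in Python purely as an illustration; the actual panic happens in the Rust convert_case dependency when it slices the string by byte index.
```python
# Cutting the UTF-8 bytes at index 47 lands inside '点' (bytes 46..49), which is
# the same boundary Rust's str slicing panics on.
name = "AirbyteRawYuriPagesAirbyteDataProperties推荐点RichText"
encoded = name.encode("utf-8")

try:
    encoded[:47].decode("utf-8")
except UnicodeDecodeError as err:
    print(err)  # "unexpected end of data": byte 47 splits a multi-byte character
```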
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `3.12.0`
Binary Version: `22b822189f46ef0dc5c5b503368d1bee01213980`
Report: https://prisma-errors.netlify.app/report/13743
OS: `x64 darwin 21.3.0`
JS Stacktrace:
```
Error: [/rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/core/src/str/mod.rs:584:13] byte index 47 is not a char boundary; it is inside '点' (bytes 46..49) of `TheCollectionName推荐点RichText`
at ChildProcess.<anonymous> (.../node_modules/prisma/build/index.js:51878:30)
at ChildProcess.emit (events.js:400:28)
at Process.ChildProcess._handle.onexit (internal/child_process.js:282:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::str::slice_error_fail
10: convert_case::words::Words::split_camel
11: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::try_fold
12: <core::iter::adapters::filter::Filter<I,P> as core::iter::traits::iterator::Iterator>::next
13: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
14: <alloc::string::String as convert_case::Casing>::to_case
15: mongodb_introspection_connector::sampler::statistics::Statistics::composite_type_name
16: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
17: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
18: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
19: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
20: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
21: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
22: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
23: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
24: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
25: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
26: tokio::runtime::task::harness::poll_future
27: tokio::runtime::task::raw::poll
28: std::thread::local::LocalKey<T>::with
29: tokio::runtime::thread_pool::worker::Context::run_task
30: tokio::runtime::thread_pool::worker::Context::run
31: tokio::macros::scoped_tls::ScopedKey<T>::set
32: tokio::runtime::thread_pool::worker::run
33: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
34: tokio::runtime::task::harness::Harness<T,S>::poll
35: tokio::runtime::blocking::pool::Inner::run
36: std::sys_common::backtrace::__rust_begin_short_backtrace
37: core::ops::function::FnOnce::call_once{{vtable.shim}}
38: std::sys::unix::thread::Thread::new::thread_start
39: __pthread_start
```
|
1.0
|
Error: [/rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/core/src/str/mod.rs:584:13] byte index 47 is not a char boundary; it is inside '点' (bytes 46..49) of `AirbyteRawYuriPagesAirbyteDataProperties推荐点RichText` - This is string slicing inside a unicode glyph. We have the error location, so it should be easy to fix. This appears to be inside a library, which we may want to yeet.
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `3.12.0`
Binary Version: `22b822189f46ef0dc5c5b503368d1bee01213980`
Report: https://prisma-errors.netlify.app/report/13743
OS: `x64 darwin 21.3.0`
JS Stacktrace:
```
Error: [/rustc/9d1b2106e23b1abd32fce1f17267604a5102f57a/library/core/src/str/mod.rs:584:13] byte index 47 is not a char boundary; it is inside '点' (bytes 46..49) of `TheCollectionName推荐点RichText`
at ChildProcess.<anonymous> (.../node_modules/prisma/build/index.js:51878:30)
at ChildProcess.emit (events.js:400:28)
at Process.ChildProcess._handle.onexit (internal/child_process.js:282:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::str::slice_error_fail
10: convert_case::words::Words::split_camel
11: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::try_fold
12: <core::iter::adapters::filter::Filter<I,P> as core::iter::traits::iterator::Iterator>::next
13: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
14: <alloc::string::String as convert_case::Casing>::to_case
15: mongodb_introspection_connector::sampler::statistics::Statistics::composite_type_name
16: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
17: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
18: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
19: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
20: mongodb_introspection_connector::sampler::statistics::Statistics::track_document_types
21: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
22: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
23: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
24: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
25: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
26: tokio::runtime::task::harness::poll_future
27: tokio::runtime::task::raw::poll
28: std::thread::local::LocalKey<T>::with
29: tokio::runtime::thread_pool::worker::Context::run_task
30: tokio::runtime::thread_pool::worker::Context::run
31: tokio::macros::scoped_tls::ScopedKey<T>::set
32: tokio::runtime::thread_pool::worker::run
33: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
34: tokio::runtime::task::harness::Harness<T,S>::poll
35: tokio::runtime::blocking::pool::Inner::run
36: std::sys_common::backtrace::__rust_begin_short_backtrace
37: core::ops::function::FnOnce::call_once{{vtable.shim}}
38: std::sys::unix::thread::Thread::new::thread_start
39: __pthread_start
```
|
process
|
error byte index is not a char boundary it is inside 点 bytes of airbyterawyuripagesairbytedataproperties推荐点richtext this is string slicing inside a unicode glyph we have the error location so it should be easy to fix this appears to be inside a library which we may want to yeet command prisma db pull version binary version report os darwin js stacktrace error byte index is not a char boundary it is inside 点 bytes of thecollectionname推荐点richtext at childprocess node modules prisma build index js at childprocess emit events js at process childprocess handle onexit internal child process js rust stacktrace backtrace backtrace trace backtrace capture backtrace new user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook std panicking begin panic handler closure std sys common backtrace rust end short backtrace rust begin unwind core panicking panic fmt core str slice error fail convert case words words split camel as core iter traits iterator iterator try fold as core iter traits iterator iterator next as alloc vec spec from iter specfromiter from iter to case mongodb introspection connector sampler statistics statistics composite type name mongodb introspection connector sampler statistics statistics track document types mongodb introspection connector sampler statistics statistics track document types mongodb introspection connector sampler statistics statistics track document types mongodb introspection connector sampler statistics statistics track document types mongodb introspection connector sampler statistics statistics track document types as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll tokio runtime task harness poll future tokio runtime task raw poll std thread local localkey with tokio runtime thread pool worker context run task tokio runtime thread pool worker context run tokio macros scoped tls scopedkey set tokio runtime thread pool worker run as core future future future poll tokio runtime task harness harness poll tokio runtime blocking pool inner run std sys common backtrace rust begin short backtrace core ops function fnonce call once vtable shim std sys unix thread thread new thread start pthread start
| 1
|
6,028
| 8,837,332,407
|
IssuesEvent
|
2019-01-05 03:19:42
|
jwowillo/greenerthumb
|
https://api.github.com/repos/jwowillo/greenerthumb
|
opened
|
Sensors Should Include Trend Data
|
process sensors
|
The trend data should include the minimum, the maximum, and whether the data is trending up or down. This can be implemented as a process program that expects a JSON message with a header and string keys mapped to floating-point values.
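A minimal sketch of such a process program is below. The exact field names ("header", one JSON message per stdin line) are assumptions made for illustration, not the project's actual schema.
```python
# Hedged sketch: read JSON messages from stdin, track per-key min/max, and report
# whether each reading is trending up or down relative to the previous sample.
import json
import sys

def trend(samples):
    """Summarize each key's readings as min, max, and latest direction."""
    summary = {}
    for key, values in samples.items():
        direction = "flat"
        if len(values) >= 2:
            if values[-1] > values[-2]:
                direction = "up"
            elif values[-1] < values[-2]:
                direction = "down"
        summary[key] = {"min": min(values), "max": max(values), "trend": direction}
    return summary

samples = {}
for line in sys.stdin:                       # one JSON message per line (assumption)
    message = json.loads(line)
    header = message.get("header", {})       # header is passed through unchanged
    for key, value in message.items():
        if key == "header":
            continue
        samples.setdefault(key, []).append(float(value))
    print(json.dumps({"header": header, "trends": trend(samples)}))
```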
|
1.0
|
Sensors Should Include Trend Data - The trend data should include the minimum, the maximum, and whether the data is trending up or down. This can be implemented as a process program that expects a JSON message with a header and string keys mapped to floating-point values.
|
process
|
sensors should include trend data the trend data should include minimum maximum and whether the data is trending up or down this can be implemented as a process program that expects a json message with a header and string keys to floating values
| 1
|
18,532
| 24,553,097,703
|
IssuesEvent
|
2022-10-12 14:00:02
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] updated consent is not getting displayed to the participants in the following scenario
|
Bug P0 iOS Process: Fixed Process: Tested dev
|
Steps:
1. Sign up or Sign in to the mobile app
2. Enroll to the study
3. After navigating to the activities screen, lock the mobile
4. Go to the SB, update the consent for enrolled participants and publish the updates
5. Unlock the mobile app
6. Enter the passcode for the app
7. Refresh the page and observe
AR: Updated consent is not getting displayed to the participants
ER: Updated consent should get displayed to the participants
[Note: Issue should also be fixed for newly added active tasks for the above scenario]
|
2.0
|
[iOS] updated consent is not getting displayed to the participants in the following scenario - Steps:
1. Sign up or Sign in to the mobile app
2. Enroll to the study
3. After navigating to the activities screen, lock the mobile
4. Go to the SB, update the consent for enrolled participants and publish the updates
5. Unlock the mobile app
6. Enter the passcode for the app
7. Refresh the page and observe
AR: Updated consent is not getting displayed to the participants
ER: Updated consent should get displayed to the participants
[Note: Issue should also be fixed for newly added active tasks for the above scenario]
|
process
|
updated consent is not getting displayed to the participants in the following scenario steps sign up or sign in to the mobile app enroll to the study after navigating to the activities screen lock the mobile go to the sb update the consent for enrolled participants and publish the updates unlock the mobile app enter the passcode for the app refresh the page and observe ar updated consent is not getting displayed to the participants er updated consent should get displayed to the participants
| 1
|
9,513
| 12,497,671,451
|
IssuesEvent
|
2020-06-01 16:54:11
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Which tasks work out-of-the-box with environments?
|
Pri1 devops-cicd-process/tech devops/prod support-request
|
So far the only task I've been able to find is "IIS web app deploy" (IISWebAppDeploymentOnMachineGroup). I need a task for doing file copy to the target server(s) of the environment, but the ones I've looked at all require specifying machine host name/IP address as well as admin credentials.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Which tasks work out-of-the-box with environments? - So far the only task I've been able to find is "IIS web app deploy" (IISWebAppDeploymentOnMachineGroup). I need a task for doing file copy to the target server(s) of the environment, but the ones I've looked at all require specifying machine host name/IP address as well as admin credentials.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
which tasks work out of the box with environments so far the only task i ve been able to find is iis web app deploy iiswebappdeploymentonmachinegroup i need a task for doing file copy to the target server s of the environment but the ones i ve looked at all require specifying machine host name ip address as well as admin credentials document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
134,544
| 12,610,104,273
|
IssuesEvent
|
2020-06-12 03:50:33
|
vmware/govmomi
|
https://api.github.com/repos/vmware/govmomi
|
closed
|
session.login - No error, but unauthenticated session
|
area/govc documentation/examples
|
Hi!
I try
govc session.login -u my-user:mypasswd@host
No error after the command, but the session is not authenticated!
If I do govc session.ls nothing is displayed!
govc version
govc 0.18.0
Thanks
|
1.0
|
session.login - No error, but unauthenticated session - Hi!
I try
govc session.login -u my-user:mypasswd@host
No error after the command, but the session is not authenticated!
If I do govc session.ls nothing is displayed!
govc version
govc 0.18.0
Thanks
|
non_process
|
session login no error but unauthenticated session hi i try govc session login u my user mypasswd host no error after the command but the session is not authentic if i do govc session ls nothing is displayed govc version govc thanks
| 0
|
19,436
| 25,705,879,654
|
IssuesEvent
|
2022-12-07 00:36:56
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
ANALISTA DE QUALIDADE DE SOFTWARE na [DEALERNET]
|
LAURO DE FREITAS SCRUM JIRA AGILE DEVOPS SELENIUM PROCESSOS JENKINS SONAR QUALIDADE DE SOFTWARE KONG Stale
|
# ANALISTA DE QUALIDADE DE SOFTWARE
#### **REQUISITOS:**
- AUTOMAÇÃO DE PROCESSOS
- CONHECIMENTO EM DEVOPS
- EXPERIENCIA PROFISSIONAL COM AUTOMATIZAÇÃO DE TAREFA: **OPERACIONAL**, **AMBIENTE** E **APLICAÇÃO**
### **DESEJAVEL**
- JIRA, JENKINS, MANIFESTO ÁGIL E SCRUM
- SELENIUM | KONG | SONAR
> ENVIAR CURRICULO PARA **curriculo@dealernet.com.br** INFORMANDO NO ASSUNTO A VAGA DE SEU INTERESSE
|
1.0
|
ANALISTA DE QUALIDADE DE SOFTWARE na [DEALERNET] - # ANALISTA DE QUALIDADE DE SOFTWARE
#### **REQUISITOS:**
- AUTOMAÇÃO DE PROCESSOS
- CONHECIMENTO EM DEVOPS
- EXPERIENCIA PROFISSIONAL COM AUTOMATIZAÇÃO DE TAREFA: **OPERACIONAL**, **AMBIENTE** E **APLICAÇÃO**
### **DESEJAVEL**
- JIRA, JENKINS, MANIFESTO ÁGIL E SCRUM
- SELENIUM | KONG | SONAR
> ENVIAR CURRICULO PARA **curriculo@dealernet.com.br** INFORMANDO NO ASSUNTO A VAGA DE SEU INTERESSE
|
process
|
analista de qualidade de software na analista de qualidade de software requisitos automação de processos conhecimento em devops experiencia profissional com automatização de tarefa operacional ambiente e aplicação desejavel jira jenkins manifesto ágil e scrum selenium kong sonar enviar curriculo para curriculo dealernet com br informando no assunto a vaga de seu interesse
| 1
|
82
| 2,532,852,915
|
IssuesEvent
|
2015-01-23 18:57:57
|
GsDevKit/gsDevKitHome
|
https://api.github.com/repos/GsDevKit/gsDevKitHome
|
opened
|
lighttpd FastCGI support
|
in process
|
Should write documentation and hook in pre-defined configuration files for lighttpd FastCGI support:
- [X] [Ubuntu 12.04 install instructions](https://www.linode.com/docs/websites/lighttpd/lighttpd-web-server-on-ubuntu-12-04-preci)
- [ ] lighttpd install instructions
- [ ] lighttpd configuration artifacts
|
1.0
|
lighttpd FastCGI support - Should write documentation and hook in pre-defined configuration files for lighttpd FastCGI support:
- [X] [Ubuntu 12.04 install instructions](https://www.linode.com/docs/websites/lighttpd/lighttpd-web-server-on-ubuntu-12-04-preci)
- [ ] lighttpd install instructions
- [ ] lighttpd configuration artifacts
|
process
|
lighttpd fastcgi support should write documentation and hook in pre defined configuration files for lighttd fastcgi support lightppd install instructions lightppd configuration artifacts
| 1
|
88,846
| 17,671,835,175
|
IssuesEvent
|
2021-08-23 07:22:44
|
jumaallan/android-mpesa-api
|
https://api.github.com/repos/jumaallan/android-mpesa-api
|
closed
|
Update README
|
enhancement good first issue code cleanups
|
The readme appears to have a broken badge that needs to be removed, and the readme itself should be updated after the library overhaul.
* Remove the contributing section -> maybe add a contributions.md file to outline best ways to accept contributions
* Remove the license section -> since we have the license file on code
* Remove DI specific samples (not everyone is using Dagger)
* Update screenshots with a gif maybe - better visual representation
|
1.0
|
Update README - The readme appears to have a broken badge that needs to be removed, and the readme itself should be updated after the library overhaul.
* Remove the contributing section -> maybe add a contributions.md file to outline best ways to accept contributions
* Remove the license section -> since we have the license file on code
* Remove DI specific samples (not everyone is using Dagger)
* Update screenshots with a gif maybe - better visual representation
|
non_process
|
update readme the readme appears to have a broken badge that needs to be removed and update it after the library overhaul remove the contributing section maybe add a contributions md file to outline best ways to accept contributions remove the license section since we have the license file on code remove di specific samples not everyone is using dagger update screenshots with a gif maybe better visual representation
| 0
|
6,146
| 9,014,083,388
|
IssuesEvent
|
2019-02-05 21:20:30
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
[Firestore] Query.get returns incorrect type
|
api: firestore triaged for GA type: process
|
Query.get returns an Array of DocumentSnapshots, not a QuerySnapshot. QuerySnapshot are a requirement for Watch.
|
1.0
|
[Firestore] Query.get returns incorrect type - Query.get returns an Array of DocumentSnapshots, not a QuerySnapshot. QuerySnapshot are a requirement for Watch.
|
process
|
query get returns incorrect type query get returns an array of documentsnapshots not a querysnapshot querysnapshot are a requirement for watch
| 1
|
375,146
| 26,149,304,056
|
IssuesEvent
|
2022-12-30 10:59:01
|
woocommerce/woocommerce-blocks
|
https://api.github.com/repos/woocommerce/woocommerce-blocks
|
closed
|
Feedback on ./packages/checkout/slot/README.md
|
type: documentation
|
<!--
Thank you for taking the time to leave your feedback about the documentation. Please explain your issue or suggestion below.
-->
Hi! It would be great to have an example of extending the checkout block using the available Slot Fills **without React.** Like the samples of the WordPress docs:
https://developer.wordpress.org/block-editor/reference-guides/packages/packages-plugins/
ES5 could be enough for adding "small features."
In my case, I need to add a simple message to ExperimentalOrderMeta.
Thanks!
|
1.0
|
Feedback on ./packages/checkout/slot/README.md - <!--
Thank you for taking the time to leave your feedback about the documentation. Please explain your issue or suggestion below.
-->
Hi! It would be great to have an example of extending the checkout block using the available Slot Fills **without React.** Like the samples of the WordPress docs:
https://developer.wordpress.org/block-editor/reference-guides/packages/packages-plugins/
ES5 could be enough for adding "small features."
In my case, I need to add a simple message to ExperimentalOrderMeta.
Thanks!
|
non_process
|
feedback on packages checkout slot readme md thank you for taking the time to leave your feedback about the documentation please explain your issue or suggestion below hi it would be great to have an example of extending the checkout block using the available slot fills without react like the samples of the wordpress docs could be enough for adding small features in my case i need to add a simple message to experimentalordermeta thanks
| 0
|
14,244
| 17,173,092,292
|
IssuesEvent
|
2021-07-15 08:03:01
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
closed
|
pre-commit mypy check is failing
|
bug process + tools
|
Not sure what changed (no new version of mypy or numpy in the last 3 weeks), but we now have lots of failures:
(from https://github.com/pystatgen/sgkit/runs/3062225915?check_suite_focus=true)
```
sgkit/stats/utils.py:71: error: Call to untyped function "broadcast_arrays" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:25: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:26: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:27: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:30: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:39: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:40: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:44: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:48: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:25: error: "NoReturn" has no attribute "T" [attr-defined]
sgkit/tests/test_stats_utils.py:29: error: Call to untyped function "assert_allclose" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:46: error: Call to untyped function "assert_allclose" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:77: error: Call to untyped function "copy" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:86: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:87: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/io/vcf/vcf_partition.py:161: error: Call to untyped function "delete" in typed context [no-untyped-call]
sgkit/io/vcf/vcf_partition.py:164: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/tests/test_distance.py:102: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/test_distance.py:123: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/test_distance.py:130: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/test_distance.py:166: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/utils.py:101: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/utils.py:302: error: Call to untyped function "frompyfunc" in typed context [no-untyped-call]
sgkit/stats/regenie.py:24: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:77: error: Call to untyped function "diff" in typed context [no-untyped-call]
sgkit/stats/regenie.py:79: error: Call to untyped function "diff" in typed context [no-untyped-call]
sgkit/stats/regenie.py:80: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/stats/regenie.py:81: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/stats/regenie.py:84: error: Call to untyped function "diff" in typed context [no-untyped-call]
sgkit/stats/regenie.py:91: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:125: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/stats/regenie.py:146: error: Need type annotation for 'diag' [var-annotated]
sgkit/stats/regenie.py:152: error: Unsupported target for indexed assignment ("NoReturn") [index]
sgkit/stats/regenie.py:153: error: Call to untyped function "diag" in typed context [no-untyped-call]
sgkit/stats/regenie.py:155: error: Call to untyped function "inv" in typed context [no-untyped-call]
sgkit/stats/regenie.py:163: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:179: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:242: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:302: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:439: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:469: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/stats/regenie.py:574: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/window.py:231: error: Call to untyped function "append" in typed context [no-untyped-call]
sgkit/window.py:244: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/window.py:245: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/window.py:246: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/window.py:319: error: Call to untyped function "min" in typed context [no-untyped-call]
sgkit/window.py:321: error: Call to untyped function "min" in typed context [no-untyped-call]
sgkit/window.py:347: error: Call to untyped function "max" in typed context [no-untyped-call]
sgkit/window.py:405: error: Call to untyped function "insert" in typed context [no-untyped-call]
sgkit/window.py:420: error: Need type annotation for 'chunk_numbers' [var-annotated]
sgkit/window.py:426: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/io/utils.py:33: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/io/utils.py:38: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/io/utils.py:152: error: Call to untyped function "insert" in typed context [no-untyped-call]
sgkit/io/utils.py:200: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/stats/popgen.py:792: error: "nditer" has no attribute "__iter__" (not iterable) [attr-defined]
sgkit/stats/hwe.py:107: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/tests/io/bgen/test_bgen_reader.py:65: error: Call to untyped function "assert_array_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:66: error: Call to untyped function "assert_array_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:70: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:73: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:76: error: Call to untyped function "assert_array_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:79: error: Call to untyped function "assert_array_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:134: error: Non-overlapping equality check (left operand type: "dtype[Any]", right operand type: "Type[signedinteger[Any]]") [comparison-overlap]
sgkit/tests/io/bgen/test_bgen_reader.py:152: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:155: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:163: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:166: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:170: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:251: error: Call to untyped function "assert_allclose" in typed context [no-untyped-call]
sgkit/tests/io/plink/test_plink_reader.py:135: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/io/plink/test_plink_reader.py:143: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/io/plink/test_plink_reader.py:153: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
validation/gwas/method/pc_relate/validate_pc_relate.py:36: error: Call to untyped function "triu_indices_from" in typed context [no-untyped-call]
Found 325 errors in 35 files (checked 98 source files)
```
|
1.0
|
pre-commit mypy check is failing - Not sure what changed (no new version of mypy or numpy in the last 3 weeks), but we now have lots of failures:
(from https://github.com/pystatgen/sgkit/runs/3062225915?check_suite_focus=true)
```
sgkit/stats/utils.py:71: error: Call to untyped function "broadcast_arrays" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:25: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:26: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:27: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:30: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:39: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:40: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:44: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_cohorts.py:48: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:25: error: "NoReturn" has no attribute "T" [attr-defined]
sgkit/tests/test_stats_utils.py:29: error: Call to untyped function "assert_allclose" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:46: error: Call to untyped function "assert_allclose" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:77: error: Call to untyped function "copy" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:86: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/tests/test_stats_utils.py:87: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/io/vcf/vcf_partition.py:161: error: Call to untyped function "delete" in typed context [no-untyped-call]
sgkit/io/vcf/vcf_partition.py:164: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/tests/test_distance.py:102: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/test_distance.py:123: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/test_distance.py:130: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/test_distance.py:166: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/utils.py:101: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/utils.py:302: error: Call to untyped function "frompyfunc" in typed context [no-untyped-call]
sgkit/stats/regenie.py:24: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:77: error: Call to untyped function "diff" in typed context [no-untyped-call]
sgkit/stats/regenie.py:79: error: Call to untyped function "diff" in typed context [no-untyped-call]
sgkit/stats/regenie.py:80: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/stats/regenie.py:81: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/stats/regenie.py:84: error: Call to untyped function "diff" in typed context [no-untyped-call]
sgkit/stats/regenie.py:91: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:125: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/stats/regenie.py:146: error: Need type annotation for 'diag' [var-annotated]
sgkit/stats/regenie.py:152: error: Unsupported target for indexed assignment ("NoReturn") [index]
sgkit/stats/regenie.py:153: error: Call to untyped function "diag" in typed context [no-untyped-call]
sgkit/stats/regenie.py:155: error: Call to untyped function "inv" in typed context [no-untyped-call]
sgkit/stats/regenie.py:163: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:179: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:242: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:302: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:439: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/stats/regenie.py:469: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/stats/regenie.py:574: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/window.py:231: error: Call to untyped function "append" in typed context [no-untyped-call]
sgkit/window.py:244: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/window.py:245: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/window.py:246: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/window.py:319: error: Call to untyped function "min" in typed context [no-untyped-call]
sgkit/window.py:321: error: Call to untyped function "min" in typed context [no-untyped-call]
sgkit/window.py:347: error: Call to untyped function "max" in typed context [no-untyped-call]
sgkit/window.py:405: error: Call to untyped function "insert" in typed context [no-untyped-call]
sgkit/window.py:420: error: Need type annotation for 'chunk_numbers' [var-annotated]
sgkit/window.py:426: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/io/utils.py:33: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/io/utils.py:38: error: Call to untyped function "unique" in typed context [no-untyped-call]
sgkit/io/utils.py:152: error: Call to untyped function "insert" in typed context [no-untyped-call]
sgkit/io/utils.py:200: error: Call to untyped function "concatenate" in typed context [no-untyped-call]
sgkit/stats/popgen.py:792: error: "nditer" has no attribute "__iter__" (not iterable) [attr-defined]
sgkit/stats/hwe.py:107: error: Missing type parameters for generic type "ndarray" [type-arg]
sgkit/tests/io/bgen/test_bgen_reader.py:65: error: Call to untyped function "assert_array_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:66: error: Call to untyped function "assert_array_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:70: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:73: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:76: error: Call to untyped function "assert_array_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:79: error: Call to untyped function "assert_array_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:134: error: Non-overlapping equality check (left operand type: "dtype[Any]", right operand type: "Type[signedinteger[Any]]") [comparison-overlap]
sgkit/tests/io/bgen/test_bgen_reader.py:152: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:155: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:163: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:166: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:170: error: Call to untyped function "assert_almost_equal" in typed context [no-untyped-call]
sgkit/tests/io/bgen/test_bgen_reader.py:251: error: Call to untyped function "assert_allclose" in typed context [no-untyped-call]
sgkit/tests/io/plink/test_plink_reader.py:135: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/io/plink/test_plink_reader.py:143: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
sgkit/tests/io/plink/test_plink_reader.py:153: error: Call to untyped function "assert_equal" in typed context [no-untyped-call]
validation/gwas/method/pc_relate/validate_pc_relate.py:36: error: Call to untyped function "triu_indices_from" in typed context [no-untyped-call]
Found 325 errors in 35 files (checked 98 source files)
```
|
process
|
pre commit mypy check is failing not sure what changed no new version of mypy or numpy in the last weeks but we now have lots of failures from sgkit stats utils py error call to untyped function broadcast arrays in typed context sgkit tests test cohorts py error call to untyped function assert equal in typed context sgkit tests test cohorts py error call to untyped function assert equal in typed context sgkit tests test cohorts py error call to untyped function assert equal in typed context sgkit tests test cohorts py error call to untyped function assert equal in typed context sgkit tests test cohorts py error call to untyped function assert equal in typed context sgkit tests test cohorts py error call to untyped function assert equal in typed context sgkit tests test cohorts py error call to untyped function assert equal in typed context sgkit tests test cohorts py error call to untyped function assert equal in typed context sgkit tests test stats utils py error noreturn has no attribute t sgkit tests test stats utils py error call to untyped function assert allclose in typed context sgkit tests test stats utils py error call to untyped function assert allclose in typed context sgkit tests test stats utils py error call to untyped function copy in typed context sgkit tests test stats utils py error call to untyped function concatenate in typed context sgkit tests test stats utils py error call to untyped function assert equal in typed context sgkit io vcf vcf partition py error call to untyped function delete in typed context sgkit io vcf vcf partition py error call to untyped function unique in typed context sgkit tests test distance py error call to untyped function assert almost equal in typed context sgkit tests test distance py error call to untyped function assert almost equal in typed context sgkit tests test distance py error call to untyped function assert almost equal in typed context sgkit tests test distance py error call to untyped function assert almost equal in typed context sgkit utils py error call to untyped function unique in typed context sgkit utils py error call to untyped function frompyfunc in typed context sgkit stats regenie py error missing type parameters for generic type ndarray sgkit stats regenie py error call to untyped function diff in typed context sgkit stats regenie py error call to untyped function diff in typed context sgkit stats regenie py error call to untyped function concatenate in typed context sgkit stats regenie py error call to untyped function concatenate in typed context sgkit stats regenie py error call to untyped function diff in typed context sgkit stats regenie py error missing type parameters for generic type ndarray sgkit stats regenie py error call to untyped function concatenate in typed context sgkit stats regenie py error need type annotation for diag sgkit stats regenie py error unsupported target for indexed assignment noreturn sgkit stats regenie py error call to untyped function diag in typed context sgkit stats regenie py error call to untyped function inv in typed context sgkit stats regenie py error missing type parameters for generic type ndarray sgkit stats regenie py error missing type parameters for generic type ndarray sgkit stats regenie py error missing type parameters for generic type ndarray sgkit stats regenie py error missing type parameters for generic type ndarray sgkit stats regenie py error missing type parameters for generic type ndarray sgkit stats regenie py error call to untyped function unique in typed 
context sgkit stats regenie py error missing type parameters for generic type ndarray sgkit window py error call to untyped function append in typed context sgkit window py error call to untyped function concatenate in typed context sgkit window py error call to untyped function concatenate in typed context sgkit window py error call to untyped function concatenate in typed context sgkit window py error call to untyped function min in typed context sgkit window py error call to untyped function min in typed context sgkit window py error call to untyped function max in typed context sgkit window py error call to untyped function insert in typed context sgkit window py error need type annotation for chunk numbers sgkit window py error call to untyped function unique in typed context sgkit io utils py error missing type parameters for generic type ndarray sgkit io utils py error call to untyped function unique in typed context sgkit io utils py error call to untyped function insert in typed context sgkit io utils py error call to untyped function concatenate in typed context sgkit stats popgen py error nditer has no attribute iter not iterable sgkit stats hwe py error missing type parameters for generic type ndarray sgkit tests io bgen test bgen reader py error call to untyped function assert array equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert array equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert almost equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert almost equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert array equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert array equal in typed context sgkit tests io bgen test bgen reader py error non overlapping equality check left operand type dtype right operand type type sgkit tests io bgen test bgen reader py error call to untyped function assert almost equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert almost equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert almost equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert almost equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert almost equal in typed context sgkit tests io bgen test bgen reader py error call to untyped function assert allclose in typed context sgkit tests io plink test plink reader py error call to untyped function assert equal in typed context sgkit tests io plink test plink reader py error call to untyped function assert equal in typed context sgkit tests io plink test plink reader py error call to untyped function assert equal in typed context validation gwas method pc relate validate pc relate py error call to untyped function triu indices from in typed context found errors in files checked source files
| 1
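Editor's note: the failures quoted in the record above are all of the `[no-untyped-call]` family, which mypy reports when annotated code calls a function that carries no annotations (typically with `disallow_untyped_calls` enabled); in that log the callees happen to be numpy helpers. A minimal sketch of the pattern and its usual fix follows — the function names are illustrative only, not from sgkit.

```python
from typing import List

def legacy_mean(values):
    # No annotations: mypy treats this function as untyped.
    return sum(values) / len(values)

def summarize(values: List[float]) -> float:
    # error: Call to untyped function "legacy_mean" in typed context  [no-untyped-call]
    return legacy_mean(values)

def legacy_mean_typed(values: List[float]) -> float:
    # Annotating the callee (or installing stubs that annotate it) removes the error.
    return sum(values) / len(values)

def summarize_fixed(values: List[float]) -> float:
    return legacy_mean_typed(values)  # accepted: the callee is now typed
```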
|
19,210
| 25,343,370,377
|
IssuesEvent
|
2022-11-19 00:57:13
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
Roots endpoint returns empty set for PANET ontology
|
ontology processing problem
|
End user reported on the support list that they're unable to view the class tree for the [PANET ontology](https://bioportal.bioontology.org/ontologies/PANET) in the BioPortal UI. Note that this is a private ontology, so you need to be logged in to reproduce.
The underlying cause for the UI issue is that the /roots endpoint is returning an empty set, so the UI can't construct a class tree:
http://data.bioontology.org/ontologies/PANET/classes/roots
I'm noting a couple of things about this ontology that might be worth investigating. If you open it in Protege, there is only one class under owl:Thing with a subject URI of http://purl.obolibrary.org/obo/OBI_0000070. This subject URI differs from the one in the ontology header:
```<owl:Ontology rdf:about="https://www.diamond.ac.uk/"/>```
Also, a lot of the classes are declared using a colon character in rdf:about, e.g.:
```
<owl:Class rdf:about="ExPaNDS:diffraction">
<rdfs:subClassOf rdf:resource="ExPaNDS:elastic scattering"/>
<obo:IAO_0000119 rdf:datatype="http://www.w3.org/2001/XMLSchema#string">https://en.wikipedia.org/wiki/Diffraction</obo:IAO_0000119>
<rdfs:label rdf:datatype="http://www.w3.org/2001/XMLSchema#string">diffraction</rdfs:label>
</owl:Class>
```
If you select such a class in Protege and bring up the "Change Entity IRI" dialog (Refactor -> Rename entity...), the dialog content is blank:
<img width="1113" alt="Screen Shot 2021-02-08 at 12 35 25 PM" src="https://user-images.githubusercontent.com/1696923/107278124-336e0e80-6a0a-11eb-8052-d759f8feb767.png">
I haven't seen this type of behavior in Protege before, and I wonder if there's something about the way this ontology is constructed that causes issues in our system in terms of determining roots classes.
|
1.0
|
Roots endpoint returns empty set for PANET ontology - End user reported on the support list that they're unable to view the class tree for the [PANET ontology](https://bioportal.bioontology.org/ontologies/PANET) in the BioPortal UI. Note that this is a private ontology, so you need to be logged in to reproduce.
The underlying cause for the UI issue is that the /roots endpoint is returning an empty set, so the UI can't construct a class tree:
http://data.bioontology.org/ontologies/PANET/classes/roots
I'm noting a couple of things about this ontology that might be worth investigating. If you open it in Protege, there is only one class under owl:Thing with a subject URI of http://purl.obolibrary.org/obo/OBI_0000070. This subject URI differs from the one in the ontology header:
```<owl:Ontology rdf:about="https://www.diamond.ac.uk/"/>```
Also, a lot of the classes are declared using a colon character in rdf:about, e.g.:
```
<owl:Class rdf:about="ExPaNDS:diffraction">
<rdfs:subClassOf rdf:resource="ExPaNDS:elastic scattering"/>
<obo:IAO_0000119 rdf:datatype="http://www.w3.org/2001/XMLSchema#string">https://en.wikipedia.org/wiki/Diffraction</obo:IAO_0000119>
<rdfs:label rdf:datatype="http://www.w3.org/2001/XMLSchema#string">diffraction</rdfs:label>
</owl:Class>
```
If you select such a class in Protege and bring up the "Change Entity IRI" dialog (Refactor -> Rename entity...), the dialog content is blank:
<img width="1113" alt="Screen Shot 2021-02-08 at 12 35 25 PM" src="https://user-images.githubusercontent.com/1696923/107278124-336e0e80-6a0a-11eb-8052-d759f8feb767.png">
I haven't seen this type of behavior in Protege before, and I wonder if there's something about the way this ontology is constructed that causes issues in our system in terms of determining roots classes.
|
process
|
roots endpoint returns empty set for panet ontology end user reported on the support list that they re unable to view the class tree for the in the bioportal ui note that this a private ontology so you need to be logged in to reproduce the underlying cause for the ui issue is that the roots endpoint is returning an empty set so the ui can t construct a class tree i m noting a couple of things about this ontology that might be worth investigating if you open it in protege there is only one class under owl thing with a subject uri of this subject uri differs from the one in the ontology header owl ontology rdf about also a lot of the classes are declared using a colon character in rdf about e g obo iao rdf datatype rdfs label rdf datatype if you select such a class in protege and bring up the change entity iri dialog refactor rename entity the dialog content is blank img width alt screen shot at pm src i haven t seen this type of behavior in protege before and i wonder if there s something about the way this ontology is constructed that causes issues in our system in terms of determining roots classes
| 1
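Editor's note: for the root-detection question raised in the record above, a quick local check of which classes would count as roots can help narrow things down. The following is only a sketch under stated assumptions — the file name is hypothetical, "root" is defined here simply as a named owl:Class with no rdfs:subClassOf pointing at another class in the same file, and this does not reproduce BioPortal's own roots logic.

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("panet.owl", format="xml")  # assumed local copy of the ontology

# Treat a class as a root if none of its rdfs:subClassOf targets is another
# named owl:Class declared in the same file.
classes = set(g.subjects(RDF.type, OWL.Class))
roots = [cls for cls in classes
         if not any(parent in classes for parent in g.objects(cls, RDFS.subClassOf))]

print(f"{len(classes)} classes, {len(roots)} roots")
for cls in roots[:10]:
    print(cls)
```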
|
169,959
| 20,841,992,404
|
IssuesEvent
|
2022-03-21 02:02:29
|
michaeldotson/outerspace-vue
|
https://api.github.com/repos/michaeldotson/outerspace-vue
|
opened
|
CVE-2022-0691 (High) detected in url-parse-1.4.7.tgz
|
security vulnerability
|
## CVE-2022-0691 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /outerspace-vue/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-3.7.0.tgz (Root Library)
- webpack-dev-server-3.3.1.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.9.
<p>Publish Date: 2022-02-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0691>CVE-2022-0691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691</a></p>
<p>Release Date: 2022-02-21</p>
<p>Fix Resolution (url-parse): 1.5.9</p>
<p>Direct dependency fix Resolution (@vue/cli-service): 3.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-0691 (High) detected in url-parse-1.4.7.tgz - ## CVE-2022-0691 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /outerspace-vue/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-3.7.0.tgz (Root Library)
- webpack-dev-server-3.3.1.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.9.
<p>Publish Date: 2022-02-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0691>CVE-2022-0691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691</a></p>
<p>Release Date: 2022-02-21</p>
<p>Fix Resolution (url-parse): 1.5.9</p>
<p>Direct dependency fix Resolution (@vue/cli-service): 3.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in url parse tgz cve high severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file outerspace vue package json path to vulnerable library node modules url parse package json dependency hierarchy cli service tgz root library webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse direct dependency fix resolution vue cli service step up your open source security game with whitesource
| 0
|