| Column | Dtype | Distinct values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value (`IssuesEvent`) |
| created_at | string | 19 characters |
| repo | string | 7 to 112 characters |
| repo_url | string | 36 to 141 characters |
| action | string | 3 distinct values (includes `opened`, `closed`) |
| title | string | 1 to 744 characters |
| labels | string | 4 to 574 characters |
| body | string | 9 to 211k characters |
| index | string | 10 distinct values |
| text_combine | string | 96 to 211k characters |
| label | string | 2 distinct values (`process`, `non_process`) |
| text | string | 96 to 188k characters |
| binary_label | int64 | 0 or 1 |
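For orientation, here is a minimal sketch of loading and sanity-checking a frame with the schema above using pandas. The file name `issues.csv` and the CSV format are assumptions for illustration, not part of the dataset.

```python
# Minimal sketch: load a CSV export with the schema above and run basic checks.
# "issues.csv" is a hypothetical file name, not part of the dataset itself.
import pandas as pd

df = pd.read_csv("issues.csv")

# Column layout expected from the schema table
expected_columns = [
    "Unnamed: 0", "id", "type", "created_at", "repo", "repo_url",
    "action", "title", "labels", "body", "index", "text_combine",
    "label", "text", "binary_label",
]
print(list(df.columns) == expected_columns)

# Sanity checks against the stated value ranges
print(df["type"].unique())                # expected: ['IssuesEvent']
print(df["label"].value_counts())         # expected: process / non_process
print(df["binary_label"].value_counts())  # expected: 0 and 1
```

The sample records below follow the same column order as the schema table.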
12,533
| 14,972,386,435
|
IssuesEvent
|
2021-01-27 22:45:45
|
BootBlock/FileSieve
|
https://api.github.com/repos/BootBlock/FileSieve
|
opened
|
Split Source Item pre-scanning across multiple threads
|
backend-core performance processing
|
Threaded pre-scanning is partially in-place, but an effective thread initialiser is halting things. It needs to be intelligent so that it can determine the correct number of threads to be used in all cases, from number of files to if files reside on the same or different physical media.
This has the potential for big speed gains (potentially; a huge performance gain was obtained via defaulting to **Get Files Mode** to `Burst` from `Yield`).
|
1.0
|
Split Source Item pre-scanning across multiple threads - Threaded pre-scanning is partially in-place, but an effective thread initialiser is halting things. It needs to be intelligent so that it can determine the correct number of threads to be used in all cases, from number of files to if files reside on the same or different physical media.
This has the potential for big speed gains (potentially; a huge performance gain was obtained via defaulting to **Get Files Mode** to `Burst` from `Yield`).
|
process
|
split source item pre scanning across multiple threads threaded pre scanning is partially in place but an effective thread initialiser is halting things it needs to be intelligent so that it can determine the correct number of threads to be used in all cases from number of files to if files reside on the same or different physical media this has the potential for big speed gains potentially a huge performance gain was obtained via defaulting to get files mode to burst from yield
| 1
|
477,451
| 13,762,656,004
|
IssuesEvent
|
2020-10-07 09:25:41
|
AY2021S1-CS2103T-T11-1/tp
|
https://api.github.com/repos/AY2021S1-CS2103T-T11-1/tp
|
closed
|
Enhance tag model and add batch operations
|
priority.High type.Task
|
Enhance tag model to support tagging of multiple other models and add batch operations on tags.
|
1.0
|
Enhance tag model and add batch operations - Enhance tag model to support tagging of multiple other models and add batch operations on tags.
|
non_process
|
enhance tag model and add batch operations enhance tag model to support tagging of multiple other models and add batch operations on tags
| 0
|
215,433
| 16,602,660,969
|
IssuesEvent
|
2021-06-01 21:54:21
|
klatour324/tea_subscription
|
https://api.github.com/repos/klatour324/tea_subscription
|
closed
|
API Contract
|
documentation good first issue mvp setup
|
- [x] Plan out endpoints to build
- [x] Create a table within README documentation to showcase the JSON API endpoints for FE plug in
|
1.0
|
API Contract - - [x] Plan out endpoints to build
- [x] Create a table within README documentation to showcase the JSON API endpoints for FE plug in
|
non_process
|
api contract plan out endpoints to build create a table within readme documentation to showcase the json api endpoints for fe plug in
| 0
|
3,325
| 2,676,477,875
|
IssuesEvent
|
2015-03-25 17:54:02
|
twosigma/beaker-notebook
|
https://api.github.com/repos/twosigma/beaker-notebook
|
closed
|
python dataframe display
|
bug UI Design
|
If from python you output a data frame without wrapping in html the system still detects a table BUT python outputs only a part of the data frame and the last table row contains only dots (....).
|
1.0
|
python dataframe display - If from python you output a data frame without wrapping in html the system still detects a table BUT python outputs only a part of the data frame and the last table row contains only dots (....).
|
non_process
|
python dataframe display if from python you output a data frame without wrapping in html the system still detects a table but python outputs only a part of the data frame and the last table row contains only dots
| 0
|
20,664
| 27,334,852,108
|
IssuesEvent
|
2023-02-26 03:50:55
|
cse442-at-ub/project_s23-team-infinity
|
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
|
closed
|
Create backend documentation in order to collate and organize instructions for easier on-boarding and general guidance in the backend.
|
IO Task Processing Task Sprint 1
|
**Task Test**
*Test 1*
1) Research what systems will be running on the UB webserver.
2) Research how cheshire will be accessed and how programs will be placed on the server.
3) Research how the database will be created, accessed, updated, and maintained.
4) Create the documentation via Google Docs for easier sharing and more concurrent updates.
5) Proofread to find grammatical errors as well as any incorrect code snippets
6) Follow documentation as if from the outside and run code snippets to see if outcome is as expected.
7) Create task on ZenHub with collated and organized documentation linked.
[https://docs.google.com/document/d/1oRdNRbrfuvt2v9fK8POguemvKRa1VSf3GjAfRipbw5Y/edit](url)
|
1.0
|
Create backend documentation in order to collate and organize instructions for easier on-boarding and general guidance in the backend. - **Task Test**
*Test 1*
1) Research what systems will be running on the UB webserver.
2) Research how cheshire will be accessed and how programs will be placed on the server.
3) Research how the database will be created, accessed, updated, and maintained.
4) Create the documentation via Google Docs for easier sharing and more concurrent updates.
5) Proofread to find grammatical errors as well as any incorrect code snippets
6) Follow documentation as if from the outside and run code snippets to see if outcome is as expected.
7) Create task on ZenHub with collated and organized documentation linked.
[https://docs.google.com/document/d/1oRdNRbrfuvt2v9fK8POguemvKRa1VSf3GjAfRipbw5Y/edit](url)
|
process
|
create backend documentation in order to collate and organize instructions for easier on boarding and general guidance in the backend task test test research what systems will be running on the ub webserver research how cheshire will be accessed and how programs will be placed on the server research how the database will be created accessed updated and maintained create the documentation via google docs for easier sharing and more concurrent updates proofread to find grammatical errors as well as any incorrect code snippets follow documentation as if from the outside and run code snippets to see if outcome is as expected create task on zenhub with collated and organized documentation linked url
| 1
|
5,941
| 7,434,967,399
|
IssuesEvent
|
2018-03-26 12:53:53
|
zin-/Mem
|
https://api.github.com/repos/zin-/Mem
|
closed
|
Update mem
|
Persistent Service
|
# Feature
Update mem
# Specification
Implementation method
- [x] On Detail
- [x] On List(mem done)
# Test
- [ ] Update mem
|
1.0
|
Update mem - # Feature
Update mem
# Specification
Implementation method
- [x] On Detail
- [x] On List(mem done)
# Test
- [ ] Update mem
|
non_process
|
update mem feature update mem specification implementation method on detail on list mem done test update mem
| 0
|
22,643
| 4,822,299,467
|
IssuesEvent
|
2016-11-05 19:42:56
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
Write more developer tutorials
|
area: documentation
|
Expand developer documentation with more tutorials explaining how to do various types of projects.
Ideas:
- add a new translation
- cut a new release
- add a new emoji
- test Zulip with a new browser
- work with a new file format (for uploading and/or thumbnailing in previews)
|
1.0
|
Write more developer tutorials - Expand developer documentation with more tutorials explaining how to do various types of projects.
Ideas:
- add a new translation
- cut a new release
- add a new emoji
- test Zulip with a new browser
- work with a new file format (for uploading and/or thumbnailing in previews)
|
non_process
|
write more developer tutorials expand developer documentation with more tutorials explaining how to do various types of projects ideas add a new translation cut a new release add a new emoji test zulip with a new browser work with a new file format for uploading and or thumbnailing in previews
| 0
|
739,240
| 25,586,831,931
|
IssuesEvent
|
2022-12-01 09:58:41
|
projectdiscovery/httpx
|
https://api.github.com/repos/projectdiscovery/httpx
|
closed
|
Add option to return base64-encoded response-body on json output
|
Priority: Low Status: Completed Type: Enhancement
|
<!--
1. Please make sure to provide a detailed description with all the relevant information that might be required to start working on this feature.
2. In case you are not sure about your request or whether the particular feature is already supported or not, please start a discussion instead.
3. GitHub Discussion: https://github.com/projectdiscovery/httpx/discussions/categories/ideas
4. Join our discord server at https://discord.gg/projectdiscovery to discuss the idea on the #httpx channel.
-->
### Please describe your feature request:
<!-- A clear and concise description of feature to implement -->
Add a `-irrb` option.
```
-irrb, -include-response-base64 include base64-encoded http request/response in JSON output (-json only)
```
### Describe the use case of this feature:
<!-- A clear and concise description of the feature request's motivation and the use-cases in which it could be useful. -->
Would make possible using json output to save binary unaltered by utf-8 encoding (needed by json)
For example, if you download a favicon through httpx, and try to write content returned by `response-body`, the saved files will be different from original favicon.
Having an option to get base64-encoded body allows such use-case.
|
1.0
|
Add option to return base64-encoded response-body on json output - <!--
1. Please make sure to provide a detailed description with all the relevant information that might be required to start working on this feature.
2. In case you are not sure about your request or whether the particular feature is already supported or not, please start a discussion instead.
3. GitHub Discussion: https://github.com/projectdiscovery/httpx/discussions/categories/ideas
4. Join our discord server at https://discord.gg/projectdiscovery to discuss the idea on the #httpx channel.
-->
### Please describe your feature request:
<!-- A clear and concise description of feature to implement -->
Add a `-irrb` option.
```
-irrb, -include-response-base64 include base64-encoded http request/response in JSON output (-json only)
```
### Describe the use case of this feature:
<!-- A clear and concise description of the feature request's motivation and the use-cases in which it could be useful. -->
Would make possible using json output to save binary unaltered by utf-8 encoding (needed by json)
For example, if you download a favicon through httpx, and try to write content returned by `response-body`, the saved files will be different from original favicon.
Having an option to get base64-encoded body allows such use-case.
|
non_process
|
add option to return encoded response body on json output please make sure to provide a detailed description with all the relevant information that might be required to start working on this feature in case you are not sure about your request or whether the particular feature is already supported or not please start a discussion instead github discussion join our discord server at to discuss the idea on the httpx channel please describe your feature request add a irrb option irrb include response include encoded http request response in json output json only describe the use case of this feature would make possible using json output to save binary unaltered by utf encoding needed by json for example if you download a favicon through httpx and try to write content returned by response body the saved files will be different from original favicon having an option to get encoded body allows such use case
| 0
|
19,565
| 25,887,139,182
|
IssuesEvent
|
2022-12-14 15:17:33
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
opened
|
Add utility that can easily relaunch a process from a `ProcessNode` with minimal required setup
|
priority/important type/accepted feature topic/processes topic/utilities type/usability
|
A big focus of AiiDA is provenance to guarantee reproducibility. Although it does store a significant amount of relevant provenance data, it is currently not very easy to actually reproduce a result that has been generated.
The first step was the introduction of the `ProcessNode.get_builder_restart` method which allows to easily retrieve a builder from a completed `ProcessNode`. This still suffered from problems since not all inputs (from the `metadata` input namespace) were restored. This was addressed in #5801 where as of now metadata inputs will be restored.
The final step is that if the process to be run contains `CalcJob`s, the `Code` inputs need to be usable. If the process node was imported, the `InstalledCode`s have an associated `Computer` that comes from the archive as well and most likely is not configured. The code will most likely have to be reconfigured by the current user. The utility should detect these code inputs and provide an easy way for the caller to replace them with codes that they have set up themselves on computers that they have access to and so are configured with an `AuthInfo`. Alternatively, if they are `ContainerizedCode`s the code is transferable and one should just make sure that the associated computers are configured (or optionally they are run on the `localhost`).
|
1.0
|
Add utility that can easily relaunch a process from a `ProcessNode` with minimal required setup - A big focus of AiiDA is provenance to guarantee reproducibility. Although it does store a significant amount of relevant provenance data, it is currently not very easy to actually reproduce a result that has been generated.
The first step was the introduction of the `ProcessNode.get_builder_restart` method which allows to easily retrieve a builder from a completed `ProcessNode`. This still suffered from problems since not all inputs (from the `metadata` input namespace) were restored. This was addressed in #5801 where as of now metadata inputs will be restored.
The final step is that if the process to be run contains `CalcJob`s, the `Code` inputs need to be usable. If the process node was imported, the `InstalledCode`s have an associated `Computer` that comes from the archive as well and most likely is not configured. The code will most likely have to be reconfigured by the current user. The utility should detect these code inputs and provide an easy way for the caller to replace them with codes that they have set up themselves on computers that they have access to and so are configured with an `AuthInfo`. Alternatively, if they are `ContainerizedCode`s the code is transferable and one should just make sure that the associated computers are configured (or optionally they are run on the `localhost`).
|
process
|
add utility that can easily relaunch a process from a processnode with minimal required setup a big focus of aiida is provenance to guarantee reproducibility although it does store a significant amount of relevant provenance data it is currently not very easy to actually reproduce a result that has been generated the first step was the introduction of the processnode get builder restart method which allows to easily retrieve a builder from a completed processnode this still suffered from problems since not all inputs from the metadata input namespace were restored this was addressed in where as of now metadata inputs will be restored the final step is that if the process to be run contains calcjob s the code inputs need to be usable if the process node was imported the installedcode s have an associated computer that comes from the archive as well and most likely is not configured the code will most likely have to be reconfigured by the current user the utility should detect these code inputs and provide an easy way for the caller to replace them with codes that they have set up themselves on computers that they have access to and so are configured with an authinfo alternatively if they are containerizedcode s the code is transferable and one should just make sure that the associated computers are configured or optionally they are run on the localhost
| 1
|
431,921
| 30,257,751,272
|
IssuesEvent
|
2023-07-07 05:15:37
|
PLAIF-dev/sw_synthetic_rospkg
|
https://api.github.com/repos/PLAIF-dev/sw_synthetic_rospkg
|
closed
|
Stop tracking unnecessary Git modifications/changes
|
documentation
|
## Description
1. Add wildcards and patterns to .gitignore so that modification/change tracking exclusions are recognized in one batch.
2. Remove files that should not be included in the project.
|
1.0
|
Stop tracking unnecessary Git modifications/changes - ## Description
1. Add wildcards and patterns to .gitignore so that modification/change tracking exclusions are recognized in one batch.
2. Remove files that should not be included in the project.
|
non_process
|
stop tracking unnecessary git modifications changes description add wildcards and patterns to gitignore so that modification change tracking exclusions are recognized in one batch remove files that should not be included in the project
| 0
|
12,945
| 15,308,024,309
|
IssuesEvent
|
2021-02-24 21:47:45
|
cypress-io/cypress-documentation
|
https://api.github.com/repos/cypress-io/cypress-documentation
|
closed
|
Switch from manual deploy to semantic-action
|
process: deployment
|
Just need to think how to setup branches `develop` to `staging` and `master` to `production`
|
1.0
|
Switch from manual deploy to semantic-action - Just need to think how to setup branches `develop` to `staging` and `master` to `production`
|
process
|
switch from manual deploy to semantic action just need to think how to setup branches develop to staging and master to production
| 1
|
11,917
| 14,702,007,827
|
IssuesEvent
|
2021-01-04 12:54:03
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
VDC Pricing Does Not Match Resource
|
process_wontfix type_bug
|
the smallest flavor of the vdc `Silver` which costs 10 EUR ~= 364 TFT consumes much more resources than what can be bought with this price.
the initial deployment includes the below workloads:
```python
JS-NG> for w in workloads:
2 cloud_units = w.resource_units().cloud_units()
3 print(w.id, w.info.workload_type, "cu: ", cloud_units.cu, "su: ", cloud_units.su, "ipv4u: ", c
... loud_units.ipv4u)
516768 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516769 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516765 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516764 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516763 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516771 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516770 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516762 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516767 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516766 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516807 WorkloadType.Public_IP cu: 0.0 su: 0.0 ipv4u: 1.0
516810 WorkloadType.Kubernetes cu: 0.5 su: 0.083 ipv4u: 0.0
516836 WorkloadType.Kubernetes cu: 0.5 su: 0.083 ipv4u: 0.0
516837 WorkloadType.Volume cu: 0.0 su: 0.007 ipv4u: 0.0
516838 WorkloadType.Container cu: 0.5 su: 0.0 ipv4u: 0.0
516853 WorkloadType.Container cu: 0.5 su: 0.0 ipv4u: 0.0
JS-NG>
```
these workloads consume resource with cost ~= 780 TFT per month
Note: all zdbs are using HDD and the initial deployment gives the user 60 GB of usable S3 storage (6 + 4) which means it can scale up to use 140 GB total storage when it reaches the 100 GB specified in the plan
|
1.0
|
VDC Pricing Does Not Match Resource - the smallest flavor of the vdc `Silver` which costs 10 EUR ~= 364 TFT consumes much more resources than what can be bought with this price.
the initial deployment includes the below workloads:
```python
JS-NG> for w in workloads:
2 cloud_units = w.resource_units().cloud_units()
3 print(w.id, w.info.workload_type, "cu: ", cloud_units.cu, "su: ", cloud_units.su, "ipv4u: ", c
... loud_units.ipv4u)
516768 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516769 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516765 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516764 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516763 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516771 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516770 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516762 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516767 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516766 WorkloadType.Zdb cu: 0.0 su: 0.008 ipv4u: 0.0
516807 WorkloadType.Public_IP cu: 0.0 su: 0.0 ipv4u: 1.0
516810 WorkloadType.Kubernetes cu: 0.5 su: 0.083 ipv4u: 0.0
516836 WorkloadType.Kubernetes cu: 0.5 su: 0.083 ipv4u: 0.0
516837 WorkloadType.Volume cu: 0.0 su: 0.007 ipv4u: 0.0
516838 WorkloadType.Container cu: 0.5 su: 0.0 ipv4u: 0.0
516853 WorkloadType.Container cu: 0.5 su: 0.0 ipv4u: 0.0
JS-NG>
```
these workloads consume resource with cost ~= 780 TFT per month
Note: all zdbs are using HDD and the initial deployment gives the user 60 GB of usable S3 storage (6 + 4) which means it can scale up to use 140 GB total storage when it reaches the 100 GB specified in the plan
|
process
|
vdc pricing does not match resource the smallest flavor of the vdc silver which costs eur tft consumes much more resources than what can be bought with this price the initial deployment includes the below workloads python js ng for w in workloads cloud units w resource units cloud units print w id w info workload type cu cloud units cu su cloud units su c loud units workloadtype zdb cu su workloadtype zdb cu su workloadtype zdb cu su workloadtype zdb cu su workloadtype zdb cu su workloadtype zdb cu su workloadtype zdb cu su workloadtype zdb cu su workloadtype zdb cu su workloadtype zdb cu su workloadtype public ip cu su workloadtype kubernetes cu su workloadtype kubernetes cu su workloadtype volume cu su workloadtype container cu su workloadtype container cu su js ng these workloads consume resource with cost tft per month note all zdbs are using hdd and the initial deployment gives the user gb of usable storage which means it can scale up to use gb total storage when it reaches the gb specified in the plan
| 1
|
8,976
| 12,093,093,522
|
IssuesEvent
|
2020-04-19 18:12:32
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[FALSE-POSITIVE?] d1f8f9xcsvx3ha.cloudfront.net d3e1078hs60k37.cloudfront.net
|
whitelisting process
|
Looks like we are blocking all of Cloudfront due to Smed79 blocking all of Cloudfront https://github.com/smed79/blacklist/blob/master/hosts/cloudfront.txt
I posted an issue for smed79 to unblacklist Cloudfront, but if he says no we might need to consider other alternatives.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally
Not all Cloudfront is Ads
We should not be blocking it everywhere.


|
1.0
|
[FALSE-POSITIVE?] d1f8f9xcsvx3ha.cloudfront.net d3e1078hs60k37.cloudfront.net - Looks like we are blocking all of Cloudfront due to Smed79 blocking all of Cloudfront https://github.com/smed79/blacklist/blob/master/hosts/cloudfront.txt
I posted an issue for smed79 to unblacklist Cloudfront, but if he says no we might need to consider other alternatives.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally
Not all Cloudfront is Ads
We should not be blocking it everywhere.


|
process
|
cloudfront net cloudfront net looks like we are blocking all of cloudfront due to blocking all of cloudfront i posted an issue for to unblacklist cloudfront but if he says no we might need to consider other alternatives amazon cloudfront is a fast content delivery network cdn service that securely delivers data videos applications and apis to customers globally not all cloudfront is ads we should not be blocking it everywhere
| 1
|
724,838
| 24,943,305,563
|
IssuesEvent
|
2022-10-31 20:56:59
|
bounswe/bounswe2022group4
|
https://api.github.com/repos/bounswe/bounswe2022group4
|
closed
|
Frontend: UI Improvement For Navigation Bar
|
Category - To Do Priority - High Status: In Progress whom: individual Difficulty - Medium Language - CSS Language - React.js Team - Frontend
|
I need to improve navigation bar ui in order to increase user experience.
Steps:
1) Changing static background color to gradiant one.
2) Improving sign in and sign up button design
3) Changing text colors and put some effects on them
Deadline: 31.10.2022 15.00
Reviewer: @mbatuhan-malazgirt
|
1.0
|
Frontend: UI Improvement For Navigation Bar - I need to improve navigation bar ui in order to increase user experience.
Steps:
1) Changing static background color to gradiant one.
2) Improving sign in and sign up button design
3) Changing text colors and put some effects on them
Deadline: 31.10.2022 15.00
Reviewer: @mbatuhan-malazgirt
|
non_process
|
frontend ui improvement for navigation bar i need to improve navigation bar ui in order to increase user experience steps changing static background color to gradiant one improving sign in and sign up button design changing text colors and put some effects on them deadline reviewer mbatuhan malazgirt
| 0
|
10,805
| 13,609,288,192
|
IssuesEvent
|
2020-09-23 04:50:12
|
googleapis/java-securitycenter-settings
|
https://api.github.com/repos/googleapis/java-securitycenter-settings
|
closed
|
Dependency Dashboard
|
api: securitycenter type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-securitycenter-settings-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-securitycenter-settings to v0.3.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-securitycenter-settings-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-securitycenter-settings to v0.3.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any chore deps update dependency com google cloud google cloud securitycenter settings to chore deps update dependency com google cloud libraries bom to check this box to trigger a request for renovate to run again on this repository
| 1
|
31
| 2,499,433,347
|
IssuesEvent
|
2015-01-08 00:26:10
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
opened
|
Now we can do as('a').out.as('b') in Gremlin-Groovy with SugarPlugin.
|
enhancement process
|
@mbroecheler @joshsh @dkuppitz
Now that we have `AnonymousGraphTraversal.Tokens.__`, `match()` looks like this in Gremlin-Groovy (with Sugar).
```groovy
g.V.match('a',
__.as('a').out('created').as('b'),
__.as('b').has('name', 'lop'),
__.as('b').in('created').as('c'),
__.as('c').has('age', 29)).
select('a', 'c').by('name')
```
The only difference between this and non-sugar is `g.V` vs. `g.V()`. However, if we add to SugarPlugin that a missing method named `as()` is actually a called to `AnonymousGraphTraversal.Tokens.__` (which is 3 lines of code :)), then it will look like this:
```groovy
g.V.match('a',
as('a').out('created').as('b'),
as('b').has('name', 'lop'),
as('b').in('created').as('c'),
as('c').has('age', 29)).
select('a', 'c').by('name')
```
This is cool, but realize it only works for Gremlin-Groovy (with Sugar). Be nice to come up with a simplification that would also work for non-sugar and Gremlin-Java.
|
1.0
|
Now we can do as('a').out.as('b') in Gremlin-Groovy with SugarPlugin. - @mbroecheler @joshsh @dkuppitz
Now that we have `AnonymousGraphTraversal.Tokens.__`, `match()` looks like this in Gremlin-Groovy (with Sugar).
```groovy
g.V.match('a',
__.as('a').out('created').as('b'),
__.as('b').has('name', 'lop'),
__.as('b').in('created').as('c'),
__.as('c').has('age', 29)).
select('a', 'c').by('name')
```
The only difference between this and non-sugar is `g.V` vs. `g.V()`. However, if we add to SugarPlugin that a missing method named `as()` is actually a called to `AnonymousGraphTraversal.Tokens.__` (which is 3 lines of code :)), then it will look like this:
```groovy
g.V.match('a',
as('a').out('created').as('b'),
as('b').has('name', 'lop'),
as('b').in('created').as('c'),
as('c').has('age', 29)).
select('a', 'c').by('name')
```
This is cool, but realize it only works for Gremlin-Groovy (with Sugar). Be nice to come up with a simplification that would also work for non-sugar and Gremlin-Java.
|
process
|
now we can do as a out as b in gremlin groovy with sugarplugin mbroecheler joshsh dkuppitz now that we have anonymousgraphtraversal tokens match looks like this in gremlin groovy with sugar groovy g v match a as a out created as b as b has name lop as b in created as c as c has age select a c by name the only difference between this and non sugar is g v vs g v however if we add to sugarplugin that a missing method named as is actually a called to anonymousgraphtraversal tokens which is lines of code then it will look like this groovy g v match a as a out created as b as b has name lop as b in created as c as c has age select a c by name this is cool but realize it only works for gremlin groovy with sugar be nice to come up with a simplification that would also work for non sugar and gremlin java
| 1
|
16,560
| 12,042,753,363
|
IssuesEvent
|
2020-04-14 11:10:02
|
teambit/bit
|
https://api.github.com/repos/teambit/bit
|
opened
|
fix split e2e tests on circle by timing on windows
|
area/infrastructure type/bug
|
Currently, if look at the windows_e2e tests on circle under the `write e2e files` step, you will see this warning:
```
Error autodetecting timing type, falling back to weighting by name. Autodetect no matching filename or classname. If file names are used, double check paths for absolute vs relative.
Example input file: "e2e\\api\\add-many.e2e.1.ts"
Example file from timings: "C:\\Users\\******ci\\project\\bit\\bit\\e2e\\commands\\eject.e2e.1.ts"
```
This might be a result of the next line of how we run them.
We might try to make it the same as in the Linux e2e.
This might require to run it under bash.exe instead of under powershell.
It requires more investigation.
|
1.0
|
fix split e2e tests on circle by timing on windows - Currently, if look at the windows_e2e tests on circle under the `write e2e files` step, you will see this warning:
```
Error autodetecting timing type, falling back to weighting by name. Autodetect no matching filename or classname. If file names are used, double check paths for absolute vs relative.
Example input file: "e2e\\api\\add-many.e2e.1.ts"
Example file from timings: "C:\\Users\\******ci\\project\\bit\\bit\\e2e\\commands\\eject.e2e.1.ts"
```
This might be a result of the next line of how we run them.
We might try to make it the same as in the Linux e2e.
This might require to run it under bash.exe instead of under powershell.
It requires more investigation.
|
non_process
|
fix split tests on circle by timing on windows currently if look at the windows tests on circle under the write files step you will see this warning error autodetecting timing type falling back to weighting by name autodetect no matching filename or classname if file names are used double check paths for absolute vs relative example input file api add many ts example file from timings c users ci project bit bit commands eject ts this might be a result of the next line of how we run them we might try to make it the same as in the linux this might require to run it under bash exe instead of under powershell it requires more investigation
| 0
|
13,601
| 16,177,450,288
|
IssuesEvent
|
2021-05-03 09:15:32
|
melink14/rikaikun
|
https://api.github.com/repos/melink14/rikaikun
|
closed
|
Split codeql scanning such that it only runs dependabot changes on pull_request
|
P2 process
|
dependabot doesn't play well with codeql. During pull requests, the push action fails due to coming too quickly after PR action, and during merge it fails do to being read only.
We should make https://github.com/melink14/rikaikun/blob/master/.github/workflows/codeql-analysis.yml a composite action and have one workflow for Dependabot on pull_request and one for not dependabot on push/pr/cron.
Composite action docs:
https://docs.github.com/en/actions/creating-actions/creating-a-composite-run-steps-action
Example actor filtering in actions:
```
on: [pull_request]
jobs:
job1:
if: github.actor!= 'dependabot-preview[bot]' # ignore the pull request which comes from user depbot.
```
Example error message:
> Error: Workflows triggered by Dependabot on the "push" event run with read-only access. Uploading Code Scanning results requires write access. To use Code Scanning with Dependabot, please ensure you are using the "pull_request" event for this workflow and avoid triggering on the "push" event for Dependabot branches. See docs.github.com/en/code-security/secure-coding/configuring-code-scanning#scanning-on-push for more information on how to configure these events.
|
1.0
|
Split codeql scanning such that it only runs dependabot changes on pull_request - dependabot doesn't play well with codeql. During pull requests, the push action fails due to coming too quickly after PR action, and during merge it fails do to being read only.
We should make https://github.com/melink14/rikaikun/blob/master/.github/workflows/codeql-analysis.yml a composite action and have one workflow for Dependabot on pull_request and one for not dependabot on push/pr/cron.
Composite action docs:
https://docs.github.com/en/actions/creating-actions/creating-a-composite-run-steps-action
Example actor filtering in actions:
```
on: [pull_request]
jobs:
job1:
if: github.actor!= 'dependabot-preview[bot]' # ignore the pull request which comes from user depbot.
```
Example error message:
> Error: Workflows triggered by Dependabot on the "push" event run with read-only access. Uploading Code Scanning results requires write access. To use Code Scanning with Dependabot, please ensure you are using the "pull_request" event for this workflow and avoid triggering on the "push" event for Dependabot branches. See docs.github.com/en/code-security/secure-coding/configuring-code-scanning#scanning-on-push for more information on how to configure these events.
|
process
|
split codeql scanning such that it only runs dependabot changes on pull request dependabot doesn t play well with codeql during pull requests the push action fails due to coming too quickly after pr action and during merge it fails do to being read only we should make a composite action and have one workflow for dependabot on pull request and one for not dependabot on push pr cron composite action docs example actor filtering in actions on jobs if github actor dependabot preview ignore the pull request which comes from user depbot example error message error workflows triggered by dependabot on the push event run with read only access uploading code scanning results requires write access to use code scanning with dependabot please ensure you are using the pull request event for this workflow and avoid triggering on the push event for dependabot branches see docs github com en code security secure coding configuring code scanning scanning on push for more information on how to configure these events
| 1
|
135,844
| 11,019,838,131
|
IssuesEvent
|
2019-12-05 13:32:01
|
dzhw/SLC-IntEr
|
https://api.github.com/repos/dzhw/SLC-IntEr
|
closed
|
Calendars
|
Modul: Kalendarien Update testing
|
We need two calendars, one per cohort:
- Cohort 1: period January 2018 to December 2019 with episode types:
o University studies
o Vocational training
o Further/continuing education (longer-term, at least 70 h in total)
o Casual jobs
o Employment (including self-employment)
o Internship
o Parental leave/maternity protection, homemaker, family work
o Unemployment
o Other (e.g. vacation/travel, illness, study preparation, volunteer service)
- Cohort 2: period July 2018 to December 2019 with episode types:
o School leading to the general higher-education entrance qualification
o University studies
o Vocational training
o Further/continuing education (longer-term, at least 70 h in total)
o Casual jobs
o Employment (including self-employment)
o Internship (at least 1 month, e.g. also during studies)
o Parental leave/maternity protection, homemaker, family work
o Unemployment
o Other (e.g. volunteer service, Work & Travel, au pair, study preparation, family work, vacations lasting several months, illness)
From both calendars we need dummy helper variables that indicate, per episode type, whether at least one month of that type was reported. The helper variables should have the following names (largely as in the pretest):
o School: h_schule
o University studies: h_studium
o Vocational training: h_ausb
o Further/continuing education: h_fortb
o Casual jobs: h_jobben
o Employment: h_erwerb
o Internship: h_praktikum
o Parental leave etc.: h_elternz
o Unemployment: h_alo
o Other (e.g. vacation/travel, illness, study preparation, volunteer service): h_sonstiges
|
1.0
|
Calendars - We need two calendars, one per cohort:
- Cohort 1: period January 2018 to December 2019 with episode types:
o University studies
o Vocational training
o Further/continuing education (longer-term, at least 70 h in total)
o Casual jobs
o Employment (including self-employment)
o Internship
o Parental leave/maternity protection, homemaker, family work
o Unemployment
o Other (e.g. vacation/travel, illness, study preparation, volunteer service)
- Cohort 2: period July 2018 to December 2019 with episode types:
o School leading to the general higher-education entrance qualification
o University studies
o Vocational training
o Further/continuing education (longer-term, at least 70 h in total)
o Casual jobs
o Employment (including self-employment)
o Internship (at least 1 month, e.g. also during studies)
o Parental leave/maternity protection, homemaker, family work
o Unemployment
o Other (e.g. volunteer service, Work & Travel, au pair, study preparation, family work, vacations lasting several months, illness)
From both calendars we need dummy helper variables that indicate, per episode type, whether at least one month of that type was reported. The helper variables should have the following names (largely as in the pretest):
o School: h_schule
o University studies: h_studium
o Vocational training: h_ausb
o Further/continuing education: h_fortb
o Casual jobs: h_jobben
o Employment: h_erwerb
o Internship: h_praktikum
o Parental leave etc.: h_elternz
o Unemployment: h_alo
o Other (e.g. vacation/travel, illness, study preparation, volunteer service): h_sonstiges
|
non_process
|
calendars we need two calendars one per cohort cohort period january to december with episode types o university studies o vocational training o further continuing education longer term at least h in total o casual jobs o employment including self employment o internship o parental leave maternity protection homemaker family work o unemployment o other e g vacation travel illness study preparation volunteer service cohort period july to december with episode types o school leading to the general higher education entrance qualification o university studies o vocational training o further continuing education longer term at least h in total o casual jobs o employment including self employment o internship at least month e g also during studies o parental leave maternity protection homemaker family work o unemployment o other e g volunteer service work travel au pair study preparation family work vacations lasting several months illness from both calendars we need dummy helper variables that indicate per episode type whether at least one month of that type was reported the helper variables should have the following names largely as in the pretest o school h schule o university studies h studium o vocational training h ausb o further continuing education h fortb o casual jobs h jobben o employment h erwerb o internship h praktikum o parental leave etc h elternz o unemployment h alo o other e g vacation travel illness study preparation volunteer service h sonstiges
| 0
|
20,942
| 27,802,341,444
|
IssuesEvent
|
2023-03-17 16:47:10
|
RConsortium/submissions-pilot3-adam
|
https://api.github.com/repos/RConsortium/submissions-pilot3-adam
|
closed
|
renv.lock
|
submission process
|
Would it be a possibility to include renv.lock into submission to help reviewers to reconstruct our environment?
|
1.0
|
renv.lock - Would it be a possibility to include renv.lock into submission to help reviewers to reconstruct our environment?
|
process
|
renv lock would it be a possibility to include renv lock into submission to help reviewers to reconstruct our environment
| 1
|
53,709
| 11,114,736,307
|
IssuesEvent
|
2019-12-18 09:19:30
|
microsoft/calculator
|
https://api.github.com/repos/microsoft/calculator
|
closed
|
UnitConverter has many instances where code can be simiplified ore made more efficient
|
codebase quality
|
- Some raw loops could be made to be range-for loops.
- wstringstream is often used for building a string which has lots of overhead compared to appending to a wstring.
- Some functions take a `wstring` or `const wchar_t*` where a `wstring_view` would be more appropriate.
- `wstring_view` does not require constructing a string if a `const wchar_t*` is passed in.
- `wstring_view` is constexpr constructable so it does not have to calculate its size at runtime when using string literals.
|
1.0
|
UnitConverter has many instances where code can be simiplified ore made more efficient - - Some raw loops could be made to be range-for loops.
- wstringstream is often used for building a string which has lots of overhead compared to appending to a wstring.
- Some functions take a `wstring` or `const wchar_t*` where a `wstring_view` would be more appropriate.
- `wstring_view` does not require constructing a string if a `const wchar_t*` is passed in.
- `wstring_view` is constexpr constructable so it does not have to calculate its size at runtime when using string literals.
|
non_process
|
unitconverter has many instances where code can be simiplified ore made more efficient some raw loops could be made to be range for loops wstringstream is often used for building a string which has lots of overhead compared to appending to a wstring some functions take a wstring or const wchar t where a wstring view would be more appropriate wstring view does not require constructing a string if a const wchar t is passed in wstring view is constexpr constructable so it does not have to calculate its size at runtime when using string literals
| 0
|
279,708
| 8,672,121,030
|
IssuesEvent
|
2018-11-29 21:10:17
|
Davoleo/Metallurgy-4-Reforged
|
https://api.github.com/repos/Davoleo/Metallurgy-4-Reforged
|
closed
|
Typo in Wiki
|
Priority: Low Wiki bug
|
In Harvest Level of Metals Block and Ores Block, there are typos in
Ore Harvestability:
1. "Zinco Ore"
4. "Rubracacium Ore"
|
1.0
|
Typo in Wiki - In Harvest Level of Metals Block and Ores Block, there are typos in
Ore Harvestability:
1. "Zinco Ore"
4. "Rubracacium Ore"
|
non_process
|
typo in wiki in harvest level of metals block and ores block there are typos in ore harvestability zinco ore rubracacium ore
| 0
|
13,066
| 15,396,868,235
|
IssuesEvent
|
2021-03-03 21:16:30
|
opendistro-for-elasticsearch/opendistro-build
|
https://api.github.com/repos/opendistro-for-elasticsearch/opendistro-build
|
closed
|
How are the variables in the values.yaml file defined?
|
in process
|
https://github.com/opendistro-for-elasticsearch/opendistro-build/blob/b42c52309f05dd9d28accaaf0abbf7c795d0763f/helm/opendistro-es/values.yaml#L454
Hello,
In the `elasticsearch.config` part of the **values.yaml** file, there is these variables like ${CLUSTER_NAME}, ${NODE_MASTER}, ${HTTP_ENABLE} that are referenced but its unclear to me where these should be defined.
Can you please tell me how and where these variables are supposed to be defined first? Or are we just supposed to replace them by hard-coded values?
|
1.0
|
How are the variables in the values.yaml file defined? - https://github.com/opendistro-for-elasticsearch/opendistro-build/blob/b42c52309f05dd9d28accaaf0abbf7c795d0763f/helm/opendistro-es/values.yaml#L454
Hello,
In the `elasticsearch.config` part of the **values.yaml** file, there is these variables like ${CLUSTER_NAME}, ${NODE_MASTER}, ${HTTP_ENABLE} that are referenced but its unclear to me where these should be defined.
Can you please tell me how and where these variables are supposed to be defined first? Or are we just supposed to replace them by hard-coded values?
|
process
|
how are the variables in the values yaml file defined hello in the elasticsearch config part of the values yaml file there is these variables like cluster name node master http enable that are referenced but its unclear to me where these should be defined can you please tell me how and where these variables are supposed to be defined first or are we just supposed to replace them by hard coded values
| 1
|
1,953
| 4,082,945,858
|
IssuesEvent
|
2016-05-31 14:29:32
|
IBM-Bluemix/logistics-wizard
|
https://api.github.com/repos/IBM-Bluemix/logistics-wizard
|
closed
|
New API to delete a Demo environment
|
erp-service task
|
DELETE /Demos/{guid}
to delete all users and their related data
related to #11
|
1.0
|
New API to delete a Demo environment - DELETE /Demos/{guid}
to delete all users and their related data
related to #11
|
non_process
|
new api to delete a demo environment delete demos guid to delete all users and their related data related to
| 0
|
19,154
| 25,234,970,961
|
IssuesEvent
|
2022-11-14 23:35:50
|
rusefi/rusefi_documentation
|
https://api.github.com/repos/rusefi/rusefi_documentation
|
closed
|
What shall we do with legacy wiki? wiki1
|
wiki location & process change
|
wiki1 https://rusefi.com/wiki/
wiki2-human https://github.com/rusefi/rusefi/wiki/
wiki2-technical https://github.com/rusefi/rusefi_documentation
wiki3 wiki.rusefi.com
Problem: there are a few legacy links to pages like
https://rusefi.com/wiki/index.php?title=Main_Page
See http://www.turbobricks.org/forums/showthread.php?t=329200
See https://www.electronicspoint.com/forums/threads/ecm-jeep-computer-help-iding-the-motherboard-please.285822/
Do we want to ignore and just drop or do we want some nicer solution?
|
1.0
|
What shall we do with legacy wiki? wiki1 - wiki1 https://rusefi.com/wiki/
wiki2-human https://github.com/rusefi/rusefi/wiki/
wiki2-technical https://github.com/rusefi/rusefi_documentation
wiki3 wiki.rusefi.com
Problem: there are a few legacy links to pages like
https://rusefi.com/wiki/index.php?title=Main_Page
See http://www.turbobricks.org/forums/showthread.php?t=329200
See https://www.electronicspoint.com/forums/threads/ecm-jeep-computer-help-iding-the-motherboard-please.285822/
Do we want to ignore and just drop or do we want some nicer solution?
|
process
|
what shall we do with legacy wiki human technical wiki rusefi com problem there are a few legacy links to pages like see see do we want to ignore and just drop or do we want some nicer solution
| 1
|
11,577
| 14,443,245,840
|
IssuesEvent
|
2020-12-07 19:21:23
|
A01731346/4a
|
https://api.github.com/repos/A01731346/4a
|
closed
|
complete_size_estimating_template
|
process - dashboard
|
- Complete the LOC estimation template with the actual values obtained
|
1.0
|
complete_size_estimating_template - - Complete the LOC estimation template with the actual values obtained
|
process
|
complete size estimating template complete the loc estimation template with the actual values obtained
| 1
|
6,513
| 7,657,286,997
|
IssuesEvent
|
2018-05-10 19:08:11
|
odoo/odoo
|
https://api.github.com/repos/odoo/odoo
|
closed
|
CRM Activities not showing in Calendar
|
11.0 Services
|
Impacted versions: 11
Steps to reproduce: Create an Activity in CRM and then looking for in calendar.
Current behavior: Not showing
Expected behavior: Show my activities in Calendar.
Video/Screenshot link (optional):
|
1.0
|
CRM Activities not showing in Calendar - Impacted versions: 11
Steps to reproduce: Create an Activity in CRM and then looking for in calendar.
Current behavior: Not showing
Expected behavior: Show my activities in Calendar.
Video/Screenshot link (optional):
|
non_process
|
crm activities not showing in calendar impacted versions steps to reproduce create an activity in crm and then looking for in calendar current behavior not showing expected behavior show my activities in calendar video screenshot link optional
| 0
|
10,962
| 16,017,268,208
|
IssuesEvent
|
2021-04-20 17:36:12
|
NASA-PDS/pds-registry-app
|
https://api.github.com/repos/NASA-PDS/pds-registry-app
|
closed
|
The service shall store metadata for a registered artifact in an underlying metadata store
|
component:registry requirement requirement-topic:publish-artifacts
|
Level 4 Requirement:
* L4.REG.3
|
2.0
|
The service shall store metadata for a registered artifact in an underlying metadata store - Level 4 Requirement:
* L4.REG.3
|
non_process
|
the service shall store metadata for a registered artifact in an underlying metadata store level requirement reg
| 0
|
17,509
| 23,319,868,431
|
IssuesEvent
|
2022-08-08 15:26:03
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
opened
|
refactor Hasher lookup handling in the chiplets bus
|
processor
|
As discussed in #348, the first version of the hasher handling in the chiplets bus was straightforward and left room for several future optimizations.
In particular, the following should be refactored and optimized in the future:
- how hasher lookup values are computed
- how lookups are requested from the decoder
- how lookups are stored for "providing" them to the bus from the hash chiplet module.
Each item is described below. In all cases, there are also inline TODOs in the code with comments.
## how hasher lookup values are computed
The hasher state is currently stored in the `HasherLookup` struct, which makes it heavy and means that other things which contain it become correspondingly much heavier (e.g. `ChipletsLookupRow`). Similarly, the "next" hasher state is held in the `Absorb` variant of the `HasherLookupContext` enum, which has the same knock-on effects. The aforementioned struct & enum are [here](https://github.com/maticnetwork/miden/blob/next/processor/src/chiplets/hasher/lookups.rs).
Instead, we could get the state from the trace when the lookup values are included in the $b_{chip}$ column. However, because the trace is in column-major form, this might not be more performant, so any change should be benchmarked.
## how lookups are requested from the decoder
Currently, the entire hash computation is done when the decoder makes its initialization request and all intermediate lookups required for the correctness of $b_{chip}$ are queued. When the decoder needs subsequent lookups (e.g. as it absorbs new operations during `RESPAN` or when the completes code blocks and needs the return hash), it sends a request, and they are dequeued and sent to the $b_{chip}$ bus.
Instead, it might be better to compute the lookups at the time they are needed. This requires refactoring the decoder and the functions in the hasher more extensively.
See the relevant discussions in the previous PR:
- https://github.com/maticnetwork/miden/pull/348#discussion_r937418159
- https://github.com/maticnetwork/miden/pull/348#discussion_r937400503
## how lookups are stored for "providing" them to the bus from the hash chiplet module.
Currently, all lookups are saved as they are computed during hash computations. At the end, during `fill_trace`, the hash chiplet iterates through and provides each lookup to the bus $b_{chip}$ one by one.
There are a few different options here:
1. provide the lookups to the bus as soon as they are computed, instead of saving them and sending them later. This would require refactoring the request/response handling a bit in the chiplet bus module since there is currently an assumption that all requests come first. It's a simple refactor, but it does also mean that the Hash chiplet would work differently from the Bitwise and Memory chiplets
2. during the `fill_trace` function, add the lookup "responses" to the chiplet bus in bulk instead of individually.
|
1.0
|
refactor Hasher lookup handling in the chiplets bus - As discussed in #348, the first version of the hasher handling in the chiplets bus was straightforward and left room for several future optimizations.
In particular, the following should be refactored and optimized in the future:
- how hasher lookup values are computed
- how lookups are requested from the decoder
- how lookups are stored for "providing" them to the bus from the hash chiplet module.
Each item is described below. In all cases, there are also inline TODOs in the code with comments.
## how hasher lookup values are computed
The hasher state is currently stored in the `HasherLookup` struct, which makes it heavy and means that other things which contain it become correspondingly much heavier (e.g. `ChipletsLookupRow`). Similarly, the "next" hasher state is held in the `Absorb` variant of the `HasherLookupContext` enum, which has the same knock-on effects. The aforementioned struct & enum are [here](https://github.com/maticnetwork/miden/blob/next/processor/src/chiplets/hasher/lookups.rs).
Instead, we could get the state from the trace when the lookup values are included in the $b_{chip}$ column. However, because the trace is in column-major form, this might not be more performant, so any change should be benchmarked.
## how lookups are requested from the decoder
Currently, the entire hash computation is done when the decoder makes its initialization request and all intermediate lookups required for the correctness of $b_{chip}$ are queued. When the decoder needs subsequent lookups (e.g. as it absorbs new operations during `RESPAN` or when the completes code blocks and needs the return hash), it sends a request, and they are dequeued and sent to the $b_{chip}$ bus.
Instead, it might be better to compute the lookups at the time they are needed. This requires refactoring the decoder and the functions in the hasher more extensively.
See the relevant discussions in the previous PR:
- https://github.com/maticnetwork/miden/pull/348#discussion_r937418159
- https://github.com/maticnetwork/miden/pull/348#discussion_r937400503
## how lookups are stored for "providing" them to the bus from the hash chiplet module.
Currently, all lookups are saved as they are computed during hash computations. At the end, during `fill_trace`, the hash chiplet iterates through and provides each lookup to the bus $b_{chip}$ one by one.
There are a few different options here:
1. provide the lookups to the bus as soon as they are computed, instead of saving them and sending them later. This would require refactoring the request/response handling a bit in the chiplet bus module since there is currently an assumption that all requests come first. It's a simple refactor, but it does also mean that the Hash chiplet would work differently from the Bitwise and Memory chiplets
2. during the `fill_trace` function, add the lookup "responses" to the chiplet bus in bulk instead of individually.
|
process
|
refactor hasher lookup handling in the chiplets bus as discussed in the first version of the hasher handling in the chiplets bus was straightforward and left room for several future optimizations in particular the following should be refactored and optimized in the future how hasher lookup values are computed how lookups are requested from the decoder how lookups are stored for providing them to the bus from the hash chiplet module each item is described below in all cases there are also inline todos in the code with comments how hasher lookup values are computed the hasher state is currently stored in the hasherlookup struct which makes it heavy and means that other things which contain it become correspondingly much heavier e g chipletslookuprow similarly the next hasher state is held in the absorb variant of the hasherlookupcontext enum which has the same knock on effects the aforementioned struct enum are instead we could get the state from the trace when the lookup values are included in the b chip column however because the trace is in column major form this might not be more performant so any change should be benchmarked how lookups are requested from the decoder currently the entire hash computation is done when the decoder makes its initialization request and all intermediate lookups required for the correctness of b chip are queued when the decoder needs subsequent lookups e g as it absorbs new operations during respan or when the completes code blocks and needs the return hash it sends a request and they are dequeued and sent to the b chip bus instead it might be better to compute the lookups at the time they are needed this requires refactoring the decoder and the functions in the hasher more extensively see the relevant discussions in the previous pr how lookups are stored for providing them to the bus from the hash chiplet module currently all lookups are saved as they are computed during hash computations at the end during fill trace the hash chiplet iterates through and provides each lookup to the bus b chip one by one there are a few different options here provide the lookups to the bus as soon as they are computed instead of saving them and sending them later this would require refactoring the request response handling a bit in the chiplet bus module since there is currently an assumption that all requests come first it s a simple refactor but it does also mean that the hash chiplet would work differently from the bitwise and memory chiplets during the fill trace function add the lookup responses to the chiplet bus in bulk instead of individually
| 1
|
40,898
| 10,591,146,410
|
IssuesEvent
|
2019-10-09 10:13:30
|
gradle/gradle
|
https://api.github.com/repos/gradle/gradle
|
opened
|
Allow CopySpec to specify a different normalization
|
@build-cache from:member
|
While copying files (using `Copy` or `Zip` etc.) we currently treat the sources specified via `CopySpec`s as `PathSensitivity.RELATIVE`. However, sometimes we want to copy things where we know they are going to be used as a classpath, like when including JARs in a WAR file. In those cases we could specify `@Classpath` normalization and avoid recreating the WAR file every time one of the JARs is rebuilt without a (runtime) significant change.
---
cc: @gradle/build-cache
|
1.0
|
Allow CopySpec to specify a different normalization - While copying files (using `Copy` or `Zip` etc.) we currently treat the sources specified via `CopySpec`s as `PathSensitivity.RELATIVE`. However, sometimes we want to copy things where we know they are going to be used as a classpath, like when including JARs in a WAR file. In those cases we could specify `@Classpath` normalization and avoid recreating the WAR file every time one of the JARs is rebuilt without a (runtime) significant change.
---
cc: @gradle/build-cache
|
non_process
|
allow copyspec to specify a different normalization while copying files using copy or zip etc we currently treat the sources specified via copyspec s as pathsensitivity relative however sometimes we want to copy things where we know they are going to be used as a classpath like when including jars in a war file in those cases we could specify classpath normalization and avoid recreating the war file every time one of the jars is rebuilt without a runtime significant change cc gradle build cache
| 0
|
250,950
| 27,127,581,459
|
IssuesEvent
|
2023-02-16 07:07:01
|
monizb/FireShort
|
https://api.github.com/repos/monizb/FireShort
|
closed
|
CVE-2022-25881 (Medium) detected in http-cache-semantics-4.1.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-4.1.0.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- public-ip-4.0.2.tgz (Root Library)
- got-9.6.0.tgz
- cacheable-request-6.1.0.tgz
- :x: **http-cache-semantics-4.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/monizb/FireShort/commit/01d2522e4209e107bda54c059ee7caae1a2713dc">01d2522e4209e107bda54c059ee7caae1a2713dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
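For readers wondering how the 5.3 above follows from the listed metrics, the sketch below applies the standard CVSS 3.x base-score formula with the published metric weights (AV:N, AC:L, PR:N, UI:N, scope unchanged, C:N, I:N, A:L). It is only an illustration, not tooling from Mend or this repository:
```python
# Recompute the CVSS 3.x base score (scope unchanged) from the metrics listed above.
import math

def cvss3_base_unchanged(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - c) * (1 - i) * (1 - a)          # impact sub-score
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    score = min(impact + exploitability, 10)
    return math.ceil(score * 10) / 10              # round up to one decimal

# AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85, C:N=0, I:N=0, A:L=0.22
print(cvss3_base_unchanged(0.85, 0.77, 0.85, 0.85, 0.0, 0.0, 0.22))  # -> 5.3
```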
|
True
|
CVE-2022-25881 (Medium) detected in http-cache-semantics-4.1.0.tgz - autoclosed - ## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-4.1.0.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- public-ip-4.0.2.tgz (Root Library)
- got-9.6.0.tgz
- cacheable-request-6.1.0.tgz
- :x: **http-cache-semantics-4.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/monizb/FireShort/commit/01d2522e4209e107bda54c059ee7caae1a2713dc">01d2522e4209e107bda54c059ee7caae1a2713dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in http cache semantics tgz autoclosed cve medium severity vulnerability vulnerable library http cache semantics tgz parses cache control and other headers helps building correct http caches and proxies library home page a href path to dependency file package json path to vulnerable library node modules http cache semantics package json dependency hierarchy public ip tgz root library got tgz cacheable request tgz x http cache semantics tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects versions of the package http cache semantics before the issue can be exploited via malicious request header values sent to a server when that server reads the cache policy from the request using this library publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http cache semantics step up your open source security game with mend
| 0
|
346,364
| 24,886,774,273
|
IssuesEvent
|
2022-10-28 08:28:11
|
carriezhengjr/ped
|
https://api.github.com/repos/carriezhengjr/ped
|
opened
|
Example command provided in UG does not work in actual app
|
type.DocumentationBug severity.Medium
|
The User Guide (UG) states that the command `addcus n/John Doe e/johnd@example.com a/John t/animal cartoons t/vip` will create a new customer named John Doe.

However, when the user enters the command `addcus n/John Doe e/johnd@example.com a/John t/animal cartoons t/vip` into the app in Customer View, an error message appears.

This bug is possibly due to the fact that the command should be in the format `addcus n/NAME p/PHONE_NUMBER e/EMAIL [a/ADDRESS] [t/TAG]…` where phone number is a required field. This information is obtained only if the user continues to read the "Features" section of the UG. Hence, the bug here will mislead users especially new users when they follow the "Quick start" section to run the app.
<!--session: 1666944915699-9f02aa31-a152-4882-b5a7-bae59bd58244-->
<!--Version: Web v3.4.4-->
|
1.0
|
Example command provided in UG does not work in actual app - The User Guide (UG) states that the command `addcus n/John Doe e/johnd@example.com a/John t/animal cartoons t/vip` will create a new customer named John Doe.

However, when the user enters the command `addcus n/John Doe e/johnd@example.com a/John t/animal cartoons t/vip` into the app in Customer View, an error message appears.

This bug is possibly due to the fact that the command should be in the format `addcus n/NAME p/PHONE_NUMBER e/EMAIL [a/ADDRESS] [t/TAG]…` where phone number is a required field. This information is obtained only if the user continues to read the "Features" section of the UG. Hence, the bug here will mislead users especially new users when they follow the "Quick start" section to run the app.
<!--session: 1666944915699-9f02aa31-a152-4882-b5a7-bae59bd58244-->
<!--Version: Web v3.4.4-->
|
non_process
|
example command provided in ug does not work in actual app the user guide ug states that the command addcus n john doe e johnd example com a john t animal cartoons t vip will create a new customer named john doe however when the user enters the command addcus n john doe e johnd example com a john t animal cartoons t vip into the app in customer view an error message appears this bug is possibly due to the fact that the command should be in the format addcus n name p phone number e email … where phone number is a required field this information is obtained only if the user continues to read the features section of the ug hence the bug here will mislead users especially new users when they follow the quick start section to run the app
| 0
|
6,895
| 10,039,269,949
|
IssuesEvent
|
2019-07-18 16:57:04
|
leighmurdick/CRT_S-Actuarial
|
https://api.github.com/repos/leighmurdick/CRT_S-Actuarial
|
opened
|
New accruals process and premium reconciliation (JP spreadsheet) by Aug 9
|
Accounting Data/Processes Portfolio Management QBR/Quarterly
|
Produce accrual with full adjustment (Kelly will then give feedback)
Need to manually accrue for MGA retro and fees
Kelly needs JP's spreadsheet to reconcile premium received
DC project (Walter/Travis) to ingest loan files?
JP tied Freddie - Fannie calc easier
|
1.0
|
New accruals process and premium reconciliation (JP spreadsheet) by Aug 9 - Produce accrual with full adjustment (Kelly will then give feedback)
Need to manually accrue for MGA retro and fees
Kelly needs JP's spreadsheet to reconcile premium received
DC project (Walter/Travis) to ingest loan files?
JP tied Freddie - Fannie calc easier
|
process
|
new accruals process and premium reconciliation jp spreadsheet by aug produce accrual with full adjustment kelly will then give feedback need to manually accrue for mga retro and fees kelly needs jp s spreadsheet to reconcile premium received dc project walter travis to ingest loan files jp tied freddie fannie calc easier
| 1
|
247,036
| 26,670,305,002
|
IssuesEvent
|
2023-01-26 09:41:41
|
eclipse-kanto/kanto
|
https://api.github.com/repos/eclipse-kanto/kanto
|
closed
|
Move to the last Go version
|
feature security
|
Currently, the Kanto release workflow is using: 1.17.2. There are new major versions with a bunch of features and bug fixes:
- [1.18](https://go.dev/doc/devel/release#go1.18)
- [1.19](https://go.dev/doc/devel/release#go1.19)
Reminder: All notice files on all components have to be updated to this new version.
Latest version to use - **1.19.4**.
**Tasks:**
- #162
- eclipse-kanto/suite-connector#50
- eclipse-kanto/suite-bootstrapping#21
- eclipse-kanto/local-digital-twins#15
- eclipse-kanto/container-management#74
- eclipse-kanto/software-update#47
- eclipse-kanto/file-upload#59
- eclipse-kanto/file-backup#30
- eclipse-kanto/system-metrics#15
- eclipse-kanto/azure-connector#28
- #193
|
True
|
Move to the last Go version - Currently, the Kanto release workflow is using: 1.17.2. There are new major versions with a bunch of features and bug fixes:
- [1.18](https://go.dev/doc/devel/release#go1.18)
- [1.19](https://go.dev/doc/devel/release#go1.19)
Reminder: All notice files on all components have to be updated to this new version.
Latest version to use - **1.19.4**.
**Tasks:**
- #162
- eclipse-kanto/suite-connector#50
- eclipse-kanto/suite-bootstrapping#21
- eclipse-kanto/local-digital-twins#15
- eclipse-kanto/container-management#74
- eclipse-kanto/software-update#47
- eclipse-kanto/file-upload#59
- eclipse-kanto/file-backup#30
- eclipse-kanto/system-metrics#15
- eclipse-kanto/azure-connector#28
- #193
|
non_process
|
move to the last go version currently the kanto release workflow is using there are new major versions with a bunch of features and bug fixes reminder all notice files on all components have to be updated to this new version latest version to use tasks eclipse kanto suite connector eclipse kanto suite bootstrapping eclipse kanto local digital twins eclipse kanto container management eclipse kanto software update eclipse kanto file upload eclipse kanto file backup eclipse kanto system metrics eclipse kanto azure connector
| 0
|
27,315
| 11,470,793,095
|
IssuesEvent
|
2020-02-09 06:21:35
|
scriptex/webpack-mpa
|
https://api.github.com/repos/scriptex/webpack-mpa
|
closed
|
CVE-2020-8116 (Medium) detected in dot-prop-4.2.0.tgz
|
security vulnerability
|
## CVE-2020-8116 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/webpack-mpa/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/webpack-mpa/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- postcss-merge-rules-4.0.3.tgz (Root Library)
- postcss-selector-parser-3.1.1.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/webpack-mpa/commit/365a011029592107ca187b27a584fc3c92713864">365a011029592107ca187b27a584fc3c92713864</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-8116 (Medium) detected in dot-prop-4.2.0.tgz - ## CVE-2020-8116 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/webpack-mpa/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/webpack-mpa/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- postcss-merge-rules-4.0.3.tgz (Root Library)
- postcss-selector-parser-3.1.1.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/webpack-mpa/commit/365a011029592107ca187b27a584fc3c92713864">365a011029592107ca187b27a584fc3c92713864</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in dot prop tgz cve medium severity vulnerability vulnerable library dot prop tgz get set or delete a property from a nested object using a dot path library home page a href path to dependency file tmp ws scm webpack mpa package json path to vulnerable library tmp ws scm webpack mpa node modules dot prop package json dependency hierarchy postcss merge rules tgz root library postcss selector parser tgz x dot prop tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability in dot prop npm package version and earlier allows an attacker to add arbitrary properties to javascript language constructs such as objects publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution dot prop step up your open source security game with whitesource
| 0
|
19,373
| 25,499,188,894
|
IssuesEvent
|
2022-11-28 01:16:23
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
Replace Maven with Gradle in CI
|
enhancement process
|
### Problem
As a developer, I'd like to consolidate build tools to reduce dependabot PRs, reduce newcomer confusion, and take advantage of the benefits that Gradle provides over Maven.
### Solution
* Remove separate module workflows in favor of consolidated Gradle matrix workflow
* Replace use of Maven in CI with Gradle
* Run Sonar in Gradle
* Update docs to reference Gradle and not Maven
### Alternatives
_No response_
|
1.0
|
Replace Maven with Gradle in CI - ### Problem
As a developer, I'd like to consolidate build tools to reduce dependabot PRs, reduce newcomer confusion, and take advantage of the benefits that Gradle provides over Maven.
### Solution
* Remove separate module workflows in favor of consolidated Gradle matrix workflow
* Replace use of Maven in CI with Gradle
* Run Sonar in Gradle
* Update docs to reference Gradle and not Maven
### Alternatives
_No response_
|
process
|
replace maven with gradle in ci problem as a developer i d like to consolidate build tools to reduce dependabot prs reduce newcomer confusion and take advantage of the benefits that gradle provides over maven solution remove separate module workflows in favor of consolidated gradle matrix workflow replace use of maven in ci with gradle run sonar in gradle update docs to reference gradle and not maven alternatives no response
| 1
|
487,075
| 14,018,791,360
|
IssuesEvent
|
2020-10-29 17:17:19
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Introduce ordering on column for table widget
|
Accepted Priority: High Project: C040 Widgets
|
Enable the sorting tool (ascending / descending) of records in table widget (currently not enabled)

as it is currently working for the Attribute Table tool

|
1.0
|
Introduce ordering on column for table widget - Enable the sorting tool (ascending / descending) of records in table widget (currently not enabled)

as it is currently working for the Attribute Table tool

|
non_process
|
introduce ordering on column for table widget enable the sorting tool ascending descending of records in table widget currently not enabled as it is currently working for the attribute table tool
| 0
|
159,606
| 20,074,933,477
|
IssuesEvent
|
2022-02-04 11:36:14
|
finos/cla-bot
|
https://api.github.com/repos/finos/cla-bot
|
opened
|
CVE-2021-23383 (High) detected in handlebars-4.4.2.tgz
|
security vulnerability
|
## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.4.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/finos/cla-bot/git/commits/d3311528b3e24dc4ade3886abe8f8f9a4a993f59">d3311528b3e24dc4ade3886abe8f8f9a4a993f59</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: 4.7.7</p>
</p>
</details>
<p></p>
|
True
|
CVE-2021-23383 (High) detected in handlebars-4.4.2.tgz - ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.4.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/finos/cla-bot/git/commits/d3311528b3e24dc4ade3886abe8f8f9a4a993f59">d3311528b3e24dc4ade3886abe8f8f9a4a993f59</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: 4.7.7</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file package json path to vulnerable library node modules handlebars package json dependency hierarchy x handlebars tgz vulnerable library found in head commit a href vulnerability details the package handlebars before are vulnerable to prototype pollution when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
8,409
| 11,575,537,751
|
IssuesEvent
|
2020-02-21 09:58:26
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Custom Schedules or Post-Script doesn't work
|
Pri1 automation/svc cxp process-automation/subsvc product-question triaged
|
When creating a custom schedule or running the StartStop Parent runbook with the stop action as a post script, the $WhatIf value is inputted as a string rather than a bool causing failure.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
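The underlying pitfall is that a non-empty string such as "False" is truthy when treated directly as a boolean. The affected runbooks are PowerShell, so the Python snippet below is only a language-neutral illustration of the failure mode and of a defensive parser (`parse_bool` is a hypothetical helper, not part of the solution's code):
```python
# Illustration of the string-vs-bool pitfall described above.

def parse_bool(value):
    """Coerce a user-supplied value to bool, treating 'false'/'0'/'no' as False."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("true", "1", "yes")

# Naive truthiness gets this wrong: any non-empty string is truthy.
assert bool("False") is True
# A defensive parser recovers the intended value.
assert parse_bool("False") is False
assert parse_bool("True") is True
```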
|
1.0
|
Custom Schedules or Post-Script doesn't work - When creating a custom schedule or running the StartStop Parent runbook with the stop action as a post script, the $WhatIf value is inputted as a string rather than a bool causing failure.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
custom schedules or post script doesn t work when creating a custom schedule or running the startstop parent runbook with the stop action as a post script the whatif value is inputted as a string rather than a bool causing failure document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
21,073
| 28,017,779,645
|
IssuesEvent
|
2023-03-28 01:07:06
|
HaoNguyenNhat/CNPMNC
|
https://api.github.com/repos/HaoNguyenNhat/CNPMNC
|
opened
|
Yêu cầu hỗ trợ
|
8 point DG HUY NN HÀO IN PROCESS
|
Là khách hàng chính thức tôi muốn gửi yêu cầu hỗ trợ trực tuyến để được giải đáp thắc mắc và giải quyết vấn đề kịp thời
|
1.0
|
Yêu cầu hỗ trợ - Là khách hàng chính thức tôi muốn gửi yêu cầu hỗ trợ trực tuyến để được giải đáp thắc mắc và giải quyết vấn đề kịp thời
|
process
|
yêu cầu hỗ trợ là khách hàng chính thức tôi muốn gửi yêu cầu hỗ trợ trực tuyến để được giải đáp thắc mắc và giải quyết vấn đề kịp thời
| 1
|
18,121
| 24,151,418,567
|
IssuesEvent
|
2022-09-22 01:27:50
|
neuropsychology/NeuroKit
|
https://api.github.com/repos/neuropsychology/NeuroKit
|
reopened
|
Generic function for 1/f noise
|
wontfix signal processing :chart_with_upwards_trend: Complexity/Chaos :bomb: inactive ๐ป
|
Would be good to have a generic `signal_1f` function to measure 1/f noise in particular for eeg and ecg signals (many papers have been published on this e.g., [here](https://www.jneurosci.org/content/jneuro/35/38/13257.full.pdf))
To-do:
- [ ] Dissociate the functionalities for our existing [fractal_psdslope](https://github.com/neuropsychology/NeuroKit/blob/master/neurokit2/complexity/fractal_psdslope.py#L8) - slope itself should be computed in `signal_1f` and this can then be embedded into `fractal_psdslope` which converts to an estimate of fractal dimension
- [ ] Additional parameter considerations for computing 1/f (ref [FOOOF](https://github.com/fooof-tools/fooof) tool)
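As a rough sketch of what such a `signal_1f` helper could compute, the snippet below estimates the aperiodic (1/f) slope from a Welch power spectrum with a simple log-log linear fit. This is not an existing NeuroKit API, it assumes NumPy and SciPy are available, and it deliberately skips the FOOOF-style step of modelling and removing periodic peaks before fitting:
```python
# Minimal sketch: estimate the 1/f (aperiodic) slope of a signal from its PSD.
import numpy as np
from scipy import signal as sp_signal

def one_over_f_slope(x, sampling_rate, fmin=1.0, fmax=40.0):
    freqs, psd = sp_signal.welch(x, fs=sampling_rate, nperseg=1024)
    mask = (freqs >= fmin) & (freqs <= fmax)
    # Fit log10(power) = slope * log10(freq) + intercept over the chosen band.
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return slope  # more negative -> steeper 1/f decay

# Brown noise (integrated white noise) should give a slope near -2.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(10_000))
print(one_over_f_slope(x, sampling_rate=1000))
```
A FOOOF-style implementation would additionally model and remove periodic peaks before fitting the aperiodic component.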
|
1.0
|
Generic function for 1/f noise - Would be good to have a generic `signal_1f` function to measure 1/f noise in particular for eeg and ecg signals (many papers have been published on this e.g., [here](https://www.jneurosci.org/content/jneuro/35/38/13257.full.pdf))
To-do:
- [ ] Dissociate the functionalities for our existing [fractal_psdslope](https://github.com/neuropsychology/NeuroKit/blob/master/neurokit2/complexity/fractal_psdslope.py#L8) - slope itself should be computed in `signal_1f` and this can then be embedded into `fractal_psdslope` which converts to an estimate of fractal dimension
- [ ] Additional parameter considerations for computing 1/f (ref [FOOOF](https://github.com/fooof-tools/fooof) tool)
|
process
|
generic function for f noise would be good to have a generic signal function to measure f noise in particular for eeg and ecg signals many papers have been published on this e g to do dissociate the functionalities for our existing slope itself should be computed in signal and this can then be embedded into fractal psdslope which converts to an estimate of fractal dimension additional parameter considerations for computing f ref tool
| 1
|
96,762
| 12,156,481,009
|
IssuesEvent
|
2020-04-25 17:31:09
|
COVID19Tracking/website
|
https://api.github.com/repos/COVID19Tracking/website
|
opened
|
Responsive Navbar Seems weird and crowded.
|
DESIGN
|
The mobile navbar seems so crowded and basic. It just doesn't match with the whole layout of the site.
I suggest we either add some space between the navbar items or create a vertical hamburger menu instead.
#### Screenshots
<img src="https://user-images.githubusercontent.com/54989142/80286262-f4572100-8747-11ea-8639-23883d91fd1b.png" width="45%"> <img src="https://user-images.githubusercontent.com/54989142/80286243-d5588f00-8747-11ea-8c2e-35876e231bbe.png" width="45%">
|
1.0
|
Responsive Navbar Seems weird and crowded. - The mobile navbar seems so crowded and basic. It just doesn't match with the whole layout of the site.
I suggest we either add some space between the navbar items or create a vertical hamburger menu instead.
#### Screenshots
<img src="https://user-images.githubusercontent.com/54989142/80286262-f4572100-8747-11ea-8639-23883d91fd1b.png" width="45%"> <img src="https://user-images.githubusercontent.com/54989142/80286243-d5588f00-8747-11ea-8c2e-35876e231bbe.png" width="45%">
|
non_process
|
responsive navbar seems weird and crowded the mobile navbar seems so crowded and basic it just doesn t match with the whole layout of the site i suggest we either add some space between the navbar items or create a vertical hamburger menu instead screenshots nbsp nbsp
| 0
|
222,424
| 17,069,249,319
|
IssuesEvent
|
2021-07-07 11:14:56
|
telerik/kendo-react
|
https://api.github.com/repos/telerik/kendo-react
|
closed
|
Editor raises an error on enter key
|
documentation pkg:editor
|
The error is "RangeError: Can not convert <> to a Fragment (looks like multiple versions of prosemirror-model were loaded)".
Currently, it happens on the [KendoReact website](https://www.telerik.com/kendo-react-ui/components/editor/plugins/#toc-input-rules). If you open an example in StackBlitz, download it and run it locally, you will see that it works as expected. The error happens when different versions of [ProseMirror packages](https://www.telerik.com/kendo-react-ui/components/editor/get-started/#toc-dependencies) are loaded.
To prevent this error in your app, use the ProseMirror stuff from the '@progress/kendo-editor-common' package and install the same version as it is pointed in the editor's package.json file. For example, [for version 4.4.0., you should install version 1.1.5. of '@progress/kendo-editor-common'](https://unpkg.com/browse/@progress/kendo-react-editor@4.4.0/package.json). If you don't need to use the ProseMirror stuff to customize or extend the editor's functionality, you will not get such an error.
If you use yarn, define all the [ProseMirror packages](https://unpkg.com/browse/@progress/kendo-editor-common@1.1.6/package.json) versions in the [`resolutions`](https://classic.yarnpkg.com/en/docs/selective-version-resolutions/) section in package.json file.

|
1.0
|
Editor raises an error on enter key - The error is "RangeError: Can not convert <> to a Fragment (looks like multiple versions of prosemirror-model were loaded)".
Currently, it happens on the [KendoReact website](https://www.telerik.com/kendo-react-ui/components/editor/plugins/#toc-input-rules). If you open an example in StackBlitz, download it and run it locally, you will see that it works as expected. The error happens when different versions of [ProseMirror packages](https://www.telerik.com/kendo-react-ui/components/editor/get-started/#toc-dependencies) are loaded.
To prevent this error in your app, use the ProseMirror stuff from the '@progress/kendo-editor-common' package and install the same version as it is pointed in the editor's package.json file. For example, [for version 4.4.0., you should install version 1.1.5. of '@progress/kendo-editor-common'](https://unpkg.com/browse/@progress/kendo-react-editor@4.4.0/package.json). If you don't need to use the ProseMirror stuff to customize or extend the editor's functionality, you will not get such an error.
If you use yarn, define all the [ProseMirror packages](https://unpkg.com/browse/@progress/kendo-editor-common@1.1.6/package.json) versions in the [`resolutions`](https://classic.yarnpkg.com/en/docs/selective-version-resolutions/) section in package.json file.

|
non_process
|
editor raises an error on enter key the error is rangeerror can not convert to a fragment looks like multiple versions of prosemirror model were loaded currently it happens on the if you open an example in stackblitz download it and run it locally you will see that it works as expected the error happens when different versions of are loaded to prevent this error in your app use the prosemirror stuff from the progress kendo editor common package and install the same version as it is pointed in the editor s package json file for example if you don t need to use the prosemirror stuff to customize or extend the editor s functionality you will not get such an error if you use yarn define all the versions in the section in package json file
| 0
|
343,131
| 30,653,288,146
|
IssuesEvent
|
2023-07-25 10:20:00
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix jax_numpy_creation.test_jax_csingle
|
JAX Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix jax_numpy_creation.test_jax_csingle - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5497819199/jobs/10018855036"><img src=https://img.shields.io/badge/-success-success></a>
|
non_process
|
fix jax numpy creation test jax csingle tensorflow a href src torch a href src paddle a href src numpy a href src jax a href src
| 0
|
10,093
| 13,044,162,070
|
IssuesEvent
|
2020-07-29 03:47:29
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AddDateAndDuration` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AddDateAndDuration` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
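For contributors new to this expression, the intended behaviour is roughly "add a time duration to a datetime, with SQL-style NULL propagation". The sketch below shows that idea in Python purely as a conceptual reference; the actual port is a Rust RPN expression in the coprocessor, and MySQL/TiDB edge cases (overflow, fractional seconds, invalid dates) are not handled here:
```python
# Conceptual illustration only; not the TiDB/TiKV implementation.
from datetime import datetime, timedelta

def add_date_and_duration(dt, duration):
    if dt is None or duration is None:
        return None  # NULL in, NULL out
    return dt + duration

print(add_date_and_duration(datetime(2020, 7, 28, 23, 30), timedelta(hours=2)))
# 2020-07-29 01:30:00
print(add_date_and_duration(None, timedelta(hours=2)))
# None
```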
|
2.0
|
UCP: Migrate scalar function `AddDateAndDuration` from TiDB -
## Description
Port the scalar function `AddDateAndDuration` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function adddateandduration from tidb description port the scalar function adddateandduration from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
264,163
| 23,099,669,271
|
IssuesEvent
|
2022-07-27 00:24:11
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Teste de generalizacao para a tag Servidores - Registro da remuneração - Padre Carvalho
|
generalization test development template-Síntese tecnologia informatica subtag-Dados de Remuneração tag-Servidores
|
DoD: Realizar o teste de Generalização do validador da tag Servidores - Registro da remuneração para o Município de Padre Carvalho.
|
1.0
|
Teste de generalizacao para a tag Servidores - Registro da remuneração - Padre Carvalho - DoD: Realizar o teste de Generalização do validador da tag Servidores - Registro da remuneração para o Município de Padre Carvalho.
|
non_process
|
teste de generalizacao para a tag servidores registro da remuneração padre carvalho dod realizar o teste de generalização do validador da tag servidores registro da remuneração para o município de padre carvalho
| 0
|
508,494
| 14,701,368,389
|
IssuesEvent
|
2021-01-04 11:45:39
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.google.com - see bug description
|
browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
|
<!-- @browser: Firefox 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64894 -->
**URL**: https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fclient%3Dfirefox-b-d%26q%3Ddhl&q=EgRvd7clGPmry_8FIhkA8aeDS9jOAkzi579DcaphJ577ULR_0iNZMgFy
**Browser / Version**: Firefox 85.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: recptcha
**Steps to Reproduce**:
im not a robot error appear
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/5387120d-3dc7-4f79-ab35-5eedf72e8891.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201220193140</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/caac1b07-ac2b-4243-8251-fa040f27eee1)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.google.com - see bug description - <!-- @browser: Firefox 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64894 -->
**URL**: https://www.google.com/sorry/index?continue=https://www.google.com/search%3Fclient%3Dfirefox-b-d%26q%3Ddhl&q=EgRvd7clGPmry_8FIhkA8aeDS9jOAkzi579DcaphJ577ULR_0iNZMgFy
**Browser / Version**: Firefox 85.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: recptcha
**Steps to Reproduce**:
im not a robot error appear
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/5387120d-3dc7-4f79-ab35-5eedf72e8891.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201220193140</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/caac1b07-ac2b-4243-8251-fa040f27eee1)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description recptcha steps to reproduce im not a robot error appear view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
240,999
| 7,807,945,905
|
IssuesEvent
|
2018-06-11 18:32:36
|
michalkowal/Cake.MkDocs
|
https://api.github.com/repos/michalkowal/Cake.MkDocs
|
closed
|
Recommended changes resulting from automated audit
|
Priority: Medium Status: In Progress Type: Maintenance
|
We performed an automated audit of your Cake addin and found that it does not follow all the best practices.
We encourage you to make the following modifications:
- [x] You are currently referencing Cake.Core 0.27.2. Please upgrade to 0.28.0
- [ ] The nuget package for your addin should use the cake-contrib icon. Specifically, your addin's `.csproj` should have a line like this: `<PackageIconUrl>https://cdn.rawgit.com/cake-contrib/graphics/a5cf0f881c390650144b2243ae551d5b9f836196/png/cake-contrib-medium.png</PackageIconUrl>`.
Apologies if this is already being worked on, or if there are existing open issues, this issue was created based on what is currently published for this package on NuGet.org and in the project on github.
|
1.0
|
Recommended changes resulting from automated audit - We performed an automated audit of your Cake addin and found that it does not follow all the best practices.
We encourage you to make the following modifications:
- [x] You are currently referencing Cake.Core 0.27.2. Please upgrade to 0.28.0
- [ ] The nuget package for your addin should use the cake-contrib icon. Specifically, your addin's `.csproj` should have a line like this: `<PackageIconUrl>https://cdn.rawgit.com/cake-contrib/graphics/a5cf0f881c390650144b2243ae551d5b9f836196/png/cake-contrib-medium.png</PackageIconUrl>`.
Apologies if this is already being worked on, or if there are existing open issues, this issue was created based on what is currently published for this package on NuGet.org and in the project on github.
|
non_process
|
recommended changes resulting from automated audit we performed an automated audit of your cake addin and found that it does not follow all the best practices we encourage you to make the following modifications you are currently referencing cake core please upgrade to the nuget package for your addin should use the cake contrib icon specifically your addin s csproj should have a line like this apologies if this is already being worked on or if there are existing open issues this issue was created based on what is currently published for this package on nuget org and in the project on github
| 0
|
9,763
| 12,744,389,169
|
IssuesEvent
|
2020-06-26 12:24:54
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
PS equivalent for portal steps
|
Pri2 automation/svc cxp process-automation/subsvc product-question triaged
|
[Enter feedback here]
https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks#use-runbook-authentication-with-run-as-account
Is there a Powershell equivalent for executing the above step.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run Azure Automation runbooks on a Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks#start-a-runbook-on-a-hybrid-runbook-worker)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
PS equivalent for portal steps -
[Enter feedback here]
https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks#use-runbook-authentication-with-run-as-account
Is there a Powershell equivalent for executing the above step.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run Azure Automation runbooks on a Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks#start-a-runbook-on-a-hybrid-runbook-worker)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
ps equivalent for portal steps is there a powershell equivalent for executing the above step document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
315,390
| 27,069,826,505
|
IssuesEvent
|
2023-02-14 05:28:57
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
[Testerina] Jacoco code coverage throws a MethodTooLarge exception for certain services
|
Type/Bug Team/DevTools Area/TestFramework
|
**Description:**
This is thrown from the ballerina project after the tests are executed at the code coverage generation phase.
```
java.lang.instrument.IllegalClassFormatException: Error while instrumenting cookie/logging/0_1_0/$value$Service$$service$_2.
at org.jacoco.agent.rt.internal_f3994fa.CoverageTransformer.transform(CoverageTransformer.java:94)
at java.instrument/java.lang.instrument.ClassFileTransformer.transform(ClassFileTransformer.java:246)
at java.instrument/sun.instrument.TransformerManager.transform(TransformerManager.java:188)
at java.instrument/sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java:563)
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1016)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)
at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:800)
at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:698)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:621)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:579)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_6(logging:61)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_5(logging:12)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_4(logging:31)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_3(logging:29)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_2(logging:22)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_1(logging:15)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_0(logging:38)
at cookie.logging.0_1_0.$_init.$cookie_logging_0_1_0__init_(logging:18)
at cookie.logging.0_1_0.$_init.$moduleInit(logging)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.ballerinalang.test.runtime.entity.TesterinaFunction.lambda$runOnSchedule$0(TesterinaFunction.java:109)
at io.ballerina.runtime.scheduling.SchedulerItem.execute(Scheduler.java:501)
at io.ballerina.runtime.scheduling.Scheduler.run(Scheduler.java:277)
at io.ballerina.runtime.scheduling.Scheduler.runSafely(Scheduler.java:245)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.IOException: Error while instrumenting cookie/logging/0_1_0/$value$Service$$service$_2.
at org.jacoco.agent.rt.internal_f3994fa.core.instr.Instrumenter.instrumentError(Instrumenter.java:160)
at org.jacoco.agent.rt.internal_f3994fa.core.instr.Instrumenter.instrument(Instrumenter.java:110)
at org.jacoco.agent.rt.internal_f3994fa.CoverageTransformer.transform(CoverageTransformer.java:92)
... 30 more
Caused by: org.jacoco.agent.rt.internal_f3994fa.asm.MethodTooLargeException: Method too large: cookie/logging/0_1_0/$value$Service$$service$_2.getZippedLogs (Lio/ballerina/runtime/scheduling/Strand;Lio/ballerina/runtime/api/values/BObject;ZLio/ballerina/runtime/api/values/BObject;ZLio/ballerina/runtime/api/values/BString;Z)Ljava/lang/Object;
at org.jacoco.agent.rt.internal_f3994fa.asm.MethodWriter.computeMethodInfoSize(MethodWriter.java:2087)
at org.jacoco.agent.rt.internal_f3994fa.asm.ClassWriter.toByteArray(ClassWriter.java:496)
at org.jacoco.agent.rt.internal_f3994fa.core.instr.Instrumenter.instrument(Instrumenter.java:91)
at org.jacoco.agent.rt.internal_f3994fa.core.instr.Instrumenter.instrument(Instrumenter.java:108)
... 31 more
```
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can't assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can't assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
1.0
|
[Testerina] Jacoco code coverage throws a MethodTooLarge exception for certain services - **Description:**
This is thrown from the ballerina project after the tests are executed at the code coverage generation phase.
```
java.lang.instrument.IllegalClassFormatException: Error while instrumenting cookie/logging/0_1_0/$value$Service$$service$_2.
at org.jacoco.agent.rt.internal_f3994fa.CoverageTransformer.transform(CoverageTransformer.java:94)
at java.instrument/java.lang.instrument.ClassFileTransformer.transform(ClassFileTransformer.java:246)
at java.instrument/sun.instrument.TransformerManager.transform(TransformerManager.java:188)
at java.instrument/sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java:563)
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1016)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)
at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:800)
at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:698)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:621)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:579)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_6(logging:61)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_5(logging:12)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_4(logging:31)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_3(logging:29)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_2(logging:22)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_1(logging:15)
at cookie.logging.0_1_0.logging.$cookie_logging_0_1_0__init_0(logging:38)
at cookie.logging.0_1_0.$_init.$cookie_logging_0_1_0__init_(logging:18)
at cookie.logging.0_1_0.$_init.$moduleInit(logging)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.ballerinalang.test.runtime.entity.TesterinaFunction.lambda$runOnSchedule$0(TesterinaFunction.java:109)
at io.ballerina.runtime.scheduling.SchedulerItem.execute(Scheduler.java:501)
at io.ballerina.runtime.scheduling.Scheduler.run(Scheduler.java:277)
at io.ballerina.runtime.scheduling.Scheduler.runSafely(Scheduler.java:245)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.IOException: Error while instrumenting cookie/logging/0_1_0/$value$Service$$service$_2.
at org.jacoco.agent.rt.internal_f3994fa.core.instr.Instrumenter.instrumentError(Instrumenter.java:160)
at org.jacoco.agent.rt.internal_f3994fa.core.instr.Instrumenter.instrument(Instrumenter.java:110)
at org.jacoco.agent.rt.internal_f3994fa.CoverageTransformer.transform(CoverageTransformer.java:92)
... 30 more
Caused by: org.jacoco.agent.rt.internal_f3994fa.asm.MethodTooLargeException: Method too large: cookie/logging/0_1_0/$value$Service$$service$_2.getZippedLogs (Lio/ballerina/runtime/scheduling/Strand;Lio/ballerina/runtime/api/values/BObject;ZLio/ballerina/runtime/api/values/BObject;ZLio/ballerina/runtime/api/values/BString;Z)Ljava/lang/Object;
at org.jacoco.agent.rt.internal_f3994fa.asm.MethodWriter.computeMethodInfoSize(MethodWriter.java:2087)
at org.jacoco.agent.rt.internal_f3994fa.asm.ClassWriter.toByteArray(ClassWriter.java:496)
at org.jacoco.agent.rt.internal_f3994fa.core.instr.Instrumenter.instrument(Instrumenter.java:91)
at org.jacoco.agent.rt.internal_f3994fa.core.instr.Instrumenter.instrument(Instrumenter.java:108)
... 31 more
```
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can't assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can't assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
non_process
|
jacoco code coverage throws a methodtoolarge exception for certain services description this is thrown from the ballerina project after the tests are executed at the code coverage generation phase java lang instrument illegalclassformatexception error while instrumenting cookie logging value service service at org jacoco agent rt internal coveragetransformer transform coveragetransformer java at java instrument java lang instrument classfiletransformer transform classfiletransformer java at java instrument sun instrument transformermanager transform transformermanager java at java instrument sun instrument instrumentationimpl transform instrumentationimpl java at java base java lang classloader native method at java base java lang classloader defineclass classloader java at java base java security secureclassloader defineclass secureclassloader java at java base jdk internal loader builtinclassloader defineclass builtinclassloader java at java base jdk internal loader builtinclassloader findclassonclasspathornull builtinclassloader java at java base jdk internal loader builtinclassloader loadclassornull builtinclassloader java at java base jdk internal loader builtinclassloader loadclass builtinclassloader java at java base jdk internal loader classloaders appclassloader loadclass classloaders java at java base java lang classloader loadclass classloader java at cookie logging logging cookie logging init logging at cookie logging logging cookie logging init logging at cookie logging logging cookie logging init logging at cookie logging logging cookie logging init logging at cookie logging logging cookie logging init logging at cookie logging logging cookie logging init logging at cookie logging logging cookie logging init logging at cookie logging init cookie logging init logging at cookie logging init moduleinit logging at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org ballerinalang test runtime entity testerinafunction lambda runonschedule testerinafunction java at io ballerina runtime scheduling scheduleritem execute scheduler java at io ballerina runtime scheduling scheduler run scheduler java at io ballerina runtime scheduling scheduler runsafely scheduler java at java base java lang thread run thread java caused by java io ioexception error while instrumenting cookie logging value service service at org jacoco agent rt internal core instr instrumenter instrumenterror instrumenter java at org jacoco agent rt internal core instr instrumenter instrument instrumenter java at org jacoco agent rt internal coveragetransformer transform coveragetransformer java more caused by org jacoco agent rt internal asm methodtoolargeexception method too large cookie logging value service service getzippedlogs lio ballerina runtime scheduling strand lio ballerina runtime api values bobject zlio ballerina runtime api values bobject zlio ballerina runtime api values bstring z ljava lang object at org jacoco agent rt internal asm methodwriter computemethodinfosize methodwriter java at org jacoco agent rt internal asm classwriter tobytearray classwriter java at org jacoco agent rt internal core instr instrumenter instrument instrumenter java at org jacoco agent rt internal core instr instrumenter instrument instrumenter java 
more steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
| 0
|
177,082
| 13,683,105,144
|
IssuesEvent
|
2020-09-30 00:46:50
|
MiqueasAmorim/Pedido
|
https://api.github.com/repos/MiqueasAmorim/Pedido
|
closed
|
CT02 - (ProdutoTest) - O valor unitário do produto não pode ser negativo
|
test case
|
**Dados de entrada:**
- Nome : Borracha
- Valor unitário: -1.5
- Quantidade: 10
**Resultado esperado:**
- RuntimeException: "Valor invรกlido: -1.5"
|
1.0
|
CT02 - (ProdutoTest) - O valor unitário do produto não pode ser negativo - **Dados de entrada:**
- Nome : Borracha
- Valor unitário: -1.5
- Quantidade: 10
**Resultado esperado:**
- RuntimeException: "Valor invรกlido: -1.5"
|
non_process
|
produtotest o valor unitário do produto não pode ser negativo dados de entrada nome borracha valor unitário quantidade resultado esperado runtimeexception valor inválido
| 0
|
6,958
| 10,114,086,867
|
IssuesEvent
|
2019-07-30 18:19:39
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
New-OnPremiseHybridWorker.ps1 needs to be updated
|
Pri2 assigned-to-author automation/svc process-automation/subsvc triaged
|
The file New-OnPremiseHybridWorker.ps1 should be updated to use the new PowerShell AZ modules, it's still using AzureRM.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7b29372c-7bd9-7da2-4cff-9afbb432bccf
* Version Independent ID: 66ce101d-d21b-3fdf-be70-7f9cadc1570e
* Content: [Azure Automation Windows Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install#feedback)
* Content Source: [articles/automation/automation-windows-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-windows-hrw-install.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
1.0
|
New-OnPremiseHybridWorker.ps1 needs to be updated - The file New-OnPremiseHybridWorker.ps1 should be updated to use the new PowerShell AZ modules, it's still using AzureRM.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7b29372c-7bd9-7da2-4cff-9afbb432bccf
* Version Independent ID: 66ce101d-d21b-3fdf-be70-7f9cadc1570e
* Content: [Azure Automation Windows Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install#feedback)
* Content Source: [articles/automation/automation-windows-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-windows-hrw-install.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
process
|
new onpremisehybridworker needs to be updated the file new onpremisehybridworker should be updated to use the new powershell az modules it s still using azurerm document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
| 1
|
21,699
| 30,195,098,560
|
IssuesEvent
|
2023-07-04 19:52:39
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] [Bug] `replace-clause` is broken with joins
|
Type:Bug .Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
Failing test:
```clj
(deftest ^:parallel replace-join-test
(let [query (lib.tu/query-with-join)
[original-join] (lib/joins query)]
(is (=? {:stages [{:joins [{:lib/type :mbql/join, :alias "Cat", :fields :all}]}]}
query))
(let [new-join (lib/with-join-fields original-join :none)]
(is (=? {:stages [{:joins [{:lib/type :mbql/join, :alias "Cat", :fields :none}]}]}
(lib/replace-clause query original-join new-join))))))
```
Stacktrace:
```
1. Unhandled java.lang.IllegalArgumentException
Don't know how to create ISeq from: clojure.lang.Keyword
RT.java: 557 clojure.lang.RT/seqFrom
RT.java: 537 clojure.lang.RT/seq
core.clj: 139 clojure.core/seq
core.clj: 2709 clojure.core/some
core.clj: 2709 clojure.core/some
remove_replace.cljc: 137 metabase.lib.remove_replace$remove_replace_STAR_$fn__78158/invoke
core.cljc: 20 medley.core$find_first$fn__13159/invoke
ArrayChunk.java: 58 clojure.lang.ArrayChunk/reduce
protocols.clj: 136 clojure.core.protocols/fn
protocols.clj: 124 clojure.core.protocols/fn
protocols.clj: 19 clojure.core.protocols/fn/G
protocols.clj: 31 clojure.core.protocols/seq-reduce
protocols.clj: 75 clojure.core.protocols/fn
protocols.clj: 75 clojure.core.protocols/fn
protocols.clj: 13 clojure.core.protocols/fn/G
core.clj: 6886 clojure.core/reduce
core.clj: 6868 clojure.core/reduce
core.cljc: 20 medley.core$find_first/invokeStatic
core.cljc: 7 medley.core$find_first/invoke
remove_replace.cljc: 133 metabase.lib.remove_replace$remove_replace_STAR_/invokeStatic
remove_replace.cljc: 129 metabase.lib.remove_replace$remove_replace_STAR_/invoke
remove_replace.cljc: 192 metabase.lib.remove_replace$eval78174$replace_clause__78175/invoke
AFn.java: 165 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 662 clojure.core/apply
core.cljc: 2526 malli.core$_instrument$fn__12235/doInvoke
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 662 clojure.core/apply
core.cljc: 2543 malli.core$_instrument$fn__12223/doInvoke
RestFn.java: 457 clojure.lang.RestFn/invoke
remove_replace.cljc: 187 metabase.lib.remove_replace$eval78174$replace_clause__78175/invoke
AFn.java: 160 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 662 clojure.core/apply
core.cljc: 2526 malli.core$_instrument$fn__12235/doInvoke
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 662 clojure.core/apply
core.cljc: 2543 malli.core$_instrument$fn__12223/doInvoke
RestFn.java: 436 clojure.lang.RestFn/invoke
join_test.cljc: 601 metabase.lib.join_test$fn__147113/invokeStatic
join_test.cljc: 594 metabase.lib.join_test$fn__147113/invoke
test.clj: 244 cider.nrepl.middleware.test/test-var/fn
test.clj: 244 cider.nrepl.middleware.test/test-var
test.clj: 236 cider.nrepl.middleware.test/test-var
test.clj: 259 cider.nrepl.middleware.test/test-vars/fn/fn
test.clj: 687 clojure.test/default-fixture
test.clj: 683 clojure.test/default-fixture
test.clj: 259 cider.nrepl.middleware.test/test-vars/fn
test.clj: 687 clojure.test/default-fixture
test.clj: 683 clojure.test/default-fixture
test.clj: 256 cider.nrepl.middleware.test/test-vars
test.clj: 250 cider.nrepl.middleware.test/test-vars
test.clj: 272 cider.nrepl.middleware.test/test-ns
test.clj: 263 cider.nrepl.middleware.test/test-ns
test.clj: 283 cider.nrepl.middleware.test/test-var-query
test.clj: 276 cider.nrepl.middleware.test/test-var-query
test.clj: 321 cider.nrepl.middleware.test/handle-test-var-query-op/fn/fn
AFn.java: 152 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 1990 clojure.core/with-bindings*
core.clj: 1990 clojure.core/with-bindings*
RestFn.java: 425 clojure.lang.RestFn/invoke
test.clj: 313 cider.nrepl.middleware.test/handle-test-var-query-op/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 218 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 217 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 1589 java.lang.Thread/run
```
|
1.0
|
[MLv2] [Bug] `replace-clause` is broken with joins - Failing test:
```clj
(deftest ^:parallel replace-join-test
(let [query (lib.tu/query-with-join)
[original-join] (lib/joins query)]
(is (=? {:stages [{:joins [{:lib/type :mbql/join, :alias "Cat", :fields :all}]}]}
query))
(let [new-join (lib/with-join-fields original-join :none)]
(is (=? {:stages [{:joins [{:lib/type :mbql/join, :alias "Cat", :fields :none}]}]}
(lib/replace-clause query original-join new-join))))))
```
Stacktrace:
```
1. Unhandled java.lang.IllegalArgumentException
Don't know how to create ISeq from: clojure.lang.Keyword
RT.java: 557 clojure.lang.RT/seqFrom
RT.java: 537 clojure.lang.RT/seq
core.clj: 139 clojure.core/seq
core.clj: 2709 clojure.core/some
core.clj: 2709 clojure.core/some
remove_replace.cljc: 137 metabase.lib.remove_replace$remove_replace_STAR_$fn__78158/invoke
core.cljc: 20 medley.core$find_first$fn__13159/invoke
ArrayChunk.java: 58 clojure.lang.ArrayChunk/reduce
protocols.clj: 136 clojure.core.protocols/fn
protocols.clj: 124 clojure.core.protocols/fn
protocols.clj: 19 clojure.core.protocols/fn/G
protocols.clj: 31 clojure.core.protocols/seq-reduce
protocols.clj: 75 clojure.core.protocols/fn
protocols.clj: 75 clojure.core.protocols/fn
protocols.clj: 13 clojure.core.protocols/fn/G
core.clj: 6886 clojure.core/reduce
core.clj: 6868 clojure.core/reduce
core.cljc: 20 medley.core$find_first/invokeStatic
core.cljc: 7 medley.core$find_first/invoke
remove_replace.cljc: 133 metabase.lib.remove_replace$remove_replace_STAR_/invokeStatic
remove_replace.cljc: 129 metabase.lib.remove_replace$remove_replace_STAR_/invoke
remove_replace.cljc: 192 metabase.lib.remove_replace$eval78174$replace_clause__78175/invoke
AFn.java: 165 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 662 clojure.core/apply
core.cljc: 2526 malli.core$_instrument$fn__12235/doInvoke
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 662 clojure.core/apply
core.cljc: 2543 malli.core$_instrument$fn__12223/doInvoke
RestFn.java: 457 clojure.lang.RestFn/invoke
remove_replace.cljc: 187 metabase.lib.remove_replace$eval78174$replace_clause__78175/invoke
AFn.java: 160 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 662 clojure.core/apply
core.cljc: 2526 malli.core$_instrument$fn__12235/doInvoke
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 662 clojure.core/apply
core.cljc: 2543 malli.core$_instrument$fn__12223/doInvoke
RestFn.java: 436 clojure.lang.RestFn/invoke
join_test.cljc: 601 metabase.lib.join_test$fn__147113/invokeStatic
join_test.cljc: 594 metabase.lib.join_test$fn__147113/invoke
test.clj: 244 cider.nrepl.middleware.test/test-var/fn
test.clj: 244 cider.nrepl.middleware.test/test-var
test.clj: 236 cider.nrepl.middleware.test/test-var
test.clj: 259 cider.nrepl.middleware.test/test-vars/fn/fn
test.clj: 687 clojure.test/default-fixture
test.clj: 683 clojure.test/default-fixture
test.clj: 259 cider.nrepl.middleware.test/test-vars/fn
test.clj: 687 clojure.test/default-fixture
test.clj: 683 clojure.test/default-fixture
test.clj: 256 cider.nrepl.middleware.test/test-vars
test.clj: 250 cider.nrepl.middleware.test/test-vars
test.clj: 272 cider.nrepl.middleware.test/test-ns
test.clj: 263 cider.nrepl.middleware.test/test-ns
test.clj: 283 cider.nrepl.middleware.test/test-var-query
test.clj: 276 cider.nrepl.middleware.test/test-var-query
test.clj: 321 cider.nrepl.middleware.test/handle-test-var-query-op/fn/fn
AFn.java: 152 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 1990 clojure.core/with-bindings*
core.clj: 1990 clojure.core/with-bindings*
RestFn.java: 425 clojure.lang.RestFn/invoke
test.clj: 313 cider.nrepl.middleware.test/handle-test-var-query-op/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 218 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 217 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 1589 java.lang.Thread/run
```
|
process
|
replace clause is broken with joins failing test clj deftest parallel replace join test let query lib tu query with join lib joins query is stages query let is stages lib replace clause query original join new join stacktrace unhandled java lang illegalargumentexception don t know how to create iseq from clojure lang keyword rt java clojure lang rt seqfrom rt java clojure lang rt seq core clj clojure core seq core clj clojure core some core clj clojure core some remove replace cljc metabase lib remove replace remove replace star fn invoke core cljc medley core find first fn invoke arraychunk java clojure lang arraychunk reduce protocols clj clojure core protocols fn protocols clj clojure core protocols fn protocols clj clojure core protocols fn g protocols clj clojure core protocols seq reduce protocols clj clojure core protocols fn protocols clj clojure core protocols fn protocols clj clojure core protocols fn g core clj clojure core reduce core clj clojure core reduce core cljc medley core find first invokestatic core cljc medley core find first invoke remove replace cljc metabase lib remove replace remove replace star invokestatic remove replace cljc metabase lib remove replace remove replace star invoke remove replace cljc metabase lib remove replace replace clause invoke afn java clojure lang afn applytohelper afn java clojure lang afn applyto core clj clojure core apply core clj clojure core apply core cljc malli core instrument fn doinvoke restfn java clojure lang restfn applyto core clj clojure core apply core clj clojure core apply core cljc malli core instrument fn doinvoke restfn java clojure lang restfn invoke remove replace cljc metabase lib remove replace replace clause invoke afn java clojure lang afn applytohelper afn java clojure lang afn applyto core clj clojure core apply core clj clojure core apply core cljc malli core instrument fn doinvoke restfn java clojure lang restfn applyto core clj clojure core apply core clj clojure core apply core cljc malli core instrument fn doinvoke restfn java clojure lang restfn invoke join test cljc metabase lib join test fn invokestatic join test cljc metabase lib join test fn invoke test clj cider nrepl middleware test test var fn test clj cider nrepl middleware test test var test clj cider nrepl middleware test test var test clj cider nrepl middleware test test vars fn fn test clj clojure test default fixture test clj clojure test default fixture test clj cider nrepl middleware test test vars fn test clj clojure test default fixture test clj clojure test default fixture test clj cider nrepl middleware test test vars test clj cider nrepl middleware test test vars test clj cider nrepl middleware test test ns test clj cider nrepl middleware test test ns test clj cider nrepl middleware test test var query test clj cider nrepl middleware test test var query test clj cider nrepl middleware test handle test var query op fn fn afn java clojure lang afn applytohelper afn java clojure lang afn applyto core clj clojure core apply core clj clojure core with bindings core clj clojure core with bindings restfn java clojure lang restfn invoke test clj cider nrepl middleware test handle test var query op fn afn java clojure lang afn run session clj nrepl middleware session session exec main loop fn session clj nrepl middleware session session exec main loop afn java clojure lang afn run thread java java lang thread run
| 1
|
476,516
| 13,745,938,126
|
IssuesEvent
|
2020-10-06 04:21:10
|
AY2021S1-CS2113T-F12-3/tp
|
https://api.github.com/repos/AY2021S1-CS2113T-F12-3/tp
|
closed
|
Add JSON parsing for questions loaded from file
|
priority.High type.Task
|
The file that stores questions in #8 can be a JSON file which will be parsed and put into the topics, questions, answers and hints objects.
|
1.0
|
Add JSON parsing for questions loaded from file - The file that stores questions in #8 can be a JSON file which will be parsed and put into the topics, questions, answers and hints objects.
|
non_process
|
add json parsing for questions loaded from file the file that stores questions in can be a json file which will be parsed and put into the topics questions answers and hints objects
| 0
|
522,917
| 15,169,581,154
|
IssuesEvent
|
2021-02-12 21:25:16
|
MaowImpl/Optionals
|
https://api.github.com/repos/MaowImpl/Optionals
|
closed
|
@Optional attributes don't support arrays
|
annotation enhancement javac low priority
|
Currently, the attributes in `@Optional` do not include any arrays. This will require a bit more condition checking in the Javac side of things, but it seems doable for now.
|
1.0
|
@Optional attributes don't support arrays - Currently, the attributes in `@Optional` do not include any arrays. This will require a bit more condition checking in the Javac side of things, but it seems doable for now.
|
non_process
|
optional attributes don t support arrays currently the attributes in optional do not include any arrays this will require a bit more condition checking in the javac side of things but it seems doable for now
| 0
|
169,578
| 6,411,882,065
|
IssuesEvent
|
2017-08-08 00:44:51
|
projectcalico/calico
|
https://api.github.com/repos/projectcalico/calico
|
reopened
|
General content improvements (style, consistency etc.)
|
area/docs/ux priority/P2 size/L
|
A lot of our docs are authored in subtly different ways. Would be good to go through the entire docs and provide a more common use of MD directives.
For example:
- [ ] Consistent use of numbered headings when describing a set of instructions
- [ ] Remove the clickable links and replace with inline bold links at the start of the line (e.g. see k8s AWS install instructions)
- [x] calico/node environment table has extra column
- [ ] Index for the calico integrations page has a bulleted list that duplicates the LHS menu - seems like a maintenance nightmare.
- [ ] usage and reference index pages need beefing up with some decent text
- [ ] mesos demo is actually an installation option (vagrant) and numbering is off: 1, 1.2, 3
- [ ] intro page is text heavy
- [ ] Calico over ethernet fabrics page talks about the document being a tech note.
- [ ] docker overview duplicates the side bar menu. should simplify
- [ ] Open stack guide uses Part 0, Part 1 etc... just use normal section numbering
- [ ] big headings in usage: external connectivity
|
1.0
|
General content improvements (style, consistency etc.) - A lot of our docs are authored in subtly different ways. Would be good to go through the entire docs and provide a more common use of MD directives.
For example:
- [ ] Consistent use of numbered headings when describing a set of instructions
- [ ] Remove the clickable links and replace with inline bold links at the start of the line (e.g. see k8s AWS install instructions)
- [x] calico/node environment table has extra column
- [ ] Index for the calico integrations page has a bulleted list that duplicates the LHS menu - seems like a maintenance nightmare.
- [ ] usage and reference index pages need beefing up with some decent text
- [ ] mesos demo is actually an installation option (vagrant) and numbering is off: 1, 1.2, 3
- [ ] intro page is text heavy
- [ ] Calico over ethernet fabrics page talks about the document being a tech note.
- [ ] docker overview duplicates the side bar menu. should simplify
- [ ] Open stack guide uses Part 0, Part 1 etc... just use normal section numbering
- [ ] big headings in usage: external connectivity
|
non_process
|
general content improvements style consistency etc a lot of our docs are authored in subtly different ways would be good to go through the entire docs and provide a more common use of md directives for example consistent use of numbered headings when describing a set of instructions remove the clickable links and replace with inline bold links at the start of the line e g see aws install instructions calico node environment table has extra column index for the calico integrations page has a bulleted list that duplicates the lhs menu seems like a maintenance nightmare usage and reference index pages need beefing up with some decent text mesos demo is actually an installation option vagrant and numbering is off intro page is text heavy calico over ethernet fabrics page talks about the document being a tech note docker overview duplicates the side bar menu should simplify open stack guide uses part part etc just use normal section numbering big headings in usage external connectivity
| 0
|
55,949
| 14,074,860,158
|
IssuesEvent
|
2020-11-04 08:07:10
|
teena24/WebGoat
|
https://api.github.com/repos/teena24/WebGoat
|
opened
|
CVE-2018-14040 (Medium) detected in bootstrap-3.3.7.jar, bootstrap-3.1.1.min.js
|
security vulnerability
|
## CVE-2018-14040 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.7.jar</b>, <b>bootstrap-3.1.1.min.js</b></p></summary>
<p>
<details><summary><b>bootstrap-3.3.7.jar</b></p></summary>
<p>WebJar for Bootstrap</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to dependency file: WebGoat/webgoat-integration-tests/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar,canner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.jar** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-3.1.1.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: WebGoat/webgoat-lessons/challenge/src/main/resources/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.1.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/teena24/WebGoat/commit/b8a568f6e08fcde3c08370e69ce7236fef395ad5">b8a568f6e08fcde3c08370e69ce7236fef395ad5</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040>CVE-2018-14040</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.webjars","packageName":"bootstrap","packageVersion":"3.3.7","isTransitiveDependency":false,"dependencyTree":"org.webjars:bootstrap:3.3.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0"},{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.1.1","isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0"}],"vulnerabilityIdentifier":"CVE-2018-14040","vulnerabilityDetails":"In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2018-14040 (Medium) detected in bootstrap-3.3.7.jar, bootstrap-3.1.1.min.js - ## CVE-2018-14040 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.7.jar</b>, <b>bootstrap-3.1.1.min.js</b></p></summary>
<p>
<details><summary><b>bootstrap-3.3.7.jar</b></p></summary>
<p>WebJar for Bootstrap</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to dependency file: WebGoat/webgoat-integration-tests/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar,canner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.jar** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-3.1.1.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: WebGoat/webgoat-lessons/challenge/src/main/resources/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.1.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/teena24/WebGoat/commit/b8a568f6e08fcde3c08370e69ce7236fef395ad5">b8a568f6e08fcde3c08370e69ce7236fef395ad5</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040>CVE-2018-14040</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.webjars","packageName":"bootstrap","packageVersion":"3.3.7","isTransitiveDependency":false,"dependencyTree":"org.webjars:bootstrap:3.3.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0"},{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.1.1","isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0"}],"vulnerabilityIdentifier":"CVE-2018-14040","vulnerabilityDetails":"In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in bootstrap jar bootstrap min js cve medium severity vulnerability vulnerable libraries bootstrap jar bootstrap min js bootstrap jar webjar for bootstrap library home page a href path to dependency file webgoat webgoat integration tests pom xml path to vulnerable library home wss scanner repository org webjars bootstrap bootstrap jar canner repository org webjars bootstrap bootstrap jar dependency hierarchy x bootstrap jar vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library webgoat webgoat lessons challenge src main resources js bootstrap min js dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch develop vulnerability details in bootstrap before xss is possible in the collapse data parent attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org webjars npm bootstrap org webjars bootstrap isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in bootstrap before xss is possible in the collapse data parent attribute vulnerabilityurl
| 0
|
4,542
| 7,374,707,943
|
IssuesEvent
|
2018-03-13 21:13:56
|
KantaraInitiative/wg-uma
|
https://api.github.com/repos/KantaraInitiative/wg-uma
|
closed
|
Update the Release Notes for UMA 2.0
|
V2.0 process
|
Add a new section to the Release Notes covering the changes in UMA 2.0 by the time it's published:
http://kantarainitiative.org/confluence/display/uma/UMA+Release+Notes
|
1.0
|
Update the Release Notes for UMA 2.0 - Add a new section to the Release Notes covering the changes in UMA 2.0 by the time it's published:
http://kantarainitiative.org/confluence/display/uma/UMA+Release+Notes
|
process
|
update the release notes for uma add a new section to the release notes covering the changes in uma by the time it s published
| 1
|
448
| 2,883,308,485
|
IssuesEvent
|
2015-06-11 11:21:38
|
kvakulo/Switcheroo
|
https://api.github.com/repos/kvakulo/Switcheroo
|
closed
|
SmartGit window is missing from the window list if a child window is opened
|
bug in process
|
I think I saw SmartGit window on one of your screenshots, which is good, because this means you can easily reproduce the bug ;)
1. Open SmartGit
2. Open Switcheroo to verify SmartGit window is correctly displayed in the list by default
3. Switch to SmartGit, hit `Ctrl+R` to display reset window
4. Open Switcheroo, BOOM! - there's no SmartGit window in the list anymore
|
1.0
|
SmartGit window is missing from the window list if a child window is opened - I think I saw SmartGit window on one of your screenshots, which is good, because this means you can easily reproduce the bug ;)
1. Open SmartGit
2. Open Switcheroo to verify SmartGit window is correctly displayed in the list by default
3. Switch to SmartGit, hit `Ctrl+R` to display reset window
4. Open Switcheroo, BOOM! - there's no SmartGit window in the list anymore
|
process
|
smartgit window is missing from the window list if a child window is opened i think i saw smartgit window on one of your screenshots which is good because this means you can easily reproduce the bug open smartgit open switcheroo to verify smartgit window is correctly displayed in the list by default switch to smartgit hit ctrl r to display reset window open switcheroo boom there s no smartgit window in the list anymore
| 1
|
13,719
| 16,484,110,825
|
IssuesEvent
|
2021-05-24 15:29:37
|
frictionlessdata/project
|
https://api.github.com/repos/frictionlessdata/project
|
closed
|
Github discussions as potential upgrade of this forum
|
Topic: Process
|
Github announced discussions in beta in early May. Not yet generally available. When they are definitely worth investigating.
|
1.0
|
Github discussions as potential upgrade of this forum - Github announced discussions in beta in early May. Not yet generally available. When they are definitely worth investigating.
|
process
|
github discussions as potential upgrade of this forum github announced discussions in beta in early may not yet generally available when they are definitely worth investigating
| 1
|
18,571
| 24,556,218,886
|
IssuesEvent
|
2022-10-12 16:04:26
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] UI issue for the Scale responce type to choose the value
|
Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:-
1. Configure the Questionnarie activity with form step and with the responce type **Scale(Min&Max values = 1&10 respectively)** for the study in Study builder and publish
2. Install and login into the mobile app
3. Enroll into the same study which is mentioned in step-1
4. Navigate inside the activity and observe the UI
A/R:- Currently, Scale is Displaying 11 steps ranging from 1-10
E/R:- Scale should display 10 steps only ranging from 1-10 (min-Max)
**Note:-** Issue should be fixed for both Question step and Form step

|
3.0
|
[Android] UI issue for the Scale responce type to choose the value - Steps:-
1. Configure the Questionnarie activity with form step and with the responce type **Scale(Min&Max values = 1&10 respectively)** for the study in Study builder and publish
2. Install and login into the mobile app
3. Enroll into the same study which is mentioned in step-1
4. Navigate inside the activity and observe the UI
A/R:- Currently, Scale is Displaying 11 steps ranging from 1-10
E/R:- Scale should display 10 steps only ranging from 1-10 (min-Max)
**Note:-** Issue should be fixed for both Question step and Form step

|
process
|
ui issue for the scale responce type to choose the value steps configure the questionnarie activity with form step and with the responce type scale min max values respectively for the study in study builder and publish install and login into the mobile app enroll into the same study which is mentioned in step navigate inside the activity and observe the ui a r currently scale is displaying steps ranging from e r scale should display steps only ranging from min max note issue should be fixed for both question step and form step
| 1
|
74,018
| 19,976,857,465
|
IssuesEvent
|
2022-01-29 08:05:00
|
envoyproxy/envoy
|
https://api.github.com/repos/envoyproxy/envoy
|
opened
|
Newer release available `com_google_protobuf`: v3.19.4 (current: v3.19.3)
|
area/build no stalebot dependencies
|
Package Name: com_google_protobuf
Current Version: v3.19.3@2022-01-11 17:17:30
Available Version: v3.19.4@2022-01-28 03:35:56
Upstream releases: https://github.com/protocolbuffers/protobuf/releases
|
1.0
|
Newer release available `com_google_protobuf`: v3.19.4 (current: v3.19.3) -
Package Name: com_google_protobuf
Current Version: v3.19.3@2022-01-11 17:17:30
Available Version: v3.19.4@2022-01-28 03:35:56
Upstream releases: https://github.com/protocolbuffers/protobuf/releases
|
non_process
|
newer release available com google protobuf current package name com google protobuf current version available version upstream releases
| 0
|
113,440
| 11,802,848,006
|
IssuesEvent
|
2020-03-18 22:32:26
|
mike-goodwin/owasp-threat-dragon
|
https://api.github.com/repos/mike-goodwin/owasp-threat-dragon
|
closed
|
Which browsers are suported?
|
documentation help wanted
|
Having issues to create a basic template any recommendation on which browser i can use?
|
1.0
|
Which browsers are suported? - Having issues to create a basic template any recommendation on which browser i can use?
|
non_process
|
which browsers are suported having issues to create a basic template any recommendation on which browser i can use
| 0
|
13,312
| 15,782,171,683
|
IssuesEvent
|
2021-04-01 12:26:39
|
ooi-data/CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record
|
https://api.github.com/repos/ooi-data/CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record
|
opened
|
🚨 Processing failed: ResponseParserError
|
process
|
## Overview
`ResponseParserError` found in `processing_task` task during run ended on 2021-04-01T12:26:38.912793.
## Details
Flow name: `CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record`
Task name: `processing_task`
Error type: `ResponseParserError`
Error message: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 311, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1146, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1445, in rm
super().rm(path, recursive=recursive, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 196, in rm
maybe_sync(self._rm, self, path, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1404, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1396, in _bulk_delete
await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 140, in _make_api_call
http, parsed_response = await self._make_request(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 160, in _make_request
return await self._endpoint.make_request(operation_model, request_dict)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 101, in _send_request
success_response, exception = await self._get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 120, in _get_response
success_response, exception = await self._do_get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 180, in _do_get_response
parsed_response = parser.parse(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 245, in parse
parsed = self._do_parse(response, shape)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 809, in _do_parse
self._add_modeled_parse(response, shape, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 818, in _add_modeled_parse
self._parse_payload(response, shape, member_shapes, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 858, in _parse_payload
original_parsed = self._initial_body_parse(response['body'])
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 944, in _initial_body_parse
return self._parse_xml_string_to_dom(xml_string)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 454, in _parse_xml_string_to_dom
raise ResponseParserError(
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
```
</details>
|
1.0
|
🚨 Processing failed: ResponseParserError - ## Overview
`ResponseParserError` found in `processing_task` task during run ended on 2021-04-01T12:26:38.912793.
## Details
Flow name: `CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record`
Task name: `processing_task`
Error type: `ResponseParserError`
Error message: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 311, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1146, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1445, in rm
super().rm(path, recursive=recursive, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 196, in rm
maybe_sync(self._rm, self, path, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1404, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1396, in _bulk_delete
await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 140, in _make_api_call
http, parsed_response = await self._make_request(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 160, in _make_request
return await self._endpoint.make_request(operation_model, request_dict)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 101, in _send_request
success_response, exception = await self._get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 120, in _get_response
success_response, exception = await self._do_get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 180, in _do_get_response
parsed_response = parser.parse(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 245, in parse
parsed = self._do_parse(response, shape)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 809, in _do_parse
self._add_modeled_parse(response, shape, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 818, in _add_modeled_parse
self._parse_payload(response, shape, member_shapes, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 858, in _parse_payload
original_parsed = self._initial_body_parse(response['body'])
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 944, in _initial_body_parse
return self._parse_xml_string_to_dom(xml_string)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 454, in _parse_xml_string_to_dom
raise ResponseParserError(
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
```
</details>
|
process
|
๐ processing failed responseparsererror overview responseparsererror found in processing task task during run ended on details flow name streamed flort d data record task name processing task error type responseparsererror error message unable to parse response no element found line column invalid xml received further retries may succeed b n traceback traceback most recent call last file usr share miniconda envs harvester lib site packages ooi harvester processor pipeline py line in processing task file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr source store fs delete source store root recursive true file srv conda envs notebook lib site packages fsspec spec py line in delete return self rm path recursive recursive maxdepth maxdepth file srv conda envs notebook lib site packages core py line in rm super rm path recursive recursive kwargs file srv conda envs notebook lib site packages fsspec asyn py line in rm maybe sync self rm self path kwargs file srv conda envs notebook lib site packages fsspec asyn py line in maybe sync return sync loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise exc with traceback tb file srv conda envs notebook lib site packages fsspec asyn py line in f result await future file srv conda envs notebook lib site packages core py line in rm await asyncio gather file srv conda envs notebook lib site packages core py line in bulk delete await self call file srv conda envs notebook lib site packages core py line in call raise translate boto error err from err file srv conda envs notebook lib site packages core py line in call return await method additional kwargs file srv conda envs notebook lib site packages aiobotocore client py line in make api call http parsed response await self make request file srv conda envs notebook lib site packages aiobotocore client py line in make request return await self endpoint make request operation model request dict file srv conda envs notebook lib site packages aiobotocore endpoint py line in send request success response exception await self get response file srv conda envs notebook lib site packages aiobotocore endpoint py line in get response success response exception await self do get response file srv conda envs notebook lib site packages aiobotocore endpoint py line in do get response parsed response parser parse file srv conda envs notebook lib site packages botocore parsers py line in parse parsed self do parse response shape file srv conda envs notebook lib site packages botocore parsers py line in do parse self add modeled parse response shape final parsed file srv conda envs notebook lib site packages botocore parsers py line in add modeled parse self parse payload response shape member shapes final parsed file srv conda envs notebook lib site packages botocore parsers py line in parse payload original parsed self initial body parse response file srv conda envs notebook lib site packages botocore parsers py line in initial body parse return self parse xml string to dom xml string file srv conda envs notebook lib site packages botocore parsers py line in parse xml string to dom raise responseparsererror botocore parsers responseparsererror unable to parse response no element found line column invalid xml received further retries may succeed b n
| 1
|
16,476
| 21,411,466,836
|
IssuesEvent
|
2022-04-22 06:34:04
|
ppy/osu-web
|
https://api.github.com/repos/ppy/osu-web
|
closed
|
Hide online offset option for qualified beatmaps from BNs
|
area:beatmap-processing priority:0
|
There is no reason for BNs to have access to this.
1. They can only access this panel for qualified and pending beatmaps.
2. Standard procedure is to disqualify maps with incorrect offset according to [some official wiki source which I lost the link to]:
> Issues that can be mitigated through online offsets, tags or similar must still be disqualified.
3. Therefore there is no use case for BNs
|
1.0
|
Hide online offset option for qualified beatmaps from BNs - There is no reason for BNs to have access to this.
1. They can only access this panel for qualified and pending beatmaps.
2. Standard procedure is to disqualify maps with incorrect offset according to [some official wiki source which I lost the link to]:
> Issues that can be mitigated through online offsets, tags or similar must still be disqualified.
3. Therefore there is no use case for BNs
|
process
|
hide online offset option for qualified beatmaps from bns there is no reason for bns to have access to this they can only access this panel for qualified and pending beatmaps standard procedure is to disqualify maps with incorrect offset according to issues that can be mitigated through online offsets tags or similar must still be disqualified therefore there is no use case for bns
| 1
|
218,862
| 16,772,911,485
|
IssuesEvent
|
2021-06-14 16:54:51
|
unitaryfund/mitiq
|
https://api.github.com/repos/unitaryfund/mitiq
|
closed
|
Consider reverting cirq-google until deprecation warnings will disappear.
|
discussion documentation infrastructure priority/p1
|
When we passed from `cirq` to `cirq-core`, a lot of deprecation warnings appear each time we import something.
E.g. if a user tries to run:
```
python
>>> from mitiq.zne import execute_with_zne
```
The following long message appears:
<details>
--- Logging error ---
Traceback (most recent call last):
File "/home/andrea/anaconda3/lib/python3.7/site-packages/cirq/__init__.py", line 581, in <module>
create_attribute=True,
File "/home/andrea/anaconda3/lib/python3.7/site-packages/cirq/_compat.py", line 546, in deprecated_submodule
new_module = importlib.import_module(new_module_name)
File "/home/andrea/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'cirq_google'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/andrea/anaconda3/lib/python3.7/logging/__init__.py", line 1025, in emit
msg = self.format(record)
File "/home/andrea/anaconda3/lib/python3.7/logging/__init__.py", line 869, in format
return fmt.format(record)
File "/home/andrea/anaconda3/lib/python3.7/logging/__init__.py", line 608, in format
record.message = record.getMessage()
File "/home/andrea/anaconda3/lib/python3.7/logging/__init__.py", line 369, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/andrea/data/mitiq/mitiq/__init__.py", line 16, in <module>
from mitiq._about import about
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/andrea/data/mitiq/mitiq/_about.py", line 23, in <module>
from cirq import __version__ as cirq_version
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/andrea/anaconda3/lib/python3.7/site-packages/cirq/__init__.py", line 585, in <module>
warning("Can't import cirq.google: ", ex)
Message: "Can't import cirq.google: "
Arguments: (ModuleNotFoundError("No module named 'cirq_google'"),)
</details>
The same problem also appears in some examples in the docs. See e.g.:
https://mitiq.readthedocs.io/en/latest/examples/simple_landscape.html
https://mitiq.readthedocs.io/en/latest/examples/maxcut-demo.html#define-the-qaoa-cost-hamiltonian-using-cirq
**Proposal**
Temporarily require the full `cirq` package until the warnings disappear?
They will probably fix this at some point: https://github.com/quantumlib/Cirq/issues/4106
|
1.0
|
Consider reverting cirq-google until deprecation warnings will disappear. - When we passed from `cirq` to `cirq-core`, a lot of deprecation warnings appear each time we import something.
E.g. if a user tries to run:
```
python
>>> from mitiq.zne import execute_with_zne
```
The following long message appears:
<details>
--- Logging error ---
Traceback (most recent call last):
File "/home/andrea/anaconda3/lib/python3.7/site-packages/cirq/__init__.py", line 581, in <module>
create_attribute=True,
File "/home/andrea/anaconda3/lib/python3.7/site-packages/cirq/_compat.py", line 546, in deprecated_submodule
new_module = importlib.import_module(new_module_name)
File "/home/andrea/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'cirq_google'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/andrea/anaconda3/lib/python3.7/logging/__init__.py", line 1025, in emit
msg = self.format(record)
File "/home/andrea/anaconda3/lib/python3.7/logging/__init__.py", line 869, in format
return fmt.format(record)
File "/home/andrea/anaconda3/lib/python3.7/logging/__init__.py", line 608, in format
record.message = record.getMessage()
File "/home/andrea/anaconda3/lib/python3.7/logging/__init__.py", line 369, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/andrea/data/mitiq/mitiq/__init__.py", line 16, in <module>
from mitiq._about import about
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/andrea/data/mitiq/mitiq/_about.py", line 23, in <module>
from cirq import __version__ as cirq_version
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/andrea/anaconda3/lib/python3.7/site-packages/cirq/__init__.py", line 585, in <module>
warning("Can't import cirq.google: ", ex)
Message: "Can't import cirq.google: "
Arguments: (ModuleNotFoundError("No module named 'cirq_google'"),)
</details>
The same problem also appears in some examples in the docs. See e.g.:
https://mitiq.readthedocs.io/en/latest/examples/simple_landscape.html
https://mitiq.readthedocs.io/en/latest/examples/maxcut-demo.html#define-the-qaoa-cost-hamiltonian-using-cirq
**Proposal**
Temporarily require the full `cirq` package until the warnings disappear?
They will probably fix this at some point: https://github.com/quantumlib/Cirq/issues/4106
|
non_process
|
consider reverting cirq google until deprecation warnings will disappear when we passed from cirq to cirq core a lot of deprecation warning appear each time we import something e g if a user tries to run python from mitiq zne import execute with zne the following long message appears logging error traceback most recent call last file home andrea lib site packages cirq init py line in create attribute true file home andrea lib site packages cirq compat py line in deprecated submodule new module importlib import module new module name file home andrea lib importlib init py line in import module return bootstrap gcd import name package level file line in gcd import file line in find and load file line in find and load unlocked modulenotfounderror no module named cirq google during handling of the above exception another exception occurred traceback most recent call last file home andrea lib logging init py line in emit msg self format record file home andrea lib logging init py line in format return fmt format record file home andrea lib logging init py line in format record message record getmessage file home andrea lib logging init py line in getmessage msg msg self args typeerror not all arguments converted during string formatting call stack file line in file line in find and load file line in find and load unlocked file line in call with frames removed file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file home andrea data mitiq mitiq init py line in from mitiq about import about file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file home andrea data mitiq mitiq about py line in from cirq import version as cirq version file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file home andrea lib site packages cirq init py line in warning can t import cirq google ex message can t import cirq google arguments modulenotfounderror no module named cirq google the same problem also appears in some examples in the docs see e g proposal temporarily require the full cirq package until the warning disappear they will probably fix this at some point
| 0
|
11,945
| 14,708,853,106
|
IssuesEvent
|
2021-01-05 00:46:59
|
yuta252/startlens_react_frontend
|
https://api.github.com/repos/yuta252/startlens_react_frontend
|
closed
|
Image resizing and Base64 encoding
|
dev process enhancement
|
## Overview
When registering a user's profile image, a large image can cause a significant transmission delay. To avoid this, resize the image once on the frontend, Base64-encode it, and then send it to the backend API.
## Changes
- [ ] On image upload, scale the longer of the width/height down to 500px, and resize the shorter side at the same ratio as the uploaded image.
- [ ] Base64-encode the image so it can be sent to the backend through the API
## Additional tasks
- [ ] Use the File API to allow uploads of multiple images per post
- [ ] Turn the image upload feature into a function so it can be reused
## Issues
---
- [ ]
## References
---
- [Resize images locally in the browser and upload them](https://qiita.com/komakomako/items/8efd4184f6d7cf1363f2)
- [How to resize large images on the frontend before uploading](https://blog.capilano-fw.com/?p=3578)
- [How to Base64-encode (github)](https://github.com/lyhd/reactjs/blob/base64_encode/src/index.js)
- [Summary of conversions between canvas, blob, and base64](https://www.sejuku.net/blog/67735)
## Notes
---
- To resize the image, generate a canvas and resize while keeping the aspect ratio so that the longer of the width/height becomes at most 500px; this gives a large image-compression effect. Some resolution is still needed, and since training with deep learning is planned later, acquiring a large number of images at a reasonable quality is also being considered (they would be compressed again at training time anyway). In particular, the growth of S3 storage and the transmission delay should improve.
- Compared to the blob format, Base64 increases the image data size, so when to perform the Base64 conversion still needs to be decided.
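To make the flow above concrete, here is a minimal sketch of the resize-then-encode logic. It is an illustration only: the real change targets the React frontend (canvas/FileReader), whereas this sketch uses Python with Pillow, and the function name, file name, and payload field are hypothetical.
```python
# Illustrative sketch only -- the actual implementation belongs in the React
# frontend. Logic shown: scale the longer side down to 500px while keeping
# the aspect ratio, then Base64-encode the bytes for the API payload.
import base64
import io

from PIL import Image


def resize_and_encode(path: str, max_side: int = 500) -> str:
    img = Image.open(path).convert("RGB")
    scale = max_side / max(img.width, img.height)
    if scale < 1:  # only shrink; smaller images are left untouched
        img = img.resize((round(img.width * scale), round(img.height * scale)))
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    return base64.b64encode(buf.getvalue()).decode("ascii")


# Hypothetical usage: attach the encoded string to the API payload.
# payload = {"profile_image": resize_and_encode("avatar.png")}
```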
|
1.0
|
Image resizing and Base64 encoding - ## Overview
When registering a user's profile image, a large image can cause a significant transmission delay. To avoid this, resize the image once on the frontend, Base64-encode it, and then send it to the backend API.
## Changes
- [ ] On image upload, scale the longer of the width/height down to 500px, and resize the shorter side at the same ratio as the uploaded image.
- [ ] Base64-encode the image so it can be sent to the backend through the API
## Additional tasks
- [ ] Use the File API to allow uploads of multiple images per post
- [ ] Turn the image upload feature into a function so it can be reused
## Issues
---
- [ ]
## References
---
- [Resize images locally in the browser and upload them](https://qiita.com/komakomako/items/8efd4184f6d7cf1363f2)
- [How to resize large images on the frontend before uploading](https://blog.capilano-fw.com/?p=3578)
- [How to Base64-encode (github)](https://github.com/lyhd/reactjs/blob/base64_encode/src/index.js)
- [Summary of conversions between canvas, blob, and base64](https://www.sejuku.net/blog/67735)
## Notes
---
- To resize the image, generate a canvas and resize while keeping the aspect ratio so that the longer of the width/height becomes at most 500px; this gives a large image-compression effect. Some resolution is still needed, and since training with deep learning is planned later, acquiring a large number of images at a reasonable quality is also being considered (they would be compressed again at training time anyway). In particular, the growth of S3 storage and the transmission delay should improve.
- Compared to the blob format, Base64 increases the image data size, so when to perform the Base64 conversion still needs to be decided.
|
process
|
overview when registering a user s profile image a large image can cause a significant transmission delay so resize the image on the frontend base64 encode it and send it to the backend api changes on image upload scale the longer side to 500px and resize the other side at the same ratio base64 encode the image to send to the backend through the api additional tasks use the file api to allow uploads of multiple images per post turn the image upload feature into a reusable function issues references notes generate a canvas to resize the image keeping the aspect ratio so the longer side becomes 500px which gives a large compression effect some resolution is still needed since deep learning training is planned acquiring many images at a reasonable quality is also being considered s3 storage growth and transmission delay should improve compared to the blob format base64 increases the image data size so when to do the conversion needs consideration
| 1
|
3,008
| 6,009,097,216
|
IssuesEvent
|
2017-06-06 09:38:05
|
zero-os/0-orchestrator
|
https://api.github.com/repos/zero-os/0-orchestrator
|
closed
|
Container created with fake directory in init process
|
process_wontfix state_question
|
## Request

## Response

container created !
## SV
resourcepool : a30a8b47653db98bbb80d4039a7e5722ccfa156c
|
1.0
|
Container created with fake directory in init process - ## Request

## Response

container created !
## SV
resourcepool : a30a8b47653db98bbb80d4039a7e5722ccfa156c
|
process
|
container created with fake directory in init process request response container created sv resourcepool
| 1
|
13,858
| 16,616,418,624
|
IssuesEvent
|
2021-06-02 17:16:39
|
department-of-veterans-affairs/notification-api
|
https://api.github.com/repos/department-of-veterans-affairs/notification-api
|
closed
|
Register to Google Postmaster Tool
|
Process Task
|
# Value Statement
**As** a VANotify platform admin
**I want to** see our sender reputation with Gmail
**So that** I can proactively respond to any issues that might prevent us from sending email to gmail
# Acceptance Criteria
**GIVEN** Gmail Postmaster account
**WHEN** I log in
**THEN** I can see statistics for notifications.va.gov domain
**GIVEN** Gmail Postmaster account
**WHEN** I log in
**THEN** I can see statistics for messages.va.gov domain
# Checklist
- [x] Figure out which google account to use
- [x] ESECC request to add DNS entries to authenticate notifications.va.gov domain with postmaster tool
- [x] ESECC request to add DNS entries to authenticate messages.va.gov domain with postmaster tool
# Additional Info/Resources
- Created [ticket](https://github.com/department-of-veterans-affairs/va.gov-team/issues/22411#issuecomment-811173751) with Analytics team to learn how they created/manage Google account for Google Analytics
|
1.0
|
Register to Google Postmaster Tool - # Value Statement
**As** a VANotify platform admin
**I want to** see our sender reputation with Gmail
**So that** I can proactively respond to any issues that might prevent us from sending email to gmail
# Acceptance Criteria
**GIVEN** Gmail Postmaster account
**WHEN** I log in
**THEN** I can see statistics for notifications.va.gov domain
**GIVEN** Gmail Postmaster account
**WHEN** I log in
**THEN** I can see statistics for messages.va.gov domain
# Checklist
- [x] Figure out which google account to use
- [x] ESECC request to add DNS entries to authenticate notifications.va.gov domain with postmaster tool
- [x] ESECC request to add DNS entries to authenticate messages.va.gov domain with postmaster tool
# Additional Info/Resources
- Created [ticket](https://github.com/department-of-veterans-affairs/va.gov-team/issues/22411#issuecomment-811173751) with Analytics team to learn how they created/manage Google account for Google Analytics
|
process
|
register to google postmaster tool value statement as a vanotify platform admin i want to see our sender reputation with gmail so that i can proactively respond to any issues that might prevent us from sending email to gmail acceptance criteria given gmail postmaster account when i log in then i can see statistics for notifications va gov domain given gmail postmaster account when i log in then i can see statistics for messages va gov domain checklist figure out which google account to use esecc request to add dns entries to authenticate notifications va gov domain with postmaster tool esecc request to add dns entries to authenticate messages va gov domain with postmaster tool additional info resources created with analytics team to learn how they created manage google account for google analytics
| 1
|
68,771
| 17,396,886,275
|
IssuesEvent
|
2021-08-02 14:27:51
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
PyTorch Bazel scripts are incompatible with bazel-4.x
|
enhancement module: build triaged
|
## ๐ Bug
Any build attempts fail in protobuf with:
``
/home/nshulga/.cache/bazel/_bazel_nshulga/949d449b28ab1dc6cc4616fb8666a5b1/external/com_google_protobuf/BUILD:1006:21: in proto_lang_toolchain rule @com_google_protobuf//:cc_toolchain: '@com_google_protobuf//:cc_toolchain' does not have mandatory provider 'ProtoInfo'.
``
## To Reproduce
Steps to reproduce the behavior:
1. Download https://github.com/bazelbuild/bazel/releases/tag/4.1.0
1. Run ` bazel build --sandbox_writable_path ~/.ccache //:caffe2`
## Expected behavior
Build succeeds
## Additional context
Likely to be caused by https://github.com/bazelbuild/bazel/issues/12887, so updating protobuf definitions should help.
|
1.0
|
PyTorch Bazel scripts are incompatible with bazel-4.x - ## ๐ Bug
Any build attempts fail in protobuf with:
``
/home/nshulga/.cache/bazel/_bazel_nshulga/949d449b28ab1dc6cc4616fb8666a5b1/external/com_google_protobuf/BUILD:1006:21: in proto_lang_toolchain rule @com_google_protobuf//:cc_toolchain: '@com_google_protobuf//:cc_toolchain' does not have mandatory provider 'ProtoInfo'.
``
## To Reproduce
Steps to reproduce the behavior:
1. Download https://github.com/bazelbuild/bazel/releases/tag/4.1.0
1. Run ` bazel build --sandbox_writable_path ~/.ccache //:caffe2`
## Expected behavior
Build succeeds
## Additional context
Likely to be caused by https://github.com/bazelbuild/bazel/issues/12887, so updating protobuf definitions should help.
|
non_process
|
pytorch bazel scripts are incompatible with bazel x ๐ bug any build attempts fail in protobuf with home nshulga cache bazel bazel nshulga external com google protobuf build in proto lang toolchain rule com google protobuf cc toolchain com google protobuf cc toolchain does not have mandatory provider protoinfo to reproduce steps to reproduce the behavior download run bazel build sandbox writable path ccache expected behavior build succeeds additional context likely to be caused by so updating protobuf definitions should help
| 0
|
36,728
| 6,547,710,965
|
IssuesEvent
|
2017-09-04 16:03:06
|
reactor/reactor-core
|
https://api.github.com/repos/reactor/reactor-core
|
closed
|
Emitter processor does not satisfy ringbuffer description claimed in class docs
|
documentation wontfix
|
### Expected behavior
Elements are overriding oldest entries in queue if queue is full
### Actual behavior
Processor blocks thread in a infinite loop waiting for a free space in queue
### Steps to reproduce
```
@Test(timeout = 5000)
public void testRingbufferOverflow() throws Exception {
int bufferSize = 10;
EmitterProcessor<Integer> processor = EmitterProcessor.create(bufferSize);
for (int i = 0; i < bufferSize * 10; i++) {
processor.onNext(i);
}
processor.onComplete();
List<Integer> block = processor.collectList().block();
assertThat(block).hasSize(bufferSize);
}
```
### Reactor Core version
3.1.0.BUILD-SNAPSHOT (current master revision)
### JVM version (e.g. `java -version`)
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
### OS version (e.g. `uname -a`)
Linux 4.4.0-79-generic #100-Ubuntu SMP Wed May 17 19:58:14 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
|
1.0
|
Emitter processor does not satisfy ringbuffer description claimed in class docs - ### Expected behavior
Elements are overriding oldest entries in queue if queue is full
### Actual behavior
Processor blocks thread in a infinite loop waiting for a free space in queue
### Steps to reproduce
```
@Test(timeout = 5000)
public void testRingbufferOverflow() throws Exception {
int bufferSize = 10;
EmitterProcessor<Integer> processor = EmitterProcessor.create(bufferSize);
for (int i = 0; i < bufferSize * 10; i++) {
processor.onNext(i);
}
processor.onComplete();
List<Integer> block = processor.collectList().block();
assertThat(block).hasSize(bufferSize);
}
```
### Reactor Core version
3.1.0.BUILD-SNAPSHOT (current master revision)
### JVM version (e.g. `java -version`)
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
### OS version (e.g. `uname -a`)
Linux 4.4.0-79-generic #100-Ubuntu SMP Wed May 17 19:58:14 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
|
non_process
|
emitter processor does not satisfy ringbuffer description claimed in class docs expected behavior elements are overriding oldest entries in queue if queue is full actual behavior processor blocks thread in a infinite loop waiting for a free space in queue steps to reproduce test timeout public void testringbufferoverflow throws exception int buffersize emitterprocessor processor emitterprocessor create buffersize for int i i buffersize i processor onnext i processor oncomplete list block processor collectlist block assertthat block hassize buffersize reactor core version build snapshot current master revision jvm version e g java version java version java tm se runtime environment build java hotspot tm bit server vm build mixed mode os version e g uname a linux generic ubuntu smp wed may utc gnu linux
| 0
|
4,371
| 7,260,515,787
|
IssuesEvent
|
2018-02-18 10:54:32
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE] Subdivide algorithm for QgsGeometry
|
Automatic new feature Processing User Manual
|
Original commit: https://github.com/qgis/QGIS/commit/e74395d95bd326788ce55f2a7caa31435fed9ea9 by nyalldawson
Subdivides the geometry. The returned geometry will be a collection
containing subdivided parts from the original geometry, where no
part has more than the specified maximum number of nodes.
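A hedged PyQGIS sketch of how the new method can be exercised; it assumes a QGIS 3.x Python console where `QgsGeometry.subdivide()` is exposed, and the node limit below is an illustrative value, not taken from the commit.
```python
# Assumes the QGIS 3.x Python console; the 8-node limit is illustrative.
from qgis.core import QgsGeometry

line = QgsGeometry.fromWkt(
    "LINESTRING(0 0, 1 0, 2 0, 3 0, 4 0, 5 0, 6 0, 7 0, 8 0, 9 0, 10 0)"
)
# Request parts with at most 8 nodes each; the result is a collection of
# subdivided parts, none exceeding that limit.
parts = line.subdivide(8)
print(parts.asWkt())
```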
|
1.0
|
[FEATURE] Subdivide algorithm for QgsGeometry - Original commit: https://github.com/qgis/QGIS/commit/e74395d95bd326788ce55f2a7caa31435fed9ea9 by nyalldawson
Subdivides the geometry. The returned geometry will be a collection
containing subdivided parts from the original geometry, where no
part has more than the specified maximum number of nodes.
|
process
|
subdivide algorithm for qgsgeometry original commit by nyalldawson subdivides the geometry the returned geometry will be a collection containing subdivided parts from the original geometry where no part has more then the specified maximum number of nodes
| 1
|
35,236
| 14,654,300,075
|
IssuesEvent
|
2020-12-28 08:21:03
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
az backup protectable-item list error
|
Recovery Services Backup Service Attention customer-response-expected
|
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
az backup protectable-item list fails with an error when used with workload-type AzureFileshare
**Command Name**
`az backup protectable-item list`
**Errors:**
```
'AzureFileShare'
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 215, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 654, in execute
raise ex
cli/core/commands/__init__.py, ln 718, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
cli/core/commands/__init__.py, ln 711, in _run_job
six.reraise(*sys.exc_info())
...
cli/command_modules/backup/custom_base.py, ln 199, in list_protectable_items
return custom_wl.list_protectable_items(client, resource_group_name, vault_name, workload_type, container_uri)
cli/command_modules/backup/custom_wl.py, ln 271, in list_protectable_items
workload_type = workload_type_map[workload_type]
KeyError: 'AzureFileShare'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az backup protectable-item list --resource-group {} --vault-name {} --workload-type {}`
## Expected Behavior
## Environment Summary
```
Linux-4.15.0-1098-azure-x86_64-with-debian-stretch-sid (Cloud Shell)
Python 3.6.10
Installer: DEB
azure-cli 2.13.0 *
Extensions:
ai-examples 0.2.4
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
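The traceback ends in a bare dictionary lookup (`workload_type_map[workload_type]`) raising `KeyError` for `AzureFileShare`. The sketch below shows the kind of guard that would turn this into a readable CLI error; the map contents and the error message are hypothetical and are not the actual azure-cli source.
```python
# Hypothetical sketch, not the real azure-cli code: guard the workload-type
# lookup so an unsupported value yields a clear error instead of a KeyError.
workload_type_map = {          # contents are illustrative only
    "MSSQL": "SQLDataBase",
    "SAPHANA": "SAPHanaDatabase",
}


def resolve_workload_type(workload_type: str) -> str:
    try:
        return workload_type_map[workload_type]
    except KeyError:
        supported = ", ".join(sorted(workload_type_map))
        raise ValueError(
            f"Unsupported workload type '{workload_type}'. Supported values: {supported}"
        )
```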
|
2.0
|
az backup protectable-item list error -
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
az backup protectable-item list fails with an error when used with workload-type AzureFileshare
**Command Name**
`az backup protectable-item list`
**Errors:**
```
'AzureFileShare'
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 215, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 654, in execute
raise ex
cli/core/commands/__init__.py, ln 718, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
cli/core/commands/__init__.py, ln 711, in _run_job
six.reraise(*sys.exc_info())
...
cli/command_modules/backup/custom_base.py, ln 199, in list_protectable_items
return custom_wl.list_protectable_items(client, resource_group_name, vault_name, workload_type, container_uri)
cli/command_modules/backup/custom_wl.py, ln 271, in list_protectable_items
workload_type = workload_type_map[workload_type]
KeyError: 'AzureFileShare'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az backup protectable-item list --resource-group {} --vault-name {} --workload-type {}`
## Expected Behavior
## Environment Summary
```
Linux-4.15.0-1098-azure-x86_64-with-debian-stretch-sid (Cloud Shell)
Python 3.6.10
Installer: DEB
azure-cli 2.13.0 *
Extensions:
ai-examples 0.2.4
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
|
non_process
|
az backup protectable item list error this is autogenerated please review and update as needed describe the bug az backup protectable item list failes with an error when used with workload type azurefileshare command name az backup protectable item list errors azurefileshare traceback most recent call last site packages knack cli py ln in invoke cmd result self invocation execute args cli core commands init py ln in execute raise ex cli core commands init py ln in run jobs serially results append self run job expanded arg cmd copy cli core commands init py ln in run job six reraise sys exc info cli command modules backup custom base py ln in list protectable items return custom wl list protectable items client resource group name vault name workload type container uri cli command modules backup custom wl py ln in list protectable items workload type workload type map keyerror azurefileshare to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az backup protectable item list resource group vault name workload type expected behavior environment summary linux azure with debian stretch sid cloud shell python installer deb azure cli extensions ai examples additional context
| 0
|
139,084
| 11,243,391,383
|
IssuesEvent
|
2020-01-10 02:55:51
|
goharbor/harbor
|
https://api.github.com/repos/goharbor/harbor
|
closed
|
[Add Jenkins Job] Performance test needs to be triggered by Jenkins
|
area/test automation target/2.0.0
|
Add Performance test by rally into Jenkins.
|
1.0
|
[Add Jenkins Job] Performance test needs to be triggered by Jenkins - Add Performance test by rally into Jenkins.
|
non_process
|
performance test needs to be triggered by jenkins add performance test by rally into jenkins
| 0
|
162,375
| 12,663,177,553
|
IssuesEvent
|
2020-06-18 00:27:30
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Test failure: System.Net.Security.Tests.ServerAsyncAuthenticateTest.ServerAsyncAuthenticate_MismatchProtocols_Fails(serverProtocol: Tls12, clientProtocol: Tls, expectedException: typeof(System.Security.Authentication.AuthenticationException))
|
arch-x64 area-System.Net.Security disabled-test test-run-core
|
failed in job: [runtime 20200510.20 ](https://dev.azure.com/dnceng/public/_build/results?buildId=638932&view=ms.vss-test-web.build-test-results-tab&runId=19860426&resultId=150684&paneView=debug)
Error message
~~~
Assert.IsAssignableFrom() Failure
Expected: typeof(System.Security.Authentication.AuthenticationException)
Actual: typeof(System.TimeoutException)
Stack trace
at System.Net.Security.Tests.ServerAsyncAuthenticateTest.ServerAsyncAuthenticate_MismatchProtocols_Fails(SslProtocols serverProtocol, SslProtocols clientProtocol, Type expectedException) in /_/src/libraries/System.Net.Security/tests/FunctionalTests/ServerAsyncAuthenticateTest.cs:line 62
--- End of stack trace from previous location ---
~~~
|
2.0
|
Test failure: System.Net.Security.Tests.ServerAsyncAuthenticateTest.ServerAsyncAuthenticate_MismatchProtocols_Fails(serverProtocol: Tls12, clientProtocol: Tls, expectedException: typeof(System.Security.Authentication.AuthenticationException)) - failed in job: [runtime 20200510.20 ](https://dev.azure.com/dnceng/public/_build/results?buildId=638932&view=ms.vss-test-web.build-test-results-tab&runId=19860426&resultId=150684&paneView=debug)
Error message
~~~
Assert.IsAssignableFrom() Failure
Expected: typeof(System.Security.Authentication.AuthenticationException)
Actual: typeof(System.TimeoutException)
Stack trace
at System.Net.Security.Tests.ServerAsyncAuthenticateTest.ServerAsyncAuthenticate_MismatchProtocols_Fails(SslProtocols serverProtocol, SslProtocols clientProtocol, Type expectedException) in /_/src/libraries/System.Net.Security/tests/FunctionalTests/ServerAsyncAuthenticateTest.cs:line 62
--- End of stack trace from previous location ---
~~~
|
non_process
|
test failure system net security tests serverasyncauthenticatetest serverasyncauthenticate mismatchprotocols fails serverprotocol clientprotocol tls expectedexception typeof system security authentication authenticationexception failed in job error message assert isassignablefrom failure expected typeof system security authentication authenticationexception actual typeof system timeoutexception stack trace at system net security tests serverasyncauthenticatetest serverasyncauthenticate mismatchprotocols fails sslprotocols serverprotocol sslprotocols clientprotocol type expectedexception in src libraries system net security tests functionaltests serverasyncauthenticatetest cs line end of stack trace from previous location
| 0
|
3,754
| 3,548,067,052
|
IssuesEvent
|
2016-01-20 12:50:36
|
rabbitmq/rabbitmq-management
|
https://api.github.com/repos/rabbitmq/rabbitmq-management
|
closed
|
No error is displayed when policy is created with invalid parameters
|
effort-low usability
|
RabbitMQ 3.6.0.
It can be reproduced by putting some random text in 'Definition' fields and submitting the form. Nothing will happen but in the Network console you can see the server responds with a `400 Bad Request`:

```
{"error":"bad_request","reason":"Validation failed\n\n[{<<\"random\">>,<<\"random\">>}] are not recognised policy settings\n"}
```
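Because the UI swallows the 400 response, one way to see the validation message is to call the management HTTP API directly. A hedged reproduction sketch follows; the host, port, vhost (`/`, encoded as `%2F`), and guest credentials are the usual local defaults, not values taken from this report.
```python
# Hedged reproduction sketch: create the same invalid policy through the
# management HTTP API so the 400 body (hidden by the UI) becomes visible.
import requests

resp = requests.put(
    "http://localhost:15672/api/policies/%2F/bad-policy",  # %2F = default vhost "/"
    auth=("guest", "guest"),
    json={"pattern": ".*", "definition": {"random": "random"}},
)
print(resp.status_code)  # expected: 400
print(resp.text)         # {"error":"bad_request","reason":"Validation failed ..."}
```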
|
True
|
No error is displayed when policy is created with invalid parameters - RabbitMQ 3.6.0.
It can be reproduced by putting some random text in 'Definition' fields and submitting the form. Nothing will happen but in the Network console you can see the server responds with a `400 Bad Request`:

```
{"error":"bad_request","reason":"Validation failed\n\n[{<<\"random\">>,<<\"random\">>}] are not recognised policy settings\n"}
```
|
non_process
|
no error is displayed when policy is created with invalid paramaters rabbitmq it can be reproduced by putting some random text in definition fields and submitting the form nothing will happen but in the network console you can see the server responds with a bad request error bad request reason validation failed n n are not recognised policy settings n
| 0
|
9,124
| 12,198,062,185
|
IssuesEvent
|
2020-04-29 22:01:01
|
w3c/aria-at
|
https://api.github.com/repos/w3c/aria-at
|
opened
|
Process for tracking implementation bugs found by running the tests
|
process
|
The [working mode](https://github.com/w3c/aria-at/wiki/Working-Mode) currently says
> (AT developer) If there is a bug in AT, file the bug for the AT.
I think it would be good to further track these bugs in this project, both to be able to follow up but also to demonstrate impact. It could also be helpful for AT developers to be able to review "the same" bugs in other ATs. It's also good to ensure that bugs are actually filed consistently when they are found.
One way to do it is to include in our issue template for test plans a list of bugs for each relevant implementation (each browser, each AT), and have the person who reports the bug drop a link to the bug there. As a forcing function, we could say that this section needs to be completed (with either a link or "N/A") before a test results report can be marked as final.
Tracking bugs like this (for normative spec changes) has been part of the WHATWG process the past few years, [with great results](https://blog.whatwg.org/improving-interoperability), and has been adopted by various W3C WGs.
Thoughts?
|
1.0
|
Process for tracking implementation bugs found by running the tests - The [working mode](https://github.com/w3c/aria-at/wiki/Working-Mode) currently says
> (AT developer) If there is a bug in AT, file the bug for the AT.
I think it would be good to further track these bugs in this project, both to be able to follow up but also to demonstrate impact. It could also be helpful for AT developers to be able to review "the same" bugs in other ATs. It's also good to ensure that bugs are actually filed consistently when they are found.
One way to do it is to include in our issue template for test plans a list of bugs for each relevant implementation (each browser, each AT), and have the person who reports the bug drop a link to the bug there. As a forcing function, we could say that this section needs to be completed (with either a link or "N/A") before a test results report can be marked as final.
Tracking bugs like this (for normative spec changes) has been part of the WHATWG process the past few years, [with great results](https://blog.whatwg.org/improving-interoperability), and has been adopted by various W3C WGs.
Thoughts?
|
process
|
process for tracking implementation bugs found by running the tests the currently says at developer if there is a bug in at file the bug for the at i think it would be good to further track these bugs in this project both to be able to follow up but also to demonstrate impact it could also be helpful for at developers to be able to review the same bugs in other ats it s also good to ensure that bugs are actually filed consistently when they are found one way to do it is to include in our issue template for test plans a list of bugs for each relevant implementation each browser each at and have the person who reports the bug drop a link to the bug there as a forcing function we could say that this section needs to be completed with either a link or n a before a test results report can be marked as final tracking bugs like this for normative spec changes has been part of the whatwg process the past few years and has been adopted by various wgs thoughts
| 1
|
25,585
| 7,727,653,109
|
IssuesEvent
|
2018-05-25 04:03:45
|
Microsoft/MixedRealityToolkit-Unity
|
https://api.github.com/repos/Microsoft/MixedRealityToolkit-Unity
|
reopened
|
Unable to Deploy from Build Window
|
Build Tools HoloLens
|
## Overview
Unable to deploy to Hololens.
## Expected Behavior
After successfully building a barebones project and building the Appx, the program deploys correctly to the Hololens
## Actual Behavior.
Builds project correctly. Appears to connect to Hololens.
Does not deploy to the Hololens when pressing "Install"
## Steps to reproduce
The correct WiFi IP address is provided and when I press "Connect" under deploy options, I get the correct Hololens device name. When I press "Open Device Portal" - IE opens the Hololens Device Portal correctly without issue.
When I press "Install" - I get a Network Error: Request timeout.
Then "Failed to install [APP].appx on [DEVICE].
See screenshot.

## Unity Editor Version
2017.3.1f1
## Mixed Reality Toolkit Release Version
2017.2.1.2
## Hololens OS Version
10.0.14393.2068
## Windows OS Version (Development PC)
10.0.16299 Build 16299
|
1.0
|
Unable to Deploy from Build Window - ## Overview
Unable to deploy to Hololens.
## Expected Behavior
After successfully building a barebones project and building the Appx, the program deploys correctly to the Hololens
## Actual Behavior.
Builds project correctly. Appears to connect to Hololens.
Does not deploy to the Hololens when pressing "Install"
## Steps to reproduce
The correct WiFi IP address is provided and when I press "Connect" under deploy options, I get the correct Hololens device name. When I press "Open Device Portal" - IE opens the Hololens Device Portal correctly without issue.
When I press "Install" - I get a Network Error: Request timeout.
Then "Failed to install [APP].appx on [DEVICE].
See screenshot.

## Unity Editor Version
2017.3.1f1
## Mixed Reality Toolkit Release Version
2017.2.1.2
## Hololens OS Version
10.0.14393.2068
## Windows OS Version (Development PC)
10.0.16299 Build 16299
|
non_process
|
unable to deploy from build window overview unable to deploy to hololens expected behavior after successfully building a barebones project and building the appx the program deploys correctly to the hololens actual behavior builds project correctly appears to connect to hololens does not deploy to the hololens when pressing install steps to reproduce the correct wifi ip address is provided and when i press connect under deploy options i get the correct hololens device name when i press open device portal ie opens the hololens device portal correctly without issue when i press install i get a network error request timeout then failed to install appx on see screenshot unity editor version mixed reality toolkit release version hololens os version windows os version development pc build
| 0
|
289,374
| 31,932,921,883
|
IssuesEvent
|
2023-09-19 08:38:24
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[Security Solution] Alerts Dashboard showing error message in Serverless QA Environment
|
bug triage_needed Team: SecuritySolution Project:Serverless
|
**Describe the bug:**
Alerts dashboard is showing error message `An error occurred` in Status, Severity, User, and Host fields and alerts are not displaying
**Kibana/Elasticsearch Stack version:**
QA serverless environment git hash:
` qa: "243142d9c19d"`
**Server OS version:**
Serverless Project
**Browser and Browser OS versions:**
Firefox
Chrome
**Elastic Endpoint version:**
N/A
**Original install method (e.g. download page, yum, from source, etc.):**
**Functional Area (e.g. Endpoint management, timelines, resolver, etc.):**
**Steps to reproduce:**
1. Login to Serverless QA environment (see [#security-serverless slack ](https://elastic.slack.com/archives/C04ML7QEF5L/p1692822322791639)channel for details on how to login)
2. Navigate to `Security` -> `Alerts`
**Current behavior:**
Alerts dashboard is showing error message in fields and no alerts are displaying
**Expected behavior:**
Alerts should display and there should be no error messages in the alert dashboard
**Screenshots (if relevant):**
<img width="2543" alt="Screenshot 2023-08-31 at 11 55 53 AM" src="https://github.com/elastic/kibana/assets/35679937/50de1738-fb1c-4878-bee0-7dc8dbb1e35d">
**Errors in browser console (if relevant):**
`/app/security/alerts?sourcerer=(default:(id:security-solution-default,selectedPatterns:!(%27logs-*%27)))&timerange=(global:(linkTo:!(timeline),timerange:(from:%272023-08-24T16:14:42.830Z%27,fromStr:now-1w,kind:relative,to:%272023-08-31T16:14:42.830Z%27,toStr:now)),timeline:(linkTo:!(global),timerange:(from:%272023-08-24T16:14:42.830Z%27,fromStr:now-1w,kind:relative,to:%272023-08-31T16:14:42.830Z%27,toStr:now)))&timeline=(activeTab:query,graphEventId:%27%27,isOpen:!f)&pageFilters=!((exclude:!f,existsSelected:!f,fieldName:kibana.alert.workflow_status,selectedOptions:!(open),title:Status),(exclude:!f,existsSelected:!f,fieldName:kibana.alert.severity,selectedOptions:!(),title:Severity),(exclude:!f,existsSelected:!f,fieldName:user.name,selectedOptions:!(),title:User),(exclude:!f,existsSelected:!f,fieldName:host.name,selectedOptions:!(),title:Host))#?_g=(filters:!(),refreshInterval:(pause:!t,value:60000),time:(from:now-15m,to:now))&_a=(columns:!(),filters:!(),index:security-solution-default,interval:auto,query:(language:kuery,query:''),sort:!(!('@timestamp',desc))):286 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.`
`bootstrap.js:42 ^ A single error about an inline script not firing due to content security policy is expected!`
**Screen recording:**
https://github.com/elastic/kibana/assets/35679937/7f46795f-8583-4ef0-bff5-205cd8356163
**Provide logs and/or server output (if relevant):**
None
**Any additional context (logs, chat logs, magical formulas, etc.):**
None
@MadameSheema FYI
|
True
|
[Security Solution] Alerts Dashboard showing error message in Serverless QA Environment - **Describe the bug:**
Alerts dashboard is showing error message `An error occurred` in Status, Severity, User, and Host fields and alerts are not displaying
**Kibana/Elasticsearch Stack version:**
QA serverless environment git hash:
` qa: "243142d9c19d"`
**Server OS version:**
Serverless Project
**Browser and Browser OS versions:**
Firefox
Chrome
**Elastic Endpoint version:**
N/A
**Original install method (e.g. download page, yum, from source, etc.):**
**Functional Area (e.g. Endpoint management, timelines, resolver, etc.):**
**Steps to reproduce:**
1. Login to Serverless QA environment (see [#security-serverless slack ](https://elastic.slack.com/archives/C04ML7QEF5L/p1692822322791639)channel for details on how to login)
2. Navigate to `Security` -> `Alerts`
**Current behavior:**
Alerts dashboard is showing error message in fields and no alerts are displaying
**Expected behavior:**
Alerts should display and there should be no error messages in the alert dashboard
**Screenshots (if relevant):**
<img width="2543" alt="Screenshot 2023-08-31 at 11 55 53 AM" src="https://github.com/elastic/kibana/assets/35679937/50de1738-fb1c-4878-bee0-7dc8dbb1e35d">
**Errors in browser console (if relevant):**
`/app/security/alerts?sourcerer=(default:(id:security-solution-default,selectedPatterns:!(%27logs-*%27)))&timerange=(global:(linkTo:!(timeline),timerange:(from:%272023-08-24T16:14:42.830Z%27,fromStr:now-1w,kind:relative,to:%272023-08-31T16:14:42.830Z%27,toStr:now)),timeline:(linkTo:!(global),timerange:(from:%272023-08-24T16:14:42.830Z%27,fromStr:now-1w,kind:relative,to:%272023-08-31T16:14:42.830Z%27,toStr:now)))&timeline=(activeTab:query,graphEventId:%27%27,isOpen:!f)&pageFilters=!((exclude:!f,existsSelected:!f,fieldName:kibana.alert.workflow_status,selectedOptions:!(open),title:Status),(exclude:!f,existsSelected:!f,fieldName:kibana.alert.severity,selectedOptions:!(),title:Severity),(exclude:!f,existsSelected:!f,fieldName:user.name,selectedOptions:!(),title:User),(exclude:!f,existsSelected:!f,fieldName:host.name,selectedOptions:!(),title:Host))#?_g=(filters:!(),refreshInterval:(pause:!t,value:60000),time:(from:now-15m,to:now))&_a=(columns:!(),filters:!(),index:security-solution-default,interval:auto,query:(language:kuery,query:''),sort:!(!('@timestamp',desc))):286 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.`
`bootstrap.js:42 ^ A single error about an inline script not firing due to content security policy is expected!`
**Screen recording:**
https://github.com/elastic/kibana/assets/35679937/7f46795f-8583-4ef0-bff5-205cd8356163
**Provide logs and/or server output (if relevant):**
None
**Any additional context (logs, chat logs, magical formulas, etc.):**
None
@MadameSheema FYI
|
non_process
|
alerts dashboard showing error message in serverless qa environment describe the bug alerts dashboard is showing error message an error occurred in status severity user and host fields and alerts are not displaying kibana elasticsearch stack version qa serverless environment git hash qa server os version serverless project browser and browser os versions firefox chrome elastic endpoint version n a original install method e g download page yum from source etc functional area e g endpoint management timelines resolver etc steps to reproduce login to serverless qa environment see for details on how to login navigate to security alerts current behavior alerts dashboard is showing error message in fields and no alerts are displaying expected behavior alerts should display and there should not be no error messages in the alert dashboard screenshots if relevant img width alt screenshot at am src errors in browser console if relevant app security alerts sourcerer default id security solution default selectedpatterns timerange global linkto timeline timerange from fromstr now kind relative to tostr now timeline linkto global timerange from fromstr now kind relative to tostr now timeline activetab query grapheventid isopen f pagefilters exclude f existsselected f fieldname kibana alert workflow status selectedoptions open title status exclude f existsselected f fieldname kibana alert severity selectedoptions title severity exclude f existsselected f fieldname user name selectedoptions title user exclude f existsselected f fieldname host name selectedoptions title host g filters refreshinterval pause t value time from now to now a columns filters index security solution default interval auto query language kuery query sort timestamp desc refused to execute inline script because it violates the following content security policy directive script src self either the unsafe inline keyword a hash yezujxioi or a nonce nonce is required to enable inline execution bootstrap js a single error about an inline script not firing due to content security policy is expected screen recording provide logs and or server output if relevant none any additional context logs chat logs magical formulas etc none madamesheema fyi
| 0
|
102,989
| 22,161,747,555
|
IssuesEvent
|
2022-06-04 15:52:08
|
IDU-IFP/ifp-iOS-study
|
https://api.github.com/repos/IDU-IFP/ifp-iOS-study
|
opened
|
Connecting GitHub and Xcode
|
Xcode things-to-know
|
## Connecting GitHub and Xcode
Go to the [blog post I wrote myself](https://velog.io/@blooper20/iOS-%EB%A7%9B%EC%A7%91%EC%86%8C%EA%B0%9C-Xcode%EC%99%80-Github%EC%97%B0%EA%B2%B0) about connecting Xcode, and follow along slowly, one step at a time.
## Assignment
Like the post on my blog above, write your own `step-by-step summary` and post the link to your write-up.
If there is anything you don't understand or that doesn't work, leave it here as well and I will answer!!
If you have nothing to ask, a short one-line impression is enough.
> Example
My write-up: [tech blog link](https://velog.io/@blooper20/iOS-%EB%A7%9B%EC%A7%91%EC%86%8C%EA%B0%9C-Xcode%EC%99%80-Github%EC%97%B0%EA%B2%B0)
Question: I don't understand the Personal access tokens part.
Impression: Thanks for the useful information; I didn't know that commits, pushes, and pulls can all be managed from inside Xcode, etc.
|
1.0
|
Connecting GitHub and Xcode - ## Connecting GitHub and Xcode
Go to the [blog post I wrote myself](https://velog.io/@blooper20/iOS-%EB%A7%9B%EC%A7%91%EC%86%8C%EA%B0%9C-Xcode%EC%99%80-Github%EC%97%B0%EA%B2%B0) about connecting Xcode, and follow along slowly, one step at a time.
## Assignment
Like the post on my blog above, write your own `step-by-step summary` and post the link to your write-up.
If there is anything you don't understand or that doesn't work, leave it here as well and I will answer!!
If you have nothing to ask, a short one-line impression is enough.
> Example
My write-up: [tech blog link](https://velog.io/@blooper20/iOS-%EB%A7%9B%EC%A7%91%EC%86%8C%EA%B0%9C-Xcode%EC%99%80-Github%EC%97%B0%EA%B2%B0)
Question: I don't understand the Personal access tokens part.
Impression: Thanks for the useful information; I didn't know that commits, pushes, and pulls can all be managed from inside Xcode, etc.
|
non_process
|
connecting github and xcode connecting github and xcode go to the blog post i wrote myself about connecting xcode and follow along slowly one step at a time assignment like the post on my blog above write your own step by step summary and post the link to your write up if there is anything you don t understand or that doesn t work leave it and i will answer if you have nothing to ask a short one line impression is enough example my write up tech blog link question i don t understand the personal access tokens part impression thanks for the useful information i didn t know that commits pushes and pulls can all be managed from inside xcode etc
| 0
|
12,131
| 14,740,919,955
|
IssuesEvent
|
2021-01-07 09:49:40
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
reopened
|
[iOS] Studies list > Progress bar is not updated for study completion % in studies list
|
Bug P1 Process: Dev Process: Reopened iOS
|
**Steps:**
1. Login to iOS mobile
2. Enroll into a study
3. Complete some activities
4. Navigate to studies list
5. Observe the progress bar
**Actual**: Progress bar is not updated for study completion % in studies list
**Expected**: Progress bar should be updated for study completion % in studies list

Completion % updating in Dashboard:

|
2.0
|
[iOS] Studies list > Progress bar is not updated for study completion % in studies list - **Steps:**
1. Login to iOS mobile
2. Enroll into a study
3. Complete some activities
4. Navigate to studies list
5. Observe the progress bar
**Actual**: Progress bar is not updated for study completion % in studies list
**Expected**: Progress bar should be updated for study completion % in studies list

Completion % updating in Dashboard:

|
process
|
studies list progress bar is not updated for study completion in studies list steps login to ios mobile enroll into a study complete some activities navigate to studies list observe the progress bar actual progress bar is not updated for study completion in studies list expected progress bar is should be updated for study completion in studies list completion updating in dashboard
| 1
|
20,662
| 27,334,557,975
|
IssuesEvent
|
2023-02-26 02:45:55
|
mehta-lab/microDL
|
https://api.github.com/repos/mehta-lab/microDL
|
closed
|
Add feedback in preprocess.py
|
preprocessing
|
Feature request: Add some sort of feedback (like a progressbar) while preprocess.py is running. This would be more user-friendly, as the script takes a while.
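A minimal sketch of the requested feedback, assuming `tqdm` as the progress-bar library; the function and variable names are placeholders, not microDL's actual API.
```python
# Minimal sketch of per-image progress feedback using tqdm (an assumption);
# preprocess_one() is a placeholder for the real preprocessing step.
from tqdm import tqdm


def preprocess_one(image_path: str) -> None:
    """Placeholder for the real per-image preprocessing step."""


def preprocess_all(image_paths) -> None:
    for path in tqdm(image_paths, desc="Preprocessing", unit="img"):
        preprocess_one(path)


if __name__ == "__main__":
    preprocess_all([f"img_{i:03d}.tif" for i in range(250)])
```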
|
1.0
|
Add feedback in preprocess.py - Feature request: Add some sort of feedback (like a progressbar) while preprocess.py is running. This would be more user-friendly, as the script takes a while.
|
process
|
add feedback in preprocess py feature request add some sort of feedback like a progressbar while preprocess py is running this would be more userfriendly as the script takes a while
| 1
|
77,517
| 27,033,520,408
|
IssuesEvent
|
2023-02-12 13:56:07
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Changing a user's power level can result in your own power level being lowered
|
T-Defect
|
### Steps to reproduce
I tried to lower a problematic user's power level as follows:
1. click on their profile picture
2. select "Custom level" in the "Power level" dropdown
3. unrelated bug: try to enter -1, but the input does not accept `-`, so I either entered 0 or left the field blank
4. press enter
5. an `m.room.power_levels` event is sent, adding an entry for the user with level 0, and changing the entry for the own user from 100 to 50
For future reference: event `$eOIZHbZLm0g4eCm1WGMQ47qrLfkweOzhmkAyY2D9FQ0` in `!H55QZaxP7zgghhx48z:computer.surgery`.
### Outcome
#### What did you expect?
Changing another user's power level does not demote me from Admin to Moderator.
#### What happened instead?
I think the latest change to `m.room.power_levels` before this happened may have been my promotion from `50` to `100`, so maybe that was undone for some reason?
### Operating system
Arch Linux
### Application version
Element version: 1.11.20, Olm version: 3.2.12
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
Yes
|
1.0
|
Changing a user's power level can result in your own power level being lowered - ### Steps to reproduce
I tried to lower a problematic user's power level as follows:
1. click on their profile picture
2. select "Custom level" in the "Power level" dropdown
3. unrelated bug: try to enter -1, but the input does not accept `-`, so I either entered 0 or left the field blank
4. press enter
5. an `m.room.power_levels` event is sent, adding an entry for the user with level 0, and changing the entry for the own user from 100 to 50
For future reference: event `$eOIZHbZLm0g4eCm1WGMQ47qrLfkweOzhmkAyY2D9FQ0` in `!H55QZaxP7zgghhx48z:computer.surgery`.
### Outcome
#### What did you expect?
Changing another user's power level does not demote me from Admin to Moderator.
#### What happened instead?
I think the latest change to `m.room.power_levels` before this happened may have been my promotion from `50` to `100`, so maybe that was undone for some reason?
### Operating system
Arch Linux
### Application version
Element version: 1.11.20, Olm version: 3.2.12
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
Yes
|
non_process
|
changing a user s power level can result in your own power level being lowered steps to reproduce i tried to lower a problematic user s power level as follows click on their profile picture select custom level in the power level dropdown unrelated bug try to enter but the input does not accept so i either entered or left the field blank press enter an m room power levels event is sent adding an entry for the user with level and changing the entry for the own user from to for future reference event in computer surgery outcome what did you expect changing another user s power level does not demote me from admin to moderator what happened instead i think the latest change to m room power levels before this happened may have been my promotion from to so maybe that was undone for some reason operating system arch linux application version element version olm version how did you install the app no response homeserver no response will you send logs yes
| 0
|
19,781
| 26,162,827,037
|
IssuesEvent
|
2022-12-31 21:15:19
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Viewing each visit's timestamp?
|
duplicate question log-processing
|
Hello, is there any option to enable a timestamp next to each visitor's visit?
If not, could you point me to which part of the parser/display files need be modified in order to add this functionality?
|
1.0
|
Viewing each visit's timestamp? - Hello, is there any option to enable a timestamp next to each visitor's visit?
If not, could you point me to which part of the parser/display files need be modified in order to add this functionality?
|
process
|
viewing each visit s timestamp hello is there any option to enable a timestamp next to each visitor s visit if not could you point me to which part of the parser display files need be modified in order to add this functionality
| 1
|
120,696
| 10,131,963,815
|
IssuesEvent
|
2019-08-01 20:56:58
|
mono/mono
|
https://api.github.com/repos/mono/mono
|
closed
|
[netcore] System.Drawing.Drawing2D.Tests.ColorBlendTests.Ctor_LargeCount_ThrowsOutOfMemoryException
|
area-netcore: CoreLib epic: CoreFX tests
|
The test currently fails with:
```
System.Drawing.Drawing2D.Tests.ColorBlendTests.Ctor_LargeCount_ThrowsOutOfMemoryException [FAIL]
Assert.Throws() Failure
Expected: typeof(System.OutOfMemoryException)
Actual: (No exception was thrown)
Stack Trace:
at System.Drawing.Drawing2D.Tests.ColorBlendTests.Ctor_LargeCount_ThrowsOutOfMemoryException()
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
```
Determine why an OOM isn't being thrown and make the test pass.
|
1.0
|
[netcore] System.Drawing.Drawing2D.Tests.ColorBlendTests.Ctor_LargeCount_ThrowsOutOfMemoryException - The test currently fails with:
```
System.Drawing.Drawing2D.Tests.ColorBlendTests.Ctor_LargeCount_ThrowsOutOfMemoryException [FAIL]
Assert.Throws() Failure
Expected: typeof(System.OutOfMemoryException)
Actual: (No exception was thrown)
Stack Trace:
at System.Drawing.Drawing2D.Tests.ColorBlendTests.Ctor_LargeCount_ThrowsOutOfMemoryException()
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
```
Determine why an OOM isn't being thrown and make the test pass.
|
non_process
|
system drawing tests colorblendtests ctor largecount throwsoutofmemoryexception the test currently fails with system drawing tests colorblendtests ctor largecount throwsoutofmemoryexception assert throws failure expected typeof system outofmemoryexception actual no exception was thrown stack trace at system drawing tests colorblendtests ctor largecount throwsoutofmemoryexception at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters cultureinfo culture determine why an oom isn t being thrown and make the test pass
| 0
|
124,803
| 4,933,586,907
|
IssuesEvent
|
2016-11-28 16:44:02
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Delete API: Rows in instance_label_map label, mount, service_event, instance_link, external_handler_external_handler_process_map tables are never marked as removed
|
kind/bug priority/-1 status/blocker status/reopened
|
**Rancher Version:** master @ 4e3bc12ec8fd66bd910efbaffacd2a43cb408637
**Docker Version:** 1.12.3
**OS and where are the hosts located? (cloud, bare metal, etc):** ubuntu 14.04
**Setup Details: (single node rancher vs. HA rancher, internal DB vs. external DB)** single node
**Environment Type: (Cattle/Kubernetes/Swarm/Mesos)** cattle
**Steps to Reproduce:**
1. Launch a stack
2. Delete a stack
**Results:**
The instance_label_map label, mount, service_event, instance_link, external_handler_external_handler_process_map rows aren't marked as removed, preventing deletion of instance rows. Database cleanup logic doesn't delete these rows and must skip other removed rows being referenced by foreign key constraints.
**Expected:**
The rows are marked as removed and the database cleanup logic deletes the rows after the configured cutoff time.
|
1.0
|
Delete API: Rows in instance_label_map label, mount, service_event, instance_link, external_handler_external_handler_process_map tables are never marked as removed - **Rancher Version:** master @ 4e3bc12ec8fd66bd910efbaffacd2a43cb408637
**Docker Version:** 1.12.3
**OS and where are the hosts located? (cloud, bare metal, etc):** ubuntu 14.04
**Setup Details: (single node rancher vs. HA rancher, internal DB vs. external DB)** single node
**Environment Type: (Cattle/Kubernetes/Swarm/Mesos)** cattle
**Steps to Reproduce:**
1. Launch a stack
2. Delete a stack
**Results:**
The instance_label_map label, mount, service_event, instance_link, external_handler_external_handler_process_map rows aren't marked as removed, preventing deletion of instance rows. Database cleanup logic doesn't delete these rows and must skip other removed rows being referenced by foreign key constraints.
**Expected:**
The rows are marked as removed and the database cleanup logic deletes the rows after the configured cutoff time.
|
non_process
|
delete api rows in instance label map label mount service event instance link external handler external handler process map tables are never marked as removed rancher version master docker version os and where are the hosts located cloud bare metal etc ubuntu setup details single node rancher vs ha rancher internal db vs external db single node environment type cattle kubernetes swarm mesos cattle steps to reproduce launch a stack delete a stack results the instance label map label mount service event instance link external handler external handler process map rows aren t marked as removed preventing deletion of instance rows database cleanup logic doesn t delete these rows and must skip other removed rows being referenced by foreign key constraints expected the rows are marked as removed and the database cleanup logic deletes the rows after the configured cutoff time
| 0
|
25
| 2,496,383,210
|
IssuesEvent
|
2015-01-06 19:08:52
|
GsDevKit/gsApplicationTools
|
https://api.github.com/repos/GsDevKit/gsApplicationTools
|
closed
|
transaction semantics needs some work for snapping off continuations
|
in process
|
When an error occurs and we snap off a continuation for the object log, we need to make sure that we do a commit without interfering with the server's transaction semantics ... to complicate things a bit more, when we `pass` a non-resumable exception we might want to do a commit no matter what ... it's possible that we can depend upon the commit that we perform when the gem server exits?
|
1.0
|
transaction semantics needs some work for snapping off continuations - When an error occurs and we snap off a continuation for the object log, we need to make sure that we do a commit without interfering with the server's transaction semantics ... to complicate things a bit more, when we `pass` a non-resumable exception we might want to do a commit no matter what ... it's possible that we can depend upon the commit that we perform when the gem server exits?
|
process
|
transaction semantics needs some work for snapping off continuations when an error occurs and we snap off a continuation for the object log we need to make sure that we do a commit without interfering with the server s transaction semantics to complicate things a bit more when we pass a non resumable exception we might want to do a commit no matter what it s possible that we can depend upon the commit that we perform when the gem server exits
| 1
|
539,545
| 15,790,590,708
|
IssuesEvent
|
2021-04-02 01:56:56
|
codeforsanjose/gov-agenda-notifier
|
https://api.github.com/repos/codeforsanjose/gov-agenda-notifier
|
closed
|
Community Meeting view: Disable 'Subscribe' action for agenda items of some statuses
|
Medium Priority frontend
|
Not sure about the exact requirements but I've noticed that `Subscribe` action is disabled for `In progress` meeting items [in the mocks](https://xd.adobe.com/view/0958884f-1dff-4d93-bbfe-b8c863210438-5c0b/screen/a04a517d-a579-486c-9700-1abcb7f86f7d). I think it makes sense. I also think that `Subscribe` should be disabled for `Completed` items as well.
|
1.0
|
Community Meeting view: Disable 'Subscribe' action for agenda items of some statuses - Not sure about the exact requirements but I've noticed that `Subscribe` action is disabled for `In progress` meeting items [in the mocks](https://xd.adobe.com/view/0958884f-1dff-4d93-bbfe-b8c863210438-5c0b/screen/a04a517d-a579-486c-9700-1abcb7f86f7d). I think it makes sense. I also think that `Subscribe` should be disabled for `Completed` items as well.
|
non_process
|
community meeting view disable subscribe action for agenda items of some statuses not sure about the exact requirements but i ve noticed that subscribe action is disabled for in progress meeting items i think it makes sense i also think that subscribe should be disabled for completed items as well
| 0
|
607
| 3,075,506,070
|
IssuesEvent
|
2015-08-20 14:01:49
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
opened
|
Update version to 0.0.2 in README.md & new create GitHub release
|
release process
|
Update version to 0.0.2 in README.md & new create GitHub release after Maven Sync.
Archives are already uploaded to Maven Central. We're waiting for Maven Sync.
Check changes here: https://github.com/pwittchen/ReactiveNetwork/compare/v0.0.1...master
Proposed release notes:
- Improved WiFi Access Points scanning
- Updated documentation
|
1.0
|
Update version to 0.0.2 in README.md & new create GitHub release - Update version to 0.0.2 in README.md & new create GitHub release after Maven Sync.
Archives are already uploaded to Maven Central. We're waiting for Maven Sync.
Check changes here: https://github.com/pwittchen/ReactiveNetwork/compare/v0.0.1...master
Proposed release notes:
- Improved WiFi Access Points scanning
- Updated documentation
|
process
|
update version to in readme md new create github release update version to in readme md new create github release after maven sync archives are already uploaded to maven central we re waiting for maven sync check changes here proposed release notes improved wifi access points scanning updated documentation
| 1
|
4,494
| 7,346,219,648
|
IssuesEvent
|
2018-03-07 19:58:14
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Azure CLI Docs are v1 specific
|
app-service cxp doc-bug in-process triaged
|
It appears that the Azure cli docs are Azure CLI v1 specific as Azure CLI v2 uses az instead of azure as the keyword. It also doesn't have an az site. The docs here should be specific by Azure CLI version.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 46653d6e-d057-e698-ef94-32d48e12d39f
* Version Independent ID: 74beaf3c-60f9-006d-110a-fbe2348d4797
* [Content](https://docs.microsoft.com/en-us/azure/app-service/web-sites-enable-diagnostic-log)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/web-sites-enable-diagnostic-log.md)
* Service: app-service
|
1.0
|
Azure CLI Docs are v1 specific - It appears that the Azure cli docs are Azure CLI v1 specific as Azure CLI v2 uses az instead of azure as the keyword. It also doesn't have an az site. The docs here should be specific by Azure CLI version.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 46653d6e-d057-e698-ef94-32d48e12d39f
* Version Independent ID: 74beaf3c-60f9-006d-110a-fbe2348d4797
* [Content](https://docs.microsoft.com/en-us/azure/app-service/web-sites-enable-diagnostic-log)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/web-sites-enable-diagnostic-log.md)
* Service: app-service
|
process
|
azure cli docs are specific it appears that the azure cli docs are azure cli specific as azure cli uses az instead of azure as the keyword it also doesn t have an az site the docs here should be specific by azure cli version document details โ do not edit this section it is required for docs microsoft com โ github issue linking id version independent id service app service
| 1
|
21,997
| 30,496,042,571
|
IssuesEvent
|
2023-07-18 10:53:33
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
hpcflow-new2 0.2.0a64 has 2 GuardDog issues
|
guarddog exec-base64 silent-process-execution
|
https://pypi.org/project/hpcflow-new2
https://inspector.pypi.io/project/hpcflow-new2
```{
"dependency": "hpcflow-new2",
"version": "0.2.0a64",
"result": {
"issues": 2,
"errors": {},
"results": {
"exec-base64": [
{
"location": "hpcflow_new2-0.2.0a64/hpcflow/sdk/submission/jobscript.py:998",
"code": " init_proc = subprocess.Popen(\n args=args,\n cwd=str(self.workflow.path),\n creationflags=subprocess.CREATE_NO_WINDOW,\n )",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
],
"silent-process-execution": [
{
"location": "hpcflow_new2-0.2.0a64/hpcflow/sdk/helper/helper.py:112",
"code": " proc = subprocess.Popen(\n args=args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **kwargs,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpip6d_sur/hpcflow-new2"
}
}```
|
1.0
|
hpcflow-new2 0.2.0a64 has 2 GuardDog issues - https://pypi.org/project/hpcflow-new2
https://inspector.pypi.io/project/hpcflow-new2
```{
"dependency": "hpcflow-new2",
"version": "0.2.0a64",
"result": {
"issues": 2,
"errors": {},
"results": {
"exec-base64": [
{
"location": "hpcflow_new2-0.2.0a64/hpcflow/sdk/submission/jobscript.py:998",
"code": " init_proc = subprocess.Popen(\n args=args,\n cwd=str(self.workflow.path),\n creationflags=subprocess.CREATE_NO_WINDOW,\n )",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
],
"silent-process-execution": [
{
"location": "hpcflow_new2-0.2.0a64/hpcflow/sdk/helper/helper.py:112",
"code": " proc = subprocess.Popen(\n args=args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **kwargs,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpip6d_sur/hpcflow-new2"
}
}```
|
process
|
hpcflow has guarddog issues dependency hpcflow version result issues errors results exec location hpcflow hpcflow sdk submission jobscript py code init proc subprocess popen n args args n cwd str self workflow path n creationflags subprocess create no window n message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n silent process execution location hpcflow hpcflow sdk helper helper py code proc subprocess popen n args args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n kwargs n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp sur hpcflow
| 1
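The GuardDog report above flags an exec-base64 heuristic hit and a silent-process-execution pattern in the package's subprocess calls. Purely as an illustration of the second finding (not GuardDog's actual rule implementation, and the command is a placeholder), a sketch of the flagged shape next to a variant that keeps the child's output visible:

```python
import subprocess

args = ["echo", "hello"]  # placeholder command for the sketch

# Flagged shape: stdin, stdout and stderr are all discarded, so the external
# binary runs with no visible trace of what it did.
silent = subprocess.Popen(
    args,
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
silent.wait()

# Non-silent variant: output is captured so it can be logged or inspected.
visible = subprocess.run(args, capture_output=True, text=True, check=False)
print(visible.stdout.strip())
```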
|
14,390
| 17,403,912,677
|
IssuesEvent
|
2021-08-03 01:14:31
|
googleapis/python-tpu
|
https://api.github.com/repos/googleapis/python-tpu
|
closed
|
Release as GA
|
api: tpu type: process
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface. See [release history](https://github.com/googleapis/python-tpu/releases).
- [x] Server API is GA. The [release notes](https://cloud.google.com/tpu/docs/release-notes) don't mention GA, however drift lists the API as GA.
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
Release as GA - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface. See [release history](https://github.com/googleapis/python-tpu/releases).
- [x] Server API is GA. The [release notes](https://cloud.google.com/tpu/docs/release-notes) don't mention GA, however drift lists the API as GA.
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
release as ga required days elapsed since last beta release with new api surface see server api is ga the don t mention ga however drift lists the api as ga package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
359,238
| 10,667,225,793
|
IssuesEvent
|
2019-10-19 10:42:05
|
clojure-emacs/cider
|
https://api.github.com/repos/clojure-emacs/cider
|
closed
|
Red squiggly in source buffer is gone with Clojure 1.10
|
bug high priority
|
## Expected behavior
Whenever I do `C-c C-c` with a form that has an error, I get a red squiggly in my source buffer indicating where the error is.
## Actual behavior
I’m not getting the red squiggly, but I get a rather friendly message in the `*cider-repl..*` buffer telling me that I have an error somewhere in my source file
## Steps to reproduce the problem
Create a project with Clojure 1.10, jack-in, create a form with an error in it, evaluate it with `C-c C-c`. You will not have a red squiggly under the form with error, but you will get a message in the `*cider-repl ...*` buffer
## Environment & Version information
Any recent version of Cider, Clojure 1.10.
## Additional information
The messages I get in the repl are: for 1.10
```
Syntax error compiling at (core.clj:10:3).
Unable to resolve symbol: somthing in this context
```
Whereas for 1.9.0 it’s
```
CompilerException java.lang.RuntimeException: Unable to resolve symbol: somthing in this context, compiling:(/Users/erik/Documents/github.com/refactor-nrepl-ns-kw-bug/src/foo/core.clj:10:3)
```
So a good starting point would be to look at this regex here:
https://github.com/clojure-emacs/cider/blob/master/cider-eval.el#L300
|
1.0
|
Red squiggly in source buffer is gone with Clojure 1.10 - ## Expected behavior
Whenever I do `C-c C-c` with a form that has an error, I get a red squiggly in my source buffer indicating where the error is.
## Actual behavior
I’m not getting the red squiggly, but I get a rather friendly message in the `*cider-repl..*` buffer telling me that I have an error somewhere in my source file
## Steps to reproduce the problem
Create a project with Clojure 1.10, jack-in, create a form with an error in it, evaluate it with `C-c C-c`. You will not have a red squiggly under the form with error, but you will get a message in the `*cider-repl ...*` buffer
## Environment & Version information
Any recent version of Cider, Clojure 1.10.
## Additional information
The messages I get in the repl are: for 1.10
```
Syntax error compiling at (core.clj:10:3).
Unable to resolve symbol: somthing in this context
```
Whereas for 1.9.0 it’s
```
CompilerException java.lang.RuntimeException: Unable to resolve symbol: somthing in this context, compiling:(/Users/erik/Documents/github.com/refactor-nrepl-ns-kw-bug/src/foo/core.clj:10:3)
```
So a good starting point would be to look at this regex here:
https://github.com/clojure-emacs/cider/blob/master/cider-eval.el#L300
|
non_process
|
red squiggly in source buffer is gone with clojure expected behavior whenever i do c c c c with a form that has an error i get a red squiggly in my source buffer indicating where the error is actual behavior iโm not getting the red squiggly but i get a rather friendly message in the cider repl buffer telling me that i have an error somewhere in my source file steps to reproduce the problem create a project with clojure jack in create a form with an error in it evaluate it with c c c c you will not have a red squiggly under the form with error but you will get a message in the cider repl buffer environment version information any recent version of cider clojure additional information the messages i get in the repl are for syntax error compiling at core clj unable to resolve symbol somthing in this context wheras for itโs compilerexception java lang runtimeexception unable to resolve symbol somthing in this context compiling users erik documents github com refactor nrepl ns kw bug src foo core clj so a good starting point would be to look at this regex here
| 0
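The cider report above boils down to the compiler-error format changing between Clojure 1.9 and 1.10, so the existing regex in cider-eval.el no longer matches. The real fix would be an Emacs Lisp regex; the sketch below uses Python only to illustrate one pattern that extracts file, line and column from both message shapes quoted in the report (the shortened path is illustrative):

```python
import re

# 1.9-style and 1.10-style messages, shapes taken from the report above.
old_style = ("CompilerException java.lang.RuntimeException: Unable to resolve "
             "symbol: somthing in this context, compiling:(/tmp/foo/core.clj:10:3)")
new_style = "Syntax error compiling at (core.clj:10:3)."

pattern = re.compile(
    r"(?:compiling:\(|compiling at \()"   # 1.9 uses "compiling:(", 1.10 "compiling at ("
    r"(?P<file>[^():]+)"                  # path portion, up to the first colon
    r":(?P<line>\d+):(?P<col>\d+)\)"      # line and column, then the closing paren
)

for msg in (old_style, new_style):
    m = pattern.search(msg)
    assert m is not None
    print(m.group("file"), m.group("line"), m.group("col"))
```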
|
5,228
| 8,029,877,991
|
IssuesEvent
|
2018-07-27 17:34:45
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Flaky error-reporting system test
|
api: error reporting flaky testing type: process
|
See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/4182
```python
Traceback (most recent call last):
File "/var/code/gcp/error_reporting/tests/system.py", line 123, in test_report_exception
error_count = wrapped_get_count(class_name, Config.CLIENT)
File "/var/code/gcp/.nox/sys-2-7/lib/python2.7/site-packages/test_utils/retry.py", line 155, in wrapped_function
raise BackoffFailed()
BackoffFailed
```
|
1.0
|
Flaky error-reporting system test - See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/4182
```python
Traceback (most recent call last):
File "/var/code/gcp/error_reporting/tests/system.py", line 123, in test_report_exception
error_count = wrapped_get_count(class_name, Config.CLIENT)
File "/var/code/gcp/.nox/sys-2-7/lib/python2.7/site-packages/test_utils/retry.py", line 155, in wrapped_function
raise BackoffFailed()
BackoffFailed
```
|
process
|
flaky error reporting system test see python traceback most recent call last file var code gcp error reporting tests system py line in test report exception error count wrapped get count class name config client file var code gcp nox sys lib site packages test utils retry py line in wrapped function raise backofffailed backofffailed
| 1
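The flaky system test above fails once the test_utils.retry helper exhausts its backoff attempts and raises BackoffFailed. For reference only (this is not the project's actual helper; names and the toy usage are illustrative), a minimal sketch of a retry-with-backoff wrapper of the same general shape:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


class BackoffFailed(Exception):
    """Raised when every retry attempt has been used up."""


def retry_with_backoff(
    func: Callable[[], T],
    accept: Callable[[T], bool],
    attempts: int = 5,
    base_delay: float = 1.0,
) -> T:
    # Call func until accept() is satisfied, doubling the delay between tries.
    delay = base_delay
    for _ in range(attempts):
        result = func()
        if accept(result):
            return result
        time.sleep(delay)
        delay *= 2
    raise BackoffFailed()


# Toy usage: wait until a fake error count becomes positive.
state = {"calls": 0}

def fake_get_count() -> int:
    state["calls"] += 1
    return state["calls"] - 2  # becomes positive on the third call

print(retry_with_backoff(fake_get_count, lambda n: n > 0))
```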
|
11,820
| 14,634,443,420
|
IssuesEvent
|
2020-12-24 05:29:00
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Incorrect script processing in the case of repeating top-level properties in destructuring of nested objects
|
AREA: server SYSTEM: script processing health-monitor
|
Found in Health Monitor results (https://www.instagram.com/).
Original script (./static/bundles/es6/Consumer.js/b75e6b6ebcdc.js):
```js
e.getPromptButtonRenderPropsFromScreenPayload = function(t) {
const {response: {prompt_button: n}, response: {primary_button: o}} = t
, l = n || o;
if (!l)
return null;
const {action_type: u, title: c, url: s} = l;
return null == u ? null : {
actionType: u,
label: (null === c || void 0 === c ? void 0 : c.text) || r(d[0]).DEFAULT_PROMPT_BUTTON_LABEL,
href: s,
canSubmitOnClick: !s
}
}
```
Test (Uncaught SyntaxError: Identifier '_hh$temp0$foo' has already been declared):
```js
it('repeating top-level properties in destructuring of nested objects', () => {
testProcessing([
{
src: 'const {foo: {bar: var1}, foo: {baz: var2}} = o',
expected: '' // const _hh$temp0 = o, _hh$temp0$foo = _hh$temp0.foo, var1 = _hh$temp0$foo.bar, _hh$temp0$foo = _hh$temp0.foo, var2 = _hh$temp0$foo.baz
},
{
src: 't = function (o) {' +
' const {foo: {bar: v1}, foo: {baz: v2}} = o' +
'}',
expected: '' // t = function (o) { const _hh$temp0=o, _hh$temp0$foo=_hh$temp0.foo, v1 = _hh$temp0$foo.bar, _hh$temp0$foo = _hh$temp0.foo, v2=_hh$temp0$foo.baz }
}
]);
});
```
|
1.0
|
Incorrect script processing in the case of repeating top-level properties in destructuring of nested objects - Found in Health Monitor results (https://www.instagram.com/).
Original script (./static/bundles/es6/Consumer.js/b75e6b6ebcdc.js):
```js
e.getPromptButtonRenderPropsFromScreenPayload = function(t) {
const {response: {prompt_button: n}, response: {primary_button: o}} = t
, l = n || o;
if (!l)
return null;
const {action_type: u, title: c, url: s} = l;
return null == u ? null : {
actionType: u,
label: (null === c || void 0 === c ? void 0 : c.text) || r(d[0]).DEFAULT_PROMPT_BUTTON_LABEL,
href: s,
canSubmitOnClick: !s
}
}
```
Test (Uncaught SyntaxError: Identifier '_hh$temp0$foo' has already been declared):
```js
it('repeating top-level properties in destructuring of nested objects', () => {
testProcessing([
{
src: 'const {foo: {bar: var1}, foo: {baz: var2}} = o',
expected: '' // const _hh$temp0 = o, _hh$temp0$foo = _hh$temp0.foo, var1 = _hh$temp0$foo.bar, _hh$temp0$foo = _hh$temp0.foo, var2 = _hh$temp0$foo.baz
},
{
src: 't = function (o) {' +
' const {foo: {bar: v1}, foo: {baz: v2}} = o' +
'}',
expected: '' // t = function (o) { const _hh$temp0=o, _hh$temp0$foo=_hh$temp0.foo, v1 = _hh$temp0$foo.bar, _hh$temp0$foo = _hh$temp0.foo, v2=_hh$temp0$foo.baz }
}
]);
});
```
|
process
|
incorrect script processing in the case of repeating top level properties in destructuring of nested objects found in health monitor results originnal script static bundles consumer js js js e getpromptbuttonrenderpropsfromscreenpayload function t const response prompt button n response primary button o t l n o if l return null const action type u title c url s l return null u null actiontype u label null c void c void c text r d default prompt button label href s cansubmitonclick s test uncaught syntaxerror identifier hh foo has already been declared js it repeating top level properties in destructuring of nested objects testprocessing src const foo bar foo baz o expected const hh o hh foo hh foo hh foo bar hh foo hh foo hh foo baz src t function o const foo bar foo baz o expected t function o const hh o hh foo hh foo hh foo bar hh foo hh foo hh foo baz
| 1
|
18,949
| 24,908,397,239
|
IssuesEvent
|
2022-10-29 14:59:16
|
tokio-rs/tokio
|
https://api.github.com/repos/tokio-rs/tokio
|
closed
|
Create a tokio::process::Child in a new process group
|
A-tokio M-process C-feature-request
|
**Is your feature request related to a problem? Please describe.**
I want to create a child process and later be able to kill this child with all of it's sub- and sub-sub- children on unix-style operating systems.
**Describe the solution you'd like**
Something along the lines of:
`.set_process_group(0)`
**Describe alternatives you've considered**
I'm currently using this code to implement it:
```rust
unsafe {
cmd.pre_exec(|| {
// create a new process group
let pid = nix::unistd::getpid();
if let Err(err) = nix::unistd::setpgid(pid, Pid::from_raw(0)) {
warn!("Failed to create new process group: {:#?}", err);
}
Ok(())
});
}
```
`setpgid` needs to be called before `exec` is called after `fork`.
**Additional context**
https://www.gnu.org/software/libc/manual/html_node/Process-Group-Functions.html
|
1.0
|
Create a tokio::process::Child in a new process group - **Is your feature request related to a problem? Please describe.**
I want to create a child process and later be able to kill this child with all of it's sub- and sub-sub- children on unix-style operating systems.
**Describe the solution you'd like**
Something along the lines of:
`.set_process_group(0)`
**Describe alternatives you've considered**
I'm currently using this code to implement it:
```rust
unsafe {
cmd.pre_exec(|| {
// create a new process group
let pid = nix::unistd::getpid();
if let Err(err) = nix::unistd::setpgid(pid, Pid::from_raw(0)) {
warn!("Failed to create new process group: {:#?}", err);
}
Ok(())
});
}
```
`setpgid` needs to be called before `exec` is called after `fork`.
**Additional context**
https://www.gnu.org/software/libc/manual/html_node/Process-Group-Functions.html
|
process
|
create a tokio process child in a new process group is your feature request related to a problem please describe i want to create a child process and later be able to kill this child with all of it s sub and sub sub children on unix style operating systems describe the solution you d like something along the lines of set process group describe alternatives you ve considered i m currently using this code to implement it rust unsafe cmd pre exec create a new process group let pid nix unistd getpid if let err err nix unistd setpgid pid pid from raw warn failed to create new process group err ok setpgid needs to be called before exec is called after fork additional context
| 1
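The tokio request above wants the child placed in its own process group so the whole subtree can be killed at once; the quoted workaround calls setpgid from pre_exec. A minimal sketch of the same idea with Python's standard library (POSIX-only; the command is a placeholder):

```python
import os
import signal
import subprocess

# start_new_session=True runs setsid() in the child, making it the leader of a
# new session and therefore of a new process group (similar to setpgid after fork).
proc = subprocess.Popen(["sleep", "60"], start_new_session=True)

# Signalling the group reaches the child plus any sub- and sub-sub-children.
pgid = os.getpgid(proc.pid)
os.killpg(pgid, signal.SIGTERM)
proc.wait()
```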
|
13,341
| 15,801,356,784
|
IssuesEvent
|
2021-04-03 04:27:26
|
ooi-data/RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered
|
https://api.github.com/repos/ooi-data/RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered
|
opened
|
๐ Processing failed: PermissionError
|
process
|
## Overview
`PermissionError` found in `processing_task` task during run ended on 2021-04-03T04:27:26.019877.
## Details
Flow name: `RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered`
Task name: `processing_task`
Error type: `PermissionError`
Error message: The difference between the request time and the current time is too large.
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the PutObject operation: The difference between the request time and the current time is too large.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 305, in finalize_zarr
array_plan.execute()
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/rechunker/api.py", line 76, in execute
self._executor.execute_plan(self._plan, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/rechunker/executors/dask.py", line 24, in execute_plan
return plan.compute(**kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/base.py", line 283, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/base.py", line 565, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/threaded.py", line 76, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 487, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 317, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 222, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/core.py", line 121, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/array/core.py", line 3984, in store_chunk
return load_store_chunk(x, out, index, lock, return_stored, False)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/array/core.py", line 3971, in load_store_chunk
out[index] = x
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1211, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1306, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1597, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1669, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1861, in _chunk_setitems
self.chunk_store.setitems(values)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/mapping.py", line 111, in setitems
self.fs.pipe(values)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 121, in wrapper
return maybe_sync(func, self, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 224, in _pipe
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 753, in _pipe_file
return await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
PermissionError: The difference between the request time and the current time is too large.
```
</details>
|
1.0
|
๐ Processing failed: PermissionError - ## Overview
`PermissionError` found in `processing_task` task during run ended on 2021-04-03T04:27:26.019877.
## Details
Flow name: `RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered`
Task name: `processing_task`
Error type: `PermissionError`
Error message: The difference between the request time and the current time is too large.
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the PutObject operation: The difference between the request time and the current time is too large.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 305, in finalize_zarr
array_plan.execute()
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/rechunker/api.py", line 76, in execute
self._executor.execute_plan(self._plan, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/rechunker/executors/dask.py", line 24, in execute_plan
return plan.compute(**kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/base.py", line 283, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/base.py", line 565, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/threaded.py", line 76, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 487, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 317, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 222, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/core.py", line 121, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/array/core.py", line 3984, in store_chunk
return load_store_chunk(x, out, index, lock, return_stored, False)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/array/core.py", line 3971, in load_store_chunk
out[index] = x
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1211, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1306, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1597, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1669, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1861, in _chunk_setitems
self.chunk_store.setitems(values)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/mapping.py", line 111, in setitems
self.fs.pipe(values)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 121, in wrapper
return maybe_sync(func, self, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 224, in _pipe
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 753, in _pipe_file
return await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
PermissionError: The difference between the request time and the current time is too large.
```
</details>
|
process
|
๐ processing failed permissionerror overview permissionerror found in processing task task during run ended on details flow name recovered wfp dpc flnturtd instrument recovered task name processing task error type permissionerror error message the difference between the request time and the current time is too large traceback traceback most recent call last file srv conda envs notebook lib site packages core py line in call return await method additional kwargs file srv conda envs notebook lib site packages aiobotocore client py line in make api call raise error class parsed response operation name botocore exceptions clienterror an error occurred requesttimetooskewed when calling the putobject operation the difference between the request time and the current time is too large the above exception was the direct cause of the following exception traceback most recent call last file usr share miniconda envs harvester lib site packages ooi harvester processor pipeline py line in processing task file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr array plan execute file srv conda envs notebook lib site packages rechunker api py line in execute self executor execute plan self plan kwargs file srv conda envs notebook lib site packages rechunker executors dask py line in execute plan return plan compute kwargs file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in store chunk return load store chunk x out index lock return stored false file srv conda envs notebook lib site packages dask array core py line in load store chunk out x file srv conda envs notebook lib site packages zarr core py line in setitem self set basic selection selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection return self set basic selection nd selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection nd self set selection indexer value fields fields file srv conda envs notebook lib site packages zarr core py line in set selection self chunk setitems lchunk coords lchunk selection chunk values file srv conda envs notebook lib site packages zarr core py line in chunk setitems self chunk store setitems values file srv conda envs notebook lib site packages fsspec mapping py line in setitems self fs pipe values file srv conda envs notebook lib site packages fsspec asyn py line in wrapper return maybe sync func self args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in maybe sync return sync loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise exc with traceback tb file srv conda envs notebook lib site packages fsspec asyn py 
line in f result await future file srv conda envs notebook lib site packages fsspec asyn py line in pipe await asyncio gather file srv conda envs notebook lib site packages core py line in pipe file return await self call file srv conda envs notebook lib site packages core py line in call raise translate boto error err from err permissionerror the difference between the request time and the current time is too large
| 1
|
22,517
| 31,567,520,236
|
IssuesEvent
|
2023-09-04 00:50:07
|
tdwg/hc
|
https://api.github.com/repos/tdwg/hc
|
opened
|
New Term - geospatialScopeAreaInSquareKilometers
|
Term - add normative Process - under public review Class - Event
|
## New term
* Submitter: Humboldt Extension Task Group
* Efficacy Justification (why is this term necessary?): Part of a package of terms in support of biological inventory data.
* Demand Justification (name at least two organizations that independently need this term): The Humboldt Extension Task Group proposing this term consists of numerous organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: None
Proposed attributes of the new term:
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): geospatialScopeAreaInSquareKilometers
* Term label (English, not normative): Geospatial Scope Area In Square Kilometers
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Event
* Definition of the term (normative): Total area of the geospatial scope of the dwc:Event in square kilometers.
* Usage comments (recommendations regarding content, etc., not normative): Geospatial scope refers to the dwc:Event location reported using the terms organized in Darwin Core under dcterms:Location. This area is always greater than or equal to the totalAreaSampledInSquareKilometers.
* Examples (not normative): 25
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
New Term - geospatialScopeAreaInSquareKilometers - ## New term
* Submitter: Humboldt Extension Task Group
* Efficacy Justification (why is this term necessary?): Part of a package of terms in support of biological inventory data.
* Demand Justification (name at least two organizations that independently need this term): The Humboldt Extension Task Group proposing this term consists of numerous organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: None
Proposed attributes of the new term:
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): geospatialScopeAreaInSquareKilometers
* Term label (English, not normative): Geospatial Scope Area In Square Kilometers
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Event
* Definition of the term (normative): Total area of the geospatial scope of the dwc:Event in square kilometers.
* Usage comments (recommendations regarding content, etc., not normative): Geospatial scope refers to the dwc:Event location reported using the terms organized in Darwin Core under dcterms:Location. This area is always greater than or equal to the totalAreaSampledInSquareKilometers.
* Examples (not normative): 25
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
new term geospatialscopeareainsquarekilometers new term submitter humboldt extension task group efficacy justification why is this term necessary part of a package of terms in support of biological inventory data demand justification name at least two organizations that independently need this term the humboldt extension task group proposing this term consists of numerous organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version none proposed attributes of the new term term name in lowercamelcase for properties uppercamelcase for classes geospatialscopeareainsquarekilometers term label english not normative geospatial scope area in square kilometers organized in class e g occurrence event location taxon event definition of the term normative total area of the geospatial scope of the dwc event in square kilometers usage comments recommendations regarding content etc not normative geospatial scope refers to the dwc event location reported using the terms organized in darwin core under dcterms location this area is always greater than or equal to the totalareasampledinsquarekilometers examples not normative refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
387,246
| 11,458,760,920
|
IssuesEvent
|
2020-02-07 04:44:52
|
servicemesher/istio-official-translation
|
https://api.github.com/repos/servicemesher/istio-official-translation
|
closed
|
/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
|
finished lang/zh priority/P0 sync/update version/1.5
|
Source File: [/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md](https://github.com/istio/istio.io/tree/master/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md)
Diff:
~~~diff
diff --git a/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md b/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
index 90618bdb7..bd20a00ed 100644
--- a/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
+++ b/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
@@ -18,7 +18,7 @@ proceed to [setting up your local computer](/docs/examples/microservices-istio/s
You can use the [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart) or the
[IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-getting-started).
-1. Connect to your cluster and create an environment variable to store the name
+1. Create an environment variable to store the name
of a namespace that you will use when you run the tutorial commands.
You can use any name, for example `tutorial`.
@@ -38,7 +38,7 @@ proceed to [setting up your local computer](/docs/examples/microservices-istio/s
simultaneously by multiple participants.
{{< /tip >}}
-1. [Install Istio](/docs/setup/) with strict mutual TLS enabled.
+1. [Install Istio](/docs/setup/).
1. [Enable Envoy's access logging](/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging).
@@ -76,7 +76,7 @@ proceed to [setting up your local computer](/docs/examples/microservices-istio/s
- path: /
backend:
serviceName: tracing
- servicePort: 80
+ servicePort: 9411
- host: my-istio-logs-database.io
http:
paths:
@@ -236,8 +236,8 @@ proceed to [setting up your local computer](/docs/examples/microservices-istio/s
You will need this file later in the tutorial.
If you are an instructor, send the generated configuration files to each
- participant who should copy it to their local computer.
+ participant. The participants must copy their configuration file to their local computer.
-Congratulations, you configured your cluster for the tutorials!
+Congratulations, you configured your cluster for the tutorial!
You are ready to [setup a local computer](/docs/examples/microservices-istio/setup-local-computer).
~~~
|
1.0
|
/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md - Source File: [/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md](https://github.com/istio/istio.io/tree/master/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md)
Diff:
~~~diff
diff --git a/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md b/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
index 90618bdb7..bd20a00ed 100644
--- a/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
+++ b/content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
@@ -18,7 +18,7 @@ proceed to [setting up your local computer](/docs/examples/microservices-istio/s
You can use the [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart) or the
[IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-getting-started).
-1. Connect to your cluster and create an environment variable to store the name
+1. Create an environment variable to store the name
of a namespace that you will use when you run the tutorial commands.
You can use any name, for example `tutorial`.
@@ -38,7 +38,7 @@ proceed to [setting up your local computer](/docs/examples/microservices-istio/s
simultaneously by multiple participants.
{{< /tip >}}
-1. [Install Istio](/docs/setup/) with strict mutual TLS enabled.
+1. [Install Istio](/docs/setup/).
1. [Enable Envoy's access logging](/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging).
@@ -76,7 +76,7 @@ proceed to [setting up your local computer](/docs/examples/microservices-istio/s
- path: /
backend:
serviceName: tracing
- servicePort: 80
+ servicePort: 9411
- host: my-istio-logs-database.io
http:
paths:
@@ -236,8 +236,8 @@ proceed to [setting up your local computer](/docs/examples/microservices-istio/s
You will need this file later in the tutorial.
If you are an instructor, send the generated configuration files to each
- participant who should copy it to their local computer.
+ participant. The participants must copy their configuration file to their local computer.
-Congratulations, you configured your cluster for the tutorials!
+Congratulations, you configured your cluster for the tutorial!
You are ready to [setup a local computer](/docs/examples/microservices-istio/setup-local-computer).
~~~
|
non_process
|
docs examples microservices istio setup kubernetes cluster index md source file diff diff diff git a content en docs examples microservices istio setup kubernetes cluster index md b content en docs examples microservices istio setup kubernetes cluster index md index a content en docs examples microservices istio setup kubernetes cluster index md b content en docs examples microservices istio setup kubernetes cluster index md proceed to docs examples microservices istio s you can use the or the connect to your cluster and create an environment variable to store the name create an environment variable to store the name of a namespace that you will use when you run the tutorial commands you can use any name for example tutorial proceed to docs examples microservices istio s simultaneously by multiple participants docs setup with strict mutual tls enabled docs setup docs tasks observability logs access log enable envoy s access logging proceed to docs examples microservices istio s path backend servicename tracing serviceport serviceport host my istio logs database io http paths proceed to docs examples microservices istio s you will need this file later in the tutorial if you are an instructor send the generated configuration files to each participant who should copy it to their local computer participant the participants must copy their configuration file to their local computer congratulations you configured your cluster for the tutorials congratulations you configured your cluster for the tutorial you are ready to docs examples microservices istio setup local computer
| 0
|
132,824
| 28,364,383,221
|
IssuesEvent
|
2023-04-12 13:00:55
|
apache/daffodil-vscode
|
https://api.github.com/repos/apache/daffodil-vscode
|
closed
|
Nested element and sequence tags return incorrect element completion items
|
bug code completion
|
When an element tag is nested in another element tag, and the cursor is placed before the closing tag of the initial element tag, and control space is pressed, the incorrect items are suggested. The logic incorrectly assumes the closing tag for the nested (inner) element tag is the closing tag for the outer element. The same situation occurs with nested sequence tags.
|
1.0
|
Nested element and sequence tags return incorrect element completion items - When an element tag is nested in another element tag, and the cursor is placed before the closing tag of the initial element tag, and control space is pressed, the incorrect items are suggested. The logic incorrectly assumes the closing tag for the nested (inner) element tag is the closing tag for the outer element. The same situation occurs with nested sequence tags.
|
non_process
|
nested element and sequence tags return incorrect element completion items when an element tag is nested in another element tag and the cursor is placed before the closing tag of the initial element tag and control space is press the incorrect items are suggested the logic in correctly assumes the closing tag for the nested inner element tag is the closing tag for the outer element the same situation occurs with nested sequence tags
| 0
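The defect above amounts to pairing the outer element with the first closing tag encountered instead of tracking nesting. As a language-neutral illustration only (the actual fix belongs in the extension's completion code, and the tiny tokenizer below is an assumption of this sketch), stack-based matching that always pairs a closing tag with the innermost open element:

```python
import re

# Toy tokenizer: only <element ...>/<sequence ...> open tags and their closing
# tags are recognised, which is enough to show the nesting logic.
TAG = re.compile(r"<(/?)(element|sequence)\b[^>]*>")


def innermost_open_tag(source: str, offset: int) -> str | None:
    """Name of the innermost tag still open at the cursor offset, if any."""
    stack: list[str] = []
    for m in TAG.finditer(source, 0, offset):
        closing, name = m.group(1), m.group(2)
        if closing:
            if stack and stack[-1] == name:
                stack.pop()  # a closing tag only closes the innermost match
        else:
            stack.append(name)
    return stack[-1] if stack else None


xml = "<element name='outer'><element name='inner'></element>CURSOR</element>"
# The inner </element> closes only the inner element; the outer one is still open.
print(innermost_open_tag(xml, xml.index("CURSOR")))  # -> element
```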
|