Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
339,023 | 30,337,363,979 | IssuesEvent | 2023-07-11 10:24:57 | GSM-MSG/Hi-v2-BackEnd | https://api.github.com/repos/GSM-MSG/Hi-v2-BackEnd | opened | 대표자 위임 usecase testcode | ✅ Test | ### Describe
대표자 위임 유즈케이스 테스트코드를 작성합니다.
### Additional
_No response_ | 1.0 | 대표자 위임 usecase testcode - ### Describe
대표자 위임 유즈케이스 테스트코드를 작성합니다.
### Additional
_No response_ | test | 대표자 위임 usecase testcode describe 대표자 위임 유즈케이스 테스트코드를 작성합니다 additional no response | 1 |
48,200 | 5,949,065,981 | IssuesEvent | 2017-05-26 13:21:11 | pixelhumain/co2 | https://api.github.com/repos/pixelhumain/co2 | closed | Création description courte - compteur ne diminue pas | to test | Création d'une organisation (entreprise dans mon cas). Le compteur de caractère de la description courte ne diminue pas.

| 1.0 | Création description courte - compteur ne diminue pas - Création d'une organisation (entreprise dans mon cas). Le compteur de caractère de la description courte ne diminue pas.

| test | création description courte compteur ne diminue pas création d une organisation entreprise dans mon cas le compteur de caractère de la description courte ne diminue pas | 1 |
774,913 | 27,215,002,831 | IssuesEvent | 2023-02-20 20:36:57 | ascheid/itsg33-pbmm-issue-gen | https://api.github.com/repos/ascheid/itsg33-pbmm-issue-gen | closed | SA-4 ACQUISITION PROCESS | Priority: P3 | (A) The organization includes the following requirements, descriptions, and criteria, explicitly or by reference, in the acquisition contract for the information system, system component, or information system service in accordance with applicable GC legislation and TBS policies, directives and standards, and organizational mission/business needs:
(a) Security functional requirements;
(b) Security strength requirements;
(c) Security assurance requirements;
(d) Security-related documentation requirements;
(e) Requirements for protecting security-related documentation;
(f) Description of the information system development environment and environment in which the system is intended to operate; and
(g) Acceptance criteria.
(AA) The organization includes security-related documentation, requirements and/or specifications, explicitly or by reference, in information system acquisition contracts based on an assessment of risk and in accordance with the TBS Security and Contracting Management Standard [Reference 25].
(BB) The organization includes the development and evaluation-related requirements and/or specifications, explicitly or by reference, in information system acquisition contracts based on an assessment of risk and in accordance with applicable GC legislation and TBS policies, directives and standards. | 1.0 | SA-4 ACQUISITION PROCESS - (A) The organization includes the following requirements, descriptions, and criteria, explicitly or by reference, in the acquisition contract for the information system, system component, or information system service in accordance with applicable GC legislation and TBS policies, directives and standards, and organizational mission/business needs:
(a) Security functional requirements;
(b) Security strength requirements;
(c) Security assurance requirements;
(d) Security-related documentation requirements;
(e) Requirements for protecting security-related documentation;
(f) Description of the information system development environment and environment in which the system is intended to operate; and
(g) Acceptance criteria.
(AA) The organization includes security-related documentation, requirements and/or specifications, explicitly or by reference, in information system acquisition contracts based on an assessment of risk and in accordance with the TBS Security and Contracting Management Standard [Reference 25].
(BB) The organization includes the development and evaluation-related requirements and/or specifications, explicitly or by reference, in information system acquisition contracts based on an assessment of risk and in accordance with applicable GC legislation and TBS policies, directives and standards. | non_test | sa acquisition process a the organization includes the following requirements descriptions and criteria explicitly or by reference in the acquisition contract for the information system system component or information system service in accordance with applicable gc legislation and tbs policies directives and standards and organizational mission business needs a security functional requirements b security strength requirements c security assurance requirements d security related documentation requirements e requirements for protecting security related documentation f description of the information system development environment and environment in which the system is intended to operate and g acceptance criteria aa the organization includes security related documentation requirements and or specifications explicitly or by reference in information system acquisition contracts based on an assessment of risk and in accordance with the tbs security and contracting management standard bb the organization includes the development and evaluation related requirements and or specifications explicitly or by reference in information system acquisition contracts based on an assessment of risk and in accordance with applicable gc legislation and tbs policies directives and standards | 0 |
138,865 | 11,220,387,331 | IssuesEvent | 2020-01-07 15:43:10 | appsody/appsody | https://api.github.com/repos/appsody/appsody | closed | Code coverage analysis for stack_create.go | testing | - [x] if len (args) <1
test stack create without argument and verify error
- [x] if config.Dryrun
test stack create in dry run mode enabled and verify output
- [x] if dry run in unzip
test unzip with dry run mode enabled and verify output
test unzip with illegal file path, illegal dest
- [x] if runtime.GOOS == "windows"
test unzip on windows | 1.0 | Code coverage analysis for stack_create.go - - [x] if len (args) <1
test stack create without argument and verify error
- [x] if config.Dryrun
test stack create in dry run mode enabled and verify output
- [x] if dry run in unzip
test unzip with dry run mode enabled and verify output
test unzip with illegal file path, illegal dest
- [x] if runtime.GOOS == "windows"
test unzip on windows | test | code coverage analysis for stack create go if len args test stack create without argument and verify error if config dryrun test stack create in dry run mode enabled and verify output if dry run in unzip test unzip with dry run mode enabled and verify output test unzip with illegal file path illegal dest if runtime goos windows test unzip on windows | 1 |
4,509 | 2,730,490,715 | IssuesEvent | 2015-04-16 15:09:09 | uscensusbureau/citysdk | https://api.github.com/repos/uscensusbureau/citysdk | closed | Defining a batch call by lat/long + geographic scope + geographic resolution (e.g., give me all the blocks within *this* county). | bus-4 test | As a developer, I want to define a scope boundary which would allow a user to batch request variables & geographies within by a lat/long.
E.g., "I want all the *block-groups*(geo resolution) within *this*(lat/long) *county*(geo scope)" | 1.0 | Defining a batch call by lat/long + geographic scope + geographic resolution (e.g., give me all the blocks within *this* county). - As a developer, I want to define a scope boundary which would allow a user to batch request variables & geographies within by a lat/long.
E.g., "I want all the *block-groups*(geo resolution) within *this*(lat/long) *county*(geo scope)" | test | defining a batch call by lat long geographic scope geographic resolution e g give me all the blocks within this county as a developer i want to define a scope boundary which would allow a user to batch request variables geographies within by a lat long e g i want all the block groups geo resolution within this lat long county geo scope | 1 |
176,031 | 13,624,262,429 | IssuesEvent | 2020-09-24 07:47:32 | WoWManiaUK/Redemption | https://api.github.com/repos/WoWManiaUK/Redemption | closed | [Item] Meteorite Crystal | Fix - Tester Confirmed | **Links:** http://www.wow-mania.com/armory?item=46051
**What is Happening:** If there is a beacon of light casted on someone and heals are done while the trinket is activated, the stacks will still build up only 1 by 1.
In addition, Holy Shock spell does not trigger stacks at all. I would assume this is because of the ability being heal/damage at the same time.
**What Should happen:** Casting heals while beacon is present should award 2 stacks each. Holy Shock should award stacks as well

One of many comments from https://www.wowhead.com/item=46051/meteorite-crystal#comments:id=772039 | 1.0 | [Item] Meteorite Crystal - **Links:** http://www.wow-mania.com/armory?item=46051
**What is Happening:** If there is a beacon of light casted on someone and heals are done while the trinket is activated, the stacks will still build up only 1 by 1.
In addition, Holy Shock spell does not trigger stacks at all. I would assume this is because of the ability being heal/damage at the same time.
**What Should happen:** Casting heals while beacon is present should award 2 stacks each. Holy Shock should award stacks as well

One of many comments from https://www.wowhead.com/item=46051/meteorite-crystal#comments:id=772039 | test | meteorite crystal links what is happening if there is a beacon of light casted on someone and heals are done while the trinket is activated the stacks will still build up only by in addition holy shock spell does not trigger stacks at all i would assume this is because of the ability being heal damage at the same time what should happen casting heals while beacon is present should award stacks each holy shock should award stacks as well one of many comments from | 1 |
57,991 | 6,564,757,667 | IssuesEvent | 2017-09-08 04:00:12 | USEPA/E-Enterprise-Portal | https://api.github.com/repos/USEPA/E-Enterprise-Portal | closed | Build BWI response modal dynamically from service | EE-1941 Ready To Test Sprint 36 - TBD Technical task | Read list of contaminants from xml and build form ~ 8hrs
Build response dynamically from service ~ 13hrs | 1.0 | Build BWI response modal dynamically from service - Read list of contaminants from xml and build form ~ 8hrs
Build response dynamically from service ~ 13hrs | test | build bwi response modal dynamically from service read list of contaminants from xml and build form build response dynamically from service | 1 |
97,545 | 8,659,598,511 | IssuesEvent | 2018-11-28 06:50:39 | shahkhan40/shantestrep | https://api.github.com/repos/shahkhan40/shantestrep | closed | testing FX841 : ApiV1ProjectsIdProjectChecksumsGetQueryParamPagesizeInvalidDatatype | testing FX841 | Project : testing FX841
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZWJmYmUyMDUtYzZkZC00N2FkLWJkYjgtZmE4YmQ5NTcxNmU5; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 28 Nov 2018 06:49:17 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/projects/OBMcxyZJ/project-checksums?pageSize=nZC5GW
Request :
Response :
{
"timestamp" : "2018-11-28T06:49:18.068+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/projects/OBMcxyZJ/project-checksums"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | 1.0 | testing FX841 : ApiV1ProjectsIdProjectChecksumsGetQueryParamPagesizeInvalidDatatype - Project : testing FX841
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZWJmYmUyMDUtYzZkZC00N2FkLWJkYjgtZmE4YmQ5NTcxNmU5; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 28 Nov 2018 06:49:17 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/projects/OBMcxyZJ/project-checksums?pageSize=nZC5GW
Request :
Response :
{
"timestamp" : "2018-11-28T06:49:18.068+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/projects/OBMcxyZJ/project-checksums"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | test | testing project testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api projects obmcxyzj project checksums logs assertion resolved to result assertion resolved to result fx bot | 1 |
134,687 | 10,927,054,063 | IssuesEvent | 2019-11-22 15:54:50 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] DatafeedJobsIT.testRealtime_GivenProcessIsKilled fails to stop process | :ml >test-failure | *Original comment by @dakrone:*
Seems like it failed to stop the process:
```
FAILURE 335s | DatafeedJobsIT.testRealtime_GivenProcessIsKilled <<< FAILURES!
> Throwable LINK REDACTED: java.lang.AssertionError:
> Expected: <stopped>
> but: was <started>
> at __randomizedtesting.SeedInfo.seed([F8EED5748A2F4CE7:AA558501F4BAB52D]:0)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.elasticsearch.xpack.ml.integration.DatafeedJobsIT.lambda$testRealtime_GivenProcessIsKilled$6(DatafeedJobsIT.java:235)
> at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:732)
> at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:706)
> at org.elasticsearch.xpack.ml.integration.DatafeedJobsIT.testRealtime_GivenProcessIsKilled(DatafeedJobsIT.java:232)
> at java.lang.Thread.run(Thread.java:748)
> Suppressed: java.lang.AssertionError:
```
I was not able to reproduce this
```
./gradlew :x-pack-elasticsearch:qa:ml-native-tests:integTestRunner -Dtests.seed=F8EED5748A2F4CE7 -Dtests.class=org.elasticsearch.xpack.ml.integration.DatafeedJobsIT -Dtests.method="testRealtime_GivenProcessIsKilled" -Dtests.security.manager=true -Dtests.locale=de-GR -Dtests.timezone=Pacific/Majuro
```
LINK REDACTED | 1.0 | [CI] DatafeedJobsIT.testRealtime_GivenProcessIsKilled fails to stop process - *Original comment by @dakrone:*
Seems like it failed to stop the process:
```
FAILURE 335s | DatafeedJobsIT.testRealtime_GivenProcessIsKilled <<< FAILURES!
> Throwable LINK REDACTED: java.lang.AssertionError:
> Expected: <stopped>
> but: was <started>
> at __randomizedtesting.SeedInfo.seed([F8EED5748A2F4CE7:AA558501F4BAB52D]:0)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.elasticsearch.xpack.ml.integration.DatafeedJobsIT.lambda$testRealtime_GivenProcessIsKilled$6(DatafeedJobsIT.java:235)
> at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:732)
> at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:706)
> at org.elasticsearch.xpack.ml.integration.DatafeedJobsIT.testRealtime_GivenProcessIsKilled(DatafeedJobsIT.java:232)
> at java.lang.Thread.run(Thread.java:748)
> Suppressed: java.lang.AssertionError:
```
I was not able to reproduce this
```
./gradlew :x-pack-elasticsearch:qa:ml-native-tests:integTestRunner -Dtests.seed=F8EED5748A2F4CE7 -Dtests.class=org.elasticsearch.xpack.ml.integration.DatafeedJobsIT -Dtests.method="testRealtime_GivenProcessIsKilled" -Dtests.security.manager=true -Dtests.locale=de-GR -Dtests.timezone=Pacific/Majuro
```
LINK REDACTED | test | datafeedjobsit testrealtime givenprocessiskilled fails to stop process original comment by dakrone seems like it failed to stop the process failure datafeedjobsit testrealtime givenprocessiskilled failures throwable link redacted java lang assertionerror expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch xpack ml integration datafeedjobsit lambda testrealtime givenprocessiskilled datafeedjobsit java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch xpack ml integration datafeedjobsit testrealtime givenprocessiskilled datafeedjobsit java at java lang thread run thread java suppressed java lang assertionerror i was not able to reproduce this gradlew x pack elasticsearch qa ml native tests integtestrunner dtests seed dtests class org elasticsearch xpack ml integration datafeedjobsit dtests method testrealtime givenprocessiskilled dtests security manager true dtests locale de gr dtests timezone pacific majuro link redacted | 1 |
98,874 | 30,208,812,705 | IssuesEvent | 2023-07-05 11:20:47 | audacity/audacity | https://api.github.com/repos/audacity/audacity | closed | Building VST3SDK with Conan and gcc 12.2.0 fails | bug Build / CI | ### Bug description
_No response_
### Steps to reproduce
1. Download the latest release of Audacity: https://github.com/audacity/audacity/releases/tag/Audacity-3.2.3
2. Try to build it with CMake and Conan.
### Expected behavior
The program should complie
### Actual behavior
Compilation fails when Conan tries to build vst3sdk. The error message is:
```
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp: In function 'Steinberg::int32 Steinberg::FUnknownPrivate::atomicAdd(Steinberg::int32&, Steinberg::int32)':
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp:91:51: error: 'atomic_int_least32_t' does not name a type
91 | return atomic_fetch_add (reinterpret_cast<atomic_int_least32_t*> (&var), d) + d;
| ^~~~~~~~~~~~~~~~~~~~
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp:91:71: error: expected '>' before '*' token
91 | return atomic_fetch_add (reinterpret_cast<atomic_int_least32_t*> (&var), d) + d;
| ^
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp:91:71: error: expected '(' before '*' token
91 | return atomic_fetch_add (reinterpret_cast<atomic_int_least32_t*> (&var), d) + d;
| ^
| (
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp:91:72: error: expected primary-expression before '>' token
91 | return atomic_fetch_add (reinterpret_cast<atomic_int_least32_t*> (&var), d) + d;
```
Based on [this thread on forums.steinberg.net](https://forums.steinberg.net/t/pluginterfaces-lib-compilation-error-win-10-vs-2022/768976/8), it seems like defining `SMTG_USE_STDATOMIC_H=OFF` is the solution, but I'm not familiar enough with audacity's build system to figure out where to put this to get it to compile on my system.
### Audacity Version
latest stable version (from audacityteam.org/download)
### Operating system
Linux
### Additional context
My system has gcc v12.2.0 which I think is relevant to this issue as noted in the forum thread linked above.
The specific configure arguments I used are:
```
-Daudacity_use_ffmpeg=loaded
-Daudacity_lib_preference=system
-DCMAKE_BUILD_TYPE=Release
-Daudacity_conan_enabled=On
-DCMAKE_INSTALL_PREFIX=/usr
``` | 1.0 | Building VST3SDK with Conan and gcc 12.2.0 fails - ### Bug description
_No response_
### Steps to reproduce
1. Download the latest release of Audacity: https://github.com/audacity/audacity/releases/tag/Audacity-3.2.3
2. Try to build it with CMake and Conan.
### Expected behavior
The program should complie
### Actual behavior
Compilation fails when Conan tries to build vst3sdk. The error message is:
```
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp: In function 'Steinberg::int32 Steinberg::FUnknownPrivate::atomicAdd(Steinberg::int32&, Steinberg::int32)':
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp:91:51: error: 'atomic_int_least32_t' does not name a type
91 | return atomic_fetch_add (reinterpret_cast<atomic_int_least32_t*> (&var), d) + d;
| ^~~~~~~~~~~~~~~~~~~~
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp:91:71: error: expected '>' before '*' token
91 | return atomic_fetch_add (reinterpret_cast<atomic_int_least32_t*> (&var), d) + d;
| ^
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp:91:71: error: expected '(' before '*' token
91 | return atomic_fetch_add (reinterpret_cast<atomic_int_least32_t*> (&var), d) + d;
| ^
| (
/tmp/.conan/data/vst3sdk/3.7.3/_/_/build/e7c1133d61ad7d5d0234a08d33a88035fe98bd68/vst3sdk/pluginterfaces/base/funknown.cpp:91:72: error: expected primary-expression before '>' token
91 | return atomic_fetch_add (reinterpret_cast<atomic_int_least32_t*> (&var), d) + d;
```
Based on [this thread on forums.steinberg.net](https://forums.steinberg.net/t/pluginterfaces-lib-compilation-error-win-10-vs-2022/768976/8), it seems like defining `SMTG_USE_STDATOMIC_H=OFF` is the solution, but I'm not familiar enough with audacity's build system to figure out where to put this to get it to compile on my system.
### Audacity Version
latest stable version (from audacityteam.org/download)
### Operating system
Linux
### Additional context
My system has gcc v12.2.0 which I think is relevant to this issue as noted in the forum thread linked above.
The specific configure arguments I used are:
```
-Daudacity_use_ffmpeg=loaded
-Daudacity_lib_preference=system
-DCMAKE_BUILD_TYPE=Release
-Daudacity_conan_enabled=On
-DCMAKE_INSTALL_PREFIX=/usr
``` | non_test | building with conan and gcc fails bug description no response steps to reproduce download the latest release of audacity try to build it with cmake and conan expected behavior the program should complie actual behavior compilation fails when conan tries to build the error message is tmp conan data build pluginterfaces base funknown cpp in function steinberg steinberg funknownprivate atomicadd steinberg steinberg tmp conan data build pluginterfaces base funknown cpp error atomic int t does not name a type return atomic fetch add reinterpret cast var d d tmp conan data build pluginterfaces base funknown cpp error expected before token return atomic fetch add reinterpret cast var d d tmp conan data build pluginterfaces base funknown cpp error expected before token return atomic fetch add reinterpret cast var d d tmp conan data build pluginterfaces base funknown cpp error expected primary expression before token return atomic fetch add reinterpret cast var d d based on it seems like defining smtg use stdatomic h off is the solution but i m not familiar enough with audacity s build system to figure out where to put this to get it to compile on my system audacity version latest stable version from audacityteam org download operating system linux additional context my system has gcc which i think is relevant to this issue as noted in the forum thread linked above the specific configure arguments i used are daudacity use ffmpeg loaded daudacity lib preference system dcmake build type release daudacity conan enabled on dcmake install prefix usr | 0 |
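Comparing each row's `body` with its `text` column suggests the cleaned column is derived by lowercasing, dropping URLs, digits and punctuation, and collapsing whitespace. The sketch below is reverse-engineered from the preview rows only and is a Latin-script approximation (the real pipeline evidently also preserves other scripts, as the Korean row shows); the function name is hypothetical.

```python
import re

def clean_issue_text(title: str, body: str) -> str:
    """Approximate the dataset's `text` column: lowercase, drop URLs,
    digits and punctuation, collapse whitespace. Reverse-engineered from
    the preview rows above; the exact pipeline may differ."""
    raw = f"{title} {body}".lower()
    # Drop URLs before stripping punctuation, so no fragments survive.
    raw = re.sub(r"https?://\S+", " ", raw)
    # Keep basic and accented Latin letters; replace everything else.
    raw = re.sub(r"[^a-zà-ÿ\s]", " ", raw)
    return re.sub(r"\s+", " ", raw).strip()

print(clean_issue_text(
    "Création description courte - compteur ne diminue pas",
    "Le compteur de caractère de la description courte ne diminue pas.",
))
```

Run against the French row above, this reproduces its `text` cell exactly, which is weak but direct evidence for the assumed cleaning convention.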
322,394 | 27,598,331,560 | IssuesEvent | 2023-03-09 08:20:09 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | opened | Fix jax_numpy_math.test_jax_numpy_log2 | JAX Frontend Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4369130149/jobs/7642569725" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4369130149/jobs/7642569725" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4369130149/jobs/7642569725" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4369130149/jobs/7642569725" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_math.py::test_jax_numpy_log2[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-08T23:15:11.5396484Z E jax._src.traceback_util.UnfilteredStackTrace: TypeError: log2() got some positional-only arguments passed as keyword arguments: 'x'
2023-03-08T23:15:11.5396835Z E
2023-03-08T23:15:11.5397122Z E The stack trace below excludes JAX-internal frames.
2023-03-08T23:15:11.5397447Z E The preceding is the original exception that occurred, unmodified.
2023-03-08T23:15:11.5397701Z E
2023-03-08T23:15:11.5397919Z E --------------------
2023-03-08T23:15:11.5401095Z E TypeError: log2() got some positional-only arguments passed as keyword arguments: 'x'
2023-03-08T23:15:11.5401426Z E Falsifying example: test_jax_numpy_log2(
2023-03-08T23:15:11.5401773Z E dtype_and_x=(['float16'], [array(-1., dtype=float16)]),
2023-03-08T23:15:11.5402069Z E test_flags=FrontendFunctionTestFlags(
2023-03-08T23:15:11.5403110Z E num_positional_args=0,
2023-03-08T23:15:11.5403336Z E with_out=False,
2023-03-08T23:15:11.5403547Z E inplace=False,
2023-03-08T23:15:11.5403764Z E as_variable=[False],
2023-03-08T23:15:11.5403988Z E native_arrays=[False],
2023-03-08T23:15:11.5404193Z E ),
2023-03-08T23:15:11.5404523Z E fn_tree='ivy.functional.frontends.jax.numpy.log2',
2023-03-08T23:15:11.5404828Z E on_device='cpu',
2023-03-08T23:15:11.5405067Z E frontend='jax',
2023-03-08T23:15:11.5405243Z E )
2023-03-08T23:15:11.5405411Z E
2023-03-08T23:15:11.5405902Z E You can reproduce this example by temporarily adding @reproduce_failure('6.68.2', b'AXicY2AAAkYGCADTAAAkAAM=') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_math.py::test_jax_numpy_log2[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-08T23:15:11.5396484Z E jax._src.traceback_util.UnfilteredStackTrace: TypeError: log2() got some positional-only arguments passed as keyword arguments: 'x'
2023-03-08T23:15:11.5396835Z E
2023-03-08T23:15:11.5397122Z E The stack trace below excludes JAX-internal frames.
2023-03-08T23:15:11.5397447Z E The preceding is the original exception that occurred, unmodified.
2023-03-08T23:15:11.5397701Z E
2023-03-08T23:15:11.5397919Z E --------------------
2023-03-08T23:15:11.5401095Z E TypeError: log2() got some positional-only arguments passed as keyword arguments: 'x'
2023-03-08T23:15:11.5401426Z E Falsifying example: test_jax_numpy_log2(
2023-03-08T23:15:11.5401773Z E dtype_and_x=(['float16'], [array(-1., dtype=float16)]),
2023-03-08T23:15:11.5402069Z E test_flags=FrontendFunctionTestFlags(
2023-03-08T23:15:11.5403110Z E num_positional_args=0,
2023-03-08T23:15:11.5403336Z E with_out=False,
2023-03-08T23:15:11.5403547Z E inplace=False,
2023-03-08T23:15:11.5403764Z E as_variable=[False],
2023-03-08T23:15:11.5403988Z E native_arrays=[False],
2023-03-08T23:15:11.5404193Z E ),
2023-03-08T23:15:11.5404523Z E fn_tree='ivy.functional.frontends.jax.numpy.log2',
2023-03-08T23:15:11.5404828Z E on_device='cpu',
2023-03-08T23:15:11.5405067Z E frontend='jax',
2023-03-08T23:15:11.5405243Z E )
2023-03-08T23:15:11.5405411Z E
2023-03-08T23:15:11.5405902Z E You can reproduce this example by temporarily adding @reproduce_failure('6.68.2', b'AXicY2AAAkYGCADTAAAkAAM=') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_math.py::test_jax_numpy_log2[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-08T23:15:11.5396484Z E jax._src.traceback_util.UnfilteredStackTrace: TypeError: log2() got some positional-only arguments passed as keyword arguments: 'x'
2023-03-08T23:15:11.5396835Z E
2023-03-08T23:15:11.5397122Z E The stack trace below excludes JAX-internal frames.
2023-03-08T23:15:11.5397447Z E The preceding is the original exception that occurred, unmodified.
2023-03-08T23:15:11.5397701Z E
2023-03-08T23:15:11.5397919Z E --------------------
2023-03-08T23:15:11.5401095Z E TypeError: log2() got some positional-only arguments passed as keyword arguments: 'x'
2023-03-08T23:15:11.5401426Z E Falsifying example: test_jax_numpy_log2(
2023-03-08T23:15:11.5401773Z E dtype_and_x=(['float16'], [array(-1., dtype=float16)]),
2023-03-08T23:15:11.5402069Z E test_flags=FrontendFunctionTestFlags(
2023-03-08T23:15:11.5403110Z E num_positional_args=0,
2023-03-08T23:15:11.5403336Z E with_out=False,
2023-03-08T23:15:11.5403547Z E inplace=False,
2023-03-08T23:15:11.5403764Z E as_variable=[False],
2023-03-08T23:15:11.5403988Z E native_arrays=[False],
2023-03-08T23:15:11.5404193Z E ),
2023-03-08T23:15:11.5404523Z E fn_tree='ivy.functional.frontends.jax.numpy.log2',
2023-03-08T23:15:11.5404828Z E on_device='cpu',
2023-03-08T23:15:11.5405067Z E frontend='jax',
2023-03-08T23:15:11.5405243Z E )
2023-03-08T23:15:11.5405411Z E
2023-03-08T23:15:11.5405902Z E You can reproduce this example by temporarily adding @reproduce_failure('6.68.2', b'AXicY2AAAkYGCADTAAAkAAM=') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_jax/test_jax_numpy_math.py::test_jax_numpy_log2[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-08T23:15:11.5396484Z E jax._src.traceback_util.UnfilteredStackTrace: TypeError: log2() got some positional-only arguments passed as keyword arguments: 'x'
2023-03-08T23:15:11.5396835Z E
2023-03-08T23:15:11.5397122Z E The stack trace below excludes JAX-internal frames.
2023-03-08T23:15:11.5397447Z E The preceding is the original exception that occurred, unmodified.
2023-03-08T23:15:11.5397701Z E
2023-03-08T23:15:11.5397919Z E --------------------
2023-03-08T23:15:11.5401095Z E TypeError: log2() got some positional-only arguments passed as keyword arguments: 'x'
2023-03-08T23:15:11.5401426Z E Falsifying example: test_jax_numpy_log2(
2023-03-08T23:15:11.5401773Z E dtype_and_x=(['float16'], [array(-1., dtype=float16)]),
2023-03-08T23:15:11.5402069Z E test_flags=FrontendFunctionTestFlags(
2023-03-08T23:15:11.5403110Z E num_positional_args=0,
2023-03-08T23:15:11.5403336Z E with_out=False,
2023-03-08T23:15:11.5403547Z E inplace=False,
2023-03-08T23:15:11.5403764Z E as_variable=[False],
2023-03-08T23:15:11.5403988Z E native_arrays=[False],
2023-03-08T23:15:11.5404193Z E ),
2023-03-08T23:15:11.5404523Z E fn_tree='ivy.functional.frontends.jax.numpy.log2',
2023-03-08T23:15:11.5404828Z E on_device='cpu',
2023-03-08T23:15:11.5405067Z E frontend='jax',
2023-03-08T23:15:11.5405243Z E )
2023-03-08T23:15:11.5405411Z E
2023-03-08T23:15:11.5405902Z E You can reproduce this example by temporarily adding @reproduce_failure('6.68.2', b'AXicY2AAAkYGCADTAAAkAAM=') as a decorator on your test case
</details>
Fix jax_numpy_math.test_jax_numpy_log2

| Backend | Status |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4369130149/jobs/7642569725" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>|
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4369130149/jobs/7642569725" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4369130149/jobs/7642569725" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4369130149/jobs/7642569725" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>|
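The `TypeError` in the log is Python's standard complaint when a positional-only parameter is passed by keyword: `jax.numpy.log2` declares `x` positional-only, while the test harness forwards it as `x=...`. A minimal sketch reproduces the same message — the `log2` below is a hypothetical stand-in, not ivy's or JAX's actual implementation:

```python
import math

# Hypothetical stand-in: the bare `/` makes every parameter before it
# positional-only, mirroring how jax.numpy.log2 is declared.
def log2(x, /):
    return math.log2(x)

print(log2(8.0))  # positional call is fine

try:
    log2(x=8.0)   # keyword call raises the same TypeError as in the CI log
except TypeError as err:
    print(err)    # log2() got some positional-only arguments passed as keyword arguments: 'x'
```

One likely direction for a fix, given this, is for the frontend test wrapper to forward such arguments positionally rather than by keyword.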
| test | fix jax numpy math test jax numpy tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test jax test jax numpy math py test jax numpy e jax src traceback util unfilteredstacktrace typeerror got some positional only arguments passed as keyword arguments x e e the stack trace below excludes jax internal frames e the preceding is the original exception that occurred unmodified e e e typeerror got some positional only arguments passed as keyword arguments x e falsifying example test jax numpy e dtype and x e test flags frontendfunctiontestflags e num positional args e with out false e inplace false e as variable e native arrays e e fn tree ivy functional frontends jax numpy e on device cpu e frontend jax e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case failed ivy tests test ivy test frontends test jax test jax numpy math py test jax numpy e jax src traceback util unfilteredstacktrace typeerror got some positional only arguments passed as keyword arguments x e e the stack trace below excludes jax internal frames e the preceding is the original exception that occurred unmodified e e e typeerror got some positional only arguments passed as keyword arguments x e falsifying example test jax numpy e dtype and x e test flags frontendfunctiontestflags e num positional args e with out false e inplace false e as variable e native arrays e e fn tree ivy functional frontends jax numpy e on device cpu e frontend jax e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case failed ivy tests test ivy test frontends test jax test jax numpy math py test jax numpy e jax src traceback util unfilteredstacktrace typeerror got some positional only arguments passed as keyword arguments x e e the stack trace below excludes jax internal frames e the preceding is the original exception that occurred unmodified e e e 
typeerror got some positional only arguments passed as keyword arguments x e falsifying example test jax numpy e dtype and x e test flags frontendfunctiontestflags e num positional args e with out false e inplace false e as variable e native arrays e e fn tree ivy functional frontends jax numpy e on device cpu e frontend jax e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case failed ivy tests test ivy test frontends test jax test jax numpy math py test jax numpy e jax src traceback util unfilteredstacktrace typeerror got some positional only arguments passed as keyword arguments x e e the stack trace below excludes jax internal frames e the preceding is the original exception that occurred unmodified e e e typeerror got some positional only arguments passed as keyword arguments x e falsifying example test jax numpy e dtype and x e test flags frontendfunctiontestflags e num positional args e with out false e inplace false e as variable e native arrays e e fn tree ivy functional frontends jax numpy e on device cpu e frontend jax e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case | 1 |
41,513 | 10,728,915,917 | IssuesEvent | 2019-10-28 14:44:46 | xamarin/xamarin-android | https://api.github.com/repos/xamarin/xamarin-android | opened | App builds hang when executing task LinkAssemblies if linking enabled | Area: App+Library Build | ### Steps to Reproduce
1. Build app with AndroidLinkMode=Full or AndroidLinkMode=SdkOnly
<!--
If you have a repro project, you may drag & drop the .zip/etc. onto the issue editor to attach it.
-->
### Expected Behavior
App builds successfully.
### Actual Behavior
Build hangs
### Version Information
Microsoft Visual Studio Enterprise 2019
Version 16.3.6
VisualStudio.16.Release/16.3.6+29418.71
Microsoft .NET Framework
Version 4.8.03752
Installed Version: Enterprise
Visual C++ 2019 00435-60000-00000-AA197
Microsoft Visual C++ 2019
ADL Tools Service Provider 1.0
This package contains services used by Data Lake tools
ASP.NET and Web Tools 2019 16.3.286.43615
ASP.NET and Web Tools 2019
ASP.NET Web Frameworks and Tools 2019 16.3.286.43615
For additional information, visit https://www.asp.net/
Azure App Service Tools v3.0.0 16.3.286.43615
Azure App Service Tools v3.0.0
Azure Data Lake Node 1.0
This package contains the Data Lake integration nodes for Server Explorer.
Azure Data Lake Tools for Visual Studio 2.4.2000.0
Microsoft Azure Data Lake Tools for Visual Studio
Azure Functions and Web Jobs Tools 16.3.286.43615
Azure Functions and Web Jobs Tools
Azure Logic Apps Tools for Visual Studio 1.0
Add-in for the Azure Resource Group project to support the Logic App Designer and template creation.
Azure Stream Analytics Tools for Visual Studio 2.4.2000.0
Microsoft Azure Stream Analytics Tools for Visual Studio
C# Tools 3.3.1-beta3-19461-02+2fd12c210e22f7d6245805c60340f6a34af6875b
C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Common Azure Tools 1.10
Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.
Cookiecutter 16.3.19252.1
Provides tools for finding, instantiating and customizing templates in cookiecutter format.
EditorConfig Language Service 1.17.260
Language service for .editorconfig files.
EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs.
Extensibility Message Bus 1.2.0 (d16-2@8b56e20)
Provides common messaging-based MEF services for loosely coupled Visual Studio extension components communication and integration.
Fabric.DiagnosticEvents 1.0
Fabric Diagnostic Events
File Icons 2.7
Adds icons for files that are not recognized by Solution Explorer
GitHub.VisualStudio 2.10.8.8132
A Visual Studio Extension that brings the GitHub Flow into Visual Studio.
IntelliCode Extension 1.0
IntelliCode Visual Studio Extension Detailed Info
Markdown Editor 1.12.236
A full featured Markdown editor with live preview and syntax highlighting. Supports GitHub flavored Markdown.
Microsoft Azure HDInsight Azure Node 2.4.2000.0
HDInsight Node under Azure Node
Microsoft Azure Hive Query Language Service 2.4.2000.0
Language service for Hive query
Microsoft Azure Service Fabric Tools for Visual Studio 16.0
Microsoft Azure Service Fabric Tools for Visual Studio
Microsoft Azure Stream Analytics Language Service 2.4.2000.0
Language service for Azure Stream Analytics
Microsoft Azure Stream Analytics Node 1.0
Azure Stream Analytics Node under Azure Node
Microsoft Azure Tools 2.9
Microsoft Azure Tools for Microsoft Visual Studio 0x10 - v2.9.20816.1
Microsoft Continuous Delivery Tools for Visual Studio 0.4
Simplifying the configuration of Azure DevOps pipelines from within the Visual Studio IDE.
Microsoft JVM Debugger 1.0
Provides support for connecting the Visual Studio debugger to JDWP compatible Java Virtual Machines
Microsoft Library Manager 2.0.83+gbc8a4b23ec
Install client-side libraries easily to any web project
Microsoft MI-Based Debugger 1.0
Provides support for connecting Visual Studio to MI compatible debuggers
Microsoft Visual C++ Wizards 1.0
Microsoft Visual C++ Wizards
Microsoft Visual Studio Tools for Containers 1.1
Develop, run, validate your ASP.NET Core applications in the target environment. F5 your application directly into a container with debugging, or CTRL + F5 to edit & refresh your app without having to rebuild the container.
Microsoft Visual Studio VC Package 1.0
Microsoft Visual Studio VC Package
Mono Debugging for Visual Studio 16.3.7 (9d260c5)
Support for debugging Mono processes with Visual Studio.
NuGet Package Manager 5.3.1
NuGet Package Manager in Visual Studio. For more information about NuGet, visit https://docs.nuget.org/
Open Command Line 2.4.226
2.4.226
PowerShell Pro Tools for Visual Studio 1.0
A set of tools for developing and debugging PowerShell scripts and modules in Visual Studio.
Productivity Power Tools 2017/2019 16.0
Installs the individual extensions of Productivity Power Tools 2017/2019
Project File Tools 1.0.1
Provides Intellisense and other tooling for XML based project files such as .csproj and .vbproj files.
ProjectServicesPackage Extension 1.0
ProjectServicesPackage Visual Studio Extension Detailed Info
Python 16.3.19252.1
Provides IntelliSense, projects, templates, debugging, interactive windows, and other support for Python developers.
Python - Conda support 16.3.19252.1
Conda support for Python projects.
Python - Django support 16.3.19252.1
Provides templates and integration for the Django web framework.
Python - IronPython support 16.3.19252.1
Provides templates and integration for IronPython-based projects.
Python - Profiling support 16.3.19252.1
Profiling support for Python projects.
Redgate SQL Prompt 9.5.20.11737
Write, format, and refactor SQL effortlessly
Snapshot Debugging Extension 1.0
Snapshot Debugging Visual Studio Extension Detailed Info
SQL Server Data Tools 16.0.61908.27190
Microsoft SQL Server Data Tools
Test Adapter for Boost.Test 1.0
Enables Visual Studio's testing tools with unit tests written for Boost.Test. The use terms and Third Party Notices are available in the extension installation directory.
Test Adapter for Google Test 1.0
Enables Visual Studio's testing tools with unit tests written for Google Test. The use terms and Third Party Notices are available in the extension installation directory.
ToolWindowHostedEditor 1.0
Hosting json editor into a tool window
TypeScript Tools 16.0.10821.2002
TypeScript Tools for Microsoft Visual Studio
Visual Basic Tools 3.3.1-beta3-19461-02+2fd12c210e22f7d6245805c60340f6a34af6875b
Visual Basic components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Visual C++ for Cross Platform Mobile Development (Android) 16.0.29230.54
Visual C++ for Cross Platform Mobile Development (Android)
Visual F# Tools 10.4 for F# 4.6 16.3.0-beta.19455.1+0422ff293bb2cc722fe5021b85ef50378a9af823
Microsoft Visual F# Tools 10.4 for F# 4.6
Visual Studio Code Debug Adapter Host Package 1.0
Interop layer for hosting Visual Studio Code debug adapters in Visual Studio
Visual Studio Tools for CMake 1.0
Visual Studio Tools for CMake
Visual Studio Tools for CMake 1.0
Visual Studio Tools for CMake
Visual Studio Tools for Containers 1.0
Visual Studio Tools for Containers
Visual Studio Tools for Kubernetes 1.0
Visual Studio Tools for Kubernetes
VisualStudio.Mac 1.0
Mac Extension for Visual Studio
Xamarin 16.3.0.277 (d16-3@c0fcab7)
Visual Studio extension to enable development for Xamarin.iOS and Xamarin.Android.
Xamarin Designer 16.3.0.246 (remotes/origin/d16-3@bd2f86892)
Visual Studio extension to enable Xamarin Designer tools in Visual Studio.
Xamarin Templates 16.3.565 (27e9746)
Templates for building iOS, Android, and Windows apps with Xamarin and Xamarin.Forms.
Xamarin.Android SDK 10.0.3.0 (d16-3/4d45b41)
Xamarin.Android Reference Assemblies and MSBuild support.
Mono: mono/mono/2019-06@5608fe0abb3
Java.Interop: xamarin/java.interop/d16-3@5836f58
LibZipSharp: grendello/LibZipSharp/d16-3@71f4a94
LibZip: nih-at/libzip/rel-1-5-1@b95cf3fd
ProGuard: xamarin/proguard/master@905836d
SQLite: xamarin/sqlite/3.27.1@8212a2d
Xamarin.Android Tools: xamarin/xamarin-android-tools/d16-3@cb41333
Xamarin.iOS and Xamarin.Mac SDK 13.4.0.2 (e37549b)
Xamarin.iOS and Xamarin.Mac Reference Assemblies and MSBuild support.
| 1.0 | App builds hang when executing task LinkAssemblies if linking enabled - ### Steps to Reproduce
1. Build app with AndroidLinkMode=Full or AndroidLinkMode=SdkOnly
<!--
If you have a repro project, you may drag & drop the .zip/etc. onto the issue editor to attach it.
-->
### Expected Behavior
App builds successfully.
### Actual Behavior
Build hangs
### Version Information
Microsoft Visual Studio Enterprise 2019
Version 16.3.6
VisualStudio.16.Release/16.3.6+29418.71
Microsoft .NET Framework
Version 4.8.03752
Installed Version: Enterprise
Visual C++ 2019 00435-60000-00000-AA197
Microsoft Visual C++ 2019
ADL Tools Service Provider 1.0
This package contains services used by Data Lake tools
ASP.NET and Web Tools 2019 16.3.286.43615
ASP.NET and Web Tools 2019
ASP.NET Web Frameworks and Tools 2019 16.3.286.43615
For additional information, visit https://www.asp.net/
Azure App Service Tools v3.0.0 16.3.286.43615
Azure App Service Tools v3.0.0
Azure Data Lake Node 1.0
This package contains the Data Lake integration nodes for Server Explorer.
Azure Data Lake Tools for Visual Studio 2.4.2000.0
Microsoft Azure Data Lake Tools for Visual Studio
Azure Functions and Web Jobs Tools 16.3.286.43615
Azure Functions and Web Jobs Tools
Azure Logic Apps Tools for Visual Studio 1.0
Add-in for the Azure Resource Group project to support the Logic App Designer and template creation.
Azure Stream Analytics Tools for Visual Studio 2.4.2000.0
Microsoft Azure Stream Analytics Tools for Visual Studio
C# Tools 3.3.1-beta3-19461-02+2fd12c210e22f7d6245805c60340f6a34af6875b
C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Common Azure Tools 1.10
Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.
Cookiecutter 16.3.19252.1
Provides tools for finding, instantiating and customizing templates in cookiecutter format.
EditorConfig Language Service 1.17.260
Language service for .editorconfig files.
EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs.
Extensibility Message Bus 1.2.0 (d16-2@8b56e20)
Provides common messaging-based MEF services for loosely coupled Visual Studio extension components communication and integration.
Fabric.DiagnosticEvents 1.0
Fabric Diagnostic Events
File Icons 2.7
Adds icons for files that are not recognized by Solution Explorer
GitHub.VisualStudio 2.10.8.8132
A Visual Studio Extension that brings the GitHub Flow into Visual Studio.
IntelliCode Extension 1.0
IntelliCode Visual Studio Extension Detailed Info
Markdown Editor 1.12.236
A full featured Markdown editor with live preview and syntax highlighting. Supports GitHub flavored Markdown.
Microsoft Azure HDInsight Azure Node 2.4.2000.0
HDInsight Node under Azure Node
Microsoft Azure Hive Query Language Service 2.4.2000.0
Language service for Hive query
Microsoft Azure Service Fabric Tools for Visual Studio 16.0
Microsoft Azure Service Fabric Tools for Visual Studio
Microsoft Azure Stream Analytics Language Service 2.4.2000.0
Language service for Azure Stream Analytics
Microsoft Azure Stream Analytics Node 1.0
Azure Stream Analytics Node under Azure Node
Microsoft Azure Tools 2.9
Microsoft Azure Tools for Microsoft Visual Studio 0x10 - v2.9.20816.1
Microsoft Continuous Delivery Tools for Visual Studio 0.4
Simplifying the configuration of Azure DevOps pipelines from within the Visual Studio IDE.
Microsoft JVM Debugger 1.0
Provides support for connecting the Visual Studio debugger to JDWP compatible Java Virtual Machines
Microsoft Library Manager 2.0.83+gbc8a4b23ec
Install client-side libraries easily to any web project
Microsoft MI-Based Debugger 1.0
Provides support for connecting Visual Studio to MI compatible debuggers
Microsoft Visual C++ Wizards 1.0
Microsoft Visual C++ Wizards
Microsoft Visual Studio Tools for Containers 1.1
Develop, run, validate your ASP.NET Core applications in the target environment. F5 your application directly into a container with debugging, or CTRL + F5 to edit & refresh your app without having to rebuild the container.
Microsoft Visual Studio VC Package 1.0
Microsoft Visual Studio VC Package
Mono Debugging for Visual Studio 16.3.7 (9d260c5)
Support for debugging Mono processes with Visual Studio.
NuGet Package Manager 5.3.1
NuGet Package Manager in Visual Studio. For more information about NuGet, visit https://docs.nuget.org/
Open Command Line 2.4.226
2.4.226
PowerShell Pro Tools for Visual Studio 1.0
A set of tools for developing and debugging PowerShell scripts and modules in Visual Studio.
Productivity Power Tools 2017/2019 16.0
Installs the individual extensions of Productivity Power Tools 2017/2019
Project File Tools 1.0.1
Provides Intellisense and other tooling for XML based project files such as .csproj and .vbproj files.
ProjectServicesPackage Extension 1.0
ProjectServicesPackage Visual Studio Extension Detailed Info
Python 16.3.19252.1
Provides IntelliSense, projects, templates, debugging, interactive windows, and other support for Python developers.
Python - Conda support 16.3.19252.1
Conda support for Python projects.
Python - Django support 16.3.19252.1
Provides templates and integration for the Django web framework.
Python - IronPython support 16.3.19252.1
Provides templates and integration for IronPython-based projects.
Python - Profiling support 16.3.19252.1
Profiling support for Python projects.
Redgate SQL Prompt 9.5.20.11737
Write, format, and refactor SQL effortlessly
Snapshot Debugging Extension 1.0
Snapshot Debugging Visual Studio Extension Detailed Info
SQL Server Data Tools 16.0.61908.27190
Microsoft SQL Server Data Tools
Test Adapter for Boost.Test 1.0
Enables Visual Studio's testing tools with unit tests written for Boost.Test. The use terms and Third Party Notices are available in the extension installation directory.
Test Adapter for Google Test 1.0
Enables Visual Studio's testing tools with unit tests written for Google Test. The use terms and Third Party Notices are available in the extension installation directory.
ToolWindowHostedEditor 1.0
Hosting json editor into a tool window
TypeScript Tools 16.0.10821.2002
TypeScript Tools for Microsoft Visual Studio
Visual Basic Tools 3.3.1-beta3-19461-02+2fd12c210e22f7d6245805c60340f6a34af6875b
Visual Basic components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Visual C++ for Cross Platform Mobile Development (Android) 16.0.29230.54
Visual C++ for Cross Platform Mobile Development (Android)
Visual F# Tools 10.4 for F# 4.6 16.3.0-beta.19455.1+0422ff293bb2cc722fe5021b85ef50378a9af823
Microsoft Visual F# Tools 10.4 for F# 4.6
Visual Studio Code Debug Adapter Host Package 1.0
Interop layer for hosting Visual Studio Code debug adapters in Visual Studio
Visual Studio Tools for CMake 1.0
Visual Studio Tools for CMake
Visual Studio Tools for CMake 1.0
Visual Studio Tools for CMake
Visual Studio Tools for Containers 1.0
Visual Studio Tools for Containers
Visual Studio Tools for Kubernetes 1.0
Visual Studio Tools for Kubernetes
VisualStudio.Mac 1.0
Mac Extension for Visual Studio
Xamarin 16.3.0.277 (d16-3@c0fcab7)
Visual Studio extension to enable development for Xamarin.iOS and Xamarin.Android.
Xamarin Designer 16.3.0.246 (remotes/origin/d16-3@bd2f86892)
Visual Studio extension to enable Xamarin Designer tools in Visual Studio.
Xamarin Templates 16.3.565 (27e9746)
Templates for building iOS, Android, and Windows apps with Xamarin and Xamarin.Forms.
Xamarin.Android SDK 10.0.3.0 (d16-3/4d45b41)
Xamarin.Android Reference Assemblies and MSBuild support.
Mono: mono/mono/2019-06@5608fe0abb3
Java.Interop: xamarin/java.interop/d16-3@5836f58
LibZipSharp: grendello/LibZipSharp/d16-3@71f4a94
LibZip: nih-at/libzip/rel-1-5-1@b95cf3fd
ProGuard: xamarin/proguard/master@905836d
SQLite: xamarin/sqlite/3.27.1@8212a2d
Xamarin.Android Tools: xamarin/xamarin-android-tools/d16-3@cb41333
Xamarin.iOS and Xamarin.Mac SDK 13.4.0.2 (e37549b)
Xamarin.iOS and Xamarin.Mac Reference Assemblies and MSBuild support.
| non_test | app builds hang when executing task linkassemblies if linking enabled steps to reproduce build app with androidlinkmode full or androidlinkmode sdkonly if you have a repro project you may drag drop the zip etc onto the issue editor to attach it expected behavior app builds successfully actual behavior build hangs version information microsoft visual studio enterprise version visualstudio release microsoft net framework version installed version enterprise visual c microsoft visual c adl tools service provider this package contains services used by data lake tools asp net and web tools asp net and web tools asp net web frameworks and tools for additional information visit azure app service tools azure app service tools azure data lake node this package contains the data lake integration nodes for server explorer azure data lake tools for visual studio microsoft azure data lake tools for visual studio azure functions and web jobs tools azure functions and web jobs tools azure logic apps tools for visual studio add in for the azure resource group project to support the logic app designer and template creation azure stream analytics tools for visual studio microsoft azure stream analytics tools for visual studio c tools c components used in the ide depending on your project type and settings a different version of the compiler may be used common azure tools provides common services for use by azure mobile services and microsoft azure tools cookiecutter provides tools for finding instantiating and customizing templates in cookiecutter format editorconfig language service language service for editorconfig files editorconfig helps developers define and maintain consistent coding styles between different editors and ides extensibility message bus provides common messaging based mef services for loosely coupled visual studio extension components communication and integration fabric diagnosticevents fabric diagnostic events file icons adds icons for files that 
are not recognized by solution explorer github visualstudio a visual studio extension that brings the github flow into visual studio intellicode extension intellicode visual studio extension detailed info markdown editor a full featured markdown editor with live preview and syntax highlighting supports github flavored markdown microsoft azure hdinsight azure node hdinsight node under azure node microsoft azure hive query language service language service for hive query microsoft azure service fabric tools for visual studio microsoft azure service fabric tools for visual studio microsoft azure stream analytics language service language service for azure stream analytics microsoft azure stream analytics node azure stream analytics node under azure node microsoft azure tools microsoft azure tools for microsoft visual studio microsoft continuous delivery tools for visual studio simplifying the configuration of azure devops pipelines from within the visual studio ide microsoft jvm debugger provides support for connecting the visual studio debugger to jdwp compatible java virtual machines microsoft library manager install client side libraries easily to any web project microsoft mi based debugger provides support for connecting visual studio to mi compatible debuggers microsoft visual c wizards microsoft visual c wizards microsoft visual studio tools for containers develop run validate your asp net core applications in the target environment your application directly into a container with debugging or ctrl to edit refresh your app without having to rebuild the container microsoft visual studio vc package microsoft visual studio vc package mono debugging for visual studio support for debugging mono processes with visual studio nuget package manager nuget package manager in visual studio for more information about nuget visit open command line powershell pro tools for visual studio a set of tools for developing and debugging powershell scripts and modules in visual studio 
productivity power tools installs the individual extensions of productivity power tools project file tools provides intellisense and other tooling for xml based project files such as csproj and vbproj files projectservicespackage extension projectservicespackage visual studio extension detailed info python provides intellisense projects templates debugging interactive windows and other support for python developers python conda support conda support for python projects python django support provides templates and integration for the django web framework python ironpython support provides templates and integration for ironpython based projects python profiling support profiling support for python projects redgate sql prompt write format and refactor sql effortlessly snapshot debugging extension snapshot debugging visual studio extension detailed info sql server data tools microsoft sql server data tools test adapter for boost test enables visual studio s testing tools with unit tests written for boost test the use terms and third party notices are available in the extension installation directory test adapter for google test enables visual studio s testing tools with unit tests written for google test the use terms and third party notices are available in the extension installation directory toolwindowhostededitor hosting json editor into a tool window typescript tools typescript tools for microsoft visual studio visual basic tools visual basic components used in the ide depending on your project type and settings a different version of the compiler may be used visual c for cross platform mobile development android visual c for cross platform mobile development android visual f tools for f beta microsoft visual f tools for f visual studio code debug adapter host package interop layer for hosting visual studio code debug adapters in visual studio visual studio tools for cmake visual studio tools for cmake visual studio tools for cmake visual studio tools for cmake 
visual studio tools for containers visual studio tools for containers visual studio tools for kubernetes visual studio tools for kubernetes visualstudio mac mac extension for visual studio xamarin visual studio extension to enable development for xamarin ios and xamarin android xamarin designer remotes origin visual studio extension to enable xamarin designer tools in visual studio xamarin templates templates for building ios android and windows apps with xamarin and xamarin forms xamarin android sdk xamarin android reference assemblies and msbuild support mono mono mono java interop xamarin java interop libzipsharp grendello libzipsharp libzip nih at libzip rel proguard xamarin proguard master sqlite xamarin sqlite xamarin android tools xamarin xamarin android tools xamarin ios and xamarin mac sdk xamarin ios and xamarin mac reference assemblies and msbuild support | 0 |
267,380 | 23,296,674,483 | IssuesEvent | 2022-08-06 17:34:03 | python/cpython | https://api.github.com/repos/python/cpython | closed | Similar to `test_grp` in `test_pwd`, add a test with null value in name | tests | Both getpwnam(name) and getgrnam(name) should raise ValueError if a null is entered in the name value
In test_grp, the part is being tested, but in test_pwd, the part is not being tested.
Is it ok to write a PR that adds that test? | 1.0 | Similar to `test_grp` in `test_pwd`, add a test with null value in name - Both getpwnam(name) and getgrnam(name) should raise ValueError if a null is entered in the name value
In test_grp, the part is being tested, but in test_pwd, the part is not being tested.
Is it ok to write a PR that adds that test? | test | similar to test grp in test pwd add a test with null value in name both getpwnam name and getgrnam name should raise valueerror if a null is entered in the name value in test grp the part is being tested but in test pwd the part is not being tested is it ok to write a pr that adds that test | 1 |
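A minimal sketch of the behavior the issue above asks to cover (assumes a Unix system where the `pwd` and `grp` modules exist; the helper name is illustrative, not part of CPython's test suite):

```python
import grp
import pwd


def rejects_embedded_null(lookup, name):
    """True if lookup(name) raises ValueError for an embedded null byte."""
    try:
        lookup(name)
    except ValueError:
        return True   # rejected up front, before any database lookup
    except KeyError:
        return False  # the (truncated) name was actually looked up
    return False


# Both lookups should reject a name containing an embedded null.
print(rejects_embedded_null(pwd.getpwnam, "root\x00x"))
print(rejects_embedded_null(grp.getgrnam, "root\x00x"))
```

If this holds, the missing `test_pwd` case simply mirrors the existing `test_grp` one: assert that `ValueError` is raised for a name with an embedded null.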
158,480 | 12,416,730,646 | IssuesEvent | 2020-05-22 18:53:10 | homebridge-xiaomi-roborock-vacuum/homebridge-xiaomi-roborock-vacuum | https://api.github.com/repos/homebridge-xiaomi-roborock-vacuum/homebridge-xiaomi-roborock-vacuum | closed | Report two issues with S5 FW 3.5.7 | enhancement help wanted please test | Hi,
Thanks for the plugin, works very well except for two minor issues with my S5 FW 3.5.7:
1) Looks like pause doesn't work. When in cleaning, I can pause it but the switch then flips back in 2 seconds, so I can't continue afterwards.
2) I can't seem to be able to enable Gentle mode. Reading the code, S5 with 3.5.7 uses the same speeds as Gen3, but Gen3 doesn't have a mopping function; maybe Gen4 is more correct?
gen3: [
// 0% = Off / Aus
{ homekitTopLevel: 0, miLevel: 0, name: "Off" },
// 1-38% = "Quiet / Leise"
{ homekitTopLevel: 38, miLevel: 101, name: "Quiet" },
// 39-60% = "Balanced / Standard"
{ homekitTopLevel: 60, miLevel: 102, name: "Balanced" },
// 61-77% = "Turbo / Stark"
{ homekitTopLevel: 77, miLevel: 103, name: "Turbo" },
// 78-100% = "Full Speed / Max Speed / Max"
{ homekitTopLevel: 100, miLevel: 104, name: "Max" }
],
One more suggestion:
Instead of allowing any percentage of fan speed, only allow speeds like 0, 20, 40, 60, 80, 100 for models with 5 speeds, and 0, 25, 50, 75, 100 for models with 4 speeds. Something like:
that.fanService.getCharacteristic(Characteristic.RotationSpeed).setProps({
minStep: 20
}).on('get', that.getSpeed.bind(that)).on('set', that.setSpeed.bind(that));
Thank you! | 1.0 | Report two issues with S5 FW 3.5.7 - Hi,
Thanks for the plugin, works very well except for two minor issues with my S5 FW 3.5.7:
1) Looks like pause doesn't work. When in cleaning, I can pause it but the switch then flips back in 2 seconds, so I can't continue afterwards.
2) I can't seem to be able to enable Gentle mode. Reading the code, S5 with 3.5.7 uses the same speeds as Gen3, but Gen3 doesn't have a mopping function; maybe Gen4 is more correct?
gen3: [
// 0% = Off / Aus
{ homekitTopLevel: 0, miLevel: 0, name: "Off" },
// 1-38% = "Quiet / Leise"
{ homekitTopLevel: 38, miLevel: 101, name: "Quiet" },
// 39-60% = "Balanced / Standard"
{ homekitTopLevel: 60, miLevel: 102, name: "Balanced" },
// 61-77% = "Turbo / Stark"
{ homekitTopLevel: 77, miLevel: 103, name: "Turbo" },
// 78-100% = "Full Speed / Max Speed / Max"
{ homekitTopLevel: 100, miLevel: 104, name: "Max" }
],
One more suggestion:
Instead of allowing any percentage of fan speed, only allow speeds like 0, 20, 40, 60, 80, 100 for models with 5 speeds, and 0, 25, 50, 75, 100 for models with 4 speeds. Something like:
that.fanService.getCharacteristic(Characteristic.RotationSpeed).setProps({
minStep: 20
}).on('get', that.getSpeed.bind(that)).on('set', that.setSpeed.bind(that));
Thank you! | test | report two issues with fw hi thanks for the plugin works very well except for two minor issues with my fw looks like pause doesn t work when in cleaning i can pause it but the switch then flips back in seconds so i can t continue afterwards i can t seem to be able to enable gentle mode reading the code with use the same speeds as but doesn t have mopping function maybe is more correct off aus homekittoplevel milevel name off quiet leise homekittoplevel milevel name quiet balanced standard homekittoplevel milevel name balanced turbo stark homekittoplevel milevel name turbo full speed max speed max homekittoplevel milevel name max one more suggestion instead of allow any percentage of fan speed only allow the speeds like for models with speeds and for models with speeds something like that fanservice getcharacteristic characteristic rotationspeed setprops minstep on get that getspeed bind that on set that setspeed bind that thank you | 1 |
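The gen3 table quoted in the report above is a list of HomeKit-percentage thresholds. A hedged sketch of the lookup it implies (names are illustrative, not the plugin's actual API):

```python
# (homekit_top_level, mi_level, name) rows taken from the gen3 table above.
GEN3_SPEEDS = [
    (0, 0, "Off"),
    (38, 101, "Quiet"),
    (60, 102, "Balanced"),
    (77, 103, "Turbo"),
    (100, 104, "Max"),
]


def homekit_to_mi(percent):
    """Map a HomeKit rotation-speed percentage to a (mi_level, name) pair."""
    for top, mi_level, name in GEN3_SPEEDS:
        if percent <= top:
            return mi_level, name
    raise ValueError("percent must be in 0-100")
```

With the suggested discrete steps (`minStep: 20`), only the values 0/20/40/60/80/100 would ever be passed in, so each step lands cleanly in one band.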
400,318 | 11,772,911,026 | IssuesEvent | 2020-03-16 05:41:27 | omou-org/mainframe | https://api.github.com/repos/omou-org/mainframe | closed | Create unpaid sessions endpoint | 3 hours enhancement priority | ## Background
This will be on the Admin tab providing a list of student's names, # of sessions taken, course name, $ amount owed. User will be able to identify at a glance all students who owe tuition payment, for what course, and the amount the student owes.
(There will be some wording adjustments)

~~In our database, we should be tracking paid sessions with an `is_paid` attribute. This will be complete when #55 is done.~~
REMINDER: enrollment - a student + course relationship. Refer to swagger for more detailed description.
## Development
We first need to identify the number of paid sessions left for an enrollment. The following steps should be taken to identify paid sessions left:
1. Count number of present day + future sessions that have a "is_paid" status as "true"
2. If there are no sessions on the present day + future that have a "is_paid" as true, we need to check if there were any sessions in the past that have an unpaid status.
3. Count the number of sessions that "is_paid" is false and "is_confirmed" is true
Next, we need to identify the amount due. If there are paid sessions left, no amount is due. If there are no paid sessions left and there is at least 1 unpaid session(s) left: Calculate the total amount due. This is done by:
1. Calculating the amount due per session. Take the duration of past session where "is_paid" is false and "is_confirmed" is true multiply it by the course's hourly rate.
2. Total the sum of all the amount due per session
This 2-step process is required in the case that a previous sessions where "is_paid" is false and "is_confirmed" is true had an extended duration.
We need to calculate this for each tutoring/small group enrollment. If the enrollment has less than or equal to 3 sessions left, we want to notify the receptionist.
## Request + Response
The endpoint should be something like `/payments/list-of-unpaid-students`
The expected response will be a JSON object with student_id keys, and for each student_id keys, there will be an array of objects describing the enrollment payment status. Only include enrollments that have at most 3 paid sessions left. Example:
```
{
[student_id]: [
{
student_id: //int,
paid_sessions: //int <- this should be at most 3,
amount_due: //double <- sum total of unpaid tuitions if paid_sessions == 0,
course_id: //int,
},
...
],
....
}
```
| 1.0 | Create unpaid sessions endpoint - ## Background
This will be on the Admin tab providing a list of student's names, # of sessions taken, course name, $ amount owed. User will be able to identify at a glance all students who owe tuition payment, for what course, and the amount the student owes.
(There will be some wording adjustments)

~~In our database, we should be tracking paid sessions with an `is_paid` attribute. This will be complete when #55 is done.~~
REMINDER: enrollment - a student + course relationship. Refer to swagger for more detailed description.
## Development
We first need to identify the number of paid sessions left for an enrollment. The following steps should be taken to identify paid sessions left:
1. Count number of present day + future sessions that have a "is_paid" status as "true"
2. If there are no sessions on the present day + future that have a "is_paid" as true, we need to check if there were any sessions in the past that have an unpaid status.
3. Count the number of sessions that "is_paid" is false and "is_confirmed" is true
Next, we need to identify the amount due. If there are paid sessions left, no amount is due. If there are no paid sessions left and there is at least 1 unpaid session(s) left: Calculate the total amount due. This is done by:
1. Calculating the amount due per session. Take the duration of past session where "is_paid" is false and "is_confirmed" is true multiply it by the course's hourly rate.
2. Total the sum of all the amount due per session
This 2-step process is required in the case that a previous sessions where "is_paid" is false and "is_confirmed" is true had an extended duration.
We need to calculate this for each tutoring/small group enrollment. If the enrollment has less than or equal to 3 sessions left, we want to notify the receptionist.
## Request + Response
The endpoint should be something like `/payments/list-of-unpaid-students`
The expected response will be a JSON object with student_id keys, and for each student_id keys, there will be an array of objects describing the enrollment payment status. Only include enrollments that have at most 3 paid sessions left. Example:
```
{
[student_id]: [
{
student_id: //int,
paid_sessions: //int <- this should be at most 3,
amount_due: //double <- sum total of unpaid tuitions if paid_sessions == 0,
course_id: //int,
},
...
],
....
}
```
| non_test | create unpaid sessions endpoint background this will be on the admin tab providing a list of student s names of sessions taken course name amount owed user will be able to identify at a glance all students who owe tuition payment for what course and the amount the student owes there will be some wording adjustments in our database we should be tracking paid sessions with an is paid attribute this will be complete when is done reminder enrollment a student course relationship refer to swagger for more detailed description development we first need to identify the number of paid sessions left for an enrollment the following steps should be taken to identify paid sessions left count number of present day future sessions that have a is paid status as true if there are no sessions on the present day future that have a is paid as true we need to check if there were any sessions in the past that have an unpaid status count the number of sessions that is paid is false and is confirmed is true next we need to identify the amount due if there are paid sessions left no amount is due if there are no paid sessions left and there is at least unpaid session s left calculate the total amount due this is done by calculating the amount due per session take the duration of past session where is paid is false and is confirmed is true multiply it by the course s hourly rate total the sum of all the amount due per session this step process is required in the case that a previous sessions where is paid is false and is confirmed is true had an extended duration we need to calculate this for each tutoring small group enrollment if the enrollment has less than or equal to sessions left we want to notify the receptionist request response the endpoint should be something like payments list of unpaid students the expected response will be a json object with student id keys and for each student id keys there will be an array of objects describing the enrollment payment status only 
include enrollments that have at most paid sessions left example student id int paid sessions int this should be at most amount due double sum total of unpaid tuitions if paid sessions course id int | 0 |
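A hedged Python sketch of the two-step calculation described above (the `Session` shape, field names, and rates are assumptions for illustration, not the project's actual models):

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Session:
    day: date
    duration_hours: float
    is_paid: bool
    is_confirmed: bool


def paid_sessions_left(sessions, today):
    # Step 1: count today-or-future sessions already marked paid.
    return sum(1 for s in sessions if s.day >= today and s.is_paid)


def amount_due(sessions, hourly_rate, today):
    # Nothing is owed while paid sessions remain.
    if paid_sessions_left(sessions, today) > 0:
        return 0.0
    # Otherwise total duration * hourly rate over unpaid, confirmed sessions,
    # so a past session with an extended duration is billed for its full length.
    return sum(s.duration_hours * hourly_rate
               for s in sessions
               if not s.is_paid and s.is_confirmed)
```

An enrollment would be flagged for the receptionist when `paid_sessions_left(...)` is 3 or fewer.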
224,528 | 17,754,746,179 | IssuesEvent | 2021-08-28 14:29:42 | MuzaffarMohammed/kfmenterprises-ecommerce | https://api.github.com/repos/MuzaffarMohammed/kfmenterprises-ecommerce | closed | Delivery confirmation mail feature | enhancement Priority Tested - Dev | Acceptance Criteria:
1. Delivery confirmation mail feature. | 1.0 | Delivery confirmation mail feature - Acceptance Criteria:
1. Delivery confirmation mail feature. | test | delivery confirmation mail feature acceptance criteria delivery confirmation mail feature | 1 |
57,953 | 24,279,181,536 | IssuesEvent | 2022-09-28 15:55:52 | dwp/design-system | https://api.github.com/repos/dwp/design-system | closed | Evidence for internal services header | 🔗 component 📚 docs internal service header | ## What
Adding the evidence for the internal services header component from previous tickets and emails
## Why
This will help users understand why the pattern was created the way it was and log any decisions made.
## Done when
- [x] Review closed tickets relating to the pattern in Github
- [x] Review emails and information held in other areas
- [x] Compile information in confluence pages created by Martin
## Outcomes
There is a clear log of all decisions made in the creation of the component so we can justify the current design, answer any questions and help decide on any future iterations.
## Who needs to know about this
Design system team
## Related stories
https://github.com/dwp/design-system/issues/387
| 1.0 | Evidence for internal services header - ## What
Adding the evidence for the internal services header component from previous tickets and emails
## Why
This will help users understand why the pattern was created the way it was and log any decisions made.
## Done when
- [x] Review closed tickets relating to the pattern in Github
- [x] Review emails and information held in other areas
- [x] Compile information in confluence pages created by Martin
## Outcomes
There is a clear log of all decisions made in the creation of the component so we can justify the current design, answer any questions and help decide on any future iterations.
## Who needs to know about this
Design system team
## Related stories
https://github.com/dwp/design-system/issues/387
| non_test | evidence for internal services header what adding the evidence for the internal services header component from previous tickets and emails why this will help users understand why the pattern was created the way it was and log any decisions made done when review closed tickets relating to the pattern in github review emails and information held in other areas compile information in confluence pages created by martin outcomes there is a clear log of all decisions made in the creation of the component so we can justify the current design answer any questions and help decide on any future iterations who needs to know about this design system team related stories | 0 |
146,956 | 19,476,091,210 | IssuesEvent | 2021-12-24 12:38:51 | sewace/PhaseShift | https://api.github.com/repos/sewace/PhaseShift | opened | CVE-2021-27515 (Medium) detected in url-parse-1.4.7.tgz | security vulnerability | ## CVE-2021-27515 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: PhaseShift/package.json</p>
<p>Path to vulnerable library: PhaseShift/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- expo-33.0.7.tgz (Root Library)
- expo-asset-5.0.1.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sewace/PhaseShift/commit/4fed7911a2622b8cefac707597cba1816054a701">4fed7911a2622b8cefac707597cba1816054a701</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse before 1.5.0 mishandles certain uses of backslash such as http:\/ and interprets the URI as a relative path.
<p>Publish Date: 2021-02-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27515>CVE-2021-27515</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27515">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27515</a></p>
<p>Release Date: 2021-02-22</p>
<p>Fix Resolution: 1.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-27515 (Medium) detected in url-parse-1.4.7.tgz - ## CVE-2021-27515 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: PhaseShift/package.json</p>
<p>Path to vulnerable library: PhaseShift/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- expo-33.0.7.tgz (Root Library)
- expo-asset-5.0.1.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sewace/PhaseShift/commit/4fed7911a2622b8cefac707597cba1816054a701">4fed7911a2622b8cefac707597cba1816054a701</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse before 1.5.0 mishandles certain uses of backslash such as http:\/ and interprets the URI as a relative path.
<p>Publish Date: 2021-02-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27515>CVE-2021-27515</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27515">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27515</a></p>
<p>Release Date: 2021-02-22</p>
<p>Fix Resolution: 1.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file phaseshift package json path to vulnerable library phaseshift node modules url parse package json dependency hierarchy expo tgz root library expo asset tgz x url parse tgz vulnerable library found in head commit a href found in base branch master vulnerability details url parse before mishandles certain uses of backslash such as http and interprets the uri as a relative path publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
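For context on the CVE above: `url-parse` is a JavaScript package, but Python's standard-library parser reproduces the same "relative path" reading of a backslashed URL that url-parse before 1.5.0 gave; that disagreement with browsers (which treat `\` like `/` for http(s) and see a host) is exactly what makes such inputs dangerous:

```python
from urllib.parse import urlparse

# The string below is http:\/\/evil.example once the escapes are resolved.
ambiguous = "http:\\/\\/evil.example"
parsed = urlparse(ambiguous)

# No authority is recognized, so the would-be host lands in the path;
# a validator using this reading disagrees with a browser that would
# treat the backslashes as slashes and see evil.example as the host.
print(parsed.netloc)
print(parsed.path)
```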
84,696 | 7,929,799,190 | IssuesEvent | 2018-07-06 16:17:11 | researchstudio-sat/webofneeds | https://api.github.com/repos/researchstudio-sat/webofneeds | closed | Improve experience for first-time visit | UX prio: high testing | There are two main options:
1. start off with need creation (use case overview)
2. start with a What's Around
However, 1 would be much better, at least at the beginning, because it shows the potential, not the actual content (which is not much at first).
Current first-time visit Desktop view:

Current first-time visit mobile view:

| 1.0 | Improve experience for first-time visit - There are two main options:
1. start off with need creation (use case overview)
2. start with a What's Around
However, 1 would be much better, at least at the beginning, because it shows the potential, not the actual content (which is not much at first).
Current first-time visit Desktop view:

Current first-time visit mobile view:

| test | improve experience for first time visit there are two main options start off with need creation use case overview start with a what s around however would be much better at least at the beginning because it shows the potential not the actual content which is not much at first current first time visit desktop view current first time visit mobile view | 1 |
88,329 | 17,568,211,146 | IssuesEvent | 2021-08-14 05:46:21 | Alice52/Algorithms | https://api.github.com/repos/Alice52/Algorithms | closed | [daily] 2021-08-14 | documentation raw-question easy leetcode | - [x] issue 1
1. [reference](https://leetcode-cn.com/problems/average-of-levels-in-binary-tree/)
2. description
- Average of Levels in Binary Tree
3. core
- dfs: record the sum and the node count of each level
- bfs: queue + size + offer
---
- [x] issue 2
1. [reference](https://leetcode-cn.com/problems/maximum-average-subarray-i/)
2. description
- Maximum Average Subarray I
3. core
- sliding window
---
- [x] issue 3
1. [reference](https://leetcode-cn.com/problems/set-mismatch/)
2. description
- 1~n with one number missing and one number duplicated
3. core
- in-place hashing: negate the slot at index `num[i]-1`; the duplicate is the value whose slot is already negative on its second visit, and the index whose slot is still positive at the end gives the missing number
---
- [x] issue 4
1. [reference](https://leetcode-cn.com/problems/two-sum-iv-input-is-a-bst/)
2. description
- Two Sum IV - Input is a BST
3. core
- BST + in-order traversal = sorted array + two pointers [two-sum on a sorted array]
- traversal + hashset membership check
---
- [x] issue 5
1. [reference](https://leetcode-cn.com/problems/robot-return-to-origin/)
2. description
- Robot Return to Origin
3. core
- counting moves
---
| 1.0 | [daily] 2021-08-14 - - [x] issue 1
1. [reference](https://leetcode-cn.com/problems/average-of-levels-in-binary-tree/)
2. description
- Average of Levels in Binary Tree
3. core
- dfs: record the sum and the node count of each level
- bfs: queue + size + offer
---
- [x] issue 2
1. [reference](https://leetcode-cn.com/problems/maximum-average-subarray-i/)
2. description
- Maximum Average Subarray I
3. core
- sliding window
---
- [x] issue 3
1. [reference](https://leetcode-cn.com/problems/set-mismatch/)
2. description
- 1~n with one number missing and one number duplicated
3. core
- in-place hashing: negate the slot at index `num[i]-1`; the duplicate is the value whose slot is already negative on its second visit, and the index whose slot is still positive at the end gives the missing number
---
- [x] issue 4
1. [reference](https://leetcode-cn.com/problems/two-sum-iv-input-is-a-bst/)
2. description
- Two Sum IV - Input is a BST
3. core
- BST + in-order traversal = sorted array + two pointers [two-sum on a sorted array]
- traversal + hashset membership check
---
- [x] issue 5
1. [reference](https://leetcode-cn.com/problems/robot-return-to-origin/)
2. description
- Robot Return to Origin
3. core
- counting moves
---
| non_test | issue description average of levels in binary tree core dfs record the sum and node count of each level bfs queue size offer issue description maximum average subarray i core sliding window issue description n one number missing one number duplicated core in place hash negate slot num the duplicate is the value whose slot is already negative the index still positive gives the missing number issue description two sum iv input bst core bst in order traversal sorted array two pointers traversal hashset check issue description robot return to origin core counting | 0 |
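The in-place marking trick noted for the Set Mismatch entry above can be sketched as follows (a minimal illustration, not the repository's own solution):

```python
def find_error_nums(nums):
    """For a 1..n array with one duplicate and one missing value,
    return [duplicate, missing] in O(n) time and O(1) extra space."""
    dup = -1
    for v in nums:
        i = abs(v) - 1          # slot that value |v| should mark
        if nums[i] < 0:         # slot already negated: |v| is the duplicate
            dup = abs(v)
        else:
            nums[i] = -nums[i]  # mark the slot by negating it
    # The slot never visited stays positive; its index+1 is the missing value.
    missing = next(i + 1 for i, v in enumerate(nums) if v > 0)
    return [dup, missing]
```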
84,309 | 10,369,377,813 | IssuesEvent | 2019-09-08 02:22:31 | WuhangYan/minecraft | https://api.github.com/repos/WuhangYan/minecraft | opened | the num of cells for each difficulty | documentation | easy: 9X9 . 10
medium: 16X16 . 40
hard: 16X30 . 99
aXb c
a=height, b=width, c=num of mines | 1.0 | the num of cells for each difficulty - easy: 9X9 . 10
medium: 16X16 . 40
hard: 16X30 . 99
aXb c
a=height, b=width, c=num of mines | non_test | the num of cells for each difficulty easy medium hard axb c a height b width c num of mines | 0 |
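The difficulty table above maps directly to a small lookup structure; a sketch (key names are illustrative):

```python
# aXb grid with c mines, per the legend above (a=height, b=width, c=mines).
DIFFICULTIES = {
    "easy":   {"height": 9,  "width": 9,  "mines": 10},
    "medium": {"height": 16, "width": 16, "mines": 40},
    "hard":   {"height": 16, "width": 30, "mines": 99},
}


def mine_density(level):
    """Fraction of cells that are mines for a given difficulty."""
    d = DIFFICULTIES[level]
    return d["mines"] / (d["height"] * d["width"])
```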
8,147 | 2,964,319,902 | IssuesEvent | 2015-07-10 15:57:58 | rapidsms/rapidsms | https://api.github.com/repos/rapidsms/rapidsms | opened | Tests break with latest version of mock | tests | Change the version in tests/requirements/dev.txt to a version >= 1.1.1 and the tests will fail. | 1.0 | Tests break with latest version of mock - Change the version in tests/requirements/dev.txt to a version >= 1.1.1 and the tests will fail. | test | tests break with latest version of mock change the version in tests requirements dev txt to a version and the tests will fail | 1 |
398,924 | 27,217,023,237 | IssuesEvent | 2023-02-20 23:17:31 | aws/sagemaker-python-sdk | https://api.github.com/repos/aws/sagemaker-python-sdk | closed | Accessing LambdaStep output | type: documentation component: pipelines | **Describe the bug**
Following the [docs](https://github.com/aws/sagemaker-python-sdk/blob/v2.116.0/doc/amazon_sagemaker_model_building_pipeline.rst#lambdastep), I can't access LambdaStep output using :
`step_lambda.OutputParameters["model_arn"]`
I'm getting this error :
`AttributeError: 'LambdaStep' object has no attribute 'OutputParameters'`
**To reproduce**
Below is my code:
```
from sagemaker.workflow.lambda_step import (
LambdaStep,
LambdaOutput,
LambdaOutputTypeEnum,
)
from sagemaker.lambda_helper import Lambda
from sagemaker.workflow.pipeline_context import PipelineSession
model_arn = LambdaOutput(output_name="model_arn", output_type=LambdaOutputTypeEnum.String)
step_lambda = LambdaStep(
name="LambdaStepgGetLastModel",
lambda_func=Lambda(
function_arn="xx",
session=PipelineSession()),
inputs={
"model_package_group": "xxx"
},
outputs=[model_arn],
)
step_lambda.OutputParameters["model_arn"]
```
**Expected behavior**
Return the output of lambda
**Screenshots or logs**
If applicable, add screenshots or logs to help explain your problem.
**System information**
A description of your system. Please provide:
- **SageMaker Python SDK version**: 2.118.0
- **Framework name (eg. PyTorch) or algorithm (eg. KMeans)**:
- **Framework version**:
- **Python version**:
- **CPU or GPU**:
- **Custom Docker image (Y/N)**:
**Additional context**
Add any other context about the problem here.
| 1.0 | Accessing LambdaStep output - **Describe the bug**
Following the [docs](https://github.com/aws/sagemaker-python-sdk/blob/v2.116.0/doc/amazon_sagemaker_model_building_pipeline.rst#lambdastep), I can't access LambdaStep output using :
`step_lambda.OutputParameters["model_arn"]`
I'm getting this error :
`AttributeError: 'LambdaStep' object has no attribute 'OutputParameters'`
**To reproduce**
Below is my code:
```
from sagemaker.workflow.lambda_step import (
LambdaStep,
LambdaOutput,
LambdaOutputTypeEnum,
)
from sagemaker.lambda_helper import Lambda
from sagemaker.workflow.pipeline_context import PipelineSession
model_arn = LambdaOutput(output_name="model_arn", output_type=LambdaOutputTypeEnum.String)
step_lambda = LambdaStep(
name="LambdaStepgGetLastModel",
lambda_func=Lambda(
function_arn="xx",
session=PipelineSession()),
inputs={
"model_package_group": "xxx"
},
outputs=[model_arn],
)
step_lambda.OutputParameters["model_arn"]
```
**Expected behavior**
Return the output of lambda
**Screenshots or logs**
If applicable, add screenshots or logs to help explain your problem.
**System information**
A description of your system. Please provide:
- **SageMaker Python SDK version**: 2.118.0
- **Framework name (eg. PyTorch) or algorithm (eg. KMeans)**:
- **Framework version**:
- **Python version**:
- **CPU or GPU**:
- **Custom Docker image (Y/N)**:
**Additional context**
Add any other context about the problem here.
| non_test | accessing lambdastep output describe the bug following the i can t access lambdastep output using step lambda outputparameters i m getting this error attributeerror lambdastep object has no attribute outputparameters to reproduce below my code from sagemaker workflow lambda step import lambdastep lambdaoutput lambdaoutputtypeenum from sagemaker lambda helper import lambda from sagemaker workflow pipeline context import pipelinesession model arn lambdaoutput output name model arn output type lambdaoutputtypeenum string step lambda lambdastep name lambdastepggetlastmodel lambda func lambda function arn xx session pipelinesession inputs model package group xxx outputs step lambda outputparameters expected behavior return the output of lambda screenshots or logs if applicable add screenshots or logs to help explain your problem system information a description of your system please provide sagemaker python sdk version framework name eg pytorch or algorithm eg kmeans framework version python version cpu or gpu custom docker image y n additional context add any other context about the problem here | 0 |
41,667 | 2,869,071,664 | IssuesEvent | 2015-06-05 23:06:12 | dart-lang/polymer-dart | https://api.github.com/repos/dart-lang/polymer-dart | opened | Polymer template repeat on table row is extremely slow | bug PolymerMilestone-Next Priority-High | <a href="https://github.com/johnmccutchan"><img src="https://avatars.githubusercontent.com/u/224266?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [johnmccutchan](https://github.com/johnmccutchan)**
_Originally opened as dart-lang/sdk#19064_
----
Compiled to JavaScript, Observatory's profiler tree expansion is extremely slow.
I have a list of precomputed table cell strings and do the following in my HTML:
<tr template repeat="{{row in tree.rows }}" style="{{}}">
<td>{{row.columns[0]}} ...</td>
<td>{{row.columns[1]}} ...</td>
</tr>
It takes 50 seconds to update the display when inserting ~1,000 rows.
Chrome 27. | 1.0 | Polymer template repeat on table row is extremely slow - <a href="https://github.com/johnmccutchan"><img src="https://avatars.githubusercontent.com/u/224266?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [johnmccutchan](https://github.com/johnmccutchan)**
_Originally opened as dart-lang/sdk#19064_
----
Compiled to JavaScript, Observatory's profiler tree expansion is extremely slow.
I have a list of precomputed table cell strings and do the following in my HTML:
<tr template repeat="{{row in tree.rows }}" style="{{}}">
<td>{{row.columns[0]}} ...</td>
<td>{{row.columns[1]}} ...</td>
</tr>
It takes 50 seconds to update the display when inserting ~1,000 rows.
Chrome 27. | non_test | polymer template repeat on table row is extremely slow issue by originally opened as dart lang sdk compiled to javascript observatory s profiler tree expansion is extremely slow i have a list of precomputed table cell strings and do the following in my html lt tr template repeat quot row in tree rows quot style quot quot gt nbsp nbsp lt td gt row columns lt td gt nbsp nbsp lt td gt row columns lt td gt lt tr gt it takes seconds to update the display when inserting rows chrome | 0 |
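A framework-neutral sketch related to the Polymer report above: with ~1,000 rows of two precomputed cell strings each, materializing the table markup itself is trivial work, which suggests the 50-second cost lives in per-row template binding/DOM updates rather than in the data volume. This is only an illustration of the workload size, not the Polymer fix; all names below are made up.

```python
# Build a 1,000-row table from precomputed cell strings in one pass.
# This mirrors the data volume in the report above; it says nothing about
# Polymer internals, only that raw markup generation at this scale is cheap.
rows = [[f"r{i}c{j}" for j in range(2)] for i in range(1000)]

def render_table(rows):
    parts = ["<table>"]
    for row in rows:
        cells = "".join(f"<td>{cell}</td>" for cell in row)
        parts.append(f"<tr>{cells}</tr>")
    parts.append("</table>")
    return "".join(parts)

html = render_table(rows)
print(len(html))  # a few tens of kilobytes, generated near-instantly
```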
57,616 | 14,166,818,805 | IssuesEvent | 2020-11-12 09:26:59 | resindrake/frontiersmen | https://api.github.com/repos/resindrake/frontiersmen | closed | Premise of Game | question worldbuilding | If we do eventually want to make this game into a survival game, what should the premise and lore be?
### I propose the following:
* Game is set in a frontier, such as Alaska or Yukon
* Player and NPCs move to the frontier for gold or oil operations
* Connections to the rest of North America exist, but is very remote
* I.e. you can purchase equipment online and have it shipped, but it takes a long time to arrive
* Perhaps you need to construct a reception tower before you can reach the rest of the world reliably?
* Game name should then be something along the lines of "Frontierplanner"
Obstacles:
* The cold
* Cave-ins
* Wild animals, esp. bears
Drawbacks:
* This would make the game focused on gold mining. This may be good, but it also prevents the player from, say, pursuing farming or mining iron
* Unless the player is simply the one who enables miners to come in and prospect, and works as a farmer or whatever on the side
* Apart from wildlife, there isn't really much combat in the game (but maybe that's a good thing)
**Please discuss below.**
| 1.0 | Premise of Game - If we do eventually want to make this game into a survival game, what should the premise and lore be?
### I propose the following:
* Game is set in a frontier, such as Alaska or Yukon
* Player and NPCs move to the frontier for gold or oil operations
* Connections to the rest of North America exist, but is very remote
* I.e. you can purchase equipment online and have it shipped, but it takes a long time to arrive
* Perhaps you need to construct a reception tower before you can reach the rest of the world reliably?
* Game name should then be something along the lines of "Frontierplanner"
Obstacles:
* The cold
* Cave-ins
* Wild animals, esp. bears
Drawbacks:
* This would make the game focused on gold mining. This may be good, but it also prevents the player from, say, pursuing farming or mining iron
* Unless the player is simply the one who enables miners to come in and prospect, and works as a farmer or whatever on the side
* Apart from wildlife, there isn't really much combat in the game (but maybe that's a good thing)
**Please discuss below.**
| non_test | premise of game if we do eventually want to make this game into a survival game what should the premise and lore be i propose the following game is set in a frontier such as alaska or yukon player and npcs move to the frontier for gold or oil operations connections to the rest of north america exist but is very remote i e you can purchase equipment online and have it shipped but it takes a long time to arrive perhaps you need to construct a reception tower before you can reach the rest of the world reliably game name should then be something along the lines of frontierplanner obstacles the cold cave ins wild animals esp bears drawbacks this would make the game focused on gold mining this may be good but it also prevents the player from say pursuing farming or mining iron unless the player is simply the one who enables miners to come in and prospect and works as a farmer or whatever on the side apart from wildlife there isn t really much combat in the game but maybe that s a good thing please discuss below | 0 |
5,703 | 20,797,705,472 | IssuesEvent | 2022-03-17 10:54:23 | o3de/o3de | https://api.github.com/repos/o3de/o3de | opened | AR Bug Report: Android asset_profile job red due to materialtype assets failing to be processed | kind/bug needs-triage sig/graphics-audio kind/automation | **Describe the bug**
AR job Android asset_profile is failing with errors when processing materialtype assets.
**Failed Jenkins Job Information:**
Android asset_profile
https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE/detail/development/2002/pipeline/741/
````
AssetProcessor: Failed TestData/Materials/Types/AutoBrick.materialtype, (android)...
AssetProcessor: Failed TestData/Materials/Types/MinimalPBR.materialtype, (android)...
AssetProcessor: Failed Materials/Terrain/PbrTerrain.materialtype, (android)...
AssetProcessor: Failed Materials/Types/Skin.materialtype, (android)...
AssetProcessor: Failed Materials/Types/StandardPBR.materialtype, (android)...
AssetProcessor: Failed Materials/Types/EnhancedPBR.materialtype, (android)...
AssetProcessor: Failed Materials/Types/StandardMultilayerPBR.materialtype, (android)...
AssetProcessor: Failed Materials/Types/BasePBR.materialtype, (android)...
````
This is the first error reported in all of them:
````
ERROR | > All shaders in a material must use the same object ShaderResourceGroup. from AssetBuilder
````
**Attachments**
[log.txt](https://github.com/o3de/o3de/files/8284043/log.txt)
| 1.0 | AR Bug Report: Android asset_profile job red due to materialtype assets failing to be processed - **Describe the bug**
AR job Android asset_profile is failing with errors when processing materialtype assets.
**Failed Jenkins Job Information:**
Android asset_profile
https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE/detail/development/2002/pipeline/741/
````
AssetProcessor: Failed TestData/Materials/Types/AutoBrick.materialtype, (android)...
AssetProcessor: Failed TestData/Materials/Types/MinimalPBR.materialtype, (android)...
AssetProcessor: Failed Materials/Terrain/PbrTerrain.materialtype, (android)...
AssetProcessor: Failed Materials/Types/Skin.materialtype, (android)...
AssetProcessor: Failed Materials/Types/StandardPBR.materialtype, (android)...
AssetProcessor: Failed Materials/Types/EnhancedPBR.materialtype, (android)...
AssetProcessor: Failed Materials/Types/StandardMultilayerPBR.materialtype, (android)...
AssetProcessor: Failed Materials/Types/BasePBR.materialtype, (android)...
````
This is the first error reported in all of them:
````
ERROR | > All shaders in a material must use the same object ShaderResourceGroup. from AssetBuilder
````
**Attachments**
[log.txt](https://github.com/o3de/o3de/files/8284043/log.txt)
| non_test | ar bug report android asset profile job red due to materialtype assets failing to be processed describe the bug ar job android asset profile is failing with errors when processing materialtype assets failed jenkins job information android asset profile assetprocessor failed testdata materials types autobrick materialtype android assetprocessor failed testdata materials types minimalpbr materialtype android assetprocessor failed materials terrain pbrterrain materialtype android assetprocessor failed materials types skin materialtype android assetprocessor failed materials types standardpbr materialtype android assetprocessor failed materials types enhancedpbr materialtype android assetprocessor failed materials types standardmultilayerpbr materialtype android assetprocessor failed materials types basepbr materialtype android this is the first error reported in all of them error all shaders in a material must use the same object shaderresourcegroup from assetbuilder attachments | 0 |
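When triaging an AR log like the excerpt above, it helps to pull out the failed assets and the distinct errors mechanically rather than by eye. A throwaway sketch over the quoted lines — the regexes are guesses fitted to the AssetProcessor log lines shown here, not an O3DE API:

```python
import re

# A couple of representative lines copied from the log excerpt above.
log = """\
AssetProcessor: Failed TestData/Materials/Types/AutoBrick.materialtype, (android)...
AssetProcessor: Failed Materials/Types/StandardPBR.materialtype, (android)...
ERROR | > All shaders in a material must use the same object ShaderResourceGroup. from AssetBuilder
"""

# Every asset reported as failed, in order of appearance.
failed_assets = re.findall(r"AssetProcessor: Failed (\S+),", log)
# The distinct error messages, so repeated failures collapse to one cause.
errors = sorted(set(re.findall(r"ERROR \| > (.+?) from AssetBuilder", log)))
print(failed_assets)
print(errors)
```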
105,302 | 13,175,308,142 | IssuesEvent | 2020-08-12 01:12:11 | microsoft/botframework-solutions | https://api.github.com/repos/microsoft/botframework-solutions | closed | Speech UX | Needs design Stale | Speech UX
Design for voice/speech across VA/skills, feedback and coordination for speech support with Composer, SDK, direct line speech, etc. | 1.0 | Speech UX - Speech UX
Design for voice/speech across VA/skills, feedback and coordination for speech support with Composer, SDK, direct line speech, etc. | non_test | speech ux speech ux design for voice speech across va skills feedback and coordination for speech support with composer sdk direct line speech etc | 0 |
248,700 | 21,052,376,679 | IssuesEvent | 2022-03-31 21:47:55 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | com.hazelcast.internal.util.phonehome.PhoneHomeCPSubsystemTest.testCountdownLatchesCount [HZ-1039] | Team: Core Type: Test-Failure Source: Internal to-jira | _5.1.z_ (commit 56d28803b4c636318555373c3f5236e88b0c0690)
Failed on ZingJDK8: https://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-5.maintenance-ZingJDK8/48/testReport/com.hazelcast.internal.util.phonehome/PhoneHomeCPSubsystemTest/testCountdownLatchesCount/
<details><summary>Stacktrace:</summary>
```
org.junit.ComparisonFailure: expected:<"[3]"> but was:<"[2]">
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at com.hazelcast.internal.util.phonehome.PhoneHomeCPSubsystemTest.testCountdownLatchesCount(PhoneHomeCPSubsystemTest.java:125)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
</details> | 1.0 | com.hazelcast.internal.util.phonehome.PhoneHomeCPSubsystemTest.testCountdownLatchesCount [HZ-1039] - _5.1.z_ (commit 56d28803b4c636318555373c3f5236e88b0c0690)
Failed on ZingJDK8: https://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-5.maintenance-ZingJDK8/48/testReport/com.hazelcast.internal.util.phonehome/PhoneHomeCPSubsystemTest/testCountdownLatchesCount/
<details><summary>Stacktrace:</summary>
```
org.junit.ComparisonFailure: expected:<"[3]"> but was:<"[2]">
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at com.hazelcast.internal.util.phonehome.PhoneHomeCPSubsystemTest.testCountdownLatchesCount(PhoneHomeCPSubsystemTest.java:125)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
</details> | test | com hazelcast internal util phonehome phonehomecpsubsystemtest testcountdownlatchescount z commit failed on stacktrace org junit comparisonfailure expected but was at sun reflect nativeconstructoraccessorimpl native method at sun reflect nativeconstructoraccessorimpl newinstance nativeconstructoraccessorimpl java at sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at com hazelcast internal util phonehome phonehomecpsubsystemtest testcountdownlatchescount phonehomecpsubsystemtest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java util concurrent futuretask run futuretask java at java lang thread run thread java | 1 |
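The `expected:<"[3]"> but was:<"[2]">` failure above has the classic shape of asserting on an asynchronously collected metric one beat too early. The usual de-flake is a polling assertion (Hazelcast's own test support exposes `assertTrueEventually` for this); a minimal Python analogue of the pattern, with a simulated lagging metric standing in for the phone-home counter:

```python
import time

def assert_eventually(predicate, timeout=2.0, interval=0.02):
    """Poll `predicate` until it is truthy or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return
        if time.monotonic() >= deadline:
            raise AssertionError(f"condition not met within {timeout}s")
        time.sleep(interval)

class LaggingMetric:
    """Simulates phone-home data that trails the real CP object count."""
    def __init__(self, final_value, settle_after_s):
        self._final = final_value
        self._ready_at = time.monotonic() + settle_after_s
    def value(self):
        return self._final if time.monotonic() >= self._ready_at else self._final - 1

metric = LaggingMetric(final_value=3, settle_after_s=0.1)
assert_eventually(lambda: metric.value() == 3)  # passes once the metric settles
```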
378,536 | 26,325,253,863 | IssuesEvent | 2023-01-10 05:46:50 | RDFLib/prez | https://api.github.com/repos/RDFLib/prez | closed | Confirm the annotated RDF format | documentation | When we return annotated/labelled RDF, we need to indicate this, something like a media type of `text/labels+turtle` or `application/labels+ld+json` etc. Need to ensure these are legal and would be understood as plain RDF (i.e. `text/turtle`, `application/ld+json`) if a system didn't know about "labels+". | 1.0 | Confirm the annotated RDF format - When we return annotated/labelled RDF, we need to indicate this, something like a media type of `text/labels+turtle` or `application/labels+ld+json` etc. Need to ensure these are legal and would be understood as plain RDF (i.e. `text/turtle`, `application/ld+json`) if a system didn't know about "labels+". | non_test | confirm the annotated rdf format when we return annotated labelled rdf we need to indicate this something like a media type of text labels turtle or application labels ld json etc need to ensure these are legal and would be understood as plain rdf i e text turtle application ld json if a system didn t know about labels | 0 |
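Whether a proposed type like `text/labels+turtle` is even well-formed can be checked against RFC 6838's restricted-name grammar, where the structured-syntax suffix is whatever follows the last `+` in the subtype. A rough sketch — the regex below simplifies the grammar (no parameters), and whether a `+turtle` suffix is *registered* is a separate question from syntax:

```python
import re

# Simplified RFC 6838 restricted-name: leading alphanumeric, then up to 126
# characters from the allowed set. Parameters (e.g. ";charset=...") ignored.
_NAME = r"[A-Za-z0-9][A-Za-z0-9!#$&^_.+-]{0,126}"
_MEDIA_TYPE = re.compile(rf"^(?P<type>{_NAME})/(?P<subtype>{_NAME})$")

def parse_media_type(value):
    """Return (type, subtype, structured_suffix_or_None) or raise ValueError."""
    m = _MEDIA_TYPE.match(value)
    if not m:
        raise ValueError(f"not a media type: {value!r}")
    subtype = m.group("subtype")
    base, plus, suffix = subtype.rpartition("+")
    return m.group("type"), subtype, (suffix if plus else None)

print(parse_media_type("text/labels+turtle"))  # ('text', 'labels+turtle', 'turtle')
```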
318,704 | 9,696,542,074 | IssuesEvent | 2019-05-25 08:34:44 | ideaq/home | https://api.github.com/repos/ideaq/home | closed | Electricity | Installing the BTE | priority-2 | We have had approval from EDP to increase the electricity supply to the house (although not as much as we would like but of course we will have #20 + #25) and the next step is to have all of the new equipment installed that EDP requires in order to provide us with electricity from the grid.
This is now quite high priority as it is a precursor to having a decent electrical output in the house.
These were the two steps outlined in the letter from EDP:

Step 1 is to install the BTE (Baixa Tensão Especial), for which we'll need electricians and a construction worker to knock into the garden wall (where it needs to be installed) without compromising it. After this, EDP will come for an inspection, let us know what else is required and hook everything up.
## Tasks
+ [x] Get estimated budgets for the work
+ [x] Confirm order of steps that need to be taken are as above **[in progress - email sent]**
+ [x] Book installation of BTE date
+ [ ] Call EDP and confirm installation date, booking date for visit/inspection
+ [ ] Get clarification from BTE on how the tariff structure works and whether we can be on 'normal' electricity with the BTE installed before we have to start paying the fixed monthly fees associated with a higher electricity consumption until we need it: https://www.edpsu.pt/pt/tarifasehorarios/pages/tarifasbte.aspx
+ [ ] BTE installation
+ [ ] EDP visit
+ [ ] Get next steps from EDP
_Note: Further information on BTEs here: https://lojaluz.com/faq/baixa-tensao-especial-media-tensao 🇵🇹 _ | 1.0 | Electricity | Installing the BTE - We have had approval from EDP to increase the electricity supply to the house (although not as much as we would like but of course we will have #20 + #25) and the next step is to have all of the new equipment installed that EDP requires in order to provide us with electricity from the grid.
This is now quite high priority as it is a precursor to having a decent electrical output in the house.
These were the two steps outlined in the letter from EDP:

Step 1 is to install the BTE (Baixa Tensão Especial), for which we'll need electricians and a construction worker to knock into the garden wall (where it needs to be installed) without compromising it. After this, EDP will come for an inspection, let us know what else is required and hook everything up.
## Tasks
+ [x] Get estimated budgets for the work
+ [x] Confirm order of steps that need to be taken are as above **[in progress - email sent]**
+ [x] Book installation of BTE date
+ [ ] Call EDP and confirm installation date, booking date for visit/inspection
+ [ ] Get clarification from BTE on how the tariff structure works and whether we can be on 'normal' electricity with the BTE installed before we have to start paying the fixed monthly fees associated with a higher electricity consumption until we need it: https://www.edpsu.pt/pt/tarifasehorarios/pages/tarifasbte.aspx
+ [ ] BTE installation
+ [ ] EDP visit
+ [ ] Get next steps from EDP
_Note: Further information on BTEs here: https://lojaluz.com/faq/baixa-tensao-especial-media-tensao 🇵🇹 _ | non_test | electricity installing the bte we have had approval from edp to increase the electricity supply to the house although not as much as we would like but of course we will have and the next step is to have all of the new equipment installed that edp requires in order to provide us with electricity from the grid this is now quite high priority as it is a pre cursor to having a decent electrical output in the house these were the two steps outlined in the letter from edp step is to install the bte baixa tensão especial for which we ll need electricians and a construction worker to knock into the garden wall where it needs to be installed without compromising it after this edp will come for an inspection let us know what else is required and hook everything up tasks get estimated budgets for the work confirm order of steps that need to be taken are as above book installation of bte date call edp and confirm installation date booking date for visit inspection get clarification from bte on how the tariff structure works and whether we can be on normal electricity with the bte installed before we have to start paying the fixed monthly fees associated with a higher electricity consumption until we need it bte installation edp visit get next steps from edp note further information on btes here 🇵🇹 | 0 |
651,300 | 21,472,696,619 | IssuesEvent | 2022-04-26 10:58:26 | Amulet-Team/Amulet-Map-Editor | https://api.github.com/repos/Amulet-Team/Amulet-Map-Editor | closed | [Bug Report] SSL Verification Failed | type: bug priority: high | ## Error
Traceback (most recent call last):
File "urllib\request.py", line 1350, in do_open
File "http\client.py", line 1277, in request
File "http\client.py", line 1323, in _send_request
File "http\client.py", line 1272, in endheaders
File "http\client.py", line 1032, in _send_output
File "http\client.py", line 972, in send
File "http\client.py", line 1447, in connect
File "ssl.py", line 423, in wrap_socket
File "ssl.py", line 870, in _create
File "ssl.py", line 1139, in do_handshake
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1091)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "amulet_map_editor\programs\edit\api\canvas\base_edit_canvas.py", line 160, in _setup
File "minecraft_model_reader\api\resource_pack\java\download_resources.py", line 99, in get_java_vanilla_latest_iter
File "minecraft_model_reader\api\resource_pack\java\download_resources.py", line 63, in get_latest_iter
File "minecraft_model_reader\api\resource_pack\java\download_resources.py", line 56, in get_latest_iter
File "minecraft_model_reader\api\resource_pack\java\download_resources.py", line 25, in get_launcher_manifest
File "urllib\request.py", line 222, in urlopen
File "urllib\request.py", line 525, in open
File "urllib\request.py", line 543, in _open
File "urllib\request.py", line 503, in _call_chain
File "urllib\request.py", line 1393, in https_open
File "urllib\request.py", line 1352, in do_open
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1091)>
### Current Behaviour:
All textures are missing and majority of the world is invisible when trying to open up the 3D Editor
### Expected behavior:
I mean I kind of just expected it to work
### Steps To Reproduce:
1. Open Amulet
2. Load a world
3. Switch over to 3D Editor
### Environment:
- OS: Windows 10 pro version 21H2
- Minecraft Platform: Java
- Minecraft Version: 1.9.x / 1.12.x / 1.14.x / 1.15.x / 1.16.x / 1.17.x / 1.18.x
- Amulet Version: 0.8.21 64bit / 0.9.0b0 64bit
### Additional context
I also tried manually loading a resource pack, when doing so the world mostly stops being invisible but instead all of the textures are missing and appear as pink and black.
I already searched for the issue and a fix has been discovered but unfortunately said fix only mentions mac OS and as such I haven't been able to fix this issue on my side.
### Screenshot

| 1.0 | [Bug Report] SSL Verification Failed - ## Error
Traceback (most recent call last):
File "urllib\request.py", line 1350, in do_open
File "http\client.py", line 1277, in request
File "http\client.py", line 1323, in _send_request
File "http\client.py", line 1272, in endheaders
File "http\client.py", line 1032, in _send_output
File "http\client.py", line 972, in send
File "http\client.py", line 1447, in connect
File "ssl.py", line 423, in wrap_socket
File "ssl.py", line 870, in _create
File "ssl.py", line 1139, in do_handshake
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1091)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "amulet_map_editor\programs\edit\api\canvas\base_edit_canvas.py", line 160, in _setup
File "minecraft_model_reader\api\resource_pack\java\download_resources.py", line 99, in get_java_vanilla_latest_iter
File "minecraft_model_reader\api\resource_pack\java\download_resources.py", line 63, in get_latest_iter
File "minecraft_model_reader\api\resource_pack\java\download_resources.py", line 56, in get_latest_iter
File "minecraft_model_reader\api\resource_pack\java\download_resources.py", line 25, in get_launcher_manifest
File "urllib\request.py", line 222, in urlopen
File "urllib\request.py", line 525, in open
File "urllib\request.py", line 543, in _open
File "urllib\request.py", line 503, in _call_chain
File "urllib\request.py", line 1393, in https_open
File "urllib\request.py", line 1352, in do_open
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1091)>
### Current Behaviour:
All textures are missing and majority of the world is invisible when trying to open up the 3D Editor
### Expected behavior:
I mean I kind of just expected it to work
### Steps To Reproduce:
1. Open Amulet
2. Load a world
3. Switch over to 3D Editor
### Environment:
- OS: Windows 10 pro version 21H2
- Minecraft Platform: Java
- Minecraft Version: 1.9.x / 1.12.x / 1.14.x / 1.15.x / 1.16.x / 1.17.x / 1.18.x
- Amulet Version: 0.8.21 64bit / 0.9.0b0 64bit
### Additional context
I also tried manually loading a resource pack, when doing so the world mostly stops being invisible but instead all of the textures are missing and appear as pink and black.
I already searched for the issue and a fix has been discovered but unfortunately said fix only mentions mac OS and as such I haven't been able to fix this issue on my side.
### Screenshot

| non_test | ssl verification failed error traceback most recent call last file urllib request py line in do open file http client py line in request file http client py line in send request file http client py line in endheaders file http client py line in send output file http client py line in send file http client py line in connect file ssl py line in wrap socket file ssl py line in create file ssl py line in do handshake ssl sslcertverificationerror certificate verify failed unable to get issuer certificate ssl c during handling of the above exception another exception occurred traceback most recent call last file amulet map editor programs edit api canvas base edit canvas py line in setup file minecraft model reader api resource pack java download resources py line in get java vanilla latest iter file minecraft model reader api resource pack java download resources py line in get latest iter file minecraft model reader api resource pack java download resources py line in get latest iter file minecraft model reader api resource pack java download resources py line in get launcher manifest file urllib request py line in urlopen file urllib request py line in open file urllib request py line in open file urllib request py line in call chain file urllib request py line in https open file urllib request py line in do open urllib error urlerror current behaviour all textures are missing and majority of the world is invisible when trying to open up the editor expected behavior i mean i kind of just expected it to work steps to reproduce open amulet load a world switch over to editor environment os windows pro version minecraft platform java minecraft version x x x x x x x amulet version additional context i also tried manually loading a resource pack when doing so the world mostly stops being invisible but instead all of the textures are missing and appear as pink and black i already searched for the issue and a fix has been discovered but unfortunately said fix 
only mentions mac os and as such i haven t been able to fix this issue on my side screenshot | 0 |
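The `CERTIFICATE_VERIFY_FAILED: unable to get issuer certificate` trace above means that Python runtime could not build a chain to a trusted root for the launcher-manifest host. A stdlib-only sketch of the usual first diagnostic — where is this interpreter looking for CA certificates, and did it load any? (If the paths are empty or nothing loads, pointing an `SSLContext` at a known bundle, e.g. the third-party `certifi` package's `certifi.where()`, is the common fix.)

```python
import ssl

# Where does this interpreter expect CA certificates?
paths = ssl.get_default_verify_paths()
print("cafile:", paths.cafile)
print("capath:", paths.capath)

# The same kind of verifying context urllib builds; with no explicit bundle
# it loads the platform's default trust roots.
ctx = ssl.create_default_context()
print("verify_mode is CERT_REQUIRED:", ctx.verify_mode == ssl.CERT_REQUIRED)
print("CA certs loaded:", len(ctx.get_ca_certs()))
```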
325,361 | 27,871,439,588 | IssuesEvent | 2023-03-21 13:43:03 | elastic/elasticsearch-net | https://api.github.com/repos/elastic/elasticsearch-net | opened | [TESTS] DeleteByQueryWithSlicesApiTests is flakey | Flakey test 8.x | Most of the time, `DeleteByQueryWithSlicesApiTests` succeeds, but it has been seen to fail with a specific seed.
```
.\build.bat seed:27658 random:sourceserializer:false random:httpcompression integrate 8.0.1 "intrusiveoperation" "deletebyquerywithslices"
```
With the above seed, the delete by query operation deletes ten records instead of the expected 5 for the first scrolled slice.
We should investigate the underlying cause. | 1.0 | [TESTS] DeleteByQueryWithSlicesApiTests is flakey - Most of the time, `DeleteByQueryWithSlicesApiTests` succeeds, but it has been seen to fail with a specific seed.
```
.\build.bat seed:27658 random:sourceserializer:false random:httpcompression integrate 8.0.1 "intrusiveoperation" "deletebyquerywithslices"
```
With the above seed, the delete by query operation deletes ten records instead of the expected 5 for the first scrolled slice.
We should investigate the underlying cause. | test | deletebyquerywithslicesapitests is flakey most of the time deletebyquerywithslicesapitests succeeds but it has been seen to fail with a specific seed build bat seed random sourceserializer false random httpcompression integrate intrusiveoperation deletebyquerywithslices with the above seed the delete by query operation deletes ten records instead of the expected for the first scrolled slice we should investigate the underlying cause | 1 |
148,326 | 23,340,141,321 | IssuesEvent | 2022-08-09 13:26:04 | wazuh/wazuh-kibana-app | https://api.github.com/repos/wazuh/wazuh-kibana-app | opened | Test Cases Design for 4.3.7 | Test Case Design | This issue aims to track the effort for the analysis and design of the test cases for the new issues of the release 4.3.7.
Task to be perform:
- [x] Analysis of PR on Review
- [:clock1: ] https://github.com/wazuh/wazuh-kibana-app/issues/4293
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4179
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4337
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4347
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4346
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4348
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4349
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4181
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4340
- [:clock1: ] https://github.com/wazuh/wazuh-kibana-app/issues/4277
| 1.0 | Test Cases Design for 4.3.7 - This issue aims to track the effort for the analysis and design of the test cases for the new issues of the release 4.3.7.
Tasks to be performed:
- [x] Analysis of PR on Review
- [:clock1: ] https://github.com/wazuh/wazuh-kibana-app/issues/4293
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4179
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4337
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4347
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4346
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4348
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4349
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4181
- [x] https://github.com/wazuh/wazuh-kibana-app/issues/4340
- [:clock1: ] https://github.com/wazuh/wazuh-kibana-app/issues/4277
| non_test | test cases design for this issue aims to track the effort for the analysis and design of the test cases for the new issues of the release task to be perform analysis of pr on review | 0 |
317,664 | 27,253,174,757 | IssuesEvent | 2023-02-22 09:38:27 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | kvnemesis: inject clock skew | C-qa A-testing T-kv | Currently, kvnemesis runs a set of nodes on the local machine. These will all be using the same system clock, and thus have 0 clock skew. This prevents kvnemesis from testing single-key linearizability, an important database guarantee. We should inject random clock skew on these nodes up to MaxOffset. | 1.0 | kvnemesis: inject clock skew - Currently, kvnemesis runs a set of nodes on the local machine. These will all be using the same system clock, and thus have 0 clock skew. This prevents kvnemesis from testing single-key linearizability, an important database guarantee. We should inject random clock skew on these nodes up to MaxOffset. | test | kvnemesis inject clock skew currently kvnemesis runs a set of nodes on the local machine these will all be using the same system clock and thus have clock skew this prevents kvnemesis from testing single key linearizability an important database guarantee we should inject random clock skew on these nodes up to maxoffset | 1 |
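A minimal sketch of the kind of bounded per-node skew the kvnemesis issue above asks for — each simulated node gets a fixed offset drawn uniformly from ±MaxOffset (500 ms is CockroachDB's default `--max-offset`). The class and method names here are hypothetical, not kvnemesis API:

```python
import random
import time

class SkewedClock:
    """Wall clock with a fixed injected offset bounded by ±max_offset_ms."""
    def __init__(self, max_offset_ms, seed):
        # Seeded per node so a failing run is reproducible.
        self.offset_ms = random.Random(seed).uniform(-max_offset_ms, max_offset_ms)
    def now_ms(self):
        return time.time() * 1000.0 + self.offset_ms

MAX_OFFSET_MS = 500.0  # CockroachDB's default --max-offset is 500ms
nodes = [SkewedClock(MAX_OFFSET_MS, seed=i) for i in range(3)]
offsets = [n.offset_ms for n in nodes]
print(offsets)
```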
124,243 | 16,599,567,854 | IssuesEvent | 2021-06-01 17:26:17 | ParabolInc/parabol | https://api.github.com/repos/ParabolInc/parabol | closed | Self-provisioning SSO designed | design icebox stale | AC proposed concept(s) to gather feedback from team, users
EE 8 hours
| 1.0 | Self-provisioning SSO designed - AC proposed concept(s) to gather feedback from team, users
EE 8 hours
| non_test | self provisioning sso designed ac proposed concept s to gather feedback from team users ee hours | 0 |
112,663 | 17,095,381,135 | IssuesEvent | 2021-07-09 01:09:21 | RG4421/openedr | https://api.github.com/repos/RG4421/openedr | closed | CVE-2019-5443 (High) detected in curlcurl-7_63_0 - autoclosed | security vulnerability | ## CVE-2019-5443 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>curlcurl-7_63_0</b></p></summary>
<p>
<p>A command line tool and library for transferring data with URL syntax, supporting HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP and RTMP. libcurl offers a myriad of powerful features</p>
<p>Library home page: <a href=https://github.com/curl/curl.git>https://github.com/curl/curl.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/RG4421/openedr/commit/f991dbd97bf34917a1d61c43ef4b41832708779c">f991dbd97bf34917a1d61c43ef4b41832708779c</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openedr/edrav2/eprj/curl/lib/vtls/openssl.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openedr/edrav2/eprj/curl/lib/vtls/openssl.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A non-privileged user or program can put code and a config file in a known non-privileged path (under C:/usr/local/) that will make curl <= 7.65.1 automatically run the code (as an openssl "engine") on invocation. If that curl is invoked by a privileged user it can do anything it wants.
<p>Publish Date: 2019-07-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-5443>CVE-2019-5443</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://curl.haxx.se/docs/CVE-2019-5443.html">https://curl.haxx.se/docs/CVE-2019-5443.html</a></p>
<p>Release Date: 2019-06-30</p>
<p>Fix Resolution: 7.65.2</p>
</p>
</details>
<p></p>
| True | CVE-2019-5443 (High) detected in curlcurl-7_63_0 - autoclosed - ## CVE-2019-5443 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>curlcurl-7_63_0</b></p></summary>
<p>
<p>A command line tool and library for transferring data with URL syntax, supporting HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP and RTMP. libcurl offers a myriad of powerful features</p>
<p>Library home page: <a href=https://github.com/curl/curl.git>https://github.com/curl/curl.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/RG4421/openedr/commit/f991dbd97bf34917a1d61c43ef4b41832708779c">f991dbd97bf34917a1d61c43ef4b41832708779c</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openedr/edrav2/eprj/curl/lib/vtls/openssl.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openedr/edrav2/eprj/curl/lib/vtls/openssl.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A non-privileged user or program can put code and a config file in a known non-privileged path (under C:/usr/local/) that will make curl <= 7.65.1 automatically run the code (as an openssl "engine") on invocation. If that curl is invoked by a privileged user it can do anything it wants.
<p>Publish Date: 2019-07-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-5443>CVE-2019-5443</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://curl.haxx.se/docs/CVE-2019-5443.html">https://curl.haxx.se/docs/CVE-2019-5443.html</a></p>
<p>Release Date: 2019-06-30</p>
<p>Fix Resolution: 7.65.2</p>
</p>
</details>
<p></p>
| non_test | cve high detected in curlcurl autoclosed cve high severity vulnerability vulnerable library curlcurl a command line tool and library for transferring data with url syntax supporting http https ftp ftps gopher tftp scp sftp smb telnet dict ldap ldaps file imap smtp rtsp and rtmp libcurl offers a myriad of powerful features library home page a href found in head commit a href found in base branch main vulnerable source files openedr eprj curl lib vtls openssl c openedr eprj curl lib vtls openssl c vulnerability details a non privileged user or program can put code and a config file in a known non privileged path under c usr local that will make curl automatically run the code as an openssl engine on invocation if that curl is invoked by a privileged user it can do anything it wants publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
232,503 | 17,784,375,040 | IssuesEvent | 2021-08-31 09:16:27 | TWAllison/coding-quiz | https://api.github.com/repos/TWAllison/coding-quiz | opened | JS functionality | documentation enhancement | - create the quiz questions array
- give the buttons functionality with .addEventListener
- create functions to handle score keeping
- create a for loop for questions
- store high score to local storage
- create timer function using .setInterval | 1.0 | JS functionality - - create the quiz questions array
- give the buttons functionality with .addEventListener
- create functions to handle score keeping
- create a for loop for questions
- store high score to local storage
- create timer function using .setInterval | non_test | js functionality create the quiz questions array give the buttons functionality with addeventlistener create functions to handle score keeping create a for loop for questions store high score to local storage create timer function using setinterval | 0 |
123,329 | 10,264,247,038 | IssuesEvent | 2019-08-22 15:56:07 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | opened | Replace legacy `describe` tests with one of `describes.{realWin|sandboxed|fakeWin|integration}` | P2: Soon Related to: Flaky Tests Type: Bug WG: access-subscriptions WG: ads WG: analytics WG: infra WG: performance WG: runtime WG: stories WG: ui-and-a11y | This is part of a larger effort to isolate AMP's tests and prevent global state from leaking across tests. See https://github.com/ampproject/amphtml/pull/24090#issuecomment-523684518 for the discussion that led to this issue being filed.
Related to #24090
| 1.0 | Replace legacy `describe` tests with one of `describes.{realWin|sandboxed|fakeWin|integration}` - This is part of a larger effort to isolate AMP's tests and prevent global state from leaking across tests. See https://github.com/ampproject/amphtml/pull/24090#issuecomment-523684518 for the discussion that led to this issue being filed.
Related to #24090
| test | replace legacy describe tests with one of describes realwin sandboxed fakewin integration this is part of a larger effort to isolate amp s tests and prevent global state from leaking across tests see for the discussion that led to this issue being filed related to | 1 |
219,242 | 17,081,386,018 | IssuesEvent | 2021-07-08 05:58:04 | ristekoss/susunjadwal | https://api.github.com/repos/ristekoss/susunjadwal | closed | Implement new beta testers contact form page for beta testing purposes | backend beta-testing frontend p1.high | - Nama
- Email
- ID LINE
- Fakultas
- Jurusan | 1.0 | Implement new beta testers contact form page for beta testing purposes - - Nama
- Email
- ID LINE
- Fakultas
- Jurusan | test | implement new beta testers contact form page for beta testing purposes nama email id line fakultas jurusan | 1 |
79,837 | 7,725,604,460 | IssuesEvent | 2018-05-24 18:30:24 | Iridescent-CM/technovation-app | https://api.github.com/repos/Iridescent-CM/technovation-app | closed | Moving teams to semi-finals | 4 - Test <= 8 [sprint topic] judging added during sprint | QA needed:
1. In the admin, change a few teams to semifinalists (remember which ones)
2. Change the judging round to semifinals
3. Log in as a virtual judge (make sure you are not a mentor for any teams) and make sure you can judge the semifinalists that you selected.
Use this if you do not have a virtual judge account:
email: alli+m3@iridescentlearning.org
password: alli+m3@iridescentlearning.org
<!---
@huboard:{"order":4.8176102247944e-31,"milestone_order":7.848812059156018e-45,"custom_state":""}
-->
| 1.0 | Moving teams to semi-finals - QA needed:
1. In the admin, change a few teams to semifinalists (remember which ones)
2. Change the judging round to semifinals
3. Log in as a virtual judge (make sure you are not a mentor for any teams) and make sure you can judge the semifinalists that you selected.
Use this if you do not have a virtual judge account:
email: alli+m3@iridescentlearning.org
password: alli+m3@iridescentlearning.org
<!---
@huboard:{"order":4.8176102247944e-31,"milestone_order":7.848812059156018e-45,"custom_state":""}
-->
| test | moving teams to semi finals qa needed in the admin change a few teams to semifinalists remember which ones change the judging round to semifinals log in as a virtual judge make sure you are not a mentor for any teams and make sure you can judge the semifinalists that you selected use this if you do not have a virtual judge account email alli iridescentlearning org password alli iridescentlearning org huboard order milestone order custom state | 1 |
234,996 | 7,733,628,814 | IssuesEvent | 2018-05-26 14:12:14 | Inter-Actief/amelie | https://api.github.com/repos/Inter-Actief/amelie | closed | Implement audit-log for member data export | Back-end Front-end Priority Pull Request easy-fix enhancement | Every time a data-export occurs based on member data, a clear reason should be provided and logged in an audit-log. This log should specify:
- [ ] who requested the export
- [ ] for what reason the data was exported | 1.0 | Implement audit-log for member data export - Every time a data-export occurs based on member data, a clear reason should be provided and logged in an audit-log. This log should specify:
- [ ] who requested the export
- [ ] for what reason the data was exported | non_test | implement audit log for member data export every time a data export occurs based on member data a clear reason should be provided and logged in an audit log this log should specify who requested the export for what reason the data was exported | 0 |
20,001 | 3,287,628,315 | IssuesEvent | 2015-10-29 11:24:02 | MarcusWolschon/osmeditor4android | https://api.github.com/repos/MarcusWolschon/osmeditor4android | closed | Saving state with empty data | auto-migrated Defect Medium Priority | ```
Pro memoria: the additional fix in 922 gurantees that we don't write out an
empty state file, which might happen if onStop gets called before we have even
started reading the state file.
However the way it is currently implementd leads to not being able to save a
file that is valid empty (because for example you downloaded an area which
actually contains nothing). Likely better to use a flag of sorts.
```
Original issue reported on code.google.com by `sp8...@gmail.com` on 13 Dec 2014 at 10:15 | 1.0 | Saving state with empty data - ```
Pro memoria: the additional fix in 922 gurantees that we don't write out an
empty state file, which might happen if onStop gets called before we have even
started reading the state file.
However the way it is currently implementd leads to not being able to save a
file that is valid empty (because for example you downloaded an area which
actually contains nothing). Likely better to use a flag of sorts.
```
Original issue reported on code.google.com by `sp8...@gmail.com` on 13 Dec 2014 at 10:15 | non_test | saving state with empty data pro memoria the additional fix in gurantees that we don t write out an empty state file which might happen if onstop gets called before we have even started reading the state file however the way it is currently implementd leads to not being able to save a file that is valid empty because for example you downloaded an area which actually contains nothing likely better to use a flag of sorts original issue reported on code google com by gmail com on dec at | 0 |
2,371 | 2,610,496,222 | IssuesEvent | 2015-02-26 20:44:26 | 18F/midas | https://api.github.com/repos/18F/midas | closed | No search results should still show something | design enhancement | When you search for a tag but there's no projects or opportunities associated with it, it should still show projects. For example, the screen should say "Nothing matches your search, however these projects might be relevant..." and then a series of project cards.
@azmiria or @sharonlo feel free to proposal IA/UX. | 1.0 | No search results should still show something - When you search for a tag but there's no projects or opportunities associated with it, it should still show projects. For example, the screen should say "Nothing matches your search, however these projects might be relevant..." and then a series of project cards.
@azmiria or @sharonlo feel free to proposal IA/UX. | non_test | no search results should still show something when you search for a tag but there s no projects or opportunities associated with it it should still show projects for example the screen should say nothing matches your search however these projects might be relevant and then a series of project cards azmiria or sharonlo feel free to proposal ia ux | 0 |
141,950 | 11,448,831,828 | IssuesEvent | 2020-02-06 05:01:59 | istio/istio | https://api.github.com/repos/istio/istio | closed | All Istio charts should be consolidated into one release directory | area/test and release bug bashed lifecycle/needs-triage | (NOTE: This is used to report product bugs:
To report a security vulnerability, please visit <https://istio.io/about/security-vulnerabilities/>
To ask questions about how to use Istio, please visit <https://discuss.istio.io>
)
**Bug description**
We support https://istio.io/charts however it only contains the latest charts version. See
**Affected product area (please put an X in all that apply)**
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[X] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
**Expected behavior**
**Steps to reproduce the bug**
It is not possible to install old versions of Istio charts nor upgrade the charts from the same location. See https://github.com/istio/istio.io/issues/3789 for more details. (the second part of the issue there)
Note it is completely fine to keep the per-version charts repo, but all charts need to additionally be consolidated into one directory and an index created for them.
**Version (include the output of `istioctl version --remote` and `kubectl version`)**
All versions.
| 1.0 | All Istio charts should be consolidated into one release directory - (NOTE: This is used to report product bugs:
To report a security vulnerability, please visit <https://istio.io/about/security-vulnerabilities/>
To ask questions about how to use Istio, please visit <https://discuss.istio.io>
)
**Bug description**
We support https://istio.io/charts however it only contains the latest charts version. See
**Affected product area (please put an X in all that apply)**
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[X] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
**Expected behavior**
**Steps to reproduce the bug**
It is not possible to install old versions of Istio charts nor upgrade the charts from the same location. See https://github.com/istio/istio.io/issues/3789 for more details. (the second part of the issue there)
Note it is completely fine to keep the per-version charts repo, but all charts need to additionally be consolidated into one directory and an index created for them.
**Version (include the output of `istioctl version --remote` and `kubectl version`)**
All versions.
| test | all istio charts should be consolidated into one release directory note this is used to report product bugs to report a security vulnerability please visit to ask questions about how to use istio please visit bug description we support however it only contains the latest charts version see affected product area please put an x in all that apply configuration infrastructure docs installation networking performance and scalability policies and telemetry security test and release user experience developer infrastructure expected behavior steps to reproduce the bug it is not possible to install old versions of istio charts nor upgrade the charts from the same location see for more details the second part of the issue there note it is completely fine to keep the per version charts repo but all charts need to additionally be consolidated into one directory and an index created for them version include the output of istioctl version remote and kubectl version all versions | 1 |
167,541 | 13,034,161,583 | IssuesEvent | 2020-07-28 08:16:00 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Chrome UI Functional Tests.test/functional/apps/visualize/_tsvb_table·ts - visualize app visual builder table should display correct values on changing group by field and column name | Team:KibanaApp failed-test | A test failed on a tracked branch
```
{ TimeoutError: Waiting for element to be located By(css selector, [data-test-subj~="tableView"])
Wait timed out after 10034ms
at node_modules/selenium-webdriver/lib/webdriver.js:834:17
at process._tickCallback (internal/process/next_tick.js:68:7) name: 'TimeoutError', remoteStacktrace: '' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.3/JOB=kibana-ciGroup12,node=immutable/7/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/visualize/_tsvb_table·ts","test.name":"visualize app visual builder table should display correct values on changing group by field and column name","test.failCount":2}} --> | 1.0 | Failing test: Chrome UI Functional Tests.test/functional/apps/visualize/_tsvb_table·ts - visualize app visual builder table should display correct values on changing group by field and column name - A test failed on a tracked branch
```
{ TimeoutError: Waiting for element to be located By(css selector, [data-test-subj~="tableView"])
Wait timed out after 10034ms
at node_modules/selenium-webdriver/lib/webdriver.js:834:17
at process._tickCallback (internal/process/next_tick.js:68:7) name: 'TimeoutError', remoteStacktrace: '' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.3/JOB=kibana-ciGroup12,node=immutable/7/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/visualize/_tsvb_table·ts","test.name":"visualize app visual builder table should display correct values on changing group by field and column name","test.failCount":2}} --> | test | failing test chrome ui functional tests test functional apps visualize tsvb table·ts visualize app visual builder table should display correct values on changing group by field and column name a test failed on a tracked branch timeouterror waiting for element to be located by css selector wait timed out after at node modules selenium webdriver lib webdriver js at process tickcallback internal process next tick js name timeouterror remotestacktrace first failure | 1 |
203,646 | 15,378,237,927 | IssuesEvent | 2021-03-02 18:03:47 | nih-cfde/cfde-deriva | https://api.github.com/repos/nih-cfde/cfde-deriva | closed | Indicate horizontal scroll is an option | Testing | One of my testers pointed out that on pages like [File](https://app-staging.nih-cfde.org/chaise/recordset/#1/CFDE:file@sort(RID)) there are more columns off the right of the screen, but no indication of that, so the only way you can find out is accidentally scrolling it, or by resizing your window.
I always look at it full screen on my dual monitor and so until 10 minutes ago I thought we were only displaying View, ID Namespace, Local Id, Filename, Project and Size In Bytes. I think if I've been looking at this thing for 9 months and had no idea there were more columns, then we need some kind of indicator :)
My normal view

Surprise extra columns:

| 1.0 | Indicate horizontal scroll is an option - One of my testers pointed out that on pages like [File](https://app-staging.nih-cfde.org/chaise/recordset/#1/CFDE:file@sort(RID)) there are more columns off the right of the screen, but no indication of that, so the only way you can find out is accidentally scrolling it, or by resizing your window.
I always look at it full screen on my dual monitor and so until 10 minutes ago I thought we were only displaying View, ID Namespace, Local Id, Filename, Project and Size In Bytes. I think if I've been looking at this thing for 9 months and had no idea there were more columns, then we need some kind of indicator :)
My normal view

Surprise extra columns:

| test | indicate horizontal scroll is an option one of my testers pointed out that on pages like there are more columns off the right of the screen but no indication of that so the only way you can find out is accidentally scrolling it or by resizing your window i always look at it full screen on my dual monitor and so until minutes ago i thought we were only displaying view id namespace local id filename project and size in bytes i think if i ve been looking at this thing for months and had no idea there were more columns then we need some kind of indicator my normal view surprise extra columns | 1 |
167,994 | 13,054,924,612 | IssuesEvent | 2020-07-30 00:04:05 | rust-lang/cargo | https://api.github.com/repos/rust-lang/cargo | opened | close_output test is randomly failing | A-testing-cargo-itself C-bug | TLDR: Should we run some flaky tests single-threaded?
The `build::close_output` test is randomly failing on CI. There were some fixes applied in #8286 in May 26, but there appears to be more recent failures:
https://github.com/rust-lang/rust/pull/74312#issuecomment-657964827
https://github.com/rust-lang/rust/pull/74408#issuecomment-659603027
https://github.com/rust-lang/rust/pull/74923 (https://github.com/rust-lang-ci/rust/runs/924743383)
The failure is:
```
---- build::close_output stdout ----
thread 'build::close_output' panicked at 'assertion failed: !status.success()', src/tools/cargo/tests/testsuite/build.rs:5016:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
I am uncertain how this is possible, so maybe someone could double check that what I wrote makes sense. [The test](https://github.com/rust-lang/cargo/blob/974eb438da8ced6e3becda2bbf63d9b643eacdeb/tests/testsuite/build.rs#L4928-L5044) covers what happens when stdout or stderr is closed in the middle of the build. It uses a proc-macro as a sync point so that the test can know when compilation has started, and to emit data to stdout or stderr during the build. It should follow this sequence:
1. Starts a TCP server.
2. Starts the build.
3. The proc-macro starts building.
4. The proc-macro connects to the TCP server, and waits for the test to tell it it is OK to continue.
5. Test receives connection from proc-macro
6. Test **closes stdout**.
7. Test tells proc-macro to continue.
8. Proc-macro starts spewing stuff to stdout to cargo, which through the internal job queue ends up attempting to [write to stdout](https://github.com/rust-lang/cargo/blob/974eb438da8ced6e3becda2bbf63d9b643eacdeb/src/cargo/core/compiler/job_queue.rs#L501-L503). Since stdout was closed in step 6, this should fail.
9. Cargo should exit with an error after rustc is done.
For some reason, at step 8, it successfully writes to stdout, and step 9 returns success.
I've been doing a few tests, and it gets worse based on the number of concurrent tests running. When run single threaded, I cannot get it to fail (even with the system under heavy load).
I'm feeling this is somewhat related to #7858. Is there still a race condition, even with atomic O_CLOEXEC? That is, AIUI, the file descriptors are still inherited across `fork`, and only closed when `exec` is called. If so, then there is a small window where the file descriptors have extra duplicates which prevent them from fully closing immediately.
I'm thinking a simple solution would be to isolate these tests into a separate test executable which runs with `--test-threads=1` (or maybe a simple no-harness test?). This should prevent concurrent tests from interfering with one another. The downside is that this makes it more cumbersome to run all of the test suite.
| 1.0 | close_output test is randomly failing - TLDR: Should we run some flaky tests single-threaded?
The `build::close_output` test is randomly failing on CI. There were some fixes applied in #8286 in May 26, but there appears to be more recent failures:
https://github.com/rust-lang/rust/pull/74312#issuecomment-657964827
https://github.com/rust-lang/rust/pull/74408#issuecomment-659603027
https://github.com/rust-lang/rust/pull/74923 (https://github.com/rust-lang-ci/rust/runs/924743383)
The failure is:
```
---- build::close_output stdout ----
thread 'build::close_output' panicked at 'assertion failed: !status.success()', src/tools/cargo/tests/testsuite/build.rs:5016:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
I am uncertain how this is possible, so maybe someone could double check that what I wrote makes sense. [The test](https://github.com/rust-lang/cargo/blob/974eb438da8ced6e3becda2bbf63d9b643eacdeb/tests/testsuite/build.rs#L4928-L5044) covers what happens when stdout or stderr is closed in the middle of the build. It uses a proc-macro as a sync point so that the test can know when compilation has started, and to emit data to stdout or stderr during the build. It should follow this sequence:
1. Starts a TCP server.
2. Starts the build.
3. The proc-macro starts building.
4. The proc-macro connects to the TCP server, and waits for the test to tell it it is OK to continue.
5. Test receives connection from proc-macro
6. Test **closes stdout**.
7. Test tells proc-macro to continue.
8. Proc-macro starts spewing stuff to stdout to cargo, which through the internal job queue ends up attempting to [write to stdout](https://github.com/rust-lang/cargo/blob/974eb438da8ced6e3becda2bbf63d9b643eacdeb/src/cargo/core/compiler/job_queue.rs#L501-L503). Since stdout was closed in step 6, this should fail.
9. Cargo should exit with an error after rustc is done.
For some reason, at step 8, it successfully writes to stdout, and step 9 returns success.
I've been doing a few tests, and it gets worse based on the number of concurrent tests running. When run single threaded, I cannot get it to fail (even with the system under heavy load).
I'm feeling this is somewhat related to #7858. Is there still a race condition, even with atomic O_CLOEXEC? That is, AIUI, the file descriptors are still inherited across `fork`, and only closed when `exec` is called. If so, then there is a small window where the file descriptors have extra duplicates which prevent them from fully closing immediately.
I'm thinking a simple solution would be to isolate these tests into a separate test executable which runs with `--test-threads=1` (or maybe a simple no-harness test?). This should prevent concurrent tests from interfering with one another. The downside is that this makes it more cumbersome to run all of the test suite.
| test | close output test is randomly failing tldr should we run some flaky tests single threaded the build close output test is randomly failing on ci there were some fixes applied in in may but there appears to be more recent failures the failure is build close output stdout thread build close output panicked at assertion failed status success src tools cargo tests testsuite build rs note run with rust backtrace environment variable to display a backtrace i am uncertain how this is possible so maybe someone could double check that what i wrote makes sense covers what happens when stdout or stderr is closed in the middle of the build it uses a proc macro as a sync point so that the test can know when compilation has started and to emit data to stdout or stderr during the build it should follow this sequence starts a tcp server starts the build the proc macro starts building the proc macro connects to the tcp server and waits for the test to tell it it is ok to continue test receives connection from proc macro test closes stdout test tells proc macro to continue proc macro starts spewing stuff to stdout to cargo which through the internal job queue ends up attempting to since stdout was closed in step this should fail cargo should exit with an error after rustc is done for some reason at step it successfully writes to stdout and step returns success i ve been doing a few tests and it gets worse based on the number of concurrent tests running when run single threaded i cannot get it to fail even with the system under heavy load i m feeling this is somewhat related to is there still a race condition even with atomic o cloexec that is aiui the file descriptors are still inherited across fork and only closed when exec is called if so then there is a small window where the file descriptors have extra duplicates which prevent them from fully closing immediately i m thinking a simple solution would be to isolate these tests into a separate test executable which runs with test threads or maybe a simple no harness test this should prevent concurrent tests from interfering with one another the downside is that this makes it more cumbersome to run all of the test suite | 1 |
54,098 | 13,251,357,508 | IssuesEvent | 2020-08-20 01:55:14 | beaverbuilder/feature-requests | https://api.github.com/repos/beaverbuilder/feature-requests | opened | Add Publish button beside Done button | Beaver Builder | Add a Publish button beside the Done button so that people don't have to click twice to publish a page.
I could type control-P if my hand was on the keyboard, but my hand is always on the mouse when building pages and setting style options in modules.
It's an easy technical fix, has low risk of failure, doesn't require extensive testing, and would save customers millions of clicks. | 1.0 | Add Publish button beside Done button - Add a Publish button beside the Done button so that people don't have to click twice to publish a page.
I could type control-P if my hand was on the keyboard, but my hand is always on the mouse when building pages and setting style options in modules.
It's an easy technical fix, has low risk of failure, doesn't require extensive testing, and would save customers millions of clicks. | non_test | add publish button beside done button add a publish button beside the done button so that people don t have to click twice to publish a page i could type control p if my hand was on the keyboard but my hand is always on the mouse when building pages and setting style options in modules it s an easy technical fix has low risk of failure doesn t require extensive testing and would save customers millions of clicks | 0 |
34,649 | 4,938,107,451 | IssuesEvent | 2016-11-29 10:08:36 | mautic/mautic | https://api.github.com/repos/mautic/mautic | closed | [Feature] Add a API endpoint for do not contact | Feature Request Ready To Test | Hi,
It would be good to either have a specific '/doNotContact' API endpoint or for the data to be returned when retrieving '/leads', e.g. with a field of, say, "donotcontact"
| 1.0 | [Feature] Add a API endpoint for do not contact - Hi,
It would be good to either have a specific '/doNotContact' API endpoint or for the data to be returned when retrieving '/leads', e.g. with a field of, say, "donotcontact"
| test | add a api endpoint for do not contact hi it would be good to either have a specific donotcontact api endpoint or for the data to be returned when retrieving leads e g with a field of of say donotcontact | 1 |
323,832 | 27,754,341,214 | IssuesEvent | 2023-03-16 00:20:53 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix jax_numpy_manipulation.test_jax_numpy_stack | JAX Frontend Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4391735438/jobs/7691077943" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4331498369/jobs/7563368081" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4391735438/jobs/7691077443" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4391735438/jobs/7691077690" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
| 1.0 | Fix jax_numpy_manipulation.test_jax_numpy_stack - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4391735438/jobs/7691077943" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4331498369/jobs/7563368081" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4391735438/jobs/7691077443" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4391735438/jobs/7691077690" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
| test | fix jax numpy manipulation test jax numpy stack tensorflow img src torch img src numpy img src jax img src not found not found not found not found not found not found | 1 |
284,044 | 21,387,067,817 | IssuesEvent | 2022-04-21 00:34:47 | jimenezmiguela/Help_Find | https://api.github.com/repos/jimenezmiguela/Help_Find | closed | Break apart Newsdata operations from ajax_operations.js | documentation | Consider reducing DRY
Simplifying functions
Improving logic
Adding functionality
Simplifying functions
Improving logic
Adding functionality | non_test | break apart newsdata operations from ajax operations js consider reducing dry simplifying functions improving logic adding functionality | 0 |
1,177 | 13,563,599,469 | IssuesEvent | 2020-09-18 08:47:35 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Visual Studio crashes when I type couple of cyrillic letters in cs file | Area-IDE Bug Developer Community Tenet-Reliability | _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/1085582/visual-studio-crashes-when-i-type-couple-of-cyrill.html)._
---
[regression] [worked-in:16.5.4]
After upgrade to 16.6.2.
Visual Studio crashes, when I try to type cyrillic letters in *.cs file.
It is some codepage problem related to source files.
It happens in files with only english letters and only if the option is set:
Environment -> Documents -> Save documents as Unicode when data cannot be saved in codepage.
If it is not set, everythin is ok in all scenarios.
---
### Original Comments
#### Visual Studio Feedback System on 6/19/2020, 07:05 AM:
<p>We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.</p>
#### Rebecca Peng [MSFT] on 6/22/2020, 02:01 AM:
<p>Hi customer,</p>
<p>Thanks for your feedback. In order for us to investigate this further, could you please provide following information:</p>
<ol>
<li>Did you meet this issue only once or always?</li>
<li>This issue only occurs on cs file or all type of files?</li>
</ol>
<p>We are looking forward to hearing from you soon.<br>
Thanks</p>
#### scazy on 6/22/2020, 04:30 AM:
<p>1. It appears sometimes while a day.</p><p>2. Yes, just cs files. I can't reproduce it in other file types.</p>
#### Visual Studio Feedback System on 6/22/2020, 06:53 PM:
<p>We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.</p>
---
### Original Solutions
(no solutions) | True | Visual Studio crashes when I type couple of cyrillic letters in cs file - _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/1085582/visual-studio-crashes-when-i-type-couple-of-cyrill.html)._
---
[regression] [worked-in:16.5.4]
After upgrade to 16.6.2.
Visual Studio crashes, when I try to type cyrillic letters in *.cs file.
It is some codepage problem related to source files.
It happens in files with only english letters and only if the option is set:
Environment -> Documents -> Save documents as Unicode when data cannot be saved in codepage.
If it is not set, everythin is ok in all scenarios.
---
### Original Comments
#### Visual Studio Feedback System on 6/19/2020, 07:05 AM:
<p>We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.</p>
#### Rebecca Peng [MSFT] on 6/22/2020, 02:01 AM:
<p>Hi customer,</p>
<p>Thanks for your feedback. In order for us to investigate this further, could you please provide following information:</p>
<ol>
<li>Did you meet this issue only once or always?</li>
<li>This issue only occurs on cs file or all type of files?</li>
</ol>
<p>We are looking forward to hearing from you soon.<br>
Thanks</p>
#### scazy on 6/22/2020, 04:30 AM:
<p>1. It appears sometimes while a day.</p><p>2. Yes, just cs files. I can't reproduce it in other file types.</p>
#### Visual Studio Feedback System on 6/22/2020, 06:53 PM:
<p>We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.</p>
---
### Original Solutions
(no solutions) | non_test | visual studio crashes when i type couple of cyrillic letters in cs file this issue has been moved from after upgrade to visual studio crashes when i try to type cyrillic letters in cs file it is some codepage problem related to source files it happens in files with only english letters and only if the option is set environment documents save documents as unicode when data cannot be saved in codepage if it is not set everythin is ok in all scenarios original comments visual studio feedback system on am we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps rebecca peng on am hi customer thanks for your feedback in order for us to investigate this further could you please provide following information did you meet this issue only once or always this issue only occurs on cs file or all type of files we are looking forward to hearing from you soon thanks scazy on am it appears sometimes while a day yes just cs files i can t reproduce it in other file types visual studio feedback system on pm we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps original solutions no solutions | 0 |
656,399 | 21,729,218,260 | IssuesEvent | 2022-05-11 10:24:49 | bedita/manager | https://api.github.com/repos/bedita/manager | closed | Handle abstract types in filter by type | bug Priority - Normal | ## Expected behaviour
Filter by type should handle abstract types as `objects` and `media`, expanding them in concrete types.
## Actual behaviour
Abstract types in relations are not present in filter by type combo. | 1.0 | Handle abstract types in filter by type - ## Expected behaviour
Filter by type should handle abstract types as `objects` and `media`, expanding them in concrete types.
## Actual behaviour
Abstract types in relations are not present in filter by type combo. | non_test | handle abstract types in filter by type expected behaviour filter by type should handle abstract types as objects and media expanding them in concrete types actual behaviour abstract types in relations are not present in filter by type combo | 0 |
270,581 | 8,467,357,916 | IssuesEvent | 2018-10-23 16:44:47 | club-soda/club-soda-guide | https://api.github.com/repos/club-soda/club-soda-guide | opened | Randomise drinks that appear on landing page carousel | Jussi - Admin priority-4 | Splitting this portion of https://github.com/club-soda/club-soda-guide/issues/8 out as agreed with CS team it was a lower priority for now.
As a Club Soda administrator
I would like the list of featured drinks on the homepage to be randomised,
So that as our list grows and we have a large number of member brands and therefore a number of higher weighted drinks (#74) that exceeds the carousel limit, all drinks can get some 'air time' in the carousel.
## Acceptance Criteria
+ [ ] Drinks displayed are randomised (each user should see a different set of drinks) - this may require us to fix the number of drinks in each weighting category displayed in the carousel, TBC | 1.0 | Randomise drinks that appear on landing page carousel - Splitting this portion of https://github.com/club-soda/club-soda-guide/issues/8 out as agreed with CS team it was a lower priority for now.
As a Club Soda administrator
I would like the list of featured drinks on the homepage to be randomised,
So that as our list grows and we have a large number of member brands and therefore a number of higher weighted drinks (#74) that exceeds the carousel limit, all drinks can get some 'air time' in the carousel.
## Acceptance Criteria
+ [ ] Drinks displayed are randomised (each user should see a different set of drinks) - this may require us to fix the number of drinks in each weighting category displayed in the carousel, TBC | non_test | randomise drinks that appear on landing page carousel splitting this portion of out as agreed with cs team it was a lower priority for now as a club soda administrator i would like the list of featured drinks on the homepage to be randomised so that as our list grows and we have a large number of member brands and therefore a number of higher weighted drinks that exceeds the carousel limit all drinks can get some air time in the carousel acceptance criteria drinks displayed are randomised each user should see a different set of drinks this may require us to fix the number of drinks in each weighting category displayed in the carousel tbc | 0 |
306,213 | 26,447,825,713 | IssuesEvent | 2023-01-16 08:57:45 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | opened | Release 4.4.0 - Beta 1 - Integration tests | team/qa release test/4.4.0 |
| Wazuh QA: Branch | Wazuh QA: Commit | Wazuh: Tag | Wazuh: Commit |
|:--:|:--:|:--:|:--:|
| `4.4-beta1` | | `v4.4.0-alpha2` | https://github.com/wazuh/wazuh/commit/90134ed0cc7a4479a216af6d10743bd013fe5700 |
We are going to check that the integration tests of the `4.4-beta1` branch of `wazuh-qa` work correctly using the `4.4.0-beta1` version of `wazuh`.
The tests will be performed both in the local environment and in Jenkins using `CentOS` as the manager OS. As for the agents, `Linux`, `Windows` and `macOS` will be used as required.
## Tests Integration - Status
#### Main RC issue
- https://github.com/wazuh/wazuh/issues/15891
#### References
|Color|Status |
|:--:|:--|
|🟢|All tests passed successfully|
|🟡|All tests passed but there are some warnings, xfails or xpassed|
|🔴|Some tests have failures or errors|
|🔵|Test execution in progress|
|⚫|To Do|
|🟠|Tests failed and passed after relaunching it|
|:purple_circle:| All skipped |
## Test Integration - Results
<table>
<thead>
<tr>
<th style="width: 175px;">Name</th>
<th style="width: 499px;" colspan="6">Jenkins</th>
</tr>
</thead>
<tbody>
<tr>
<td style="width: 175px;">OS</td>
<td style="width: 208px;" colspan="2">Linux</td>
<td style="width: 79px;">Windows</td>
<td style="width: 97px;">Solaris</td>
<td style="width: 97px;" colspan="2">macOS</td>
</tr>
<tr>
<td style="width: 175px;">Target</td>
<td style="width: 103px;">Manager</td>
<td style="width: 99px;">Agent</td>
<td style="width: 79px;">Agent</td>
<td style="width: 97px;">Agent</td>
<td style="width: 97px;" colspan="2">Agent</td>
</tr>
<tr>
<td style="width: 175px;"><strong>active_response</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35024/"> ⚫ </a></td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35025/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35025/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>agentd</strong></td>
<td style="width: 103px;">NA</td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35026/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35026/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>analysisd</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35027/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>api</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35028/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>authd</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35029/"> ⚫ </a></td>
<td style="width: 99px;"> NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>enrollment<br/></strong></td>
<td style="width: 103px;">NA</td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35030/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35030/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>fim</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35031/"> ⚫ </a></td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35032/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35032/"> ⚫ </a></td>
<td style="width: 97px;"><a href="https://ci.wazuh.info/job/Test_integration/35033/"> ⚫ </a></td>
<td style="width: 97px;" colspan="2"><a href="https://ci.wazuh.info/job/Test_integration/35033//"> ⚫ </a></td>
</tr>
<tr>
<td style="width: 175px;"><strong>gcloud<br/></strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35034/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>github<br /></strong></td>
<td style="width: 103px;"> <a href="https://ci.wazuh.info/job/Test_integration/35035/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35036/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>logcollector</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35037/"> ⚫ </td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35038/"> ⚫ </td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35038/"> ⚫ </td>
<td style="width: 97px;"><a href="https://ci.wazuh.info/job/Test_integration/35038/"> ⚫ </td>
<td style="width: 97px;" colspan="2"><a href="https://ci.wazuh.info/job/Test_integration/35038/"> ⚫ </td>
</tr>
<tr>
<td style="width: 175px;"><strong>logtest<br/></strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35039/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>office365<br /></strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35040/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35041/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>remoted</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35042/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>rids</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35043/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>rootcheck</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35044/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>vulnerability_detector</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35045/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>wazuh_db</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35046/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
</tbody>
</table>
| 1.0 | Release 4.4.0 - Beta 1 - Integration tests -
| Wazuh QA: Branch | Wazuh QA: Commit | Wazuh: Tag | Wazuh: Commit |
|:--:|:--:|:--:|:--:|
| `4.4-beta1` | | `v4.4.0-alpha2` | https://github.com/wazuh/wazuh/commit/90134ed0cc7a4479a216af6d10743bd013fe5700 |
We are going to check that the integration tests of the `4.4-beta1` branch of `wazuh-qa` work correctly using the `4.4.0-beta1` version of `wazuh`.
The tests will be performed both in the local environment and in Jenkins using `CentOS` as the manager OS. As for the agents, `Linux`, `Windows` and `macOS` will be used as required.
## Tests Integration - Status
#### Main RC issue
- https://github.com/wazuh/wazuh/issues/15891
#### References
|Color|Status |
|:--:|:--|
|🟢|All tests passed successfully|
|🟡|All tests passed but there are some warnings, xfails or xpassed|
|🔴|Some tests have failures or errors|
|🔵|Test execution in progress|
|⚫|To Do|
|🟠|Tests failed and passed after relaunching it|
|:purple_circle:| All skipped |
## Test Integration - Results
<table>
<thead>
<tr>
<th style="width: 175px;">Name</th>
<th style="width: 499px;" colspan="6">Jenkins</th>
</tr>
</thead>
<tbody>
<tr>
<td style="width: 175px;">OS</td>
<td style="width: 208px;" colspan="2">Linux</td>
<td style="width: 79px;">Windows</td>
<td style="width: 97px;">Solaris</td>
<td style="width: 97px;" colspan="2">macOS</td>
</tr>
<tr>
<td style="width: 175px;">Target</td>
<td style="width: 103px;">Manager</td>
<td style="width: 99px;">Agent</td>
<td style="width: 79px;">Agent</td>
<td style="width: 97px;">Agent</td>
<td style="width: 97px;" colspan="2">Agent</td>
</tr>
<tr>
<td style="width: 175px;"><strong>active_response</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35024/"> ⚫ </a></td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35025/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35025/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>agentd</strong></td>
<td style="width: 103px;">NA</td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35026/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35026/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>analysisd</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35027/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>api</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35028/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>authd</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35029/"> ⚫ </a></td>
<td style="width: 99px;"> NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>enrollment<br/></strong></td>
<td style="width: 103px;">NA</td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35030/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35030/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>fim</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35031/"> ⚫ </a></td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35032/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35032/"> ⚫ </a></td>
<td style="width: 97px;"><a href="https://ci.wazuh.info/job/Test_integration/35033/"> ⚫ </a></td>
<td style="width: 97px;" colspan="2"><a href="https://ci.wazuh.info/job/Test_integration/35033//"> ⚫ </a></td>
</tr>
<tr>
<td style="width: 175px;"><strong>gcloud<br/></strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35034/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>github<br /></strong></td>
<td style="width: 103px;"> <a href="https://ci.wazuh.info/job/Test_integration/35035/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35036/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>logcollector</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35037/"> ⚫ </td>
<td style="width: 99px;"><a href="https://ci.wazuh.info/job/Test_integration/35038/"> ⚫ </td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35038/"> ⚫ </td>
<td style="width: 97px;"><a href="https://ci.wazuh.info/job/Test_integration/35038/"> ⚫ </td>
<td style="width: 97px;" colspan="2"><a href="https://ci.wazuh.info/job/Test_integration/35038/"> ⚫ </td>
</tr>
<tr>
<td style="width: 175px;"><strong>logtest<br/></strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35039/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>office365<br /></strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35040/"> ⚫ </a></td>
<td style="width: 79px;"><a href="https://ci.wazuh.info/job/Test_integration/35041/"> ⚫ </a></td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>remoted</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35042/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>rids</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35043/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>rootcheck</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35044/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>vulnerability_detector</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35045/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
<tr>
<td style="width: 175px;"><strong>wazuh_db</strong></td>
<td style="width: 103px;"><a href="https://ci.wazuh.info/job/Test_integration/35046/"> ⚫ </a></td>
<td style="width: 99px;">NA</td>
<td style="width: 79px;">NA</td>
<td style="text-align: center; width: 97px;">NA</td>
<td style="text-align: center; width: 97px;" colspan="2">NA</td>
</tr>
</tbody>
</table>
| test | release beta integration tests wazuh qa branch wazuh qa commit wazuh tag wazuh commit we are going to check that the integration tests of the branch of wazuh qa work correctly using the version of wazuh the tests will be performed both in the local environment and in jenkins using centos as the manager os as for the agents linux windows and macos will be used as required tests integration status main rc issue references color status 🟢 all tests passed successfully 🟡 all tests passed but there are some warnings xfails or xpassed 🔴 some tests have failures or errors 🔵 test execution in progress ⚫ to do 🟠 tests failed and passed after relaunching it purple circle all skipped test integration results name jenkins os linux windows solaris macos target manager agent agent agent agent active response na na agentd na na na analysisd na na na na api na na na na authd nbsp na na na na enrollment na na na fim gcloud na na na na github na na logcollector logtest na na na na na na remoted na na na na rids na na na na rootcheck na na na na vulnerability detector na na na na wazuh db na na na na | 1 |
111,189 | 9,521,820,948 | IssuesEvent | 2019-04-27 01:52:01 | SNLComputation/Albany | https://api.github.com/repos/SNLComputation/Albany | closed | LCM FPEs on skybridge build with Intel | LCM Testing | The following tests started failing in the skybridge intel build yesterday:
MechanicsPorePressureSimple_Serial
MechanicsPorePressureLocalized_Serial
MechanicsPorePressureParallelFlow_Serial
due to FPEs, e.g. http://cdash.sandia.gov/CDash-2-3-0/testSummary.php?project=10&name=MechanicsPorePressureSimple_Serial&date=2019-04-24 . @lxmota , could this be related to your check ins yesterday? | 1.0 | LCM FPEs on skybridge build with Intel - The following tests started failing in the skybridge intel build yesterday:
MechanicsPorePressureSimple_Serial
MechanicsPorePressureLocalized_Serial
MechanicsPorePressureParallelFlow_Serial
due to FPEs, e.g. http://cdash.sandia.gov/CDash-2-3-0/testSummary.php?project=10&name=MechanicsPorePressureSimple_Serial&date=2019-04-24 . @lxmota , could this be related to your check ins yesterday? | test | lcm fpes on skybridge build with intel the following tests started failing in the skybridge intel build yesterday mechanicsporepressuresimple serial mechanicsporepressurelocalized serial mechanicsporepressureparallelflow serial due to fpes e g lxmota could this be related to your check ins yesterday | 1 |
125,984 | 10,372,430,741 | IssuesEvent | 2019-09-09 02:59:03 | OA-PASS/pass-ember | https://api.github.com/repos/OA-PASS/pass-ember | closed | Propose a plan for creating additional tests for the pass-ember code base | testing | Please document your proposal at the end of this document https://docs.google.com/document/d/1qgWmwsF3tbjnq8QeCYi7-s_hCRosI0IfBdUTdVD0gHI/edit
| 1.0 | Propose a plan for creating additional tests for the pass-ember code base - Please document your proposal at the end of this document https://docs.google.com/document/d/1qgWmwsF3tbjnq8QeCYi7-s_hCRosI0IfBdUTdVD0gHI/edit
| test | propose a plan for creating additional tests for the pass ember code base please document your proposal at the end of this document | 1 |
53,576 | 6,734,141,112 | IssuesEvent | 2017-10-18 16:59:28 | DeckOfPandas/nhs-ideas-lab | https://api.github.com/repos/DeckOfPandas/nhs-ideas-lab | closed | Social Media Icons | design | Selection of square icons uploaded onto slack channel. Let me know if you need any other sizes or changes. | 1.0 | Social Media Icons - Selection of square icons uploaded onto slack channel. Let me know if you need any other sizes or changes. | non_test | social media icons selection of square icons uploaded onto slack channel let me know if you need any other sizes or changes | 0 |
63,525 | 26,422,921,063 | IssuesEvent | 2023-01-13 22:44:24 | aws/aws-sdk | https://api.github.com/repos/aws/aws-sdk | closed | Cognito Migrate User Between Pools | feature-request service-api cognito | ### Describe the feature
Support a single command / API in cognito-idp to support migrating a user pool from one pool to another (without forcing them to reset their password).
### Use Case
Cognito user pools are great but there are many properties that cannot be changed once the pool is created. Several times now I've found instances where I'd need/like to set a property that cannot be changed and this creates a problem when there are users in the pool (and it's a production app).
The current solutions are
- export and do an import and reset everyones password (poor user experience)
- create a migration lambda (cumbersome and time consuming)
This exact situation just happened to me when I received a customer support ticket that a custom cannot login and found out the user pool is set to be case-sensitive which is the default for cloudformation but not for the console. This property cannot be changed.
It would be highly beneficial to be able migrate users seamlessly from one pool to another.
### Proposed Solution
It's understandable that cognito would not expose plain text passwords which is why the export / import doesn't work seamlessly but cognito must know the encrypted password so I would expect that can be transferred over aws servers along wtih all user data into a new pool. But I didn't develop cognito so that's just an assumption, seems like something should go in this box.
### Other Information
I'm not sure if this is the best place for this request, I'd prefer it to be part of the API and built in to the appropriate sdk's rather than cli but I couldn't figure out the git repo for cognito itself. It would be nice for it to also be part of cli though.
### Acknowledgements
- [ ] I may be able to implement this feature request
- [ ] This feature might incur a breaking change
### CLI version used
2.2.23
### Environment details (OS name and version, etc.)
Mac OS 11.6 | 1.0 | Cognito Migrate User Between Pools - ### Describe the feature
Support a single command / API in cognito-idp to support migrating a user pool from one pool to another (without forcing them to reset their password).
### Use Case
Cognito user pools are great but there are many properties that cannot be changed once the pool is created. Several times now I've found instances where I'd need/like to set a property that cannot be changed and this creates a problem when there are users in the pool (and it's a production app).
The current solutions are
- export and do an import and reset everyones password (poor user experience)
- create a migration lambda (cumbersome and time consuming)
This exact situation just happened to me when I received a customer support ticket that a custom cannot login and found out the user pool is set to be case-sensitive which is the default for cloudformation but not for the console. This property cannot be changed.
It would be highly beneficial to be able migrate users seamlessly from one pool to another.
### Proposed Solution
It's understandable that cognito would not expose plain text passwords which is why the export / import doesn't work seamlessly but cognito must know the encrypted password so I would expect that can be transferred over aws servers along wtih all user data into a new pool. But I didn't develop cognito so that's just an assumption, seems like something should go in this box.
### Other Information
I'm not sure if this is the best place for this request, I'd prefer it to be part of the API and built in to the appropriate sdk's rather than cli but I couldn't figure out the git repo for cognito itself. It would be nice for it to also be part of cli though.
### Acknowledgements
- [ ] I may be able to implement this feature request
- [ ] This feature might incur a breaking change
### CLI version used
2.2.23
### Environment details (OS name and version, etc.)
Mac OS 11.6 | non_test | cognito migrate user between pools describe the feature support a single command api in cognito idp to support migrating a user pool from one pool to another without forcing them to reset their password use case cognito user pools are great but there are many properties that cannot be changed once the pool is created several times now i ve found instances where i d need like to set a property that cannot be changed and this creates a problem when there are users in the pool and it s a production app the current solutions are export and do an import and reset everyones password poor user experience create a migration lambda cumbersome and time consuming this exact situation just happened to me when i received a customer support ticket that a custom cannot login and found out the user pool is set to be case sensitive which is the default for cloudformation but not for the console this property cannot be changed it would be highly beneficial to be able migrate users seamlessly from one pool to another proposed solution it s understandable that cognito would not expose plain text passwords which is why the export import doesn t work seamlessly but cognito must know the encrypted password so i would expect that can be transferred over aws servers along wtih all user data into a new pool but i didn t develop cognito so that s just an assumption seems like something should go in this box other information i m not sure if this is the best place for this request i d prefer it to be part of the api and built in to the appropriate sdk s rather than cli but i couldn t figure out the git repo for cognito itself it would be nice for it to also be part of cli though acknowledgements i may be able to implement this feature request this feature might incur a breaking change cli version used environment details os name and version etc mac os | 0 |
311,420 | 26,791,235,487 | IssuesEvent | 2023-02-01 08:39:55 | thomas0812/uptime | https://api.github.com/repos/thomas0812/uptime | opened | 🛑 navertest is down | status navertest | In [`d1b8d0c`](https://github.com/thomas0812/uptime/commit/d1b8d0cb5bf4f2cfe273fca8cf4adafcaa4d30a7
), navertest (https://www.navtestestteer.com/) was **down**:
- HTTP code: 0
- Response time: 0 ms
| 1.0 | 🛑 navertest is down - In [`d1b8d0c`](https://github.com/thomas0812/uptime/commit/d1b8d0cb5bf4f2cfe273fca8cf4adafcaa4d30a7
), navertest (https://www.navtestestteer.com/) was **down**:
- HTTP code: 0
- Response time: 0 ms
| test | 🛑 navertest is down in navertest was down http code response time ms | 1 |
232,556 | 7,661,397,886 | IssuesEvent | 2018-05-11 14:07:19 | ObeidaElJundi/Correctomation | https://api.github.com/repos/ObeidaElJundi/Correctomation | closed | Consider each variable alone for correction | bug high priority | Currently: cout<<">>>"<<var1<<":"<<var2<<":"<<var3<<"<<<"<<endl;
The whole test case fails if one of the variables is wrong.
Its better to have a partial grade for each variable. | 1.0 | Consider each variable alone for correction - Currently: cout<<">>>"<<var1<<":"<<var2<<":"<<var3<<"<<<"<<endl;
The whole test case fails if one of the variables is wrong.
Its better to have a partial grade for each variable. | non_test | consider each variable alone for correction currently cout endl the whole test case fails if one of the variables is wrong its better to have a partial grade for each variable | 0 |
2,545 | 12,260,873,039 | IssuesEvent | 2020-05-06 19:03:36 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | closed | Disable bitrise and switch to taskcluster only | P3 eng:automation | After asking on Slack it seems like no one opposes shutting down bitrise and switching to taskcluster only - which is the same pipeline we use for releases etc.
- [x] Disable bitrise runs
- [x] Mark "taskcluster" as required check instead of bitrise.
- [ ] Update Docker image - for fast builds | 1.0 | Disable bitrise and switch to taskcluster only - After asking on Slack it seems like no one opposes shutting down bitrise and switching to taskcluster only - which is the same pipeline we use for releases etc.
- [x] Disable bitrise runs
- [x] Mark "taskcluster" as required check instead of bitrise.
- [ ] Update Docker image - for fast builds | non_test | disable bitrise and switch to taskcluster only after asking on slack it seems like no one opposes shutting down bitrise and switching to taskcluster only which is the same pipeline we use for releases etc disable bitrise runs mark taskcluster as required check instead of bitrise update docker image for fast builds | 0 |
27,121 | 27,710,964,935 | IssuesEvent | 2023-03-14 14:13:52 | informalsystems/apalache | https://api.github.com/repos/informalsystems/apalache | closed | [BUG] Skolem operator wrapping a non-existential triggers unrelated exceptions | bug usability product-owner-triage | ## Input specification
```
---------- MODULE test ----------
EXTENDS Apalache
Init == TRUE
Next == TRUE
Inv == Skolem(TRUE)
====================
```
## The command line parameters used to run the tool
`--inv=Inv --length=1`
## Description
BMC pass fails with:
```
PASS #13: BoundedChecker I@14:59:47.324
State 0: Checking 1 state invariants I@14:59:47.811
test.tla:6:8-6:19: rewriter error: No rewriting rule applies to expression: Apalache!Skolem(TRUE) E@14:59:47.819
at.forsyte.apalache.tla.bmcmt.RewriterException: No rewriting rule applies to expression: Apalache!Skolem(TRUE)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.doRecursive$1(SymbStateRewriterImpl.scala:382)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.rewriteUntilDone(SymbStateRewriterImpl.scala:401)
at at.forsyte.apalache.tla.bmcmt.rules.NegRule.apply(NegRule.scala:28)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.rewriteOnce(SymbStateRewriterImpl.scala:336)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.doRecursive$1(SymbStateRewriterImpl.scala:367)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.rewriteUntilDone(SymbStateRewriterImpl.scala:401)
at at.forsyte.apalache.tla.bmcmt.trex.TransitionExecutorImpl.assertState(TransitionExecutorImpl.scala:196)
at at.forsyte.apalache.tla.bmcmt.trex.FilteredTransitionExecutor.assertState(FilteredTransitionExecutor.scala:88)
at at.forsyte.apalache.tla.bmcmt.trex.ConstrainedTransitionExecutor.assertState(ConstrainedTransitionExecutor.scala:102)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.$anonfun$checkInvariants$2(SeqModelChecker.scala:267)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.$anonfun$checkInvariants$2$adapted(SeqModelChecker.scala:255)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
at scala.collection.immutable.List.foreach(List.scala:431)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.checkInvariants(SeqModelChecker.scala:255)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.$anonfun$prepareTransitionsAndCheckInvariants$6(SeqModelChecker.scala:177)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.$anonfun$prepareTransitionsAndCheckInvariants$6$adapted(SeqModelChecker.scala:141)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
at scala.collection.immutable.List.foreach(List.scala:431)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.prepareTransitionsAndCheckInvariants(SeqModelChecker.scala:141)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.makeStep(SeqModelChecker.scala:63)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.run(SeqModelChecker.scala:47)
at at.forsyte.apalache.tla.bmcmt.passes.BoundedCheckerPassImpl.runIncrementalChecker(BoundedCheckerPassImpl.scala:131)
at at.forsyte.apalache.tla.bmcmt.passes.BoundedCheckerPassImpl.execute(BoundedCheckerPassImpl.scala:98)
at at.forsyte.apalache.infra.passes.PassChainExecutor.exec$1(PassChainExecutor.scala:22)
at at.forsyte.apalache.infra.passes.PassChainExecutor.run(PassChainExecutor.scala:37)
at at.forsyte.apalache.tla.Tool$.runCheck(Tool.scala:187)
at at.forsyte.apalache.tla.Tool$.$anonfun$run$3(Tool.scala:95)
at at.forsyte.apalache.tla.Tool$.handleExceptions(Tool.scala:322)
at at.forsyte.apalache.tla.Tool$.run(Tool.scala:95)
at at.forsyte.apalache.tla.Tool$.main(Tool.scala:45)
at at.forsyte.apalache.tla.Tool.main(Tool.scala)
```
## Expected behavior
Static analysis should report that `Skolem` is wrapping a non-existential and exit cleanly.
| True | [BUG] Skolem operator wrapping a non-existential triggers unrelated exceptions - ## Input specification
```
---------- MODULE test ----------
EXTENDS Apalache
Init == TRUE
Next == TRUE
Inv == Skolem(TRUE)
====================
```
## The command line parameters used to run the tool
`--inv=Inv --length=1`
## Description
BMC pass fails with:
```
PASS #13: BoundedChecker I@14:59:47.324
State 0: Checking 1 state invariants I@14:59:47.811
test.tla:6:8-6:19: rewriter error: No rewriting rule applies to expression: Apalache!Skolem(TRUE) E@14:59:47.819
at.forsyte.apalache.tla.bmcmt.RewriterException: No rewriting rule applies to expression: Apalache!Skolem(TRUE)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.doRecursive$1(SymbStateRewriterImpl.scala:382)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.rewriteUntilDone(SymbStateRewriterImpl.scala:401)
at at.forsyte.apalache.tla.bmcmt.rules.NegRule.apply(NegRule.scala:28)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.rewriteOnce(SymbStateRewriterImpl.scala:336)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.doRecursive$1(SymbStateRewriterImpl.scala:367)
at at.forsyte.apalache.tla.bmcmt.SymbStateRewriterImpl.rewriteUntilDone(SymbStateRewriterImpl.scala:401)
at at.forsyte.apalache.tla.bmcmt.trex.TransitionExecutorImpl.assertState(TransitionExecutorImpl.scala:196)
at at.forsyte.apalache.tla.bmcmt.trex.FilteredTransitionExecutor.assertState(FilteredTransitionExecutor.scala:88)
at at.forsyte.apalache.tla.bmcmt.trex.ConstrainedTransitionExecutor.assertState(ConstrainedTransitionExecutor.scala:102)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.$anonfun$checkInvariants$2(SeqModelChecker.scala:267)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.$anonfun$checkInvariants$2$adapted(SeqModelChecker.scala:255)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
at scala.collection.immutable.List.foreach(List.scala:431)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.checkInvariants(SeqModelChecker.scala:255)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.$anonfun$prepareTransitionsAndCheckInvariants$6(SeqModelChecker.scala:177)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.$anonfun$prepareTransitionsAndCheckInvariants$6$adapted(SeqModelChecker.scala:141)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
at scala.collection.immutable.List.foreach(List.scala:431)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.prepareTransitionsAndCheckInvariants(SeqModelChecker.scala:141)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.makeStep(SeqModelChecker.scala:63)
at at.forsyte.apalache.tla.bmcmt.SeqModelChecker.run(SeqModelChecker.scala:47)
at at.forsyte.apalache.tla.bmcmt.passes.BoundedCheckerPassImpl.runIncrementalChecker(BoundedCheckerPassImpl.scala:131)
at at.forsyte.apalache.tla.bmcmt.passes.BoundedCheckerPassImpl.execute(BoundedCheckerPassImpl.scala:98)
at at.forsyte.apalache.infra.passes.PassChainExecutor.exec$1(PassChainExecutor.scala:22)
at at.forsyte.apalache.infra.passes.PassChainExecutor.run(PassChainExecutor.scala:37)
at at.forsyte.apalache.tla.Tool$.runCheck(Tool.scala:187)
at at.forsyte.apalache.tla.Tool$.$anonfun$run$3(Tool.scala:95)
at at.forsyte.apalache.tla.Tool$.handleExceptions(Tool.scala:322)
at at.forsyte.apalache.tla.Tool$.run(Tool.scala:95)
at at.forsyte.apalache.tla.Tool$.main(Tool.scala:45)
at at.forsyte.apalache.tla.Tool.main(Tool.scala)
```
## Expected behavior
Static analysis should report that `Skolem` is wrapping a non-existential and exit cleanly.
| non_test | skolem operator wrapping a non existential triggers unrelated exceptions input specification module test extends apalache init true next true inv skolem true the command line parameters used to run the tool inv inv length description bmc pass fails with pass boundedchecker i state checking state invariants i test tla rewriter error no rewriting rule applies to expression apalache skolem true e at forsyte apalache tla bmcmt rewriterexception no rewriting rule applies to expression apalache skolem true at at forsyte apalache tla bmcmt symbstaterewriterimpl dorecursive symbstaterewriterimpl scala at at forsyte apalache tla bmcmt symbstaterewriterimpl rewriteuntildone symbstaterewriterimpl scala at at forsyte apalache tla bmcmt rules negrule apply negrule scala at at forsyte apalache tla bmcmt symbstaterewriterimpl rewriteonce symbstaterewriterimpl scala at at forsyte apalache tla bmcmt symbstaterewriterimpl dorecursive symbstaterewriterimpl scala at at forsyte apalache tla bmcmt symbstaterewriterimpl rewriteuntildone symbstaterewriterimpl scala at at forsyte apalache tla bmcmt trex transitionexecutorimpl assertstate transitionexecutorimpl scala at at forsyte apalache tla bmcmt trex filteredtransitionexecutor assertstate filteredtransitionexecutor scala at at forsyte apalache tla bmcmt trex constrainedtransitionexecutor assertstate constrainedtransitionexecutor scala at at forsyte apalache tla bmcmt seqmodelchecker anonfun checkinvariants seqmodelchecker scala at at forsyte apalache tla bmcmt seqmodelchecker anonfun checkinvariants adapted seqmodelchecker scala at scala collection traversablelike withfilter anonfun foreach traversablelike scala at scala collection immutable list foreach list scala at scala collection traversablelike withfilter foreach traversablelike scala at at forsyte apalache tla bmcmt seqmodelchecker checkinvariants seqmodelchecker scala at at forsyte apalache tla bmcmt seqmodelchecker anonfun preparetransitionsandcheckinvariants 
seqmodelchecker scala at at forsyte apalache tla bmcmt seqmodelchecker anonfun preparetransitionsandcheckinvariants adapted seqmodelchecker scala at scala collection traversablelike withfilter anonfun foreach traversablelike scala at scala collection immutable list foreach list scala at scala collection traversablelike withfilter foreach traversablelike scala at at forsyte apalache tla bmcmt seqmodelchecker preparetransitionsandcheckinvariants seqmodelchecker scala at at forsyte apalache tla bmcmt seqmodelchecker makestep seqmodelchecker scala at at forsyte apalache tla bmcmt seqmodelchecker run seqmodelchecker scala at at forsyte apalache tla bmcmt passes boundedcheckerpassimpl runincrementalchecker boundedcheckerpassimpl scala at at forsyte apalache tla bmcmt passes boundedcheckerpassimpl execute boundedcheckerpassimpl scala at at forsyte apalache infra passes passchainexecutor exec passchainexecutor scala at at forsyte apalache infra passes passchainexecutor run passchainexecutor scala at at forsyte apalache tla tool runcheck tool scala at at forsyte apalache tla tool anonfun run tool scala at at forsyte apalache tla tool handleexceptions tool scala at at forsyte apalache tla tool run tool scala at at forsyte apalache tla tool main tool scala at at forsyte apalache tla tool main tool scala expected behavior static analysis should report that skolem is wrapping a non existential and exit cleanly | 0 |
159,675 | 25,031,342,535 | IssuesEvent | 2022-11-04 12:39:43 | webaverse/app | https://api.github.com/repos/webaverse/app | closed | Art: Hacker character ('Drake') | in progress p1 design | 


Description: A hacker from Zone 0. He waits for drop storms to attract a drop hunter, then snipes them out of the sky with his arsenal of hacked guns. He steals their drops, then uses the drop hunter’s reputation to sell the stolen goods for Silk. He also works jobs for the Zero Syndicate, a lighthearted lulz group funded by ransomware. He has a small platoon of henchmen that he can call in to perform his dirty work and to guard his stash.
He uses an arsenal of guns, most commonly SMGs.

Can be based on Kanji model for the face style and possibly the hair. I don't know whether or not our Kanji rendition is a good place to start.
Kanji was purchased and should be in the GDrive. It seems to not be on booth anymore but it is a non-downloadable from Vroid Hub.
https://booth.pm/en/items/1880254
https://hub.vroid.com/en/characters/8708676096431470179/models/3441778242791612085
- - -
The main thing here seems like having a cool jacket, with some sort of cool collar, and being a male, orange (opposite colors), more evil version of Scillia.
| 1.0 | Art: Hacker character ('Drake') - 


Description: A hacker from Zone 0. He waits for drop storms to attract a drop hunter, then snipes them out of the sky with his arsenal of hacked guns. He steals their drops, then uses the drop hunter’s reputation to sell the stolen goods for Silk. He also works jobs for the Zero Syndicate, a lighthearted lulz group funded by ransomware. He has a small platoon of henchmen that he can call in to perform his dirty work and to guard his stash.
He uses an arsenal of guns, most commonly SMGs.

Can be based on Kanji model for the face style and possibly the hair. I don't know whether or not our Kanji rendition is a good place to start.
Kanji was purchased and should be in the GDrive. It seems to not be on booth anymore but it is a non-downloadable from Vroid Hub.
https://booth.pm/en/items/1880254
https://hub.vroid.com/en/characters/8708676096431470179/models/3441778242791612085
- - -
The main thing here seems like having a cool jacket, with some sort of cool collar, and being a male, orange (opposite colors), more evil version of Scillia.
| non_test | art hacker character drake description a hacker from zone he waits for drop storms to attract a drop hunter then snipes them out of the sky with his arsenal of hacked guns he steals their drops then uses the drop hunter’s reputation to sell the stolen goods for silk he also works jobs for the zero syndicate a lighthearted lulz group funded by ransomware he has a small platoon of henchmen that he can call in to perform his dirty work and to guard his stash he uses an arsenal of guns most commonly smgs can be based on kanji model for the face style and possibly the hair i don t know whether or not our kanji rendition is a good place to start kanji was purchased and should be in the gdrive it seems to not be on booth anymore but it is a non downloadable from vroid hub the main thing here seems like having a cool jacket with some sort of cool collar and being a male orange opposite colors more evil version of scillia | 0 |
265,036 | 23,145,538,637 | IssuesEvent | 2022-07-29 00:04:11 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Teste de generalizacao para a tag Servidores - Proventos de aposentadoria - Coluna | generalization test development template-Síntese tecnologia informatica subtag-Proventos de Aposentadoria tag-Servidores | DoD: Realizar o teste de Generalização do validador da tag Servidores - Proventos de aposentadoria para o Município de Coluna. | 1.0 | Teste de generalizacao para a tag Servidores - Proventos de aposentadoria - Coluna - DoD: Realizar o teste de Generalização do validador da tag Servidores - Proventos de aposentadoria para o Município de Coluna. | test | teste de generalizacao para a tag servidores proventos de aposentadoria coluna dod realizar o teste de generalização do validador da tag servidores proventos de aposentadoria para o município de coluna | 1 |
108,095 | 9,259,637,278 | IssuesEvent | 2019-03-18 01:04:37 | astropy/astropy | https://api.github.com/repos/astropy/astropy | closed | Remote access without remote_data mark | coordinates testing utils | During the test, I git the following warning:
```
coordinates/tests/test_sky_coord.py::test_repr_altaz
/usr/lib/python3/dist-packages/astropy/utils/iers/iers.py:584: AstropyWarning:
failed to download http://maia.usno.navy.mil/ser7/finals2000A.all, using local IERS-B:
<urlopen error An attempt was made to connect to the internet by a test that was not marked `remote_data`. The requested host was: maia.usno.navy.mil>
.format(conf.iers_auto_url, str(err))))
``` | 1.0 | Remote access without remote_data mark - During the test, I git the following warning:
```
coordinates/tests/test_sky_coord.py::test_repr_altaz
/usr/lib/python3/dist-packages/astropy/utils/iers/iers.py:584: AstropyWarning:
failed to download http://maia.usno.navy.mil/ser7/finals2000A.all, using local IERS-B:
<urlopen error An attempt was made to connect to the internet by a test that was not marked `remote_data`. The requested host was: maia.usno.navy.mil>
.format(conf.iers_auto_url, str(err))))
``` | test | remote access without remote data mark during the test i git the following warning coordinates tests test sky coord py test repr altaz usr lib dist packages astropy utils iers iers py astropywarning failed to download using local iers b format conf iers auto url str err | 1 |
188,192 | 22,046,200,360 | IssuesEvent | 2022-05-30 02:11:41 | AkshayMukkavilli/Tensorflow | https://api.github.com/repos/AkshayMukkavilli/Tensorflow | opened | CVE-2022-29197 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl | security vulnerability | ## CVE-2022-29197 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /Tensorflow/src/requirements.txt</p>
<p>Path to vulnerable library: /teSource-ArchiveExtractor_5ea86033-7612-4210-97f3-8edb65806ddf/20190525011619_2843/20190525011537_depth_0/2/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.UnsortedSegmentJoin` does not fully validate the input arguments. This results in a `CHECK`-failure which can be used to trigger a denial of service attack. The code assumes `num_segments` is a scalar but there is no validation for this before accessing its value. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.
<p>Publish Date: 2022-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29197>CVE-2022-29197</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197</a></p>
<p>Release Date: 2022-05-20</p>
<p>Fix Resolution: tensorflow - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-cpu - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-gpu - 2.6.4,2.7.2,2.8.1,2.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_test | 0 |
186,214 | 14,394,659,613 | IssuesEvent | 2020-12-03 01:49:20 | github-vet/rangeclosure-findings | https://api.github.com/repos/github-vet/rangeclosure-findings | closed | kubevirt/kubernetes-device-plugins: vendor/github.com/vishvananda/netlink/conntrack_test.go; 8 LoC | fresh test tiny vendored |
Found a possible issue in [kubevirt/kubernetes-device-plugins](https://www.github.com/kubevirt/kubernetes-device-plugins) at [vendor/github.com/vishvananda/netlink/conntrack_test.go](https://github.com/kubevirt/kubernetes-device-plugins/blob/2439489f2cd0b3ddc00c5779dd5129680f0c2dcd/vendor/github.com/vishvananda/netlink/conntrack_test.go#L66-L73)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/kubevirt/kubernetes-device-plugins/blob/2439489f2cd0b3ddc00c5779dd5129680f0c2dcd/vendor/github.com/vishvananda/netlink/conntrack_test.go#L66-L73)
<details>
<summary>Click here to show the 8 line(s) of Go which triggered the analyzer.</summary>
```go
for _, flow := range flowList {
if ipv4Filter.MatchConntrackFlow(&flow) == true {
ipv4Match++
}
if ipv6Filter.MatchConntrackFlow(&flow) == true {
ipv6Match++
}
}
```
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to flow at line 67 may start a goroutine
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 2439489f2cd0b3ddc00c5779dd5129680f0c2dcd
| 1.0 | test | 1 |
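As an aside to the kubevirt finding above: the pattern the analyzer flags — taking `&flow` of a range variable — aliases every pointer to a single variable on Go versions before 1.22. A minimal, hypothetical sketch (not code from the flagged repository) of the hazard and the conventional shadow-copy fix:

```go
package main

import "fmt"

// collectPtrs mirrors the flagged pattern: it takes the address of the
// range variable on each iteration. Before Go 1.22 the range variable is
// one reused location, so without the shadow copy every stored pointer
// would alias the same variable and end up pointing at the final element.
func collectPtrs(flows []int) []*int {
	var ptrs []*int
	for _, flow := range flows {
		flow := flow // per-iteration copy; safe on all Go versions
		ptrs = append(ptrs, &flow)
	}
	return ptrs
}

func main() {
	for _, p := range collectPtrs([]int{1, 2, 3}) {
		fmt.Println(*p)
	}
}
```

With the shadow copy the program prints 1, 2 and 3 on any Go release; deleting the `flow := flow` line makes pre-1.22 compilers print 3 three times, which is the class of bug the analyzer hunts for.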
116,975 | 9,904,982,542 | IssuesEvent | 2019-06-27 10:25:04 | chef/automate | https://api.github.com/repos/chef/automate | closed | flaky test: compliance scanner | bug testing | # User Story
```
1) Failure:
--
| 30_jobs_docker_spec.rb#test_0005_works [/var/lib/buildkite-agent/builds/single-use-privileged-i-0634b28ea655af127-1/chef-oss/.golang/chef-automate-master-verify/src/github.com/chef/automate/components/compliance-service/api/tests/30_jobs_docker_spec.rb:669]:
| expected | 0s
| actual | 1m 30s
| @@ -1 +1 @@
| -1
| +2
```
https://buildkite.com/chef-oss/chef-automate-master-verify/builds/1796#db69b150-b637-418c-837e-4e70f13322c5
we've seen this multiple times.
i believe it's related to https://github.com/chef/automate/pull/377/files and https://github.com/chef/automate/pull/399 ....sorry :/
| 1.0 | test | 1 |
278,750 | 8,649,872,092 | IssuesEvent | 2018-11-26 20:44:56 | cBioPortal/cbioportal | https://api.github.com/repos/cBioPortal/cbioportal | opened | ESR1-intragenic is annotated as oncogenic | bug critical oncokb priority | We should not link intregenic SVs to fusion annotations. Linking them to deletion is probably appropriate.
https://cbioportal.mskcc.org/patient?studyId=mskimpact&sampleId=P-0035290-T01-IM6

http://oncokb.org/#/gene/ESR1/alteration/Fusions | 1.0 | non_test | 0 |
247,681 | 20,987,612,365 | IssuesEvent | 2022-03-29 05:59:29 | mozilla-mobile/focus-android | https://api.github.com/repos/mozilla-mobile/focus-android | closed | Intermittent UI test failure - URLAutocompleteTest.duplicateCustomUrlNotAllowedTest | eng:ui-test eng:intermittent-test | ### Firebase Test Run:
https://console.firebase.google.com/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/6220473969579829785/executions/bs.b244474c1b4cd015/testcases/1
### Stacktrace:
`androidx.test.espresso.NoMatchingViewException: No views in hierarchy found matching: with string from resource id: <2131886649>[preference_autocomplete_subitem_manage_sites] value: Manage sites
at dalvik.system.VMStack.getThreadStackTrace(Native Method)
at java.lang.Thread.getStackTrace(Thread.java:1538)
at androidx.test.espresso.base.DefaultFailureHandler.getUserFriendlyError(DefaultFailureHandler.java:96)
at androidx.test.espresso.base.DefaultFailureHandler.handle(DefaultFailureHandler.java:59)
at androidx.test.espresso.ViewInteraction.waitForAndHandleInteractionResults(ViewInteraction.java:322)
at androidx.test.espresso.ViewInteraction.check(ViewInteraction.java:306)
at org.mozilla.focus.activity.robots.SearchSettingsRobot.openManageSitesSubMenu(SettingsSearchMenuRobot.kt:66)
at org.mozilla.focus.activity.URLAutocompleteTest$duplicateCustomUrlNotAllowedTest$4.invoke(URLAutocompleteTest.kt:135)
at org.mozilla.focus.activity.URLAutocompleteTest$duplicateCustomUrlNotAllowedTest$4.invoke(URLAutocompleteTest.kt:133)
at org.mozilla.focus.activity.robots.SettingsRobot$Transition.openSearchSettingsMenu(SettingsRobot.kt:36)
at org.mozilla.focus.activity.URLAutocompleteTest.duplicateCustomUrlNotAllowedTest(URLAutocompleteTest.kt:133)`
### Build:
1/4/22 | 2.0 | test | 1 |
72,310 | 3,378,516,097 | IssuesEvent | 2015-11-25 11:12:10 | Mobicents/RestComm | https://api.github.com/repos/Mobicents/RestComm | opened | Setup RestComm + XMS instance on EC2 | AWS Normal-Priority ready XMS | Setup a RestComm instance on EC2 to test integration with latest Dialogic XMS. | 1.0 | non_test | 0 |
240,161 | 20,014,425,876 | IssuesEvent | 2022-02-01 10:33:22 | TeamGalacticraft/Galacticraft-Legacy | https://api.github.com/repos/TeamGalacticraft/Galacticraft-Legacy | closed | Various generators crash the game when placed next to energy condensers | Bug [Priority] [Status] Requires Testing [Status] Triage | ### Forge Version
14.23.5.2859
### Galacticraft Version
4.0.2.280
### Log or Crash Report
https://gist.github.com/Sethy152/13f8d21edc150e28b431de291363d809
I placed a simple coal generator next to the energy storage unit from Galacticraft. Instant crash. When I try to load back in it crashes in the same way.
https://gist.github.com/Sethy152/fc03d8f95a57153fc287aa41d5838341
This one happened when I placed a creative engine (From buildcraft) next to the energy storage unit from Galacticraft. NOT an instant crash, instead the game froze. My computer wouldn't listen to any inputs, from alt tab, windows tab, control shift escape, etc. Eventually it said that minecraft wasn't responding, so I closed it. When I load back into the world, it runs just fine. The block is placed and it's as if it never crashed.
### Reproduction steps
1 Place down an Energy Storage Module (normal or advanced) from Galacticraft
2 Place down a Simple Coal Generator from Simple Generators next to any side.
3 Crash
I don't know if it crashes with EVERY type of generator from Simple Generators.
1 Place down any powered machine from Galacticraft
2 Place down an engine from Buildcraft on the green input side of the machine
3 Computer and game become unresponsive
I tried with three different Galacticraft blocks, and it "crashed" on all of them.
IMPORTANT: I'm using Galacticraft from Curseforge, the 4.0.2.282 version. There just wasn't an option to choose that. :D | 1.0 | test | 1 |
81,334 | 30,804,914,389 | IssuesEvent | 2023-08-01 06:20:10 | zed-industries/community | https://api.github.com/repos/zed-industries/community | opened | vim mode: support P (paste before) | defect triage admin read | ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
Currently zed vim mode support `p` but not `P` for pasting text, it's better to also support the paste before variant so it is consistent.
Related:
https://github.com/zed-industries/community/issues/469#issuecomment-1227458187
### Environment
Zed: v0.96.4 (stable)
OS: macOS 13.4.1
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_ | 1.0 | non_test | 0 |
222,078 | 17,392,073,371 | IssuesEvent | 2021-08-02 08:43:16 | nens/lizard-client | https://api.github.com/repos/nens/lizard-client | closed | Some unnecessary arrows when loading a charts favourite | Sprint disruption Test result bug | On staging: When I load a charts favourite (with state containing: `{"context": "charts"}`) I see some weird arrows drawn in the left side bar. When you click on them they fold up an empty box in the bar and the arrows disappear. It happens with both older favourites and newly generated ones.

On production this doesn't happen. | 1.0 | test | 1 |
74,895 | 15,380,232,169 | IssuesEvent | 2021-03-02 20:47:51 | kaidisn/Showkase | https://api.github.com/repos/kaidisn/Showkase | opened | CVE-2018-1000180 (High) detected in bcprov-jdk15on-1.56.jar | security vulnerability | ## CVE-2018-1000180 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.56.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: Showkase/showkase-processor-testing/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk15on/1.56/a153c6f9744a3e9dd6feab5e210e1c9861362ec7/bcprov-jdk15on-1.56.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-27.2.0-alpha15.jar (Root Library)
- sdk-common-27.2.0-alpha15.jar
- :x: **bcprov-jdk15on-1.56.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kaidisn/Showkase/commit/6cbf40ecadbc6e10078b225046eca6c26ae2b059">6cbf40ecadbc6e10078b225046eca6c26ae2b059</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Bouncy Castle BC 1.54 - 1.59, BC-FJA 1.0.0, BC-FJA 1.0.1 and earlier have a flaw in the Low-level interface to RSA key pair generator, specifically RSA Key Pairs generated in low-level API with added certainty may have less M-R tests than expected. This appears to be fixed in versions BC 1.60 beta 4 and later, BC-FJA 1.0.2 and later.
<p>Publish Date: 2018-06-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000180>CVE-2018-1000180</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000180">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000180</a></p>
<p>Release Date: 2018-06-05</p>
<p>Fix Resolution: org.bouncycastle:bcprov-jdk15on:1.60,org.bouncycastle:bcprov-jdk14:1.60,org.bouncycastle:bcprov-ext-jdk14:1.60,org.bouncycastle:bcprov-ext-jdk15on:1.60</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk15on","packageVersion":"1.56","packageFilePaths":["/showkase-processor-testing/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.android.tools.lint:lint-gradle:27.2.0-alpha15;com.android.tools:sdk-common:27.2.0-alpha15;org.bouncycastle:bcprov-jdk15on:1.56","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-jdk15on:1.60,org.bouncycastle:bcprov-jdk14:1.60,org.bouncycastle:bcprov-ext-jdk14:1.60,org.bouncycastle:bcprov-ext-jdk15on:1.60"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-1000180","vulnerabilityDetails":"Bouncy Castle BC 1.54 - 1.59, BC-FJA 1.0.0, BC-FJA 1.0.1 and earlier have a flaw in the Low-level interface to RSA key pair generator, specifically RSA Key Pairs generated in low-level API with added certainty may have less M-R tests than expected. This appears to be fixed in versions BC 1.60 beta 4 and later, BC-FJA 1.0.2 and later.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000180","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-1000180 (High) detected in bcprov-jdk15on-1.56.jar - ## CVE-2018-1000180 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.56.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: Showkase/showkase-processor-testing/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk15on/1.56/a153c6f9744a3e9dd6feab5e210e1c9861362ec7/bcprov-jdk15on-1.56.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-27.2.0-alpha15.jar (Root Library)
- sdk-common-27.2.0-alpha15.jar
- :x: **bcprov-jdk15on-1.56.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kaidisn/Showkase/commit/6cbf40ecadbc6e10078b225046eca6c26ae2b059">6cbf40ecadbc6e10078b225046eca6c26ae2b059</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Bouncy Castle BC 1.54 - 1.59, BC-FJA 1.0.0, BC-FJA 1.0.1 and earlier have a flaw in the Low-level interface to RSA key pair generator, specifically RSA Key Pairs generated in low-level API with added certainty may have less M-R tests than expected. This appears to be fixed in versions BC 1.60 beta 4 and later, BC-FJA 1.0.2 and later.
<p>Publish Date: 2018-06-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000180>CVE-2018-1000180</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000180">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000180</a></p>
<p>Release Date: 2018-06-05</p>
<p>Fix Resolution: org.bouncycastle:bcprov-jdk15on:1.60,org.bouncycastle:bcprov-jdk14:1.60,org.bouncycastle:bcprov-ext-jdk14:1.60,org.bouncycastle:bcprov-ext-jdk15on:1.60</p>
</p>
</details>
<p></p>
206,772 | 15,774,008,012 | IssuesEvent | 2021-04-01 00:16:58 | Azure/azure-sdk-for-cpp | https://api.github.com/repos/Azure/azure-sdk-for-cpp | opened | Do not require setting the `AZURE_KEYVAULT_HSM_URL` environment variable, for running KeyVault tests locally that don't need it | KeyVault test enhancement | cc @vhvb1989
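A minimal sketch of the requested behavior (the helper and its return shape are illustrative, not the actual azure-sdk-for-cpp test harness): read the variable lazily, and skip rather than fail the tests that don't need it.

```javascript
// Hypothetical helper: only HSM-specific tests should demand the variable;
// everything else runs without it.
function hsmTestPlan(env) {
  const url = env.AZURE_KEYVAULT_HSM_URL;
  if (!url) {
    return { runHsmTests: false, reason: 'AZURE_KEYVAULT_HSM_URL not set; skipping HSM-only tests' };
  }
  return { runHsmTests: true, url };
}

console.log(hsmTestPlan(process.env).runHsmTests);
```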
140,663 | 21,181,655,029 | IssuesEvent | 2022-04-08 08:33:36 | ugent-library/biblio-backend | https://api.github.com/repos/ugent-library/biblio-backend | opened | easier assignment of UGent affiliation to authors | enhancement design | **Describe the improvement**
Currently, users consider assigning the UGent affiliation to authors as time consuming as a new window needs to be opened for each new author.
Two complementary options:
- change in design (e.g. by adding affiliation in main view instead of in the add/edit modal)
- automation of assignment of affiliation (e.g. by checking if authors are UGent members, by checking whether the affiliation can be retrieved from CrossRef/DataCite, by checking whether the ORCID can be useful)
4,747 | 7,604,097,260 | IssuesEvent | 2018-04-29 21:15:59 | frc1418/VictiScout | https://api.github.com/repos/frc1418/VictiScout | closed | Add strikethrough when hovering over added files (data processing) | aesthetic easy processing | Add a strikethrough and "clickable" cursor when hovering over an element to show it can be removed upon clicking. | 1.0 | Add strikethrough when hovering over added files (data processing) - Add a strikethrough and "clickable" cursor when hovering over an element to show it can be removed upon clicking. | non_test | add strikethrough when hovering over added files data processing add a strikethrough and clickable cursor when hovering over an element to show it can be removed upon clicking | 0 |
82,948 | 23,927,799,789 | IssuesEvent | 2022-09-10 04:10:40 | lowRISC/opentitan | https://api.github.com/repos/lowRISC/opentitan | closed | [bazel] Handle site-specific environments through env | Component:Software Priority:P2 Type:Enhancement Hotlist:Software SW:Bazel Adoption Requirement SW:Build System | Due to the multi-tenant nature of our development system almost nothing we use is installed in the default directory. Our `PATH`, `LD_LIBRARY_PATH` and `PKG_CONFIG_PATH` environments consist of long strings of site-specific paths holding everything. We have environment module tools in place to quickly switch from one project to another, and we generally try to do everything through environment variables. When file configurations are necessary, we try to store them in common space shared by the team.
Could we use environment variables to pass site-specific `PATH`s into bazel via `bazelisk.sh`. For example one could create an OPENTITAN_BAZEL_BUILD_ENV variable, that could for instance contain any number of `--action_env` options for `bazelisk.sh` to pass into `.bin/bazelisk`.
**Rationale for this request:**
I realize that the same functionality can be accommodated using `$HOME/.bazelrc`, but this is suboptimal for two reasons:
1. Fixed configuration files would make it hard for us to reproduce issues or switch projects. We generally discourage users from storing configuration information in their home directory for reasons related to reproducibility.
2. Our paths are configured via many different files that are updated independently.
| 1.0 | [bazel] Handle site-specific environments through env - Due to the multi-tenant nature of our development system almost nothing we use is installed in the default directory. Our `PATH`, `LD_LIBRARY_PATH` and `PKG_CONFIG_PATH` environments consist of long strings of site-specific paths holding everything. We have environment module tools in place to quickly switch from one project to another, and we generally try to do everything through environment variables. When file configurations are necessary, we try to store them in common space shared by the team.
Could we use environment variables to pass site-specific `PATH`s into bazel via `bazelisk.sh`. For example one could create an OPENTITAN_BAZEL_BUILD_ENV variable, that could for instance contain any number of `--action_env` options for `bazelisk.sh` to pass into `.bin/bazelisk`.
**Rationale for this request:**
I realize that the same functionality can be accommodated using `$HOME/.bazelrc`, but this is suboptimal for two reasons:
1. Fixed configuration files would make it hard for us to reproduce issues or switch projects. We generally discourage users from storing configuration information in their home directory for reasons related to reproducibility.
2. Our paths are configured via many different files that are updated independently.
| non_test | handle site specific environments through env due to the multi tenant nature of our development system almost nothing we use is installed in the default directory our path ld library path and pkg config path environments consist of long strings of site specific paths holding everything we have environment module tools in place to quickly switch from one project to another and we generally try to do everything through environment variables when file configurations are necessary we try to store them in common space shared by the team could we use environment variables to pass site specific path s into bazel via bazelisk sh for example one could create an opentitan bazel build env variable that could for instance contain any number of action env options for bazelisk sh to pass into bin bazelisk rationale for this request i realize that the same functionality can be accommodated using home bazelrc but this is suboptimal for two reasons fixed configuration files would make it hard for us to reproduce issues or switch projects we generally discourage users from storing configuration information in their home directory for reasons related to reproducibility our paths are configured via many different files that are updated independently | 0 |
1,446 | 3,966,183,256 | IssuesEvent | 2016-05-03 11:52:49 | google/end-to-end | https://api.github.com/repos/google/end-to-end | closed | Feature Request: Key QR Code export | compatibility enhancement imported keyring logic |
_From [rum...@google.com](https://code.google.com/u/110046148200173983257/) on November 13, 2014 20:05:20_
It'd be awesome if the public and private keys could be rendered as QR Codes, so that these can be easily imported into mobile phones, e.g. OpenKeychain.
For the public key, it'd probably be easiest to use the Google charts API to render the QR code image.
For the private key, sending it somewhere else may not be a good idea, so rendering could use a library like qrcodejs.
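As a sketch of the public-key path (the chart endpoint below is the old Google Image Charts QR API, shown purely for illustration; a local library would render private material instead):

```javascript
// Build a chart-API style URL that renders `data` as a QR code.
// Only suitable for public material, since the data leaves the machine.
function publicKeyQrUrl(data) {
  return 'https://chart.googleapis.com/chart?cht=qr&chs=300x300&chl=' +
    encodeURIComponent(data);
}

console.log(publicKeyQrUrl('-----BEGIN PGP PUBLIC KEY BLOCK-----'));
```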
_Original issue:_ <http://code.google.com/p/end-to-end/issues/detail?id=158>
505,219 | 14,629,974,803 | IssuesEvent | 2020-12-23 16:48:32 | microsoft/AdaptiveCards | https://api.github.com/repos/microsoft/AdaptiveCards | closed | [Android][Rendering] [Column Width Issue] | AdaptiveCards v21.01 Area-Inconsistency Bug Priority-Now Triage-Needed | # Platform
What platform is your issue or question related to? (Delete other platforms).
- [ ] Android
# Author or host
Webex
# Version of SDK
2.4.0 for Android
# Details
Column width issue (collapsing) in a column set between columns with images.
Please refer to the screenshots below.
2.4.0 Version

1.2.11 Version

The expectation is to have the card similar to the one rendered in 1.2.11.
JSON payload for the card:
[column_width_json.docx](https://github.com/microsoft/AdaptiveCards/files/5735457/column_width_json.docx)
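Since the payload is only attached as a .docx above, here is an illustrative ColumnSet of the general shape being discussed (images in the outer columns, explicit `width` values); this is NOT the attached payload:

```javascript
// Minimal Adaptive Card with a three-column ColumnSet; the outer image
// columns use width 'auto', the middle one 'stretch'.
const card = {
  type: 'AdaptiveCard',
  version: '1.2',
  body: [{
    type: 'ColumnSet',
    columns: [
      { type: 'Column', width: 'auto', items: [{ type: 'Image', url: 'https://example.com/left.png' }] },
      { type: 'Column', width: 'stretch', items: [{ type: 'TextBlock', text: 'middle' }] },
      { type: 'Column', width: 'auto', items: [{ type: 'Image', url: 'https://example.com/right.png' }] },
    ],
  }],
};

console.log(card.body[0].columns.map((c) => c.width).join(',')); // auto,stretch,auto
```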
262,441 | 19,803,927,009 | IssuesEvent | 2022-01-19 03:01:19 | NameNotBeenUsed/UFMingle | https://api.github.com/repos/NameNotBeenUsed/UFMingle | opened | Temp User Story | documentation | At the login page, a user can log in, register, or retrieve his/her account/password.
A user can log in if he/she has an account and the password.
A current UF student/faculty member can register at most three UFMingle accounts using his/her gatorlink account.
A user can retrieve his/her account and corresponding password via gatorlink.
A user has his/her own home page.
A user can edit his/her profile to change his/her birthday, avatar, motto, and so on.
A user has a message box on his/her home page to receive messages from the system, administrators, or other users.
740,860 | 25,771,357,190 | IssuesEvent | 2022-12-09 08:16:19 | zulip/zulip-mobile | https://api.github.com/repos/zulip/zulip-mobile | opened | "Mark all as read" appears to mark all as *unread* | P1 high-priority | (Marking P1 because this came up during work to have Flow check flagsReducer-test.js, toward https://github.com/zulip/zulip-mobile/issues/5102, disallow ancient servers.)
To reproduce:
- Arrange to have just a few unreads in the "All messages" view, and go to that view
- Tap "Mark all as read"
- See that the unread marker doesn't disappear from the unread messages in the list
- See also that an unread marker *appears* on all the other loaded messages in the list
As long as the API request succeeds, the server does actually mark the messages as read. So what's going on?
Well, we have this `Object.keys` in flagsReducer.js:
```js
const eventUpdateMessageFlags = (state, action) => {
  if (action.all) {
    if (action.op === 'add') {
      return addFlagsForMessages(initialState, Object.keys(action.allMessages).map(Number), [
        action.flag,
      ]);
    }
```
But note that:
- `allMessages` is a `MessagesState` value, so it's an `Immutable.Map<number, Message>`.
- If you `Object.keys` one of those, then you get… `['size', '_root', '__ownerID', '__hash', '__altered']`. That's not [how you're supposed to get keys from an Immutable.Map](https://immutable-js.com/docs/v4.1.0/Map/#keys()).
- We map those string keys through `Number`, giving `[NaN, NaN, NaN, NaN, NaN]`.
- We wipe any data in the flags state (see the `initialState` in the quoted code above, and see #5596)… but then, when we _mean_ to fill in the IDs of the messages you've just marked as read, we instead put an object with `NaN: true` as `state.read`.
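The failure mode is easy to reproduce with the built-in `Map`, which `Object.keys` also cannot enumerate (with `Immutable.Map` you get the internal field names instead of an empty array, hence the `NaN`s):

```javascript
const msgs = new Map([[101, 'msg a'], [102, 'msg b']]);

// Object.keys only sees own enumerable properties, never collection entries.
console.log(Object.keys(msgs)); // []

// The fix: ask the collection itself for its keys.
const ids = [...msgs.keys()];
console.log(ids); // [ 101, 102 ]
```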
697,075 | 23,926,898,246 | IssuesEvent | 2022-09-10 01:03:14 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | closed | Implement Local Hosts BlackList Check | Feature Request Priority Ticket | Whenever blacklists are updated, check whether any local hosts (-m) are listed in a blacklist and, in that case, trigger an alert.
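A sketch of the requested check (names are illustrative; ntopng's real implementation lives in its C/Lua engine):

```javascript
// Return the configured local hosts that appear in any blacklist entry.
function blacklistedLocalHosts(localHosts, blacklist) {
  const listed = new Set(blacklist);
  return localHosts.filter((host) => listed.has(host));
}

console.log(blacklistedLocalHosts(['10.0.0.5', '10.0.0.9'], ['10.0.0.9', '203.0.113.7'])); // [ '10.0.0.9' ]
```

Each hit would then raise an alert.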
147,648 | 11,800,136,355 | IssuesEvent | 2020-03-18 17:01:17 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Job remains in pending | component:api priority:high state:needs_test type:bug |
##### ISSUE TYPE
- Bug Report
##### SUMMARY
Job remains in pending. In the log I see a lot of "awx.main.tasks Not running scheduler, another task holds lock".
There are some closed issues regarding this one, but I didn't find any solution.
Randomly, AWX gets stuck where all jobs remain in "pending".
In the logs I see
`task_1 | 2020-01-09 16:08:59,344 DEBUG awx.main.dispatch task a1c42d7f-17b2-4590-b013-b82f76105f0c starting awx.main.scheduler.tasks.run_task_manager(*[])
task_1 | 2020-01-09 16:08:59,347 DEBUG awx.main.scheduler Running Tower task manager.
task_1 | 2020-01-09 16:08:59,359 DEBUG awx.main.scheduler Not running scheduler, another task holds lock
task_1 | 2020-01-09 16:08:59,360 DEBUG awx.main.dispatch task a1c42d7f-17b2-4590-b013-b82f76105f0c is finished`
After a lot of time (this time 16min) it suddenly starts to execute the task
`task_1 | 2020-01-09 16:24:19,814 DEBUG awx.main.dispatch task b832fea9-c31f-4cfb-8ce3-fda16a1ac5cb starting awx.main.scheduler.tasks.run_task_manager(*[])
task_1 | 2020-01-09 16:24:19,816 DEBUG awx.main.scheduler Running Tower task manager.
task_1 | 2020-01-09 16:24:19,827 DEBUG awx.main.scheduler Not running scheduler, another task holds lock
task_1 | 2020-01-09 16:24:19,828 DEBUG awx.main.dispatch task b832fea9-c31f-4cfb-8ce3-fda16a1ac5cb is finished
task_1 | 2020-01-09 16:24:38,979 DEBUG awx.main.dispatch publish awx.main.tasks.cluster_node_heartbeat(aacec6cc-b438-4cb7-b3f8-2a9ba0b3f399, queue=awx)
task_1 | [2020-01-09 16:24:38,979: DEBUG/Process-1] publish awx.main.tasks.cluster_node_heartbeat(aacec6cc-b438-4cb7-b3f8-2a9ba0b3f399, queue=awx)
rabbitmq_1 | 2020-01-09 16:24:38.989 [info] <0.3775.0> accepting AMQP connection <0.3775.0> (172.19.0.5:48346 -> 172.19.0.3:5672)
rabbitmq_1 | 2020-01-09 16:24:38.991 [info] <0.3775.0> connection <0.3775.0> (172.19.0.5:48346 -> 172.19.0.3:5672): user 'guest' authenticated and granted access to vhost 'awx'
rabbitmq_1 | 2020-01-09 16:24:38.996 [info] <0.3775.0> closing AMQP connection <0.3775.0> (172.19.0.5:48346 -> 172.19.0.3:5672, vhost: 'awx', user: 'guest')
task_1 | 2020-01-09 16:24:39,003 DEBUG awx.main.dispatch delivered aacec6cc-b438-4cb7-b3f8-2a9ba0b3f399 to worker[178] qsize 0
task_1 | 2020-01-09 16:24:39,005 DEBUG awx.main.dispatch task aacec6cc-b438-4cb7-b3f8-2a9ba0b3f399 starting awx.main.tasks.cluster_node_heartbeat(*[])
task_1 | 2020-01-09 16:24:39,008 DEBUG awx.main.tasks Cluster node heartbeat task.
task_1 | 2020-01-09 16:24:39,020 DEBUG awx.main.dispatch task aacec6cc-b438-4cb7-b3f8-2a9ba0b3f399 is finished
task_1 | 2020-01-09 16:24:39,090 DEBUG awx.main.dispatch publish awx.main.tasks.awx_k8s_reaper(e33c3d96-163b-487f-8101-00dd80f101a7, queue=awx)
task_1 | [2020-01-09 16:24:39,090: DEBUG/Process-1] publish awx.main.tasks.awx_k8s_reaper(e33c3d96-163b-487f-8101-00dd80f101a7, queue=awx)
rabbitmq_1 | 2020-01-09 16:24:39.100 [info] <0.3788.0> accepting AMQP connection <0.3788.0> (172.19.0.5:48350 -> 172.19.0.3:5672)
rabbitmq_1 | 2020-01-09 16:24:39.102 [info] <0.3788.0> connection <0.3788.0> (172.19.0.5:48350 -> 172.19.0.3:5672): user 'guest' authenticated and granted access to vhost 'awx'
rabbitmq_1 | 2020-01-09 16:24:39.104 [info] <0.3788.0> closing AMQP connection <0.3788.0> (172.19.0.5:48350 -> 172.19.0.3:5672, vhost: 'awx', user: 'guest')
task_1 | 2020-01-09 16:24:39,105 DEBUG awx.main.dispatch delivered e33c3d96-163b-487f-8101-00dd80f101a7 to worker[178] qsize 0
task_1 | 2020-01-09 16:24:39,108 DEBUG awx.main.dispatch task e33c3d96-163b-487f-8101-00dd80f101a7 starting awx.main.tasks.awx_k8s_reaper(*[])
task_1 | 2020-01-09 16:24:39,117 DEBUG awx.main.dispatch task e33c3d96-163b-487f-8101-00dd80f101a7 is finished
task_1 | 2020-01-09 16:24:39,344 DEBUG awx.main.dispatch publish awx.main.tasks.awx_periodic_scheduler(97554fd6-f2c2-4b5f-a25d-bf09743b9105, queue=awx_private_queue)
task_1 | [2020-01-09 16:24:39,344: DEBUG/Process-1] publish awx.main.tasks.awx_periodic_scheduler(97554fd6-f2c2-4b5f-a25d-bf09743b9105, queue=awx_private_queue)
rabbitmq_1 | 2020-01-09 16:24:39.354 [info] <0.3801.0> accepting AMQP connection <0.3801.0> (172.19.0.5:48354 -> 172.19.0.3:5672)
rabbitmq_1 | 2020-01-09 16:24:39.356 [info] <0.3801.0> connection <0.3801.0> (172.19.0.5:48354 -> 172.19.0.3:5672): user 'guest' authenticated and granted access to vhost 'awx'
task_1 | 2020-01-09 16:24:39,359 DEBUG awx.main.dispatch delivered 97554fd6-f2c2-4b5f-a25d-bf09743b9105 to worker[178] qsize 0
rabbitmq_1 | 2020-01-09 16:24:39.359 [info] <0.3801.0> closing AMQP connection <0.3801.0> (172.19.0.5:48354 -> 172.19.0.3:5672, vhost: 'awx', user: 'guest')
task_1 | 2020-01-09 16:24:39,361 DEBUG awx.main.dispatch task 97554fd6-f2c2-4b5f-a25d-bf09743b9105 starting awx.main.tasks.awx_periodic_scheduler(*[])
task_1 | 2020-01-09 16:24:39,377 DEBUG awx.main.tasks Starting periodic scheduler
task_1 | 2020-01-09 16:24:39,380 DEBUG awx.main.tasks Last scheduler run was: 2020-01-09 16:05:39.310989+00:00
task_1 | 2020-01-09 16:24:39,390 DEBUG awx.main.dispatch task 97554fd6-f2c2-4b5f-a25d-bf09743b9105 is finished`
Is there a way to know which other task blocks the scheduler?
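For context, the log line comes from a skip-if-locked pattern: the task manager takes a lock, and any run that finds it held just logs and exits. A hypothetical sketch of that shape (AWX itself does this in Python with a database advisory lock, not this toy flag):

```javascript
let lockHeld = false;

// Run the scheduler body only if no other run holds the lock.
function runTaskManager(work) {
  if (lockHeld) {
    console.log('Not running scheduler, another task holds lock');
    return false;
  }
  lockHeld = true;
  try {
    work();
  } finally {
    lockHeld = false; // a holder that never reaches this keeps everyone skipping
  }
  return true;
}
```

If a holder dies without releasing, every later run prints the skip line, which matches the symptom above until something clears the stale lock.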
##### ENVIRONMENT
* AWX version: 9.1.0
* AWX install method: Docker on Linux
* Operating System: CentOS 7
vhost awx user guest task debug awx main dispatch delivered to worker qsize task debug awx main dispatch task starting awx main tasks awx reaper task debug awx main dispatch task is finished task debug awx main dispatch publish awx main tasks awx periodic scheduler queue awx private queue task publish awx main tasks awx periodic scheduler queue awx private queue rabbitmq accepting amqp connection rabbitmq connection user guest authenticated and granted access to vhost awx task debug awx main dispatch delivered to worker qsize rabbitmq closing amqp connection vhost awx user guest task debug awx main dispatch task starting awx main tasks awx periodic scheduler task debug awx main tasks starting periodic scheduler task debug awx main tasks last scheduler run was task debug awx main dispatch task is finished is there a way to know which other task blocks the scheduler environment awx version awx install method docker on linux operating system centos | 1 |
129,347 | 17,773,890,036 | IssuesEvent | 2021-08-30 16:38:37 | hackforla/website | https://api.github.com/repos/hackforla/website | opened | Adjust "...See More" placement on Wins Cards | P-Feature: Wins Page role: design Size: Small | ### Overview
We need to move the "...see more" link on the Wins cards from the end of the text to the bottom right of the card in order to standardize card design, making the design and functionality the same regardless of whether the cards do or do not have expandable text.
### Action Items
Design:
- [ ] Duplicate current [Wins card](https://www.figma.com/file/0RRPy1Ph7HafI3qOITg0Mr/Hack-for-LA-Website?node-id=7991%3A41757) in Figma
- [ ] Move "...see more" to bottom right of card
- [ ] Denote the new updated version in Figma
Development:
- [ ] Implement above design change in Wins card.
### Resources/Instructions
[Wins Page](https://www.hackforla.org/wins/)
[Figma - Wins Page](https://www.figma.com/file/0RRPy1Ph7HafI3qOITg0Mr/Hack-for-LA-Website?node-id=7991%3A41757)
| 1.0 | Adjust "...See More" placement on Wins Cards - ### Overview
We need to move the "...see more" link on the Wins cards from the end of the text to the bottom right of the card in order to standardize card design, making the design and functionality the same regardless of whether the cards do or do not have expandable text.
### Action Items
Design:
- [ ] Duplicate current [Wins card](https://www.figma.com/file/0RRPy1Ph7HafI3qOITg0Mr/Hack-for-LA-Website?node-id=7991%3A41757) in Figma
- [ ] Move "...see more" to bottom right of card
- [ ] Denote the new updated version in Figma
Development:
- [ ] Implement above design change in Wins card.
### Resources/Instructions
[Wins Page](https://www.hackforla.org/wins/)
[Figma - Wins Page](https://www.figma.com/file/0RRPy1Ph7HafI3qOITg0Mr/Hack-for-LA-Website?node-id=7991%3A41757)
| non_test | adjust see more placement on wins cards overview we need to move the see more link on the wins cards from the end of the text to the bottom right of the card in order to standardize card design making the design and functionality the same regardless of whether the cards do or do not have expandable text action items design duplicate current in figma move see more to bottom right of card denote the new updated version in figma development implement above design change in wins card resources instructions | 0 |
53,979 | 23,121,107,246 | IssuesEvent | 2022-07-27 21:36:46 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Marshal.PtrToStructure<T>(IntPtr) throws NullReferenceException if IntPtr.Zero is used | area-System.Runtime.InteropServices | ### Description
According to [documentation/remarks](https://docs.microsoft.com/en-us/dotnet/api/system.runtime.interopservices.marshal.ptrtostructure?view=net-6.0#system-runtime-interopservices-marshal-ptrtostructure-1(system-intptr)) `Marshal.PtrToStructure<T>` should accept `value types`.
> [PtrToStructure<T>(IntPtr)](https://docs.microsoft.com/en-us/dotnet/api/system.runtime.interopservices.marshal.ptrtostructure?view=net-6.0#system-runtime-interopservices-marshal-ptrtostructure-1(system-intptr)) is often necessary in COM interop and platform invoke when structure parameters are represented as [System.IntPtr](https://docs.microsoft.com/en-us/dotnet/api/system.intptr?view=net-6.0) values. You can pass a value type to this method overload. If the ptr parameter equals [IntPtr.Zero](https://docs.microsoft.com/en-us/dotnet/api/system.intptr.zero?view=net-6.0), null will be returned.
Calling `Marshal.PtrToStructure<T>` with `IntPtr.Zero` throws `NullReferenceException` instead of returning `null`.
### Reproduction Steps
```
using System.Runtime.InteropServices;
public class MarshalPtrToStructureTests
{
public struct MyStruct
{
}
[Fact]
public void IntPtrZero_PtrToStructureMyStruct_ShouldBeNull()
{
Assert.Null(Marshal.PtrToStructure<MyStruct>(IntPtr.Zero));
}
}
```
### Expected behavior
`Marshal.PtrToStructure<T>` to return `null`
### Actual behavior
Throws `NullReferenceException`
### Regression?
Same behavior on .NET 3.1, .NET 5.
### Known Workarounds
Explicit null check
### Configuration
```
> dotnet --info
.NET SDK (reflecting any global.json):
Version: 6.0.302
Commit: c857713418
Runtime Environment:
OS Name: Windows
OS Version: 10.0.22000
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\6.0.302\
global.json file:
Not found
Host:
Version: 6.0.7
Architecture: x64
Commit: 0ec02c8c96
.NET SDKs installed:
6.0.302 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 6.0.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 6.0.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 6.0.7 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
### Other information
_No response_ | 1.0 | Marshal.PtrToStructure<T>(IntPtr) throws NullReferenceException if IntPtr.Zero is used - ### Description
According to [documentation/remarks](https://docs.microsoft.com/en-us/dotnet/api/system.runtime.interopservices.marshal.ptrtostructure?view=net-6.0#system-runtime-interopservices-marshal-ptrtostructure-1(system-intptr)) `Marshal.PtrToStructure<T>` should accept `value types`.
> [PtrToStructure<T>(IntPtr)](https://docs.microsoft.com/en-us/dotnet/api/system.runtime.interopservices.marshal.ptrtostructure?view=net-6.0#system-runtime-interopservices-marshal-ptrtostructure-1(system-intptr)) is often necessary in COM interop and platform invoke when structure parameters are represented as [System.IntPtr](https://docs.microsoft.com/en-us/dotnet/api/system.intptr?view=net-6.0) values. You can pass a value type to this method overload. If the ptr parameter equals [IntPtr.Zero](https://docs.microsoft.com/en-us/dotnet/api/system.intptr.zero?view=net-6.0), null will be returned.
Calling `Marshal.PtrToStructure<T>` with `IntPtr.Zero` throws `NullReferenceException` instead of returning `null`.
### Reproduction Steps
```
using System.Runtime.InteropServices;
public class MarshalPtrToStructureTests
{
public struct MyStruct
{
}
[Fact]
public void IntPtrZero_PtrToStructureMyStruct_ShouldBeNull()
{
Assert.Null(Marshal.PtrToStructure<MyStruct>(IntPtr.Zero));
}
}
```
### Expected behavior
`Marshal.PtrToStructure<T>` to return `null`
### Actual behavior
Throws `NullReferenceException`
### Regression?
Same behavior on .NET 3.1, .NET 5.
### Known Workarounds
Explicit null check
### Configuration
```
> dotnet --info
.NET SDK (reflecting any global.json):
Version: 6.0.302
Commit: c857713418
Runtime Environment:
OS Name: Windows
OS Version: 10.0.22000
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\6.0.302\
global.json file:
Not found
Host:
Version: 6.0.7
Architecture: x64
Commit: 0ec02c8c96
.NET SDKs installed:
6.0.302 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 6.0.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 6.0.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 6.0.7 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
### Other information
_No response_ | non_test | marshal ptrtostructure intptr throws nullreferenceexception if intptr zero is used description according to marshal ptrtostructure shoul accept value types is often necessary in com interop and platform invoke when structure parameters are represented as values you can pass a value type to this method overload if the ptr parameter equals null will be returned calling marshal ptrtostructure with intptr zero throws nullreferenceexception instead of returning null reproduction steps using system runtime interopservices public class marshalptrtostructuretests public struct mystruct public void intptrzero ptrtostructuremystruct shouldbenull assert null marshal ptrtostructure intptr zero expected behavior marshal ptrtostructure to return null actual behavior throws nullreferenceexception regression same behavior on net net known workarounds explicit null check configuration dotnet info net sdk reflecting any global json version commit runtime environment os name windows os version os platform windows rid base path c program files dotnet sdk global json file not found host version architecture commit net sdks installed net runtimes installed microsoft aspnetcore app microsoft netcore app microsoft windowsdesktop app other information no response | 0 |
25,720 | 4,166,863,661 | IssuesEvent | 2016-06-20 06:54:37 | linkedpipes/etl | https://api.github.com/repos/linkedpipes/etl | closed | SPARQL Endpoint: Support for default graph IRIs | enhancement test | When generating pipelines from LP-VIZ, I need to specify default graph IRIs.
Scenario:
I have a pipeline that starts with a datasource, which is a named graph in LODcz. That is bound to a component, an extractor, which contains a SPARQL construct query. In order to make this correct for LP-ETL, I should merge those 2 components and use just SPARQL Endpoint component in ETL. However, I would have to rewrite the extractor's query to contain named graph(s) specification. | 1.0 | SPARQL Endpoint: Support for default graph IRIs - When generating pipelines from LP-VIZ, I need to specify default graph IRIs.
Scenario:
I have a pipeline that starts with a datasource, which is a named graph in LODcz. That is bound to a component, an extractor, which contains a SPARQL construct query. In order to make this correct for LP-ETL, I should merge those 2 components and use just SPARQL Endpoint component in ETL. However, I would have to rewrite the extractor's query to contain named graph(s) specification. | test | sparql endpoint support for default graph iris when generating pipelines from lp viz i need to specify default graph iris scenario i have a pipeline that starts with a datasource which is a named graph in lodcz that is bound to a component an extractor which contains a sparql construct query in order to make this correct for lp etl i should merge those components and use just sparql endpoint component in etl however i would have to rewrite the extractor s query to contain named graph s specification | 1 |
337,914 | 30,271,586,575 | IssuesEvent | 2023-07-07 15:46:01 | flutter/flutter | https://api.github.com/repos/flutter/flutter | opened | [video_player] Flaky integration test failure on web | a: tests platform-web p: video_player package c: flake fyi-ecosystem team-web | While converting web platform tests to LUCI, I'm seeing frequent failures in one test:
```
Failure in method: stay paused when seeking after video completed
══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞═════════════════
The following TestFailure was thrown running a test:
Expected: Duration:<0:00:07.534000>
Actual: Duration:<0:00:07.544000>
When the exception was thrown, this was the stack:
dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 294:49 throw_
packages/matcher/src/expect/prints_matcher.dart.js 433:22 fail
packages/matcher/src/expect/prints_matcher.dart.js 430:12 _expect
packages/matcher/src/expect/prints_matcher.dart.js 365:12 expect$
packages/flutter_test/src/test_text_input_key_handler.dart.js 7284:12 expect$
video_player_test.dart.js 257:23 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 45:50 <fn>
...
```
It wasn't happening in Cirrus, but I could repro it locally in 2/4 runs. Either Cirrus for some reason didn't trip the flake, or the flake only happens in newer versions of Chrome (I believe the LUCI config has a newer Chrome version than our last update to the one pulled in our Cirrus tests).
I'm going to mark this as flaky to unblock the migration. | 1.0 | [video_player] Flaky integration test failure on web - While converting web platform tests to LUCI, I'm seeing frequent failures in one test:
```
Failure in method: stay paused when seeking after video completed
══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞═════════════════
The following TestFailure was thrown running a test:
Expected: Duration:<0:00:07.534000>
Actual: Duration:<0:00:07.544000>
When the exception was thrown, this was the stack:
dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 294:49 throw_
packages/matcher/src/expect/prints_matcher.dart.js 433:22 fail
packages/matcher/src/expect/prints_matcher.dart.js 430:12 _expect
packages/matcher/src/expect/prints_matcher.dart.js 365:12 expect$
packages/flutter_test/src/test_text_input_key_handler.dart.js 7284:12 expect$
video_player_test.dart.js 257:23 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 45:50 <fn>
...
```
It wasn't happening in Cirrus, but I could repro it locally in 2/4 runs. Either Cirrus for some reason didn't trip the flake, or the flake only happens in newer versions of Chrome (I believe the LUCI config has a newer Chrome version than our last update to the one pulled in our Cirrus tests).
I'm going to mark this as flaky to unblock the migration. | test | flaky integration test failure on web while converting web platform tests to luci i m seeing frequent failures in one test failure in method stay paused when seeking after video completed ══╡ exception caught by flutter test framework ╞═════════════════ the following testfailure was thrown running a test expected duration actual duration when the exception was thrown this was the stack dart sdk lib internal js dev runtime private ddc runtime errors dart throw packages matcher src expect prints matcher dart js fail packages matcher src expect prints matcher dart js expect packages matcher src expect prints matcher dart js expect packages flutter test src test text input key handler dart js expect video player test dart js dart sdk lib internal js dev runtime patch async patch dart it wasn t happening in cirrus but i could repro it locally in runs either cirrus for some reason didn t trip the flake or the flake only happens in newer versions of chrome i believe the luci config has a newer chrome version than our last update to the one pulled in our cirrus tests i m going to mark this as flaky to unblock the migration | 1 |
123,820 | 4,876,573,137 | IssuesEvent | 2016-11-16 13:19:14 | benjamincharity/angular-flickity | https://api.github.com/repos/benjamincharity/angular-flickity | closed | Add tests | Priority: Medium Type: Maintenance | - [x] Test service
- [x] Test directives
- [x] Test controller
- [x] Hook up CodeClimate
- [x] Set up build with CircleCi
| 1.0 | Add tests - - [x] Test service
- [x] Test directives
- [x] Test controller
- [x] Hook up CodeClimate
- [x] Set up build with CircleCi
| non_test | add tests test service test directives test controller hook up codeclimate set up build with circleci | 0 |
54,899 | 6,416,832,316 | IssuesEvent | 2017-08-08 15:32:15 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Flaky Test - dashboard app dashboard clone clone warns on duplicate name | :Dashboard :Sharing test | Just ran into this test that passed second time around, after no changes, so it must be flaky.
```
03:35:17.578 │ debg existsByDisplayedByCssSelector [data-test-subj~="dashboardLandingPage"]
03:35:18.626 │ debg clickDashboardBreadcrumbLink
03:35:18.691 │ debg TestSubjects.find(searchFilter)
03:35:18.692 │ debg in displayedByCssSelector: [data-test-subj~="searchFilter"]
03:35:28.745 │ debg --- tryForTime failure: An element could not be located on the page using the given search parameters.
03:35:39.287 │ debg --- tryForTime failed again with the same message ...
03:35:49.856 │ debg --- tryForTime failed again with the same message ...
03:36:00.425 │ debg --- tryForTime failed again with the same message ...
03:36:00.992 │ debg --- tryForTime failure: tryForTime timeout: NoSuchElement: An element could not be located on the page using the given search parameters.
03:36:00.995 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/leadfoot/lib/findDisplayed.js:37:21
03:36:00.995 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/dojo/Promise.js:156:41
03:36:00.995 │ at run (/var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/dojo/Promise.js:51:33)
03:36:00.995 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/dojo/nextTick.js:35:17
03:36:00.995 │ at _combinedTickCallback (internal/process/next_tick.js:73:7)
03:36:00.995 │ at process._tickDomainCallback (internal/process/next_tick.js:128:9)
03:36:00.995 │ at Command.findDisplayed (/var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/leadfoot/Command.js:23:10)
....
(/var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/bluebird/js/main/async.js:15:14)
03:36:01.002 │ at runCallback (timers.js:672:20)
03:36:01.003 │ at tryOnImmediate (timers.js:645:5)
03:36:01.003 │ at processImmediate [as _immediateCallback] (timers.js:617:5)
03:36:01.498 │ debg --- tryForTime failure: tryForTime timeout: Error: tryForTime timeout: NoSuchElement: An element could not be located on the page using the given search parameters.
03:36:01.500 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/leadfoot/lib/findDisplayed.js:37:21
03:36:01.500 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/dojo/Promise.js:156:41
......
03:36:01.507 │ at tryOnImmediate (timers.js:645:5)
03:36:01.507 │ at processImmediate [as _immediateCallback] (timers.js:617:5)
03:36:02.004 │ debg Taking screenshot "/var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/test/functional/screenshots/failure/dashboard app dashboard clone clone warns on duplicate name.png"
03:36:02.091 └- ✖ fail: "dashboard app dashboard clone clone warns on duplicate name"
03:36:02.093 │ tryForTime timeout: Error: tryForTime timeout: Error: tryForTime timeout: NoSuchElement: An element could not be located on the page using the given search parameters.
```
Accompanying screenshot gives a clue:
<img width="1215" alt="screen shot 2017-08-02 at 1 37 06 pm" src="https://user-images.githubusercontent.com/16563603/28886513-c81c1370-7787-11e7-9883-45ec10c4bb97.png">
Looks like `clickDashboardBreadcrumbLink` failed to actually load up the landing page. | 1.0 | Flaky Test - dashboard app dashboard clone clone warns on duplicate name - Just ran into this test that passed second time around, after no changes, so it must be flaky.
```
03:35:17.578 │ debg existsByDisplayedByCssSelector [data-test-subj~="dashboardLandingPage"]
03:35:18.626 │ debg clickDashboardBreadcrumbLink
03:35:18.691 │ debg TestSubjects.find(searchFilter)
03:35:18.692 │ debg in displayedByCssSelector: [data-test-subj~="searchFilter"]
03:35:28.745 │ debg --- tryForTime failure: An element could not be located on the page using the given search parameters.
03:35:39.287 │ debg --- tryForTime failed again with the same message ...
03:35:49.856 │ debg --- tryForTime failed again with the same message ...
03:36:00.425 │ debg --- tryForTime failed again with the same message ...
03:36:00.992 │ debg --- tryForTime failure: tryForTime timeout: NoSuchElement: An element could not be located on the page using the given search parameters.
03:36:00.995 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/leadfoot/lib/findDisplayed.js:37:21
03:36:00.995 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/dojo/Promise.js:156:41
03:36:00.995 │ at run (/var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/dojo/Promise.js:51:33)
03:36:00.995 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/dojo/nextTick.js:35:17
03:36:00.995 │ at _combinedTickCallback (internal/process/next_tick.js:73:7)
03:36:00.995 │ at process._tickDomainCallback (internal/process/next_tick.js:128:9)
03:36:00.995 │ at Command.findDisplayed (/var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/leadfoot/Command.js:23:10)
....
(/var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/bluebird/js/main/async.js:15:14)
03:36:01.002 │ at runCallback (timers.js:672:20)
03:36:01.003 │ at tryOnImmediate (timers.js:645:5)
03:36:01.003 │ at processImmediate [as _immediateCallback] (timers.js:617:5)
03:36:01.498 │ debg --- tryForTime failure: tryForTime timeout: Error: tryForTime timeout: NoSuchElement: An element could not be located on the page using the given search parameters.
03:36:01.500 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/leadfoot/lib/findDisplayed.js:37:21
03:36:01.500 │ at /var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/node_modules/dojo/Promise.js:156:41
......
03:36:01.507 │ at tryOnImmediate (timers.js:645:5)
03:36:01.507 │ at processImmediate [as _immediateCallback] (timers.js:617:5)
03:36:02.004 │ debg Taking screenshot "/var/lib/jenkins/workspace/elastic+kibana+pull-request+multijob-selenium/test/functional/screenshots/failure/dashboard app dashboard clone clone warns on duplicate name.png"
03:36:02.091 └- ✖ fail: "dashboard app dashboard clone clone warns on duplicate name"
03:36:02.093 │ tryForTime timeout: Error: tryForTime timeout: Error: tryForTime timeout: NoSuchElement: An element could not be located on the page using the given search parameters.
```
Accompanying screenshot gives a clue:
<img width="1215" alt="screen shot 2017-08-02 at 1 37 06 pm" src="https://user-images.githubusercontent.com/16563603/28886513-c81c1370-7787-11e7-9883-45ec10c4bb97.png">
Looks like `clickDashboardBreadcrumbLink` failed to actually load up the landing page. | test | flaky test dashboard app dashboard clone clone warns on duplicate name just ran into this test that passed second time around after no changes so it must be flaky │ debg existsbydisplayedbycssselector │ debg clickdashboardbreadcrumblink │ debg testsubjects find searchfilter │ debg in displayedbycssselector │ debg tryfortime failure an element could not be located on the page using the given search parameters │ debg tryfortime failed again with the same message │ debg tryfortime failed again with the same message │ debg tryfortime failed again with the same message │ debg tryfortime failure tryfortime timeout nosuchelement an element could not be located on the page using the given search parameters │ at var lib jenkins workspace elastic kibana pull request multijob selenium node modules leadfoot lib finddisplayed js │ at var lib jenkins workspace elastic kibana pull request multijob selenium node modules dojo promise js │ at run var lib jenkins workspace elastic kibana pull request multijob selenium node modules dojo promise js │ at var lib jenkins workspace elastic kibana pull request multijob selenium node modules dojo nexttick js │ at combinedtickcallback internal process next tick js │ at process tickdomaincallback internal process next tick js │ at command finddisplayed var lib jenkins workspace elastic kibana pull request multijob selenium node modules leadfoot command js var lib jenkins workspace elastic kibana pull request multijob selenium node modules bluebird js main async js │ at runcallback timers js │ at tryonimmediate timers js │ at processimmediate timers js │ debg tryfortime failure tryfortime timeout error tryfortime timeout nosuchelement an element could not be located on the page using the given search parameters │ at var lib jenkins workspace elastic kibana pull request multijob selenium node modules leadfoot lib finddisplayed js │ at var lib jenkins 
workspace elastic kibana pull request multijob selenium node modules dojo promise js │ at tryonimmediate timers js │ at processimmediate timers js │ debg taking screenshot var lib jenkins workspace elastic kibana pull request multijob selenium test functional screenshots failure dashboard app dashboard clone clone warns on duplicate name png └ ✖ fail dashboard app dashboard clone clone warns on duplicate name │ tryfortime timeout error tryfortime timeout error tryfortime timeout nosuchelement an element could not be located on the page using the given search parameters accompanying screenshot gives a clue img width alt screen shot at pm src looks like clickdashboardbreadcrumblink failed to actually load up the landing page | 1 |
92,461 | 15,857,082,277 | IssuesEvent | 2021-04-08 03:55:23 | f00b4rb00f/test-project | https://api.github.com/repos/f00b4rb00f/test-project | opened | CVE-2020-7792 (High) detected in mout-0.9.1.tgz | security vulnerability | ## CVE-2020-7792 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mout-0.9.1.tgz</b></p></summary>
<p>Modular Utilities</p>
<p>Library home page: <a href="https://registry.npmjs.org/mout/-/mout-0.9.1.tgz">https://registry.npmjs.org/mout/-/mout-0.9.1.tgz</a></p>
<p>Path to dependency file: /test-project/attacker-app/package.json</p>
<p>Path to vulnerable library: test-project/attacker-app/node_modules/mout/package.json,test-project/attacker-app/node_modules/mout/package.json</p>
<p>
Dependency Hierarchy:
- wiredep-2.2.2.tgz (Root Library)
- bower-config-0.5.3.tgz
- :x: **mout-0.9.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package mout. The deepFillIn function can be used to 'fill missing properties recursively', while the deepMixIn 'mixes objects into the target object, recursively mixing existing child objects as well'. In both cases, the key used to access the target object recursively is not checked, leading to a Prototype Pollution.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7792>CVE-2020-7792</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7792 (High) detected in mout-0.9.1.tgz - ## CVE-2020-7792 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mout-0.9.1.tgz</b></p></summary>
<p>Modular Utilities</p>
<p>Library home page: <a href="https://registry.npmjs.org/mout/-/mout-0.9.1.tgz">https://registry.npmjs.org/mout/-/mout-0.9.1.tgz</a></p>
<p>Path to dependency file: /test-project/attacker-app/package.json</p>
<p>Path to vulnerable library: test-project/attacker-app/node_modules/mout/package.json,test-project/attacker-app/node_modules/mout/package.json</p>
<p>
Dependency Hierarchy:
- wiredep-2.2.2.tgz (Root Library)
- bower-config-0.5.3.tgz
- :x: **mout-0.9.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package mout. The deepFillIn function can be used to 'fill missing properties recursively', while the deepMixIn 'mixes objects into the target object, recursively mixing existing child objects as well'. In both cases, the key used to access the target object recursively is not checked, leading to a Prototype Pollution.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7792>CVE-2020-7792</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in mout tgz cve high severity vulnerability vulnerable library mout tgz modular utilities library home page a href path to dependency file test project attacker app package json path to vulnerable library test project attacker app node modules mout package json test project attacker app node modules mout package json dependency hierarchy wiredep tgz root library bower config tgz x mout tgz vulnerable library vulnerability details this affects all versions of package mout the deepfillin function can be used to fill missing properties recursively while the deepmixin mixes objects into the target object recursively mixing existing child objects as well in both cases the key used to access the target object recursively is not checked leading to a prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
197,742 | 14,941,189,546 | IssuesEvent | 2021-01-25 19:23:15 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: replicate/up/1to3 failed | C-test-failure O-roachtest O-robot branch-master release-blocker | [(roachtest).replicate/up/1to3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2607609&tab=buildLog) on [master@7a5d3d0f9465c4d6a6f204ac6a672a027cf585a6](https://github.com/cockroachdb/cockroach/commits/7a5d3d0f9465c4d6a6f204ac6a672a027cf585a6):
```
cluster.go:2687,allocator.go:70,allocator.go:78,test_runner.go:767: monitor failure: monitor task failed: read tcp 172.17.0.3:60002 -> 35.222.248.119:26257: read: connection reset by peer
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2675
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2683
| main.registerAllocator.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/allocator.go:70
| main.registerAllocator.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/allocator.go:78
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2731
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (4) monitor task failed
Wraps: (5) read tcp 172.17.0.3:60002 -> 35.222.248.119:26257
Wraps: (6) read
Wraps: (7) connection reset by peer
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *net.OpError (6) *os.SyscallError (7) syscall.Errno
cluster.go:1666,context.go:140,cluster.go:1655,test_runner.go:848: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-2607609-1611387508-22-n3cpu4 --oneshot --ignore-empty-nodes: exit status 1 3: 6873
1: dead
2: 6812
Error: UNCLASSIFIED_PROBLEM: 1: dead
(1) UNCLASSIFIED_PROBLEM
Wraps: (2) attached stack trace
-- stack trace:
| main.glob..func14
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1147
| main.wrap.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:271
| github.com/spf13/cobra.(*Command).execute
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:830
| github.com/spf13/cobra.(*Command).ExecuteC
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:914
| github.com/spf13/cobra.(*Command).Execute
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:864
| main.main
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1852
| runtime.main
| /usr/local/go/src/runtime/proc.go:204
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (3) 1: dead
Error types: (1) errors.Unclassified (2) *withstack.withStack (3) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/replicate/up/1to3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2607609&tab=artifacts#/replicate/up/1to3)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Areplicate%2Fup%2F1to3.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: replicate/up/1to3 failed - [(roachtest).replicate/up/1to3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2607609&tab=buildLog) on [master@7a5d3d0f9465c4d6a6f204ac6a672a027cf585a6](https://github.com/cockroachdb/cockroach/commits/7a5d3d0f9465c4d6a6f204ac6a672a027cf585a6):
```
cluster.go:2687,allocator.go:70,allocator.go:78,test_runner.go:767: monitor failure: monitor task failed: read tcp 172.17.0.3:60002 -> 35.222.248.119:26257: read: connection reset by peer
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2675
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2683
| main.registerAllocator.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/allocator.go:70
| main.registerAllocator.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/allocator.go:78
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2731
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (4) monitor task failed
Wraps: (5) read tcp 172.17.0.3:60002 -> 35.222.248.119:26257
Wraps: (6) read
Wraps: (7) connection reset by peer
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *net.OpError (6) *os.SyscallError (7) syscall.Errno
cluster.go:1666,context.go:140,cluster.go:1655,test_runner.go:848: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-2607609-1611387508-22-n3cpu4 --oneshot --ignore-empty-nodes: exit status 1 3: 6873
1: dead
2: 6812
Error: UNCLASSIFIED_PROBLEM: 1: dead
(1) UNCLASSIFIED_PROBLEM
Wraps: (2) attached stack trace
-- stack trace:
| main.glob..func14
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1147
| main.wrap.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:271
| github.com/spf13/cobra.(*Command).execute
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:830
| github.com/spf13/cobra.(*Command).ExecuteC
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:914
| github.com/spf13/cobra.(*Command).Execute
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:864
| main.main
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1852
| runtime.main
| /usr/local/go/src/runtime/proc.go:204
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (3) 1: dead
Error types: (1) errors.Unclassified (2) *withstack.withStack (3) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/replicate/up/1to3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2607609&tab=artifacts#/replicate/up/1to3)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Areplicate%2Fup%2F1to3.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest replicate up failed on cluster go allocator go allocator go test runner go monitor failure monitor task failed read tcp read connection reset by peer attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerallocator home agent work go src github com cockroachdb cockroach pkg cmd roachtest allocator go main registerallocator home agent work go src github com cockroachdb cockroach pkg cmd roachtest allocator go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime goexit usr local go src runtime asm s wraps monitor task failed wraps read tcp wraps read wraps connection reset by peer error types withstack withstack errutil withprefix withstack withstack errutil withprefix net operror os syscallerror syscall errno cluster go context go cluster go test runner go dead node detection home agent work go src github com cockroachdb cockroach bin roachprod monitor teamcity oneshot ignore empty nodes exit status dead error unclassified problem dead unclassified problem wraps attached stack trace stack trace main glob home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go main wrap home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cobra command executec home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go main main home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go 
runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps dead error types errors unclassified withstack withstack errutil leaferror more artifacts powered by | 1 |
286,192 | 31,389,721,119 | IssuesEvent | 2023-08-26 07:10:19 | inmar/patience_js | https://api.github.com/repos/inmar/patience_js | closed | CVE-2023-26118 (High) detected in angular-1.8.2.min.js - autoclosed | Mend: dependency security vulnerability | ## CVE-2023-26118 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.8.2.min.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.8.2/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.8.2/angular.min.js</a></p>
<p>Path to dependency file: /demo/index.html</p>
<p>Path to vulnerable library: /demo/bower_components/angular/angular.min.js</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.8.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/inmar/patience_js/commit/13467d4317daf190edff4876cbb7b9044d46feec">13467d4317daf190edff4876cbb7b9044d46feec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of the package angular are vulnerable to Regular Expression Denial of Service (ReDoS) via the <input type="url"> element due to the usage of an insecure regular expression in the input[url] functionality. Exploiting this vulnerability is possible by a large carefully-crafted input, which can result in catastrophic backtracking.
<p>Publish Date: 2023-03-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-26118>CVE-2023-26118</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
| True | CVE-2023-26118 (High) detected in angular-1.8.2.min.js - autoclosed - ## CVE-2023-26118 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.8.2.min.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.8.2/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.8.2/angular.min.js</a></p>
<p>Path to dependency file: /demo/index.html</p>
<p>Path to vulnerable library: /demo/bower_components/angular/angular.min.js</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.8.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/inmar/patience_js/commit/13467d4317daf190edff4876cbb7b9044d46feec">13467d4317daf190edff4876cbb7b9044d46feec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of the package angular are vulnerable to Regular Expression Denial of Service (ReDoS) via the <input type="url"> element due to the usage of an insecure regular expression in the input[url] functionality. Exploiting this vulnerability is possible by a large carefully-crafted input, which can result in catastrophic backtracking.
<p>Publish Date: 2023-03-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-26118>CVE-2023-26118</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
| non_test | cve high detected in angular min js autoclosed cve high severity vulnerability vulnerable library angular min js angularjs is an mvc framework for building web applications the core features include html enhanced with custom component and data binding capabilities dependency injection and strong focus on simplicity testability maintainability and boiler plate reduction library home page a href path to dependency file demo index html path to vulnerable library demo bower components angular angular min js dependency hierarchy x angular min js vulnerable library found in head commit a href found in base branch master vulnerability details all versions of the package angular are vulnerable to regular expression denial of service redos via the element due to the usage of an insecure regular expression in the input functionality exploiting this vulnerability is possible by a large carefully crafted input which can result in catastrophic backtracking publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href | 0 |
127,921 | 10,500,214,020 | IssuesEvent | 2019-09-26 10:00:00 | owncloud/phoenix | https://api.github.com/repos/owncloud/phoenix | closed | convert to new drone version | Acceptance tests QA-team | - [x] make CI pass by `drone convert --save .drone.yml` #1729
- [ ] create `.drone.jsonnet` file for the build process | 1.0 | convert to new drone version - - [x] make CI pass by `drone convert --save .drone.yml` #1729
- [ ] create `.drone.jsonnet` file for the build process | test | convert to new drone version make ci pass by drone convert save drone yml create drone jsonnet file for the build process | 1 |
37,557 | 6,621,226,791 | IssuesEvent | 2017-09-21 18:21:39 | kubernetes/test-infra | https://api.github.com/repos/kubernetes/test-infra | closed | Document Approvers/Owners Process | kind/documentation | Need some longterm explanation of how to add approvers and reviewers and who can approve.
(A lot can be reused from the design document) | 1.0 | Document Approvers/Owners Process - Need some longterm explanation of how to add approvers and reviewers and who can approve.
(A lot can be reused from the design document) | non_test | document approvers owners process need some longterm explanation of how to add approvers and reviewers and who can approve a lot can be reused from the design document | 0 |
50,141 | 3,006,206,509 | IssuesEvent | 2015-07-27 08:52:51 | N4SJAMK/teamboard-client-react | https://api.github.com/repos/N4SJAMK/teamboard-client-react | closed | iPad: Creating ticket header with ipad | HIGH PRIORITY | If you write header and press "go" from keyboard; ticket is not saved.
while writing content there is no "go" available to press.
Maybe the "go"-button should save the ticket. | 1.0 | iPad: Creating ticket header with ipad - If you write header and press "go" from keyboard; ticket is not saved.
while writing content there is no "go" available to press.
Maybe the "go"-button should save the ticket. | non_test | ipad creating ticket header with ipad if you write header and press go from keyboard ticket is not saved while writing content there is no go available to press maybe the go button should save the ticket | 0 |
818,116 | 30,671,506,800 | IssuesEvent | 2023-07-25 23:06:37 | googleapis/repo-automation-bots | https://api.github.com/repos/googleapis/repo-automation-bots | closed | Error: `Server Error: Sorry, this diff is taking too long to generate.: {"resource":"PullRequest","field":"diff","code":"not_available"}` | type: bug bot: auto label priority: p3 | Stack trace:
```
HttpError: Server Error: Sorry, this diff is taking too long to generate.: {"resource":"PullRequest","field":"diff","code":"not_available"}
at /workspace/node_modules/@octokit/request/dist-node/index.js:88:21
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async sendRequestWithRetries (/workspace/node_modules/@octokit/auth-app/dist-node/index.js:376:12)
at async Object.next (/workspace/node_modules/@octokit/plugin-paginate-rest/dist-node/index.js:71:28)
at async MultiConfigChecker.validateConfigChanges (/workspace/node_modules/@google-automations/bot-config-utils/build/src/bot-config-utils.js:245:30)
at async ConfigChecker.validateConfigChanges (/workspace/node_modules/@google-automations/bot-config-utils/build/src/bot-config-utils.js:169:16)
at async /workspace/build/src/auto-label.js:462:9
at async Promise.all (index 0)
at async /workspace/node_modules/gcf-utils/build/src/gcf-utils.js:389:25
```
This is in the code that tries to validate schema of the auto-label bot config file. This happened on a third-party installation for a PR that has 119 commits and the GitHub UI shows "infinite" files changed. | 1.0 | Error: `Server Error: Sorry, this diff is taking too long to generate.: {"resource":"PullRequest","field":"diff","code":"not_available"}` - Stack trace:
```
HttpError: Server Error: Sorry, this diff is taking too long to generate.: {"resource":"PullRequest","field":"diff","code":"not_available"}
at /workspace/node_modules/@octokit/request/dist-node/index.js:88:21
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async sendRequestWithRetries (/workspace/node_modules/@octokit/auth-app/dist-node/index.js:376:12)
at async Object.next (/workspace/node_modules/@octokit/plugin-paginate-rest/dist-node/index.js:71:28)
at async MultiConfigChecker.validateConfigChanges (/workspace/node_modules/@google-automations/bot-config-utils/build/src/bot-config-utils.js:245:30)
at async ConfigChecker.validateConfigChanges (/workspace/node_modules/@google-automations/bot-config-utils/build/src/bot-config-utils.js:169:16)
at async /workspace/build/src/auto-label.js:462:9
at async Promise.all (index 0)
at async /workspace/node_modules/gcf-utils/build/src/gcf-utils.js:389:25
```
This is in the code that tries to validate schema of the auto-label bot config file. This happened on a third-party installation for a PR that has 119 commits and the GitHub UI shows "infinite" files changed. | non_test | error server error sorry this diff is taking too long to generate resource pullrequest field diff code not available stack trace httperror server error sorry this diff is taking too long to generate resource pullrequest field diff code not available at workspace node modules octokit request dist node index js at runmicrotasks at processticksandrejections internal process task queues js at async sendrequestwithretries workspace node modules octokit auth app dist node index js at async object next workspace node modules octokit plugin paginate rest dist node index js at async multiconfigchecker validateconfigchanges workspace node modules google automations bot config utils build src bot config utils js at async configchecker validateconfigchanges workspace node modules google automations bot config utils build src bot config utils js at async workspace build src auto label js at async promise all index at async workspace node modules gcf utils build src gcf utils js this is in the code that tries to validate schema of the auto label bot config file this happened on a third party installation for a pr that has commits and the github ui shows infinite files changed | 0 |
165,914 | 26,251,537,182 | IssuesEvent | 2023-01-05 19:47:39 | pulumi/pulumi-azure-native | https://api.github.com/repos/pulumi/pulumi-azure-native | closed | Add Azure lighthouse resources to azure-native | kind/enhancement resolution/by-design | ## Hello!
<!-- Please leave this section as-is, it's designed to help others in the community know how to interact with our GitHub issues. -->
- Vote on this issue by adding a 👍 reaction
- If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
## Issue details
We would like to manage Azure Lighthouse (https://learn.microsoft.com/en-us/azure/lighthouse/overview) resources for some of our use cases (allowing resource delegation across Azure AD tenants, e.g. for Azure Monitor access from an Azure B2C Tenant). I can see that the old pulumi azure library had a namespace for the two relevant azure resources, Definition and Assignment: https://www.pulumi.com/registry/packages/azure/api-docs/lighthouse/
I could not find any old issue even mentioning the term lighthouse. Is there a specific reason why these resources were not ported, or not ported yet? Are there APIs missing?
<!-- Enhancement requests are most helpful when they describe the problem you're having as well as articulating the potential solution you'd like to see built. -->
### Affected area/feature
Pulumi azure-native library, problably a `lighthouse` namespace like in the old lib.
<!-- If you know the specific area where this feature request would go (e.g. Automation API, the Pulumi Service, the Terraform bridge, etc.), feel free to put that area here. -->
| 1.0 | Add Azure lighthouse resources to azure-native - ## Hello!
<!-- Please leave this section as-is, it's designed to help others in the community know how to interact with our GitHub issues. -->
- Vote on this issue by adding a 👍 reaction
- If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
## Issue details
We would like to manage Azure Lighthouse (https://learn.microsoft.com/en-us/azure/lighthouse/overview) resources for some of our use cases (allowing resource delegation across Azure AD tenants, e.g. for Azure Monitor access from an Azure B2C Tenant). I can see that the old pulumi azure library had a namespace for the two relevant azure resources, Definition and Assignment: https://www.pulumi.com/registry/packages/azure/api-docs/lighthouse/
I could not find any old issue even mentioning the term lighthouse. Is there a specific reason why these resources were not ported, or not ported yet? Are there APIs missing?
<!-- Enhancement requests are most helpful when they describe the problem you're having as well as articulating the potential solution you'd like to see built. -->
### Affected area/feature
Pulumi azure-native library, problably a `lighthouse` namespace like in the old lib.
<!-- If you know the specific area where this feature request would go (e.g. Automation API, the Pulumi Service, the Terraform bridge, etc.), feel free to put that area here. -->
| non_test | add azure lighthouse resources to azure native hello vote on this issue by adding a 👍 reaction if you want to implement this feature comment to let us know we ll work with you on design scheduling etc issue details we would like to manage azure lighthouse resources for some of our use cases allowing resource delegation across azure ad tenants e g for azure monitor access from an azure tenant i can see that the old pulumi azure library had a namespace for the two relevant azure resources definition and assignment i could not find any old issue even mentioning the term lighthouse is there a specific reason why these resources were not ported or not ported yet are there apis missing affected area feature pulumi azure native library problably a lighthouse namespace like in the old lib | 0 |