Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
853
| labels
stringlengths 4
898
| body
stringlengths 2
262k
| index
stringclasses 13
values | text_combine
stringlengths 96
262k
| label
stringclasses 2
values | text
stringlengths 96
250k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
44,348
| 7,106,326,261
|
IssuesEvent
|
2018-01-16 16:18:22
|
unicef/magicbox
|
https://api.github.com/repos/unicef/magicbox
|
opened
|
Choose documentation platform
|
documentation needs feedback priority:crit
|
# Summary
Establish a documentation platform for hosted documentation about MagicBox projects
# Background
Effective documentation is critical in technical projects – open source or otherwise. However, it's especially helpful for MagicBox because of the small team size and need to engage with the open source community.
Better documentation for MagicBox means…
1. New contributors / contractors learn the pipeline / software quicker
2. Design decisions are better documented (less time back-tracking)
3. Enables other contributor-oriented documentation (e.g. issue / PR templates, see #3)
4. Easier to tell a story (e.g. Opensource.com, see #8)
So far, a lot of highly technical information is shared through Medium blog posts, which is helpful, but blogging isn't effective as a long-term strategy for referencing technical information for MagicBox. This type of information should live in documentation, while stories behind how it happens is better suited for blog posts.
# Details
Choosing a platform should be based on what is most intuitive and easiest to maintain for the team. A popular platform is [ReadTheDocs](https://readthedocs.org/), which uses a Python-based toolchain. Thus, it also makes it an especially fantastic tool for Python projects, but it is still great for any programming language regardless.
I want to do additional research to see if there is a Node / JavaScript-tailored docs platform, since many of our projects are in Node. However, I think ReadTheDocs may still be the best solution for this.
# Outcome
Better documentation means a better experience for people to contribute to our projects; enables us to bring new people into the community easier and also provide reference points for various design decisions in the future (i.e. why we did something the way we did)
|
1.0
|
Choose documentation platform - # Summary
Establish a documentation platform for hosted documentation about MagicBox projects
# Background
Effective documentation is critical in technical projects – open source or otherwise. However, it's especially helpful for MagicBox because of the small team size and need to engage with the open source community.
Better documentation for MagicBox means…
1. New contributors / contractors learn the pipeline / software quicker
2. Design decisions are better documented (less time back-tracking)
3. Enables other contributor-oriented documentation (e.g. issue / PR templates, see #3)
4. Easier to tell a story (e.g. Opensource.com, see #8)
So far, a lot of highly technical information is shared through Medium blog posts, which is helpful, but blogging isn't effective as a long-term strategy for referencing technical information for MagicBox. This type of information should live in documentation, while stories behind how it happens is better suited for blog posts.
# Details
Choosing a platform should be based on what is most intuitive and easiest to maintain for the team. A popular platform is [ReadTheDocs](https://readthedocs.org/), which uses a Python-based toolchain. Thus, it also makes it an especially fantastic tool for Python projects, but it is still great for any programming language regardless.
I want to do additional research to see if there is a Node / JavaScript-tailored docs platform, since many of our projects are in Node. However, I think ReadTheDocs may still be the best solution for this.
# Outcome
Better documentation means a better experience for people to contribute to our projects; enables us to bring new people into the community easier and also provide reference points for various design decisions in the future (i.e. why we did something the way we did)
|
non_build
|
choose documentation platform summary establish a documentation platform for hosted documentation about magicbox projects background effective documentation is critical in technical projects – open source or otherwise however it s especially helpful for magicbox because of the small team size and need to engage with the open source community better documentation for magicbox means… new contributors contractors learn the pipeline software quicker design decisions are better documented less time back tracking enables other contributor oriented documentation e g issue pr templates see easier to tell a story e g opensource com see so far a lot of highly technical information is shared through medium blog posts which is helpful but blogging isn t effective as a long term strategy for referencing technical information for magicbox this type of information should live in documentation while stories behind how it happens is better suited for blog posts details choosing a platform should be based on what is most intuitive and easiest to maintain for the team a popular platform is which uses a python based toolchain thus it also makes it an especially fantastic tool for python projects but it is still great for any programming language regardless i want to do additional research to see if there is a node javascript tailored docs platform since many of our projects are in node however i think readthedocs may still be the best solution for this outcome better documentation means a better experience for people to contribute to our projects enables us to bring new people into the community easier and also provide reference points for various design decisions in the future i e why we did something the way we did
| 0
|
90,701
| 26,171,906,428
|
IssuesEvent
|
2023-01-02 01:42:45
|
apache/beam
|
https://api.github.com/repos/apache/beam
|
closed
|
beam_PreCommit_PythonLint_Commit takes 50+ minutes to execute
|
build infra P3 bug jenkins
|
The beam_PreCommit_PythonLint_Commit phase takes 50**** minutes to execute as seen here:
[https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4252/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4252/) (53 minutes)
[https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4253/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4253/) (52 minutes)
According to the build blame report the mean time is ~10 minutes ([https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/buildTimeBlameReport/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/buildTimeBlameReport/))
Imported from Jira [BEAM-10255](https://issues.apache.org/jira/browse/BEAM-10255). Original Jira may contain additional context.
Reported by: tysonjh.
|
1.0
|
beam_PreCommit_PythonLint_Commit takes 50+ minutes to execute - The beam_PreCommit_PythonLint_Commit phase takes 50**** minutes to execute as seen here:
[https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4252/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4252/) (53 minutes)
[https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4253/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4253/) (52 minutes)
According to the build blame report the mean time is ~10 minutes ([https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/buildTimeBlameReport/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/buildTimeBlameReport/))
Imported from Jira [BEAM-10255](https://issues.apache.org/jira/browse/BEAM-10255). Original Jira may contain additional context.
Reported by: tysonjh.
|
build
|
beam precommit pythonlint commit takes minutes to execute the beam precommit pythonlint commit phase takes minutes to execute as seen here minutes minutes according to the build blame report the mean time is minutes imported from jira original jira may contain additional context reported by tysonjh
| 1
|
594,487
| 18,046,665,930
|
IssuesEvent
|
2021-09-19 02:09:16
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
C# WaitForReady is hard to set for a gRPC call
|
kind/enhancement lang/C# priority/P2 disposition/stale
|
`CallOptions` has a `WaitForReady` value and a fluent method for setting it to true. However, there is no option to set it to true on the CallOptions constructor.
Getter property:
https://github.com/grpc/grpc/blob/7125bbe5a564e30ad63c4f19bed89e7676cb7c29/src/csharp/Grpc.Core.Api/CallOptions.cs#L114-L122
Set method:
https://github.com/grpc/grpc/blob/7125bbe5a564e30ad63c4f19bed89e7676cb7c29/src/csharp/Grpc.Core.Api/CallOptions.cs#L204-L216
Ctor: (no option for wait for ready)
https://github.com/grpc/grpc/blob/7125bbe5a564e30ad63c4f19bed89e7676cb7c29/src/csharp/Grpc.Core.Api/CallOptions.cs#L39-L58
Also, there is no option for WaitForReady in gRPC generated code:
https://github.com/grpc/grpc/blob/fd3bd70939fb4239639fbd26143ec416366e4157/src/csharp/Grpc.IntegrationTesting/TestGrpc.cs#L289-L300
Setting it to true requires code like:
```cs
new CallOptions(deadline: DateTime.UtcNow.AddSeconds(5)).WithWaitForReady()
```
|
1.0
|
C# WaitForReady is hard to set for a gRPC call - `CallOptions` has a `WaitForReady` value and a fluent method for setting it to true. However, there is no option to set it to true on the CallOptions constructor.
Getter property:
https://github.com/grpc/grpc/blob/7125bbe5a564e30ad63c4f19bed89e7676cb7c29/src/csharp/Grpc.Core.Api/CallOptions.cs#L114-L122
Set method:
https://github.com/grpc/grpc/blob/7125bbe5a564e30ad63c4f19bed89e7676cb7c29/src/csharp/Grpc.Core.Api/CallOptions.cs#L204-L216
Ctor: (no option for wait for ready)
https://github.com/grpc/grpc/blob/7125bbe5a564e30ad63c4f19bed89e7676cb7c29/src/csharp/Grpc.Core.Api/CallOptions.cs#L39-L58
Also, there is no option for WaitForReady in gRPC generated code:
https://github.com/grpc/grpc/blob/fd3bd70939fb4239639fbd26143ec416366e4157/src/csharp/Grpc.IntegrationTesting/TestGrpc.cs#L289-L300
Setting it to true requires code like:
```cs
new CallOptions(deadline: DateTime.UtcNow.AddSeconds(5)).WithWaitForReady()
```
|
non_build
|
c waitforready is hard to set for a grpc call calloptions has a waitforready value and a fluent method for setting it to true however there is no option to set it to true on the calloptions constructor getter property set method ctor no option for wait for ready also there is no option for waitforready in grpc generated code setting it to true requires code like cs new calloptions deadline datetime utcnow addseconds withwaitforready
| 0
|
342,162
| 10,312,882,784
|
IssuesEvent
|
2019-08-29 21:00:44
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
opened
|
[craftercms] Change our backup/restore to use tar and gzip
|
enhancement priority: medium
|
Since we no longer run in Windows (we support Linux Docker images in Windows), we can now support backing up with tar and gzip, instead of using our own zipping utility, since tar is more reliable.
|
1.0
|
[craftercms] Change our backup/restore to use tar and gzip - Since we no longer run in Windows (we support Linux Docker images in Windows), we can now support backing up with tar and gzip, instead of using our own zipping utility, since tar is more reliable.
|
non_build
|
change our backup restore to use tar and gzip since we no longer run in windows we support linux docker images in windows we can now support backing up with tar and gzip instead of using our own zipping utility since tar is more reliable
| 0
|
38,857
| 10,256,921,612
|
IssuesEvent
|
2019-08-21 18:49:52
|
tensorflow/tfjs
|
https://api.github.com/repos/tensorflow/tfjs
|
closed
|
tfjs-examples: Simple object detection: yarn train --gpu fails
|
type:build/install
|
#### TensorFlow.js version
1.2.2
#### Browser version
Windows Version 10.0.17134 Build 17134
Node v10.15.0
### Problem description
**Install appears to succeed (with warnings)**
```
yarn install v1.17.3
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
info fsevents@1.2.4: The platform "win32" is incompatible with this module.
info "fsevents@1.2.4" is an optional dependency and failed compatibility check. Excluding it from installation.
[4/5] Linking dependencies...
warning "@tensorflow/tfjs > @tensorflow/tfjs-data@1.2.2" has unmet peer dependency "seedrandom@~2.4.3".
warning "@tensorflow/tfjs > @tensorflow/tfjs-core > rollup-plugin-visualizer@1.1.1" has unmet peer dependency "rollup@>=0.60.0".
[5/5] Building fresh packages...
Done in 139.97s.
```
**Training with the gpu flag fails**
```
$ yarn train --gpu
yarn run v1.17.3
$ node train.js --gpu
Training using GPU.
cpu backend was already registered. Reusing existing backend factory.
Platform node has already been set. Overwriting the platform with [object Object].
node-pre-gyp info This Node instance does not support builds for N-API version 4
node-pre-gyp info This Node instance does not support builds for N-API version 4
(node:14276) UnhandledPromiseRejectionWarning: Error: The specified module could not be found.
\\?\C:\Users\Ian\projects\tfjs\tfjs-examples\simple-object-detection\node_modules\@tensorflow\tfjs-node-gpu\lib\napi-v3\tfjs_binding.node
at Object.Module._extensions..node (internal/modules/cjs/loader.js:718:18)
at Module.load (internal/modules/cjs/loader.js:599:32)
at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
at Function.Module._load (internal/modules/cjs/loader.js:530:3)
at Module.require (internal/modules/cjs/loader.js:637:17)
at require (internal/modules/cjs/helpers.js:22:18)
at Object.<anonymous> (C:\Users\Ian\projects\tfjs\tfjs-examples\simple-object-detection\node_modules\@tensorflow\tfjs-node-gpu\dist\index.js:44:16)
at Module._compile (internal/modules/cjs/loader.js:689:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
at Module.load (internal/modules/cjs/loader.js:599:32)
(node:14276) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:14276) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Done in 1.74s.
```
|
1.0
|
tfjs-examples: Simple object detection: yarn train --gpu fails - #### TensorFlow.js version
1.2.2
#### Browser version
Windows Version 10.0.17134 Build 17134
Node v10.15.0
### Problem description
**Install appears to succeed (with warnings)**
```
yarn install v1.17.3
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
info fsevents@1.2.4: The platform "win32" is incompatible with this module.
info "fsevents@1.2.4" is an optional dependency and failed compatibility check. Excluding it from installation.
[4/5] Linking dependencies...
warning "@tensorflow/tfjs > @tensorflow/tfjs-data@1.2.2" has unmet peer dependency "seedrandom@~2.4.3".
warning "@tensorflow/tfjs > @tensorflow/tfjs-core > rollup-plugin-visualizer@1.1.1" has unmet peer dependency "rollup@>=0.60.0".
[5/5] Building fresh packages...
Done in 139.97s.
```
**Training with the gpu flag fails**
```
$ yarn train --gpu
yarn run v1.17.3
$ node train.js --gpu
Training using GPU.
cpu backend was already registered. Reusing existing backend factory.
Platform node has already been set. Overwriting the platform with [object Object].
node-pre-gyp info This Node instance does not support builds for N-API version 4
node-pre-gyp info This Node instance does not support builds for N-API version 4
(node:14276) UnhandledPromiseRejectionWarning: Error: The specified module could not be found.
\\?\C:\Users\Ian\projects\tfjs\tfjs-examples\simple-object-detection\node_modules\@tensorflow\tfjs-node-gpu\lib\napi-v3\tfjs_binding.node
at Object.Module._extensions..node (internal/modules/cjs/loader.js:718:18)
at Module.load (internal/modules/cjs/loader.js:599:32)
at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
at Function.Module._load (internal/modules/cjs/loader.js:530:3)
at Module.require (internal/modules/cjs/loader.js:637:17)
at require (internal/modules/cjs/helpers.js:22:18)
at Object.<anonymous> (C:\Users\Ian\projects\tfjs\tfjs-examples\simple-object-detection\node_modules\@tensorflow\tfjs-node-gpu\dist\index.js:44:16)
at Module._compile (internal/modules/cjs/loader.js:689:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
at Module.load (internal/modules/cjs/loader.js:599:32)
(node:14276) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:14276) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Done in 1.74s.
```
|
build
|
tfjs examples simple object detection yarn train gpu fails tensorflow js version browser version windows version build node problem description install appears to succeed with warnings yarn install validating package json resolving packages fetching packages info fsevents the platform is incompatible with this module info fsevents is an optional dependency and failed compatibility check excluding it from installation linking dependencies warning tensorflow tfjs tensorflow tfjs data has unmet peer dependency seedrandom warning tensorflow tfjs tensorflow tfjs core rollup plugin visualizer has unmet peer dependency rollup building fresh packages done in training with the gpu flag fails yarn train gpu yarn run node train js gpu training using gpu cpu backend was already registered reusing existing backend factory platform node has already been set overwriting the platform with node pre gyp info this node instance does not support builds for n api version node pre gyp info this node instance does not support builds for n api version node unhandledpromiserejectionwarning error the specified module could not be found c users ian projects tfjs tfjs examples simple object detection node modules tensorflow tfjs node gpu lib napi tfjs binding node at object module extensions node internal modules cjs loader js at module load internal modules cjs loader js at trymoduleload internal modules cjs loader js at function module load internal modules cjs loader js at module require internal modules cjs loader js at require internal modules cjs helpers js at object c users ian projects tfjs tfjs examples simple object detection node modules tensorflow tfjs node gpu dist index js at module compile internal modules cjs loader js at object module extensions js internal modules cjs loader js at module load internal modules cjs loader js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code done in
| 1
|
16,229
| 10,679,502,822
|
IssuesEvent
|
2019-10-21 19:24:41
|
matomo-org/matomo
|
https://api.github.com/repos/matomo-org/matomo
|
closed
|
SegmentEditor: display readable values for some metrics (e.g. FF -> Firefox)
|
Enhancement Hacktoberfest c: Usability
|
When the segmenteditor is used, some select values are very technical / non user-friendly. These Values should be mapped / labeled to make them more meaningful.
For example this would make sense for:
- Browser - expects values like (FF, IE, MF, CM, IW, etc.) - This should be more readable (Firefox, Internet Explorer,??,??,??)
- Visit Location (Continent, Country)
- Operating system
- possibly others where it makes sense
|
True
|
SegmentEditor: display readable values for some metrics (e.g. FF -> Firefox) - When the segmenteditor is used, some select values are very technical / non user-friendly. These Values should be mapped / labeled to make them more meaningful.
For example this would make sense for:
- Browser - expects values like (FF, IE, MF, CM, IW, etc.) - This should be more readable (Firefox, Internet Explorer,??,??,??)
- Visit Location (Continent, Country)
- Operating system
- possibly others where it makes sense
|
non_build
|
segmenteditor display readable values for some metrics e g ff firefox when the segmenteditor is used some select values are very technical non user friendly these values should be mapped labeled to make them more meaningful for example this would make sense for browser expects values like ff ie mf cm iw etc this should be more readable firefox internet explorer visit location continent country operating system possibly others where it makes sense
| 0
|
59,376
| 17,023,110,759
|
IssuesEvent
|
2021-07-03 00:25:25
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
ConcurrentModificationException when reselecting segment of way.
|
Component: applet Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 1.15am, Tuesday, 18th April 2006]**
Here's a fun one:
If you make a slight mistake when selecting the segments, deselect the segment then later reselect it, it thinks you're concurrently modifying things... ...with yourself.
The applet then becomes unresponsive with:
```
java.util.ConcurrentModificationException
at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
at java.util.AbstractList$Itr.next(Unknown Source)
at org.openstreetmap.processing.OsmApplet.draw(OsmApplet.java:493)
at processing.core.PApplet.display(PApplet.java:1183)
at processing.core.PGraphics.requestDisplay(PGraphics.java:520)
at processing.core.PApplet.run(PApplet.java:1009)
at java.lang.Thread.run(Unknown Source)
```
Quite annoying if you've just selected ten screens worth of segments for a long way and you have to start again.
|
1.0
|
ConcurrentModificationException when reselecting segment of way. - **[Submitted to the original trac issue database at 1.15am, Tuesday, 18th April 2006]**
Here's a fun one:
If you make a slight mistake when selecting the segments, deselect the segment then later reselect it, it thinks you're concurrently modifying things... ...with yourself.
The applet then becomes unresponsive with:
```
java.util.ConcurrentModificationException
at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
at java.util.AbstractList$Itr.next(Unknown Source)
at org.openstreetmap.processing.OsmApplet.draw(OsmApplet.java:493)
at processing.core.PApplet.display(PApplet.java:1183)
at processing.core.PGraphics.requestDisplay(PGraphics.java:520)
at processing.core.PApplet.run(PApplet.java:1009)
at java.lang.Thread.run(Unknown Source)
```
Quite annoying if you've just selected ten screens worth of segments for a long way and you have to start again.
|
non_build
|
concurrentmodificationexception when reselecting segment of way here s a fun one if you make a slight mistake when selecting the segments deselect the segment then later reselect it it thinks you re concurrently modifying things with yourself the applet then becomes unresponsive with java util concurrentmodificationexception at java util abstractlist itr checkforcomodification unknown source at java util abstractlist itr next unknown source at org openstreetmap processing osmapplet draw osmapplet java at processing core papplet display papplet java at processing core pgraphics requestdisplay pgraphics java at processing core papplet run papplet java at java lang thread run unknown source quite annoying if you ve just selected ten screens worth of segments for a long way and you have to start again
| 0
|
63,892
| 15,729,091,239
|
IssuesEvent
|
2021-03-29 14:29:58
|
atlas-engineer/nyxt
|
https://api.github.com/repos/atlas-engineer/nyxt
|
opened
|
Guix reference scanner triggers all kinds of issues on grafted Nyxt builds
|
bug build high
|
Many users have reported breaking issues with the _grafted_ Nyxt build / install, such as GLib errors, etc.:#1103, #1241...
The issue is with Guix reference scanner, see https://issues.guix.gnu.org/33848.
|
1.0
|
Guix reference scanner triggers all kinds of issues on grafted Nyxt builds - Many users have reported breaking issues with the _grafted_ Nyxt build / install, such as GLib errors, etc.:#1103, #1241...
The issue is with Guix reference scanner, see https://issues.guix.gnu.org/33848.
|
build
|
guix reference scanner triggers all kinds of issues on grafted nyxt builds many users have reported breaking issues with the grafted nyxt build install such as glib errors etc the issue is with guix reference scanner see
| 1
|
151,044
| 19,648,332,255
|
IssuesEvent
|
2022-01-10 01:27:42
|
ekediala/ekediala
|
https://api.github.com/repos/ekediala/ekediala
|
opened
|
WS-2020-0042 (High) detected in acorn-6.3.0.tgz, acorn-5.7.3.tgz
|
security vulnerability
|
## WS-2020-0042 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>acorn-6.3.0.tgz</b>, <b>acorn-5.7.3.tgz</b></p></summary>
<p>
<details><summary><b>acorn-6.3.0.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-6.3.0.tgz">https://registry.npmjs.org/acorn/-/acorn-6.3.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-3.11.0.tgz (Root Library)
- :x: **acorn-6.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>acorn-5.7.3.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-5.7.3.tgz">https://registry.npmjs.org/acorn/-/acorn-5.7.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/espree/node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- cli-plugin-eslint-3.11.0.tgz (Root Library)
- eslint-4.19.1.tgz
- espree-3.5.4.tgz
- :x: **acorn-5.7.3.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
acorn is vulnerable to REGEX DoS. A regex of the form /[x-\ud800]/u causes the parser to enter an infinite loop. attackers may leverage the vulnerability leading to a Denial of Service since the string is not valid UTF16 and it results in it being sanitized before reaching the parser.
<p>Publish Date: 2020-03-01
<p>URL: <a href=https://github.com/acornjs/acorn/commit/b5c17877ac0511e31579ea31e7650ba1a5871e51>WS-2020-0042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1488">https://www.npmjs.com/advisories/1488</a></p>
<p>Release Date: 2020-03-01</p>
<p>Fix Resolution: 7.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2020-0042 (High) detected in acorn-6.3.0.tgz, acorn-5.7.3.tgz - ## WS-2020-0042 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>acorn-6.3.0.tgz</b>, <b>acorn-5.7.3.tgz</b></p></summary>
<p>
<details><summary><b>acorn-6.3.0.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-6.3.0.tgz">https://registry.npmjs.org/acorn/-/acorn-6.3.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-3.11.0.tgz (Root Library)
- :x: **acorn-6.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>acorn-5.7.3.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-5.7.3.tgz">https://registry.npmjs.org/acorn/-/acorn-5.7.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/espree/node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- cli-plugin-eslint-3.11.0.tgz (Root Library)
- eslint-4.19.1.tgz
- espree-3.5.4.tgz
- :x: **acorn-5.7.3.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
acorn is vulnerable to REGEX DoS. A regex of the form /[x-\ud800]/u causes the parser to enter an infinite loop. attackers may leverage the vulnerability leading to a Denial of Service since the string is not valid UTF16 and it results in it being sanitized before reaching the parser.
<p>Publish Date: 2020-03-01
<p>URL: <a href=https://github.com/acornjs/acorn/commit/b5c17877ac0511e31579ea31e7650ba1a5871e51>WS-2020-0042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1488">https://www.npmjs.com/advisories/1488</a></p>
<p>Release Date: 2020-03-01</p>
<p>Fix Resolution: 7.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_build
|
ws high detected in acorn tgz acorn tgz ws high severity vulnerability vulnerable libraries acorn tgz acorn tgz acorn tgz ecmascript parser library home page a href path to dependency file package json path to vulnerable library node modules acorn package json dependency hierarchy cli service tgz root library x acorn tgz vulnerable library acorn tgz ecmascript parser library home page a href path to dependency file package json path to vulnerable library node modules espree node modules acorn package json dependency hierarchy cli plugin eslint tgz root library eslint tgz espree tgz x acorn tgz vulnerable library vulnerability details acorn is vulnerable to regex dos a regex of the form u causes the parser to enter an infinite loop attackers may leverage the vulnerability leading to a denial of service since the string is not valid and it results in it being sanitized before reaching the parser publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
89,316
| 25,752,747,219
|
IssuesEvent
|
2022-12-08 14:22:23
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
Installation process on dead pip
|
type:build/install subtype: ubuntu/linux TF 2.11
|
<details><summary>Click to expand!</summary>
### Issue Type
Bug
### Source
source
### Tensorflow Version
2.11
### Custom Code
Yes
### OS Platform and Distribution
Linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.10.6
### Bazel version
_No response_
### GCC/Compiler version
11.3.0
### CUDA/cuDNN version
_No response_
### GPU model and memory
None
### Current Behaviour?
```shell
I was trying to install tensorflow, using install tensorflow, but it didn't even start the installation and said that the installation process was killed. I wanted to know how to solve this problem. I managed to install the Keras library, but to run it you need the Tensorflow library.
Here is some technical information about my notebook:
CPU Intel Celeron 1.9 Ghz x 2
4 GB of RAM
I don't have a GPU.
I don't know if it could have something to do with the hardware requirements, but I believe it could have something to do with that.
```
### Standalone code to reproduce the issue
```shell
pip install tensorflow
```
### Relevant log output
```shell
Defaulting to user installation because normal site-packages is not writeable
Collecting tensorflow
Dead
```
</details>
|
1.0
|
Installation process on dead pip - <details><summary>Click to expand!</summary>
### Issue Type
Bug
### Source
source
### Tensorflow Version
2.11
### Custom Code
Yes
### OS Platform and Distribution
Linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.10.6
### Bazel version
_No response_
### GCC/Compiler version
11.3.0
### CUDA/cuDNN version
_No response_
### GPU model and memory
None
### Current Behaviour?
```shell
I was trying to install tensorflow, using install tensorflow, but it didn't even start the installation and said that the installation process was killed. I wanted to know how to solve this problem. I managed to install the Keras library, but to run it you need the Tensorflow library.
Here is some technical information about my notebook:
CPU Intel Celeron 1.9 Ghz x 2
4 GB of RAM
I don't have a GPU.
I don't know if it could have something to do with the hardware requirements, but I believe it could have something to do with that.
```
### Standalone code to reproduce the issue
```shell
pip install tensorflow
```
### Relevant log output
```shell
Defaulting to user installation because normal site-packages is not writeable
Collecting tensorflow
Dead
```
</details>
|
build
|
installation process on dead pip click to expand issue type bug source source tensorflow version custom code yes os platform and distribution linux ubuntu mobile device no response python version bazel version no response gcc compiler version cuda cudnn version no response gpu model and memory none current behaviour shell i was trying to install tensorflow using install tensorflow but it didn t even start the installation and said that the installation process was killed i wanted to know how to solve this problem i managed to install the keras library but to run it you need the tensorflow library here is some technical information about my notebook cpu intel celeron ghz x gb of ram i don t have a gpu i don t know if it could have something to do with the hardware requirements but i believe it could have something to do with that standalone code to reproduce the issue shell pip install tensorflow relevant log output shell defaulting to user installation because normal site packages is not writeable collecting tensorflow dead
| 1
|
46,336
| 11,825,276,480
|
IssuesEvent
|
2020-03-21 11:55:51
|
nodejs/nodejs.dev
|
https://api.github.com/repos/nodejs/nodejs.dev
|
opened
|
CI: Live previews in forks
|
build discussion
|
The current setup provides live previews only when merging against https://github.com/nodejs/nodejs.dev. This means that folks who will be collaborating on forks will either need to open nested PRs or will have to defer CI related fixes when merging a single PR.
If multiple PRs are preferred, this creates unnecessary noise in the project's repo which would be very confusing for someone not familiar with the work to figure out why or to recognize there is sharing some history in PRs.
In both cases, imho that overhead leads to friction that compounds for everyone involved.
The question that I'm inclined to ask, could GCB or netlify be setup to carry over into individual user forks?
|
1.0
|
CI: Live previews in forks - The current setup provides live previews only when merging against https://github.com/nodejs/nodejs.dev. This means that folks who will be collaborating on forks will either need to open nested PRs or will have to defer CI related fixes when merging a single PR.
If multiple PRs are preferred, this creates unnecessary noise in the project's repo which would be very confusing for someone not familiar with the work to figure out why or to recognize there is sharing some history in PRs.
In both cases, imho that overhead leads to friction that compounds for everyone involved.
The question that I'm inclined to ask, could GCB or netlify be setup to carry over into individual user forks?
|
build
|
ci live previews in forks the current setup provides live previews only when merging against this means that folks who will be collaborating on forks will either need to open nested prs or will have to defer ci related fixes when merging a single pr if multiple prs are preferred this creates unnecessary noise in the project s repo which would be very confusing for someone not familiar with the work to figure out why or to recognize there is sharing some history in prs in both cases imho that overhead leads to friction that compounds for everyone involved the question that i m inclined to ask could gcb or netlify be setup to carry over into individual user forks
| 1
|
40,897
| 10,591,039,193
|
IssuesEvent
|
2019-10-09 09:59:53
|
htm-community/htm.core
|
https://api.github.com/repos/htm-community/htm.core
|
opened
|
Python setup.py develop broken
|
bug build python
|
I think the `develop` mode is broken, as it links to `build/Release/distr/src`
while the changes should be "live" on source files.
Intention of 'develop' py install is that you can make changes to the python files and the package (is linked) will immediately reflect that.
|
1.0
|
Python setup.py develop broken - I think the `develop` mode is broken, as it links to `build/Release/distr/src`
while the changes should be "live" on source files.
Intention of 'develop' py install is that you can make changes to the python files and the package (is linked) will immediately reflect that.
|
build
|
python setup py develop broken i think the develop mode is broken as it links to build release distr src while the changes should be live on source files intention of develop py install is that you can make changes to the python files and the package is linked will immediately reflect that
| 1
|
73,710
| 7,350,245,511
|
IssuesEvent
|
2018-03-08 13:44:55
|
LiskHQ/lisk
|
https://api.github.com/repos/LiskHQ/lisk
|
opened
|
Review unit test coverage of modules
|
*hard test
|
account
- some incomplete test
- refactor to stub
blocks
- all test pending
cache
- some tests are pending
- some test are incomplete
- needs refactor to stub.
dapps
- all test pending
delegates
- almost all test pending(25)
- refactor to stub
loader
- almost all test pending(25)
- refactor to stub
- refactor code base nested functions
multisignature
- all tests pending
node
- almost all test pending
- refactor to stub
peers
- pending tests __private functions
- refactor to stub
signatures
- all test pending
### Which version(s) does this affect? (Environment, OS, etc...)
1.0.0
|
1.0
|
Review unit test coverage of modules - account
- some incomplete test
- refactor to stub
blocks
- all test pending
cache
- some tests are pending
- some test are incomplete
- needs refactor to stub.
dapps
- all test pending
delegates
- almost all test pending(25)
- refactor to stub
loader
- almost all test pending(25)
- refactor to stub
- refactor code base nested functions
multisignature
- all tests pending
node
- almost all test pending
- refactor to stub
peers
- pending tests __private functions
- refactor to stub
signatures
- all test pending
### Which version(s) does this affect? (Environment, OS, etc...)
1.0.0
|
non_build
|
review unit test coverage of modules account some incomplete test refactor to stub blocks all test pending cache some tests are pending some test are incomplete needs refactor to stub dapps all test pending delegates almost all test pending refactor to stub loader almost all test pending refactor to stub refactor code base nested functions multisignature all tests pending node almost all test pending refactor to stub peers pending tests private functions refactor to stub signatures all test pending which version s does this affect environment os etc
| 0
|
337,455
| 30,248,167,444
|
IssuesEvent
|
2023-07-06 18:11:10
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global failed
|
C-test-failure O-robot O-roachtest release-blocker branch-release-23.1
|
roachtest.ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10797189?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10797189?buildTab=artifacts#/ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global) on release-23.1 @ [fa2d7f7c9894d701ac4a393f058aa84552957087](https://github.com/cockroachdb/cockroach/commits/fa2d7f7c9894d701ac4a393f058aa84552957087):
```
(cluster.go:2247).Run: output in run_174531.816560464_n4_workload-run-ycsb-in: ./workload run ycsb --init --insert-count=1000000 --workload=A --concurrency=144 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=2m --duration=30m {pgurl:1-3} returned: COMMAND_PROBLEM: exit status 1
(monitor.go:137).Wait: monitor failure: monitor task failed: t.Fatal() was called
test artifacts and logs in: /artifacts/ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=32</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global failed - roachtest.ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10797189?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10797189?buildTab=artifacts#/ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global) on release-23.1 @ [fa2d7f7c9894d701ac4a393f058aa84552957087](https://github.com/cockroachdb/cockroach/commits/fa2d7f7c9894d701ac4a393f058aa84552957087):
```
(cluster.go:2247).Run: output in run_174531.816560464_n4_workload-run-ycsb-in: ./workload run ycsb --init --insert-count=1000000 --workload=A --concurrency=144 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=2m --duration=30m {pgurl:1-3} returned: COMMAND_PROBLEM: exit status 1
(monitor.go:137).Wait: monitor failure: monitor task failed: t.Fatal() was called
test artifacts and logs in: /artifacts/ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=32</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*ycsb/A/nodes=3/cpu=32/mvcc-range-keys=global.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_build
|
roachtest ycsb a nodes cpu mvcc range keys global failed roachtest ycsb a nodes cpu mvcc range keys global with on release cluster go run output in run workload run ycsb in workload run ycsb init insert count workload a concurrency splits histograms perf stats json select for update true ramp duration pgurl returned command problem exit status monitor go wait monitor failure monitor task failed t fatal was called test artifacts and logs in artifacts ycsb a nodes cpu mvcc range keys global run parameters roachtest arch roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb test eng
| 0
|
49,297
| 12,311,197,941
|
IssuesEvent
|
2020-05-12 12:00:13
|
PowerShell/PowerShell
|
https://api.github.com/repos/PowerShell/PowerShell
|
closed
|
PowerShell on FreeBSD
|
Area-Build Issue-Question Resolution-Answered
|
Is there interest in PowerShell on FreeBSD?
I have been following dotnet/runtime for a while and building the preview releases on FreeBSD.
I was curious to see if PowerShell would build and did a little hacking to get things to compile.
There weren't many changes needed, but of course that's no guarantee that everything is
working correctly. Certainly enough to get going though.
```
[jason@freebsd11 ~/src/PowerShell/src/powershell-unix]$ ../../.dotnet/dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 5.0.100-preview.3.20216.6
Commit: 9f62a32109
Runtime Environment:
OS Name: FreeBSD
OS Version: 11
OS Platform: FreeBSD
RID: freebsd.11-x64
Base Path: /usr/home/jason/src/PowerShell/.dotnet/sdk/5.0.100-preview.3.20216.6/
Host (useful for support):
Version: 5.0.0-preview.3.20214.6
Commit: b037784658
.NET SDKs installed:
5.0.100-preview.3.20216.6 [/usr/home/jason/src/PowerShell/.dotnet/sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 5.0.0-preview.3.20215.14 [/usr/home/jason/src/PowerShell/.dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 5.0.0-preview.3.20214.6 [/usr/home/jason/src/PowerShell/.dotnet/shared/Microsoft.NETCore.App]
To install additional .NET runtimes or SDKs:
https://aka.ms/dotnet-download
[jason@freebsd11 ~/src/PowerShell/src/powershell-unix]$ bin/Release/netcoreapp5.0/freebsd-x64/pwsh
PowerShell 7.1.0-preview.1-45-gba53621894a030c2f5dfce0db81fa1e09408fd2f
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /usr/home/jason/src/PowerShell/src/powershell-unix> Get-Host
Name : ConsoleHost
Version : 7.1.0-preview.1
InstanceId : 18a64041-fd7b-49ab-bd93-76d3e9a915e0
UI : System.Management.Automation.Internal.Host.InternalHostUserInterface
CurrentCulture :
CurrentUICulture :
PrivateData : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
DebuggerEnabled : True
IsRunspacePushed : False
Runspace : System.Management.Automation.Runspaces.LocalRunspace
PS /usr/home/jason/src/PowerShell/src/powershell-unix> $PSVersionTable
Name Value
---- -----
PSVersion 7.1.0-preview.1
PSEdition Core
GitCommitId 7.1.0-preview.1-45-gba53621894a030c2f5dfce0db81fa1e09408fd2f
OS FreeBSD 11.3-RELEASE FreeBSD 11.3-RELEASE #0 r349754: Fri Jul 5 04:45:24 UTC 2019 root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC
Platform Unix
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
PS /usr/home/jason/src/PowerShell/src/powershell-unix>
```
Work previously done by @mateusrodrigues helped, thanks.
|
1.0
|
PowerShell on FreeBSD - Is there interest in PowerShell on FreeBSD?
I have been following dotnet/runtime for a while and building the preview releases on FreeBSD.
I was curious to see if PowerShell would build and did a little hacking to get things to compile.
There weren't many changes needed, but of course that's no guarantee that everything is
working correctly. Certainly enough to get going though.
```
[jason@freebsd11 ~/src/PowerShell/src/powershell-unix]$ ../../.dotnet/dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 5.0.100-preview.3.20216.6
Commit: 9f62a32109
Runtime Environment:
OS Name: FreeBSD
OS Version: 11
OS Platform: FreeBSD
RID: freebsd.11-x64
Base Path: /usr/home/jason/src/PowerShell/.dotnet/sdk/5.0.100-preview.3.20216.6/
Host (useful for support):
Version: 5.0.0-preview.3.20214.6
Commit: b037784658
.NET SDKs installed:
5.0.100-preview.3.20216.6 [/usr/home/jason/src/PowerShell/.dotnet/sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 5.0.0-preview.3.20215.14 [/usr/home/jason/src/PowerShell/.dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 5.0.0-preview.3.20214.6 [/usr/home/jason/src/PowerShell/.dotnet/shared/Microsoft.NETCore.App]
To install additional .NET runtimes or SDKs:
https://aka.ms/dotnet-download
[jason@freebsd11 ~/src/PowerShell/src/powershell-unix]$ bin/Release/netcoreapp5.0/freebsd-x64/pwsh
PowerShell 7.1.0-preview.1-45-gba53621894a030c2f5dfce0db81fa1e09408fd2f
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /usr/home/jason/src/PowerShell/src/powershell-unix> Get-Host
Name : ConsoleHost
Version : 7.1.0-preview.1
InstanceId : 18a64041-fd7b-49ab-bd93-76d3e9a915e0
UI : System.Management.Automation.Internal.Host.InternalHostUserInterface
CurrentCulture :
CurrentUICulture :
PrivateData : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
DebuggerEnabled : True
IsRunspacePushed : False
Runspace : System.Management.Automation.Runspaces.LocalRunspace
PS /usr/home/jason/src/PowerShell/src/powershell-unix> $PSVersionTable
Name Value
---- -----
PSVersion 7.1.0-preview.1
PSEdition Core
GitCommitId 7.1.0-preview.1-45-gba53621894a030c2f5dfce0db81fa1e09408fd2f
OS FreeBSD 11.3-RELEASE FreeBSD 11.3-RELEASE #0 r349754: Fri Jul 5 04:45:24 UTC 2019 root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC
Platform Unix
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
PS /usr/home/jason/src/PowerShell/src/powershell-unix>
```
Work previously done by @mateusrodrigues helped, thanks.
|
build
|
powershell on freebsd is there interest in powershell on freebsd i have been following dotnet runtime for a while and building the preview releases on freebsd i was curious to see if powershell would build and did a little hacking to get things to compile there weren t many changes needed but of course that s no guarantee that everything is working correctly certainly enough to get going though dotnet dotnet info net core sdk reflecting any global json version preview commit runtime environment os name freebsd os version os platform freebsd rid freebsd base path usr home jason src powershell dotnet sdk preview host useful for support version preview commit net sdks installed preview net runtimes installed microsoft aspnetcore app preview microsoft netcore app preview to install additional net runtimes or sdks bin release freebsd pwsh powershell preview copyright c microsoft corporation type help to get help ps usr home jason src powershell src powershell unix get host name consolehost version preview instanceid ui system management automation internal host internalhostuserinterface currentculture currentuiculture privatedata microsoft powershell consolehost consolecolorproxy debuggerenabled true isrunspacepushed false runspace system management automation runspaces localrunspace ps usr home jason src powershell src powershell unix psversiontable name value psversion preview psedition core gitcommitid preview os freebsd release freebsd release fri jul utc root nyi freebsd org usr obj usr src sys generic platform unix pscompatibleversions … psremotingprotocolversion serializationversion wsmanstackversion ps usr home jason src powershell src powershell unix work previously done by mateusrodrigues helped thanks
| 1
|
31,936
| 8,775,335,280
|
IssuesEvent
|
2018-12-18 22:43:25
|
Automattic/wp-calypso
|
https://api.github.com/repos/Automattic/wp-calypso
|
closed
|
Build size: async load live chat
|
Build Happychat [Status] Stale [Type] Enhancement
|
https://github.com/Automattic/wp-calypso/pull/22487 added the ability to initiate a livechat right from the sidebar.
Seems to have unintentionally dragged in a lot of code to build. Lets figure out how to defer loading happychat code until we know the user is interested.
|
1.0
|
Build size: async load live chat - https://github.com/Automattic/wp-calypso/pull/22487 added the ability to initiate a livechat right from the sidebar.
Seems to have unintentionally dragged in a lot of code to build. Lets figure out how to defer loading happychat code until we know the user is interested.
|
build
|
build size async load live chat added the ability to initiate a livechat right from the sidebar seems to have unintentionally dragged in a lot of code to build lets figure out how to defer loading happychat code until we know the user is interested
| 1
|
41,519
| 10,730,051,674
|
IssuesEvent
|
2019-10-28 16:40:58
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
closed
|
"test_ivy_aot" CircleCI job is failing randomly with i18n-related error
|
comp: build & ci comp: i18n hotlist: angular-core-team
|
That started to happen recently (9/17), the error message is:
```
1) runtime i18n should work correctly with event listeners
Message:
Expected Hello Angular to be equal to Bonjour Angular
Stack:
Error: Expected Hello Angular to be equal to Bonjour Angular
```
The problem goes away after CircleCI job restart (thus it looks like a flake).
One of the most recent failures can be found here: https://circleci.com/gh/angular/angular/460157
|
1.0
|
"test_ivy_aot" CircleCI job is failing randomly with i18n-related error - That started to happen recently (9/17), the error message is:
```
1) runtime i18n should work correctly with event listeners
Message:
Expected Hello Angular to be equal to Bonjour Angular
Stack:
Error: Expected Hello Angular to be equal to Bonjour Angular
```
The problem goes away after CircleCI job restart (thus it looks like a flake).
One of the most recent failures can be found here: https://circleci.com/gh/angular/angular/460157
|
build
|
test ivy aot circleci job is failing randomly with related error that started to happen recently the error message is runtime should work correctly with event listeners message expected hello angular to be equal to bonjour angular stack error expected hello angular to be equal to bonjour angular the problem goes away after circleci job restart thus it looks like a flake one of the most recent failures can be found here
| 1
|
5,860
| 3,684,102,770
|
IssuesEvent
|
2016-02-24 16:16:48
|
edemo/PDOauth
|
https://api.github.com/repos/edemo/PDOauth
|
closed
|
fix travis build
|
3 - Done build environment
|
With postgres-based tests travis is out of sync.
Should be rewritten to use the docker image.
<!---
@huboard:{"order":99.5,"milestone_order":3.25,"custom_state":""}
-->
|
1.0
|
fix travis build - With postgres-based tests travis is out of sync.
Should be rewritten to use the docker image.
<!---
@huboard:{"order":99.5,"milestone_order":3.25,"custom_state":""}
-->
|
build
|
fix travis build with postgres based tests travis is out of sync should be rewritten to use the docker image huboard order milestone order custom state
| 1
|
60,447
| 14,851,523,342
|
IssuesEvent
|
2021-01-18 07:04:00
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[SB] Instruction step created in Questionnaires is not displaying in the list
|
Bug P1 Process: Fixed Process: Tested dev Study builder
|
Steps:-
1. Navigate to Questionnaires and click on add questionnaire
2. Fill the required details in the screen and click on save button
3. click on Add instruction Step and create a instruction step and click on Done button
4. Verify whether created Instruction step is displaying in the list once it is created

|
1.0
|
[SB] Instruction step created in Questionnaires is not displaying in the list - Steps:-
1. Navigate to Questionnaires and click on add questionnaire
2. Fill the required details in the screen and click on save button
3. click on Add instruction Step and create a instruction step and click on Done button
4. Verify whether created Instruction step is displaying in the list once it is created

|
build
|
instruction step created in questionnaires is not displaying in the list steps navigate to questionnaires and click on add questionnaire fill the required details in the screen and click on save button click on add instruction step and create a instruction step and click on done button verify whether created instruction step is displaying in the list once it is created
| 1
|
126,807
| 17,109,740,716
|
IssuesEvent
|
2021-07-10 03:32:42
|
ZeroK-RTS/Zero-K
|
https://api.github.com/repos/ZeroK-RTS/Zero-K
|
opened
|
Simple commands, consider hiding retreat until a zone is set.
|
design decision feature
|
This would free up space, and possibly allow selection rank to be added by default. Many people wanted to know how to re-enable selection rank, indicating that it is reasonably discoverable. Planes could retain the retreat state by default as they use it to return to repair pads.
This makes the "show/hide" state menu complicated. Retreat would have to be split into two buttons, one for "always" and one for "if a zone has been set".
|
1.0
|
Simple commands, consider hiding retreat until a zone is set. - This would free up space, and possibly allow selection rank to be added by default. Many people wanted to know how to re-enable selection rank, indicating that it is reasonably discoverable. Planes could retain the retreat state by default as they use it to return to repair pads.
This makes the "show/hide" state menu complicated. Retreat would have to be split into two buttons, one for "always" and one for "if a zone has been set".
|
non_build
|
simple commands consider hiding retreat until a zone is set this would free up space and possibly allow selection rank to be added by default many people wanted to know how to re enable selection rank indicating that it is reasonably discoverable planes could retain the retreat state by default as they use it to return to repair pads this makes the show hide state menu complicated retreat would have to be split into two buttons one for always and one for if a zone has been set
| 0
|
72,516
| 19,299,266,233
|
IssuesEvent
|
2021-12-13 01:52:12
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
bazel-genfiles/ and *.pb.h not been generated with bazel-building tensorflow2.4.1 from source
|
stat:awaiting response type:build/install subtype:bazel TF 2.4
|
**System information**
- OS Platform and Distribution ( Linux Ubuntu 18.04):
- TensorFlow installed from (source)
- TensorFlow version: Tags v2.4.1
- Python version: Python 3.7.6
- Installed using virtualenv? pip? conda?: conda
- Bazel version (if compiling from source): have tried bazel 3.4.0 、 bazel 3.1.0
- GCC/Compiler version (if compiling from source): gcc (GCC) 7.4.0
- CUDA/cuDNN version: cuda_11.1 , cudnn 8.0.5
- GPU model and memory: rtx3060
- nvidia-driver version: 460.91
- eigen version : eigen-3.3.90
I want to build tensorflow c++ api with bazel, no errors were reported during bazel-build process .
the command I used: `bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so`
however,while compiling a C++ program, I got the following error:
`fatal error: tensorflow/core/framework/device_attributes.pb.h: No such file or directory
#include "tensorflow/core/framework/device_attributes.pb.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
`
After that I checked the tensorflow/core/framework folder and **found no *.pb.h files in it ,only some .proto files,**(etc. device_attributes.proto ) . **Not even the bazel-genfiles folder.**
The *.pb.h files should be generated by protobuf , what's wrong with my protobuf ?
the protobuf version and url refer to [workspace.bzl](https://github.com/tensorflow/tensorflow/blob/v2.4.1/tensorflow/workspace.bzl)
I don't know how to solve it. Thanks for your help.
|
1.0
|
bazel-genfiles/ and *.pb.h not been generated with bazel-building tensorflow2.4.1 from source - **System information**
- OS Platform and Distribution ( Linux Ubuntu 18.04):
- TensorFlow installed from (source)
- TensorFlow version: Tags v2.4.1
- Python version: Python 3.7.6
- Installed using virtualenv? pip? conda?: conda
- Bazel version (if compiling from source): have tried bazel 3.4.0 、 bazel 3.1.0
- GCC/Compiler version (if compiling from source): gcc (GCC) 7.4.0
- CUDA/cuDNN version: cuda_11.1 , cudnn 8.0.5
- GPU model and memory: rtx3060
- nvidia-driver version: 460.91
- eigen version : eigen-3.3.90
I want to build tensorflow c++ api with bazel, no errors were reported during bazel-build process .
the command I used: `bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so`
however,while compiling a C++ program, I got the following error:
`fatal error: tensorflow/core/framework/device_attributes.pb.h: No such file or directory
#include "tensorflow/core/framework/device_attributes.pb.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
`
After that I checked the tensorflow/core/framework folder and **found no *.pb.h files in it ,only some .proto files,**(etc. device_attributes.proto ) . **Not even the bazel-genfiles folder.**
The *.pb.h files should be generated by protobuf , what's wrong with my protobuf ?
the protobuf version and url refer to [workspace.bzl](https://github.com/tensorflow/tensorflow/blob/v2.4.1/tensorflow/workspace.bzl)
I don't know how to solve it. Thanks for your help.
|
build
|
bazel genfiles and pb h not been generated with bazel building from source system information os platform and distribution linux ubuntu tensorflow installed from source tensorflow version tags python version python installed using virtualenv pip conda conda bazel version if compiling from source have tried bazel 、 bazel gcc compiler version if compiling from source gcc gcc cuda cudnn version cuda cudnn gpu model and memory nvidia driver version eigen version eigen i want to build tensorflow c api with bazel no errors were reported during bazel build process the command i used bazel build config opt config cuda tensorflow libtensorflow cc so however while compiling a c program i got the following error fatal error tensorflow core framework device attributes pb h no such file or directory include tensorflow core framework device attributes pb h compilation terminated after that i checked the tensorflow core framework folder and found no pb h files in it only some proto files etc device attributes proto not even the bazel genfiles folder the pb h files should be generated by protobuf what s wrong with my protobuf the protobuf version and url refer to i don t know how to solve it thanks for your help
| 1
|
31,851
| 8,758,130,849
|
IssuesEvent
|
2018-12-15 00:40:07
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
Bazel 0.20 fixes into release 1.17.x [request]
|
area/build lang/core
|
### What version of gRPC and what language are you using?
I'm using gRPC from commit f8696b5136 (author: @nicolasnoble, reviewer: @jtattermusch)
which is from grpc/grpc#17363 which fixes an issue with **Bazel 0.20.0**
This is my WORKSPACE file:
```
##
# We need a specific unreleased version of gRPC:
# which fixes grpc & bazel 0.20 issues
# https://github.com/grpc/grpc/pull/17363
##
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = 'com_github_grpc_grpc',
url = 'https://github.com/grpc/grpc/archive/f8696b5136108fd2d46e553fdd550e1be6ba2677.zip',
sha256 = 'e8484fba47a3eebaed82704e8197a5f98f20d6843e44ea2a5e007180958ad545',
strip_prefix = 'grpc-f8696b5136108fd2d46e553fdd550e1be6ba2677',
)
load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")
grpc_deps()
```
### What operating system (Linux, Windows, …) and version?
Ubuntu 18.04 & Ubuntu 14.04
### What runtime / compiler are you using (e.g. python version or version of gcc)
gcc 7.3.0
### What did you do?
We were using grpc, with a specific release specified in our WORKSPACE (1.14.2) file, but then we upgraded to **Bazel 0.20.0** and it failed due to deprecated imports of the grpc_deps.
### What did you expect to see?
To be able to continue using a released version of gRPC.
It would be better to point to a specific release: **1.17.0** was released just after #17363 went it, so I first tried the latest release **1.17.0** but that also didn't work.
### What did you see instead?
It didn't compile, but I find the fix had been merged to master with: f8696b5136108fd2d46e553fdd550e1be6ba2677
### Anything else we should know about your project / environment?
could f8696b5136108fd2d46e553fdd550e1be6ba2677 get cherry-picked into *1.17.1*
|
1.0
|
Bazel 0.20 fixes into release 1.17.x [request] - ### What version of gRPC and what language are you using?
I'm using gRPC from commit f8696b5136 (author: @nicolasnoble, reviewer: @jtattermusch)
which is from grpc/grpc#17363 which fixes an issue with **Bazel 0.20.0**
This is my WORKSPACE file:
```
##
# We need a specific unreleased version of gRPC:
# which fixes grpc & bazel 0.20 issues
# https://github.com/grpc/grpc/pull/17363
##
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = 'com_github_grpc_grpc',
url = 'https://github.com/grpc/grpc/archive/f8696b5136108fd2d46e553fdd550e1be6ba2677.zip',
sha256 = 'e8484fba47a3eebaed82704e8197a5f98f20d6843e44ea2a5e007180958ad545',
strip_prefix = 'grpc-f8696b5136108fd2d46e553fdd550e1be6ba2677',
)
load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")
grpc_deps()
```
### What operating system (Linux, Windows, …) and version?
Ubuntu 18.04 & Ubuntu 14.04
### What runtime / compiler are you using (e.g. python version or version of gcc)
gcc 7.3.0
### What did you do?
We were using grpc, with a specific release specified in our WORKSPACE (1.14.2) file, but then we upgraded to **Bazel 0.20.0** and it failed due to deprecated imports of the grpc_deps.
### What did you expect to see?
To be able to continue using a released version of gRPC.
It would be better to point to a specific release: **1.17.0** was released just after #17363 went it, so I first tried the latest release **1.17.0** but that also didn't work.
### What did you see instead?
It didn't compile, but I find the fix had been merged to master with: f8696b5136108fd2d46e553fdd550e1be6ba2677
### Anything else we should know about your project / environment?
could f8696b5136108fd2d46e553fdd550e1be6ba2677 get cherry-picked into *1.17.1*
|
build
|
bazel fixes into release x what version of grpc and what language are you using i m using grpc from commit author nicolasnoble reviewer jtattermusch which is from grpc grpc which fixes an issue with bazel this is my workspace file we need a specific unreleased version of grpc which fixes grpc bazel issues load bazel tools tools build defs repo http bzl http archive http archive name com github grpc grpc url strip prefix grpc load com github grpc grpc bazel grpc deps bzl grpc deps grpc deps what operating system linux windows … and version ubuntu ubuntu what runtime compiler are you using e g python version or version of gcc gcc what did you do we were using grpc with a specific release specified in our workspace file but then we upgraded to bazel and it failed due to deprecated imports of the grpc deps what did you expect to see to be able to continue using a released version of grpc it would be better to point to a specific release was released just after went it so i first tried the latest release but that also didn t work what did you see instead it didn t compile but i find the fix had been merged to master with anything else we should know about your project environment could get cherry picked into
| 1
|
84,933
| 24,470,722,522
|
IssuesEvent
|
2022-10-07 19:36:38
|
BOINC/boinc
|
https://api.github.com/repos/BOINC/boinc
|
closed
|
Built with -ffast-math option in gcc/g++, boinc client doesn't recognize beignet (intel gpu) library on Linux
|
C: Client - Build C: Client - Daemon P: Major R: wontfix T: Defect E: to be determined C: Client - Linux
|
**Describe the bug**
A clear and concise description of what the bug is.
I am building boinc client from git master tree on Fedora 29 linux.
Building boinc client by configuring 'CFLAGS="-O4 -mavx2 -funroll-loops -fforce-addr -ffast-math" CXXFLAGS=$CFLAGS ./configure --disable-server --disable-manager', boinc client doesn't recognize intel gpu, though it detects nvidia gpu (my machine has both). If -ffast-math is removed, boinc client works for both intel and nvidia. Here I use "-O4 -mavx2", but they don't matter. With only "-O3", it happens. -O4 directs more aggressive optimization. -mavx2 is just for my cpu (Haswell).
Funnily, usage of -ffast-math is introduced in boinc wiki https://boinc.berkeley.edu/wiki/Compiling_the_core_client.
**Steps To Reproduce**
1. Just configure with 'CFLAGS="-O4 -mavx2 -funroll-loops -fforce-addr -ffast-math" CXXFLAGS=$CFLAGS ./configure --disable-server --disable-manager' under boinc directory.
2. make and install boinc and launch.
**Expected behavior**
A clear and concise description of what you expected to happen.
Intel integrated gpu must be detected and used by boinc client.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**System Information (please complete the following information):**
- OS: Linux Fedora 29 x86_64
- BOINC Version: master branch (7.15.0)
**Additional context**
Add any other context about the problem here.
This happens with boinc 7.14.2 also.
|
1.0
|
Built with -ffast-math option in gcc/g++, boinc client doesn't recognize beignet (intel gpu) library on Linux - **Describe the bug**
A clear and concise description of what the bug is.
I am building boinc client from git master tree on Fedora 29 linux.
Building boinc client by configuring 'CFLAGS="-O4 -mavx2 -funroll-loops -fforce-addr -ffast-math" CXXFLAGS=$CFLAGS ./configure --disable-server --disable-manager', boinc client doesn't recognize intel gpu, though it detects nvidia gpu (my machine has both). If -ffast-math is removed, boinc client works for both intel and nvidia. Here I use "-O4 -mavx2", but they don't matter. With only "-O3", it happens. -O4 directs more aggressive optimization. -mavx2 is just for my cpu (Haswell).
Funnily, usage of -ffast-math is introduced in boinc wiki https://boinc.berkeley.edu/wiki/Compiling_the_core_client.
**Steps To Reproduce**
1. Just configure with 'CFLAGS="-O4 -mavx2 -funroll-loops -fforce-addr -ffast-math" CXXFLAGS=$CFLAGS ./configure --disable-server --disable-manager' under boinc directory.
2. make and install boinc and launch.
**Expected behavior**
A clear and concise description of what you expected to happen.
Intel integrated gpu must be detected and used by boinc client.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**System Information (please complete the following information):**
- OS: Linux Fedora 29 x86_64
- BOINC Version: master branch (7.15.0)
**Additional context**
Add any other context about the problem here.
This happens with boinc 7.14.2 also.
|
build
|
built with ffast math option in gcc g boinc client doesn t recognize beignet intel gpu library on linux describe the bug a clear and concise description of what the bug is i am building boinc client from git master tree on fedora linux building boinc client by configuring cflags funroll loops fforce addr ffast math cxxflags cflags configure disable server disable manager boinc client doesn t recognize intel gpu though it detects nvidia gpu my machine has both if ffast math is removed boinc client works for both intel and nvidia here i use but they don t matter with only it happens directs more aggressive optimization is just for my cpu haswell funnily usage of ffast math is introduced in boinc wiki steps to reproduce just configure with cflags funroll loops fforce addr ffast math cxxflags cflags configure disable server disable manager under boinc directory make and install boinc and launch expected behavior a clear and concise description of what you expected to happen intel integrated gpu must be detected and used by boinc client screenshots if applicable add screenshots to help explain your problem system information please complete the following information os linux fedora boinc version master branch additional context add any other context about the problem here this happens with boinc also
| 1
|
48,636
| 12,225,505,905
|
IssuesEvent
|
2020-05-03 05:44:55
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
Windows build error: 'mlir::FoldingHook': 'value' is not a valid template type argument for parameter 'ConcreteType'
|
type:build/install
|
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Pro 10.0.18363
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version: 2.1 (SHA-1 is c878390581fc817564f8ebe1f4237d0cbd225f14)
- Python version: 3.7
- Installed using virtualenv? pip? conda?: N/A?
- Bazel version (if compiling from source): 3.1.0
- GCC/Compiler version (if compiling from source): (BAZEL_VS: C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools)
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
- GPU model and memory: RTX 2080 (8GB)
I'm following this guide https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows and advice here https://github.com/tensorflow/tensorflow/issues/23542 with some adjustments to build Tensorflow 2.1 for Windows.
I ran `python configure.py` and this ended up being my `.tf_configure.bazelrc`:
build --action_env PYTHON_BIN_PATH="C:/Python37/python.exe"
build --action_env PYTHON_LIB_PATH="C:/Python37/lib/site-packages"
build --python_path="C:/Python37/python.exe"
build --config=xla
build --action_env CUDA_TOOLKIT_PATH="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1"
build --action_env TF_CUDA_COMPUTE_CAPABILITIES="7.5"
build --config=cuda
build:opt --copt=/arch:AVX2
build:opt --define with_default_optimizations=true
build --define=override_eigen_strong_inline=true
test --flaky_test_attempts=3
test --test_size_filters=small,medium
test:v1 --test_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-oss_serial
test:v1 --build_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu
test:v2 --test_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-oss_serial,-v1only
test:v2 --build_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-v1only
build --action_env TF_CONFIGURE_IOS="0"
Then I did
`bazel build --config=cuda --copt=-nvcc_options=disable-warnings tensorflow:tensorflow.dll`
I've done this twice and got the same error during the build. Pardon the mess...
ERROR: C:/users/SOMEUSER/_bazel_SOMEUSER/dktb5wq4/external/llvm-project/mlir/BUILD:75:1: C++ compilation of rule '@llvm-project//mlir:IR' failed (Exit 2): python.exe failed: error executing command
cd C:/users/SOMEUSER/_bazel_SOMEUSER/dktb5wq4/execroot/org_tensorflow
SET CUDA_TOOLKIT_PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1
SET INCLUDE=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\ATLMFC\include;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\include;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt
SET LIB=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\ATLMFC\lib\x64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\lib\x64;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.18362.0\ucrt\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.18362.0\um\x64;
SET PATH=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\bin\HostX64\x64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\VC\VCPackages;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\TestWindow;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\bin\Roslyn;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Team Tools\Performance Tools\x64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Team Tools\Performance Tools;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Common\VSPerfCollectionTools\vs2019\\x64;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Common\VSPerfCollectionTools\vs2019\;C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.1 Tools\x64\;C:\Program Files (x86)\Windows Kits\10\bin\10.0.18362.0\x64;C:\Program Files (x86)\Windows Kits\10\bin\x64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\\MSBuild\Current\Bin;C:\Windows\Microsoft.NET\Framework64\v4.0.30319;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\Tools\;;C:\WINDOWS\system32;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\Ninja
SET PWD=/proc/self/cwd
SET PYTHON_BIN_PATH=C:/Python37/python.exe
SET PYTHON_LIB_PATH=C:/Python37/lib/site-packages
SET RUNFILES_MANIFEST_ONLY=1
SET TEMP=C:\Users\SOMEU~1\AppData\Local\Temp
SET TF2_BEHAVIOR=1
SET TF_CONFIGURE_IOS=0
SET TF_CUDA_COMPUTE_CAPABILITIES=7.5
SET TF_ENABLE_XLA=1
SET TF_NEED_CUDA=1
SET TMP=C:\Users\SOMEU~1\AppData\Local\Temp
C:/Python37/python.exe -B external/local_config_cuda/crosstool/windows/msvc_wrapper_for_nvcc.py /nologo /DCOMPILER_MSVC /DNOMINMAX /D_WIN32_WINNT=0x0600 /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS /bigobj /Zm500 /J /Gy /GF /EHsc /wd4351 /wd4291 /wd4250 /wd4996 /Iexternal/llvm-project /Ibazel-out/x64_windows-opt/bin/external/llvm-project /Iexternal/zlib /Ibazel-out/x64_windows-opt/bin/external/zlib /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/CallOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/DialectSymbolRegistry /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/InferTypeOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpAsmInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SideEffectInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SymbolInterfacesIncGen /Iexternal/llvm-project/mlir/include /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/include /Iexternal/llvm-project/llvm/include /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/include /Iexternal/zlib /Ibazel-out/x64_windows-opt/bin/external/zlib /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE /D_CRT_NONSTDC_NO_WARNINGS /D_SCL_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_WARNINGS /DUNICODE /D_UNICODE /DLLVM_ENABLE_STATS /D__STDC_LIMIT_MACROS /D__STDC_CONSTANT_MACROS /D__STDC_FORMAT_MACROS /DLLVM_BUILD_GLOBAL_ISEL /showIncludes /MD /O2 /DNDEBUG /w /D_USE_MATH_DEFINES -DWIN32_LEAN_AND_MEAN -DNOGDI /std:c++14 /Fobazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_objs/IR/Module.o /c external/llvm-project/mlir/lib/IR/Module.cpp
Execution platform: @local_execution_config_platform//:platform
external/llvm-project/mlir/include\mlir/IR/OpDefinition.h(1185): error C2923: 'mlir::FoldingHook': 'value' is not a valid template type argument for parameter 'ConcreteType'
external/llvm-project/llvm/include\llvm/ADT/STLExtras.h(1281): note: see declaration of 'value'
external/llvm-project/mlir/include\mlir/IR/Module.h(36): note: see reference to class template instantiation 'mlir::Op<mlir::ModuleOp,mlir::OpTrait::ZeroOperands,mlir::OpTrait::ZeroResult,mlir::OpTrait::IsIsolatedFromAbove,mlir::OpTrait::PolyhedralScope,mlir::OpTrait::SymbolTable,mlir::OpTrait::SingleBlockImplicitTerminator<mlir::ModuleTerminatorOp>::Impl,mlir::SymbolOpInterface::Trait>' being compiled
external/llvm-project/mlir/include\mlir/IR/OpDefinition.h(1187): error C2955: 'mlir::FoldingHook': use of class template requires template argument list
external/llvm-project/mlir/include\mlir/IR/OpDefinition.h(273): note: see declaration of 'mlir::FoldingHook'
Target //tensorflow:tensorflow.dll failed to build
INFO: Elapsed time: 1466.809s, Critical Path: 96.50s
INFO: 3656 processes: 3656 local.
FAILED: Build did NOT complete successfully
The first time the build stopped entirely. The second time, which is right now, the same error reported, but it's still running. I see
`[10,544 / 12,151] Compiling tensorflow/core/kernels/scatter_op.cc; 5820s local`
Is there any hope that this will finish? What can be done to fix the error? Thanks for reading.
Update: I noticed my CPU wasn't working hard at all while waiting for the second round, so I decided to terminate it.
|
1.0
|
Windows build error: 'mlir::FoldingHook': 'value' is not a valid template type argument for parameter 'ConcreteType' - **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Pro 10.0.18363
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version: 2.1 (SHA-1 is c878390581fc817564f8ebe1f4237d0cbd225f14)
- Python version: 3.7
- Installed using virtualenv? pip? conda?: N/A?
- Bazel version (if compiling from source): 3.1.0
- GCC/Compiler version (if compiling from source): (BAZEL_VS: C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools)
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
- GPU model and memory: RTX 2080 (8GB)
I'm following this guide https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows and advice here https://github.com/tensorflow/tensorflow/issues/23542 with some adjustments to build Tensorflow 2.1 for Windows.
I ran `python configure.py` and this ended up being my `.tf_configure.bazelrc`:
build --action_env PYTHON_BIN_PATH="C:/Python37/python.exe"
build --action_env PYTHON_LIB_PATH="C:/Python37/lib/site-packages"
build --python_path="C:/Python37/python.exe"
build --config=xla
build --action_env CUDA_TOOLKIT_PATH="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1"
build --action_env TF_CUDA_COMPUTE_CAPABILITIES="7.5"
build --config=cuda
build:opt --copt=/arch:AVX2
build:opt --define with_default_optimizations=true
build --define=override_eigen_strong_inline=true
test --flaky_test_attempts=3
test --test_size_filters=small,medium
test:v1 --test_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-oss_serial
test:v1 --build_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu
test:v2 --test_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-oss_serial,-v1only
test:v2 --build_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-v1only
build --action_env TF_CONFIGURE_IOS="0"
Then I did
`bazel build --config=cuda --copt=-nvcc_options=disable-warnings tensorflow:tensorflow.dll`
I've done this twice and got the same error during the build. Pardon the mess...
ERROR: C:/users/SOMEUSER/_bazel_SOMEUSER/dktb5wq4/external/llvm-project/mlir/BUILD:75:1: C++ compilation of rule '@llvm-project//mlir:IR' failed (Exit 2): python.exe failed: error executing command
cd C:/users/SOMEUSER/_bazel_SOMEUSER/dktb5wq4/execroot/org_tensorflow
SET CUDA_TOOLKIT_PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1
SET INCLUDE=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\ATLMFC\include;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\include;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt;C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt
SET LIB=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\ATLMFC\lib\x64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\lib\x64;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.18362.0\ucrt\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.18362.0\um\x64;
SET PATH=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.25.28610\bin\HostX64\x64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\VC\VCPackages;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\TestWindow;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\bin\Roslyn;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Team Tools\Performance Tools\x64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Team Tools\Performance Tools;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Common\VSPerfCollectionTools\vs2019\\x64;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Common\VSPerfCollectionTools\vs2019\;C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.1 Tools\x64\;C:\Program Files (x86)\Windows Kits\10\bin\10.0.18362.0\x64;C:\Program Files (x86)\Windows Kits\10\bin\x64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\\MSBuild\Current\Bin;C:\Windows\Microsoft.NET\Framework64\v4.0.30319;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\Tools\;;C:\WINDOWS\system32;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\Ninja
SET PWD=/proc/self/cwd
SET PYTHON_BIN_PATH=C:/Python37/python.exe
SET PYTHON_LIB_PATH=C:/Python37/lib/site-packages
SET RUNFILES_MANIFEST_ONLY=1
SET TEMP=C:\Users\SOMEU~1\AppData\Local\Temp
SET TF2_BEHAVIOR=1
SET TF_CONFIGURE_IOS=0
SET TF_CUDA_COMPUTE_CAPABILITIES=7.5
SET TF_ENABLE_XLA=1
SET TF_NEED_CUDA=1
SET TMP=C:\Users\SOMEU~1\AppData\Local\Temp
C:/Python37/python.exe -B external/local_config_cuda/crosstool/windows/msvc_wrapper_for_nvcc.py /nologo /DCOMPILER_MSVC /DNOMINMAX /D_WIN32_WINNT=0x0600 /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS /bigobj /Zm500 /J /Gy /GF /EHsc /wd4351 /wd4291 /wd4250 /wd4996 /Iexternal/llvm-project /Ibazel-out/x64_windows-opt/bin/external/llvm-project /Iexternal/zlib /Ibazel-out/x64_windows-opt/bin/external/zlib /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/CallOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/DialectSymbolRegistry /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/InferTypeOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpAsmInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SideEffectInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SymbolInterfacesIncGen /Iexternal/llvm-project/mlir/include /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/include /Iexternal/llvm-project/llvm/include /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/include /Iexternal/zlib /Ibazel-out/x64_windows-opt/bin/external/zlib /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE /D_CRT_NONSTDC_NO_WARNINGS /D_SCL_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_WARNINGS /DUNICODE /D_UNICODE /DLLVM_ENABLE_STATS /D__STDC_LIMIT_MACROS /D__STDC_CONSTANT_MACROS /D__STDC_FORMAT_MACROS /DLLVM_BUILD_GLOBAL_ISEL /showIncludes /MD /O2 /DNDEBUG /w /D_USE_MATH_DEFINES -DWIN32_LEAN_AND_MEAN -DNOGDI /std:c++14 /Fobazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_objs/IR/Module.o /c external/llvm-project/mlir/lib/IR/Module.cpp
Execution platform: @local_execution_config_platform//:platform
external/llvm-project/mlir/include\mlir/IR/OpDefinition.h(1185): error C2923: 'mlir::FoldingHook': 'value' is not a valid template type argument for parameter 'ConcreteType'
external/llvm-project/llvm/include\llvm/ADT/STLExtras.h(1281): note: see declaration of 'value'
external/llvm-project/mlir/include\mlir/IR/Module.h(36): note: see reference to class template instantiation 'mlir::Op<mlir::ModuleOp,mlir::OpTrait::ZeroOperands,mlir::OpTrait::ZeroResult,mlir::OpTrait::IsIsolatedFromAbove,mlir::OpTrait::PolyhedralScope,mlir::OpTrait::SymbolTable,mlir::OpTrait::SingleBlockImplicitTerminator<mlir::ModuleTerminatorOp>::Impl,mlir::SymbolOpInterface::Trait>' being compiled
external/llvm-project/mlir/include\mlir/IR/OpDefinition.h(1187): error C2955: 'mlir::FoldingHook': use of class template requires template argument list
external/llvm-project/mlir/include\mlir/IR/OpDefinition.h(273): note: see declaration of 'mlir::FoldingHook'
Target //tensorflow:tensorflow.dll failed to build
INFO: Elapsed time: 1466.809s, Critical Path: 96.50s
INFO: 3656 processes: 3656 local.
FAILED: Build did NOT complete successfully
The first time the build stopped entirely. The second time, which is right now, the same error reported, but it's still running. I see
`[10,544 / 12,151] Compiling tensorflow/core/kernels/scatter_op.cc; 5820s local`
Is there any hope that this will finish? What can be done to fix the error? Thanks for reading.
Update: I noticed my CPU wasn't working hard at all while waiting for the second round, so I decided to terminate it.
|
build
|
windows build error mlir foldinghook value is not a valid template type argument for parameter concretetype system information os platform and distribution e g linux ubuntu windows pro mobile device e g iphone pixel samsung galaxy if the issue happens on mobile device n a tensorflow installed from source or binary source tensorflow version sha is python version installed using virtualenv pip conda n a bazel version if compiling from source gcc compiler version if compiling from source bazel vs c program files microsoft visual studio buildtools cuda cudnn version cuda cudnn gpu model and memory rtx i m following this guide and advice here with some adjustments to build tensorflow for windows i ran python configure py and this ended up being my tf configure bazelrc build action env python bin path c python exe build action env python lib path c lib site packages build python path c python exe build config xla build action env cuda toolkit path c program files nvidia gpu computing toolkit cuda build action env tf cuda compute capabilities build config cuda build opt copt arch build opt define with default optimizations true build define override eigen strong inline true test flaky test attempts test test size filters small medium test test tag filters benchmark test no oss no windows no windows gpu no gpu oss serial test build tag filters benchmark test no oss no windows no windows gpu no gpu test test tag filters benchmark test no oss no windows no windows gpu no gpu oss serial test build tag filters benchmark test no oss no windows no windows gpu no gpu build action env tf configure ios then i did bazel build config cuda copt nvcc options disable warnings tensorflow tensorflow dll i ve done this twice and got the same error during the build pardon the mess error c users someuser bazel someuser external llvm project mlir build c compilation of rule llvm project mlir ir failed exit python exe failed error executing command cd c users someuser bazel someuser execroot org tensorflow set cuda toolkit path c program files nvidia gpu computing toolkit cuda set include c program files microsoft visual studio community vc tools msvc atlmfc include c program files microsoft visual studio community vc tools msvc include c program files windows kits netfxsdk include um c program files windows kits include ucrt c program files windows kits include shared c program files windows kits include um c program files windows kits include winrt c program files windows kits include cppwinrt set lib c program files microsoft visual studio community vc tools msvc atlmfc lib c program files microsoft visual studio community vc tools msvc lib c program files windows kits netfxsdk lib um c program files windows kits lib ucrt c program files windows kits lib um set path c program files microsoft visual studio community vc tools msvc bin c program files microsoft visual studio community ide vc vcpackages c program files microsoft visual studio community ide commonextensions microsoft testwindow c program files microsoft visual studio community ide commonextensions microsoft teamfoundation team explorer c program files microsoft visual studio community msbuild current bin roslyn c program files microsoft visual studio community team tools performance tools c program files microsoft visual studio community team tools performance tools c program files microsoft visual studio shared common vsperfcollectiontools c program files microsoft visual studio shared common vsperfcollectiontools c program files microsoft sdks windows bin netfx tools c program files windows kits bin c program files windows kits bin c program files microsoft visual studio community msbuild current bin c windows microsoft net c program files microsoft visual studio community ide c program files microsoft visual studio community tools c windows c program files microsoft visual studio community ide commonextensions microsoft cmake cmake bin c program files microsoft visual studio community ide commonextensions microsoft cmake ninja set pwd proc self cwd set python bin path c python exe set python lib path c lib site packages set runfiles manifest only set temp c users someu appdata local temp set behavior set tf configure ios set tf cuda compute capabilities set tf enable xla set tf need cuda set tmp c users someu appdata local temp c python exe b external local config cuda crosstool windows msvc wrapper for nvcc py nologo dcompiler msvc dnominmax d winnt d crt secure no deprecate d crt secure no warnings d silence stdext hash deprecation warnings bigobj j gy gf ehsc iexternal llvm project ibazel out windows opt bin external llvm project iexternal zlib ibazel out windows opt bin external zlib ibazel out windows opt bin external llvm project mlir virtual includes callopinterfacesincgen ibazel out windows opt bin external llvm project mlir virtual includes dialectsymbolregistry ibazel out windows opt bin external llvm project mlir virtual includes infertypeopinterfaceincgen ibazel out windows opt bin external llvm project mlir virtual includes opasminterfacesincgen ibazel out windows opt bin external llvm project mlir virtual includes sideeffectinterfacesincgen ibazel out windows opt bin external llvm project mlir virtual includes symbolinterfacesincgen iexternal llvm project mlir include ibazel out windows opt bin external llvm project mlir include iexternal llvm project llvm include ibazel out windows opt bin external llvm project llvm include iexternal zlib ibazel out windows opt bin external zlib d crt secure no deprecate d crt secure no warnings d crt nonstdc no deprecate d crt nonstdc no warnings d scl secure no deprecate d scl secure no warnings dunicode d unicode dllvm enable stats d stdc limit macros d stdc constant macros d stdc format macros dllvm build global isel showincludes md dndebug w d use math defines lean and mean dnogdi std c fobazel out windows opt bin external llvm project mlir objs ir module o c external llvm project mlir lib ir module cpp execution platform local execution config platform platform external llvm project mlir include mlir ir opdefinition h error mlir foldinghook value is not a valid template type argument for parameter concretetype external llvm project llvm include llvm adt stlextras h note see declaration of value external llvm project mlir include mlir ir module h note see reference to class template instantiation mlir op impl mlir symbolopinterface trait being compiled external llvm project mlir include mlir ir opdefinition h error mlir foldinghook use of class template requires template argument list external llvm project mlir include mlir ir opdefinition h note see declaration of mlir foldinghook target tensorflow tensorflow dll failed to build info elapsed time critical path info processes local failed build did not complete successfully the first time the build stopped entirely the second time which is right now the same error reported but it s still running i see compiling tensorflow core kernels scatter op cc local is there any hope that this will finish what can be done to fix the error thanks for reading update i noticed my cpu wasn t working hard at all while waiting for the second round so i decided to terminate it
| 1
|
280,633
| 24,319,860,109
|
IssuesEvent
|
2022-09-30 09:44:51
|
lowRISC/opentitan
|
https://api.github.com/repos/lowRISC/opentitan
|
closed
|
[test-triage] chip_sw_rstmgr_alert_info
|
Component:TestTriage
|
### Hierarchy of regression failure
Chip Level
### Failure Description
```
UVM_ERROR @ 3178.232878 us: (cip_base_scoreboard.sv:431) [uvm_test_top.env.scoreboard] Check failed item.d_error == exp_d_error (1 [0x1] vs 0 [0x0]) On interface chip_reg_block, TL item: req: (cip_tl_seq_item@107135) { a_addr: 'h200042b8 a_data: 'h0 a_mask: 'hf a_size: 'h2 a_param: 'h0 a_source: 'h0 a_opcode: 'h4 a_user: 'h2662a d_param: 'h0 d_source: 'h0 d_data: 'hffffffff d_size: 'h2 d_opcode: 'h1 d_error: 'h1 d_sink: 'h0 d_user: 'heaa a_source_is_overridden: 'h0 a_valid_delay: 'h0 d_valid_delay: 'h0 a_valid_len: 'h0 d_valid_len: 'h0 req_abort_after_a_valid_len: 'h0 rsp_abort_after_d_valid_len: 'h0 req_completed: 'h0 rsp_completed: 'h0 tl_intg_err_type: TlIntgErrNone max_ecc_errors: 'h3 }
, unmapped_err: 0, mem_access_err: 0, bus_intg_err: 0, byte_wr_err: 0, csr_size_err: 0, tl_item_err: 0, write_w_instr_type_err: 0, cfg.tl_mem_access_gated: 0 ecc_err: 0
UVM_INFO @ 3178.232878 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
```
### Steps to Reproduce
- Commit hash where failure was observed 0c214bdb3
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
`./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_rstmgr_alert_info --build-seed 739773536 --waves -v h`
### Tests with similar or related failures
- [ ] chip_sw_rstmgr_alert_info
- [ ] chip_sw_clkmgr_escalation_reset
- [ ] chip_sw_flash_ctrl_lc_rw_en
- [ ] chip_sw_lc_ctrl_transition
- [ ] chip_sw_lc_walkthrough_dev
- [ ] chip_sw_lc_walkthrough_prod
|
1.0
|
[test-triage] chip_sw_rstmgr_alert_info - ### Hierarchy of regression failure
Chip Level
### Failure Description
```
UVM_ERROR @ 3178.232878 us: (cip_base_scoreboard.sv:431) [uvm_test_top.env.scoreboard] Check failed item.d_error == exp_d_error (1 [0x1] vs 0 [0x0]) On interface chip_reg_block, TL item: req: (cip_tl_seq_item@107135) { a_addr: 'h200042b8 a_data: 'h0 a_mask: 'hf a_size: 'h2 a_param: 'h0 a_source: 'h0 a_opcode: 'h4 a_user: 'h2662a d_param: 'h0 d_source: 'h0 d_data: 'hffffffff d_size: 'h2 d_opcode: 'h1 d_error: 'h1 d_sink: 'h0 d_user: 'heaa a_source_is_overridden: 'h0 a_valid_delay: 'h0 d_valid_delay: 'h0 a_valid_len: 'h0 d_valid_len: 'h0 req_abort_after_a_valid_len: 'h0 rsp_abort_after_d_valid_len: 'h0 req_completed: 'h0 rsp_completed: 'h0 tl_intg_err_type: TlIntgErrNone max_ecc_errors: 'h3 }
, unmapped_err: 0, mem_access_err: 0, bus_intg_err: 0, byte_wr_err: 0, csr_size_err: 0, tl_item_err: 0, write_w_instr_type_err: 0, cfg.tl_mem_access_gated: 0 ecc_err: 0
UVM_INFO @ 3178.232878 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
```
### Steps to Reproduce
- Commit hash where failure was observed 0c214bdb3
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
`./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_rstmgr_alert_info --build-seed 739773536 --waves -v h`
### Tests with similar or related failures
- [ ] chip_sw_rstmgr_alert_info
- [ ] chip_sw_clkmgr_escalation_reset
- [ ] chip_sw_flash_ctrl_lc_rw_en
- [ ] chip_sw_lc_ctrl_transition
- [ ] chip_sw_lc_walkthrough_dev
- [ ] chip_sw_lc_walkthrough_prod
|
non_build
|
chip sw rstmgr alert info hierarchy of regression failure chip level failure description uvm error us cip base scoreboard sv check failed item d error exp d error vs on interface chip reg block tl item req cip tl seq item a addr a data a mask hf a size a param a source a opcode a user d param d source d data hffffffff d size d opcode d error d sink d user heaa a source is overridden a valid delay d valid delay a valid len d valid len req abort after a valid len rsp abort after d valid len req completed rsp completed tl intg err type tlintgerrnone max ecc errors unmapped err mem access err bus intg err byte wr err csr size err tl item err write w instr type err cfg tl mem access gated ecc err uvm info us uvm report catcher svh uvm report catcher summary steps to reproduce commit hash where failure was observed dvsim invocation command to reproduce the failure inclusive of build and run seeds util dvsim dvsim py hw top earlgrey dv chip sim cfg hjson i chip sw rstmgr alert info build seed waves v h tests with similar or related failures chip sw rstmgr alert info chip sw clkmgr escalation reset chip sw flash ctrl lc rw en chip sw lc ctrl transition chip sw lc walkthrough dev chip sw lc walkthrough prod
| 0
|
332,302
| 24,340,443,848
|
IssuesEvent
|
2022-10-01 16:35:28
|
SUI-Components/sui-components
|
https://api.github.com/repos/SUI-Components/sui-components
|
closed
|
Skeleton DEMO - Remove items from the examples section
|
documentation hacktoberfest ★★☆☆☆☆ Medium
|
**In order to simplify the [Skeleton](https://sui-components.vercel.app/workbench/atom/skeleton/demo) demo, please adjust the following:**
- Add Skeleton to the image placeholder as well
- Make all the Skeletons the same width
- For ALL examples, make a square example of 200px * 200px
https://user-images.githubusercontent.com/23620759/135512158-5351f22c-d839-44cf-aa30-ec45a8ddb899.mov
- Remove 2 of the 3 examples, leave just the one with the toggle to enable and disable the animation
https://user-images.githubusercontent.com/23620759/135511742-1168e8d6-2e3b-4ece-8af3-834086ab1887.mov
|
1.0
|
Skeleton DEMO - Remove items from the examples section - **In order to simplify the [Skeleton](https://sui-components.vercel.app/workbench/atom/skeleton/demo) demo, please adjust the following:**
- Add Skeleton to the image placeholder as well
- Make all the Skeletons the same width
- For ALL examples, make a square example of 200px * 200px
https://user-images.githubusercontent.com/23620759/135512158-5351f22c-d839-44cf-aa30-ec45a8ddb899.mov
- Remove 2 of the 3 examples, leave just the one with the toggle to enable and disable the animation
https://user-images.githubusercontent.com/23620759/135511742-1168e8d6-2e3b-4ece-8af3-834086ab1887.mov
|
non_build
|
skeleton demo remove items from the examples section in order to simplify the demo please adjust the following add skeleton to the image placeholder as well make all the skeletons the same width for all examples make a square example of remove of the examples leave just the one with the toggle to enable and disable the animation
| 0
|
46,220
| 11,800,692,606
|
IssuesEvent
|
2020-03-18 18:00:22
|
nosqlbench/nosqlbench
|
https://api.github.com/repos/nosqlbench/nosqlbench
|
opened
|
Build guidebook strictly as as a secondary artifact
|
build
|
Presently, the guidebook application builds into the source tree as resource content.
This complicates commits after builds. We should try to move this content into an ephemeral artifact location like "target".
We can .gitignore the dev view and staging versions in the source tree, and stage the generated app in docsys/target/guidebook.
|
1.0
|
Build guidebook strictly as as a secondary artifact - Presently, the guidebook application builds into the source tree as resource content.
This complicates commits after builds. We should try to move this content into an ephemeral artifact location like "target".
We can .gitignore the dev view and staging versions in the source tree, and stage the generated app in docsys/target/guidebook.
|
build
|
build guidebook strictly as as a secondary artifact presently the guidebook application builds into the source tree as resource content this complicates commits after builds we should try to move this content into an ephemeral artifact location like target we can gitignore the dev view and staging versions in the source tree and stage the generated app in docsys target guidebook
| 1
|
590,006
| 17,768,626,644
|
IssuesEvent
|
2021-08-30 10:50:51
|
Adversarial-Deep-Learning/code-soup
|
https://api.github.com/repos/Adversarial-Deep-Learning/code-soup
|
opened
|
Tensor Board Logging
|
good first issue Priority:Low
|
We need to log the updates via tensor board, this will be an extension of the current logging #23
|
1.0
|
Tensor Board Logging - We need to log the updates via tensor board, this will be an extension of the current logging #23
|
non_build
|
tensor board logging we need to log the updates via tensor board this will be an extension of the current logging
| 0
|
311,156
| 23,373,431,784
|
IssuesEvent
|
2022-08-10 22:33:11
|
cloudflare/cloudflare-docs
|
https://api.github.com/repos/cloudflare/cloudflare-docs
|
closed
|
WARP client Windows - Managed deployment - disable updates
|
documentation Backlog content:edit
|
### Which Cloudflare product does this pertain to?
WARP Client
### Existing documentation URL(s)
https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/deployment/mdm-deployment/
### Section that requires update
Install WARP on Windows
### What needs to change?
In case there is any possibility to disable or delay automatic client updates, it should be described.
### How should it change?
Description how to disable or postpone automatic Client updates should be added.
### Additional information
Update of cloudflare Warp Client usually requires local admin permissions. Because of security reasons it is not advised to give all users Local admin rights.
User without admin rights gets a prompt from the app to update. However Installation cannot complete because of missing permissions. This results in another prompt.
In case of managed deployment, the updates in Windows environments are managed over tools like MECM. Of course, not all users get the update at the same time. Thus it can happen, that the prompt from app comes before the deployed update.
|
1.0
|
WARP client Windows - Managed deployment - disable updates - ### Which Cloudflare product does this pertain to?
WARP Client
### Existing documentation URL(s)
https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/deployment/mdm-deployment/
### Section that requires update
Install WARP on Windows
### What needs to change?
In case there is any possibility to disable or delay automatic client updates, it should be described.
### How should it change?
Description how to disable or postpone automatic Client updates should be added.
### Additional information
Update of cloudflare Warp Client usually requires local admin permissions. Because of security reasons it is not advised to give all users Local admin rights.
User without admin rights gets a prompt from the app to update. However Installation cannot complete because of missing permissions. This results in another prompt.
In case of managed deployment, the updates in Windows environments are managed over tools like MECM. Of course, not all users get the update at the same time. Thus it can happen, that the prompt from app comes before the deployed update.
|
non_build
|
warp client windows managed deployment disable updates which cloudflare product does this pertain to warp client existing documentation url s section that requires update install warp on windows what needs to change in case there is any possibility to disable or delay automatic client updates it should be described how should it change description how to disable or postpone automatic client updates should be added additional information update of cloudflare warp client usually requires local admin permissions because of security reasons it is not advised to give all users local admin rights user without admin rights gets a prompt from the app to update however installation cannot complete because of missing permissions this results in another prompt in case of managed deployment the updates in windows environments are managed over tools like mecm of course not all users get the update at the same time thus it can happen that the prompt from app comes before the deployed update
| 0
|
13,444
| 5,374,818,019
|
IssuesEvent
|
2017-02-23 01:49:47
|
Homebrew/homebrew-science
|
https://api.github.com/repos/Homebrew/homebrew-science
|
closed
|
madlib test failed against postgresql 9.5.2
|
build-error
|
With `postgresql` 9.5.2 installed, `brew test madlib` is failing, both for me locally (from source or bottle), and on Jenkins test-bot runs where `madlib` is tested as part of a `postgresql` formula change. Looks like it's causing https://github.com/Homebrew/legacy-homebrew/pull/49612 to fail in the test stage.
With the current bottle, `postgresql` changes [fail like this](http://bot.brew.sh/job/Legacy%20Homebrew%20Pull%20Requests/44099/) on 10.9 and 10.10. (I'm assuming the 10.11 is passing because its bottle was added after the recent postgresql update?)
```
==> launchctl load /usr/local/opt/postgresql/homebrew.mxcl.postgresql.plist
==> /usr/local/Cellar/postgresql/9.5.2/bin/createdb -w -U brew test_madpack
==> /usr/local/Cellar/madlib/1.8/bin/madpack -p postgres -c brew/@localhost/test_madpack install
madpack.py : INFO : Detected PostgreSQL version 9.5.
madpack.py : ERROR : This version is not among the PostgreSQL versions for which MADlib support files have been installed (9.4).
==> /usr/local/Cellar/postgresql/9.5.2/bin/dropdb -w -U brew test_madpack
```
When I do `brew install -s madlib` locally on 10.9.5, with a fresh `brew install postgresql --with-python`, I get a failure like this.
```
$ brew test madlib
Testing homebrew/science/madlib
==> Using the sandbox
==> /usr/local/Cellar/madlib/1.8/bin/madpack -h
==> launchctl load /usr/local/opt/postgresql/homebrew.mxcl.postgresql.plist
==> /usr/local/Cellar/postgresql/9.5.2/bin/createdb -w -U janke test_madpack
Last 15 lines from /Users/janke/Library/Logs/Homebrew/madlib/03.createdb:
2016-04-04 02:41:36 -0400
/usr/local/Cellar/postgresql/9.5.2/bin/createdb
-w
-U
janke
test_madpack
createdb: could not connect to database template1: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Error: homebrew/science/madlib: failed
Failed executing: /usr/local/Cellar/postgresql/9.5.2/bin/createdb -w -U janke test_madpack
/usr/local/Library/Homebrew/formula.rb:1481:in `block in system'
/usr/local/Library/Homebrew/formula.rb:1418:in `open'
/usr/local/Library/Homebrew/formula.rb:1418:in `system'
/usr/local/Library/Taps/homebrew/homebrew-science/madlib.rb:98:in `block in <class:Madlib>'
/usr/local/Library/Homebrew/formula.rb:1327:in `block in run_test'
/usr/local/Library/Homebrew/extend/fileutils.rb:37:in `mktemp'
/usr/local/Library/Homebrew/formula.rb:1323:in `run_test'
/usr/local/Library/Homebrew/test.rb:28:in `block in <main>'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/timeout.rb:66:in `timeout'
/usr/local/Library/Homebrew/test.rb:27:in `<main>'
```
I was trying to bump the bottle revision for `madlib` [like here](https://github.com/Homebrew/homebrew-science/commit/c0987ce70c322a12e7643bef06b2bfe9f22ab4b2) because I thought a rebuild against the current `postgresql` would fix it. But I'm getting that other test failure when installing it locally from source so the bottle's not involved.
Anyone know what's going on here?
|
1.0
|
madlib test failed against postgresql 9.5.2 - With `postgresql` 9.5.2 installed, `brew test madlib` is failing, both for me locally (from source or bottle), and on Jenkins test-bot runs where `madlib` is tested as part of a `postgresql` formula change. Looks like it's causing https://github.com/Homebrew/legacy-homebrew/pull/49612 to fail in the test stage.
With the current bottle, `postgresql` changes [fail like this](http://bot.brew.sh/job/Legacy%20Homebrew%20Pull%20Requests/44099/) on 10.9 and 10.10. (I'm assuming the 10.11 is passing because its bottle was added after the recent postgresql update?)
```
==> launchctl load /usr/local/opt/postgresql/homebrew.mxcl.postgresql.plist
==> /usr/local/Cellar/postgresql/9.5.2/bin/createdb -w -U brew test_madpack
==> /usr/local/Cellar/madlib/1.8/bin/madpack -p postgres -c brew/@localhost/test_madpack install
madpack.py : INFO : Detected PostgreSQL version 9.5.
madpack.py : ERROR : This version is not among the PostgreSQL versions for which MADlib support files have been installed (9.4).
==> /usr/local/Cellar/postgresql/9.5.2/bin/dropdb -w -U brew test_madpack
```
When I do `brew install -s madlib` locally on 10.9.5, with a fresh `brew install postgresql --with-python`, I get a failure like this.
```
$ brew test madlib
Testing homebrew/science/madlib
==> Using the sandbox
==> /usr/local/Cellar/madlib/1.8/bin/madpack -h
==> launchctl load /usr/local/opt/postgresql/homebrew.mxcl.postgresql.plist
==> /usr/local/Cellar/postgresql/9.5.2/bin/createdb -w -U janke test_madpack
Last 15 lines from /Users/janke/Library/Logs/Homebrew/madlib/03.createdb:
2016-04-04 02:41:36 -0400
/usr/local/Cellar/postgresql/9.5.2/bin/createdb
-w
-U
janke
test_madpack
createdb: could not connect to database template1: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Error: homebrew/science/madlib: failed
Failed executing: /usr/local/Cellar/postgresql/9.5.2/bin/createdb -w -U janke test_madpack
/usr/local/Library/Homebrew/formula.rb:1481:in `block in system'
/usr/local/Library/Homebrew/formula.rb:1418:in `open'
/usr/local/Library/Homebrew/formula.rb:1418:in `system'
/usr/local/Library/Taps/homebrew/homebrew-science/madlib.rb:98:in `block in <class:Madlib>'
/usr/local/Library/Homebrew/formula.rb:1327:in `block in run_test'
/usr/local/Library/Homebrew/extend/fileutils.rb:37:in `mktemp'
/usr/local/Library/Homebrew/formula.rb:1323:in `run_test'
/usr/local/Library/Homebrew/test.rb:28:in `block in <main>'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/timeout.rb:66:in `timeout'
/usr/local/Library/Homebrew/test.rb:27:in `<main>'
```
I was trying to bump the bottle revision for `madlib` [like here](https://github.com/Homebrew/homebrew-science/commit/c0987ce70c322a12e7643bef06b2bfe9f22ab4b2) because I thought a rebuild against the current `postgresql` would fix it. But I'm getting that other test failure when installing it locally from source so the bottle's not involved.
Anyone know what's going on here?
|
build
|
madlib test failed against postgresql with postgresql installed brew test madlib is failing both for me locally from source or bottle and on jenkins test bot runs where madlib is tested as part of a postgresql formula change looks like it s causing to fail in the test stage with the current bottle postgresql changes on and i m assuming the is passing because its bottle was added after the recent postgresql update launchctl load usr local opt postgresql homebrew mxcl postgresql plist usr local cellar postgresql bin createdb w u brew test madpack usr local cellar madlib bin madpack p postgres c brew localhost test madpack install madpack py info detected postgresql version madpack py error this version is not among the postgresql versions for which madlib support files have been installed usr local cellar postgresql bin dropdb w u brew test madpack when i do brew install s madlib locally on with a fresh brew install postgresql with python i get a failure like this brew test madlib testing homebrew science madlib using the sandbox usr local cellar madlib bin madpack h launchctl load usr local opt postgresql homebrew mxcl postgresql plist usr local cellar postgresql bin createdb w u janke test madpack last lines from users janke library logs homebrew madlib createdb usr local cellar postgresql bin createdb w u janke test madpack createdb could not connect to database could not connect to server no such file or directory is the server running locally and accepting connections on unix domain socket tmp s pgsql error homebrew science madlib failed failed executing usr local cellar postgresql bin createdb w u janke test madpack usr local library homebrew formula rb in block in system usr local library homebrew formula rb in open usr local library homebrew formula rb in system usr local library taps homebrew homebrew science madlib rb in block in usr local library homebrew formula rb in block in run test usr local library homebrew extend fileutils rb in mktemp usr local library homebrew formula rb in run test usr local library homebrew test rb in block in system library frameworks ruby framework versions usr lib ruby timeout rb in timeout usr local library homebrew test rb in i was trying to bump the bottle revision for madlib because i thought a rebuild against the current postgresql would fix it but i m getting that other test failure when installing it locally from source so the bottle s not involved anyone know what s going on here
| 1
|
83,085
| 16,088,095,822
|
IssuesEvent
|
2021-04-26 13:42:55
|
gradle/gradle
|
https://api.github.com/repos/gradle/gradle
|
opened
|
Deprecate consumption of code quality + antlr plugin configurations
|
@core in:antlr-plugin in:checkstyle-plugin in:codenarc-plugin in:jacoco-plugin in:pmd-plugin
|
Code quality plugins (and antlr plugin) declare a configuration for dependencies of the specific tool. Those configurations are by default both consumable and resolvable. They are only meant to be resolved by the project applying a plugin and should not be consumed by other projects.
Adding attributes on such configurations (related: https://github.com/gradle/gradle/issues/13736) will cause ambiguities in resolution: https://github.com/gradle/gradle/pull/16969
The said configurations should only be resolved in the declaring project and made unavailable for consumption. This will be a breaking change and thus consumption of those configurations should first be deprecated for removal in Gradle 8.0.
|
1.0
|
Deprecate consumption of code quality + antlr plugin configurations - Code quality plugins (and antlr plugin) declare a configuration for dependencies of the specific tool. Those configurations are by default both consumable and resolvable. They are only meant to be resolved by the project applying a plugin and should not be consumed by other projects.
Adding attributes on such configurations (related: https://github.com/gradle/gradle/issues/13736) will cause ambiguities in resolution: https://github.com/gradle/gradle/pull/16969
The said configurations should only be resolved in the declaring project and made unavailable for consumption. This will be a breaking change and thus consumption of those configurations should first be deprecated for removal in Gradle 8.0.
|
non_build
|
deprecate consumption of code quality antlr plugin configurations code quality plugins and antlr plugin declare a configuration for dependencies of the specific tool those configurations are by default both consumable and resolvable they are only meant to be resolved by the project applying a plugin and should not be consumed by other projects adding attributes on such configurations related will cause ambiguities in resolution the said configurations should only be resolved in the declaring project and made unavailable for consumption this will be a breaking change and thus consumption of those configurations should first be deprecated for removal in gradle
| 0
|
549,358
| 16,091,279,512
|
IssuesEvent
|
2021-04-26 17:00:17
|
fossasia/open-event-frontend
|
https://api.github.com/repos/fossasia/open-event-frontend
|
closed
|
Wizard Step 1: Dropdown event types etc. are not translatable
|
Priority: High bug enhancement
|
The dropdown menus on the top of the event wizard step 1 are not translatable.

|
1.0
|
Wizard Step 1: Dropdown event types etc. are not translatable - The dropdown menus on the top of the event wizard step 1 are not translatable.

|
non_build
|
wizard step dropdown event types etc are not translatable the dropdown menus on the top of the event wizard step are not translatable
| 0
|
93,607
| 3,906,348,037
|
IssuesEvent
|
2016-04-19 08:28:51
|
OCHA-DAP/liverpool16
|
https://api.github.com/repos/OCHA-DAP/liverpool16
|
closed
|
Allowing each config layer to also draw a chart
|
enhancement Medium Priority
|
Basically, it should be possible to have more than 1 chart.
|
1.0
|
Allowing each config layer to also draw a chart - Basically, it should be possible to have more than 1 chart.
|
non_build
|
allowing each config layer to also draw a chart basically it should be possible to have more than chart
| 0
|
14,175
| 24,582,919,394
|
IssuesEvent
|
2022-10-13 17:04:10
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
opened
|
Want to update git-subtree managed directories
|
type:feature status:requirements priority-5-triage
|
### What would you like Renovate to be able to do?
Want to update [git-subtree](https://www.atlassian.com/git/tutorials/git-subtree) managed directories
### If you have any ideas on how this should be implemented, please tell us here.
N/A
### Is this a feature you are interested in implementing yourself?
No
|
1.0
|
Want to update git-subtree managed directories - ### What would you like Renovate to be able to do?
Want to update [git-subtree](https://www.atlassian.com/git/tutorials/git-subtree) managed directories
### If you have any ideas on how this should be implemented, please tell us here.
N/A
### Is this a feature you are interested in implementing yourself?
No
|
non_build
|
want to update git subtree managed directories what would you like renovate to be able to do want to update managed directories if you have any ideas on how this should be implemented please tell us here n a is this a feature you are interested in implementing yourself no
| 0
|
13,021
| 2,732,875,397
|
IssuesEvent
|
2015-04-17 09:54:47
|
tiku01/oryx-editor
|
https://api.github.com/repos/tiku01/oryx-editor
|
closed
|
canConnect in stencilsSets disabled
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. try to add a canConnect function in the rules of a stencilSets (as described
in ORYX-SSS Manual : http://oryx-editor.googlecode.com/files/OryxSSS.pdf)
What is the expected output?
the function canConnect should be executed
What do you see instead?
the function canConnect is not executed
Please use labels and text to provide additional information.
you will find here a hack that temporary bug resolution :
https://gist.github.com/1080003
```
Original issue reported on code.google.com by `florent....@gmail.com` on 13 Jul 2011 at 9:43
|
1.0
|
canConnect in stencilsSets disabled - ```
What steps will reproduce the problem?
1. try to add a canConnect function in the rules of a stencilSets (as described
in ORYX-SSS Manual : http://oryx-editor.googlecode.com/files/OryxSSS.pdf)
What is the expected output?
the function canConnect should be executed
What do you see instead?
the function canConnect is not executed
Please use labels and text to provide additional information.
you will find here a hack that temporary bug resolution :
https://gist.github.com/1080003
```
Original issue reported on code.google.com by `florent....@gmail.com` on 13 Jul 2011 at 9:43
|
non_build
|
canconnect in stencilssets disabled what steps will reproduce the problem try to add a canconnect function in the rules of a stencilsets as described in oryx sss manual what is the expected output the function canconnect should be executed what do you see instead the function canconnect is not executed please use labels and text to provide additional information you will find here a hack that temporary bug resolution original issue reported on code google com by florent gmail com on jul at
| 0
|
506,009
| 14,656,485,998
|
IssuesEvent
|
2020-12-28 13:31:56
|
danieleteti/delphimvcframework
|
https://api.github.com/repos/danieleteti/delphimvcframework
|
closed
|
Alternative server components
|
Priority-Medium enhancement
|
Is it possible to use any other server components? For example for realtime applications is now better to use websockets.
Is it anough to implement interfaces in MVCFramework.Server.pas to use some commercial websocket server components eg. from esegece?
|
1.0
|
Alternative server components - Is it possible to use any other server components? For example for realtime applications is now better to use websockets.
Is it anough to implement interfaces in MVCFramework.Server.pas to use some commercial websocket server components eg. from esegece?
|
non_build
|
alternative server components is it possible to use any other server components for example for realtime applications is now better to use websockets is it anough to implement interfaces in mvcframework server pas to use some commercial websocket server components eg from esegece
| 0
|
62,622
| 12,227,942,017
|
IssuesEvent
|
2020-05-03 17:17:39
|
RonAsis/Wsep202
|
https://api.github.com/repos/RonAsis/Wsep202
|
opened
|
"Owned store" 's menu
|
High priority code+test
|
When in "owned store" in the client, when pressing on an owned store, make sure that the owner's menu shows.
|
1.0
|
"Owned store" 's menu - When in "owned store" in the client, when pressing on an owned store, make sure that the owner's menu shows.
|
non_build
|
owned store s menu when in owned store in the client when pressing on an owned store make sure that the owner s menu shows
| 0
|
93,447
| 26,957,445,581
|
IssuesEvent
|
2023-02-08 15:48:21
|
rubyforgood/casa
|
https://api.github.com/repos/rubyforgood/casa
|
closed
|
add test for CaseCourtReportContext in the case that all fields are populated
|
Help Wanted not-ready-to-build
|
**Description**
add tests for CaseCourtReportContext in the case that all fields are populated
Add test/s to fill in test data for these fields and test them in file `spec/models/case_court_report_context_spec.rb`
```
expect(context[:case_contacts]).to eq([]) # TODO test this
expect(context[:case_court_orders].length).to eq(4) # TODO test this better
expect(context[:case_mandates].length).to eq(4) # TODO test this better
expect(context[:latest_hearing_date]).to eq("___<LATEST HEARING DATE>____")
expect(context[:org_address]).to eq(nil) # TODO test this better
expect(context[:volunteer]).to eq(nil) # TODO test this better
```
### Questions? Join Slack!
We highly recommend that you join us in slack https://rubyforgood.herokuapp.com/ #casa channel to ask questions quickly and hear about office hours (currently Tuesday 6-8pm Pacific), stakeholder news, and upcoming new issues.
|
1.0
|
add test for CaseCourtReportContext in the case that all fields are populated - **Description**
add tests for CaseCourtReportContext in the case that all fields are populated
Add test/s to fill in test data for these fields and test them in file `spec/models/case_court_report_context_spec.rb`
```
expect(context[:case_contacts]).to eq([]) # TODO test this
expect(context[:case_court_orders].length).to eq(4) # TODO test this better
expect(context[:case_mandates].length).to eq(4) # TODO test this better
expect(context[:latest_hearing_date]).to eq("___<LATEST HEARING DATE>____")
expect(context[:org_address]).to eq(nil) # TODO test this better
expect(context[:volunteer]).to eq(nil) # TODO test this better
```
### Questions? Join Slack!
We highly recommend that you join us in slack https://rubyforgood.herokuapp.com/ #casa channel to ask questions quickly and hear about office hours (currently Tuesday 6-8pm Pacific), stakeholder news, and upcoming new issues.
|
build
|
add test for casecourtreportcontext in the case that all fields are populated description add tests for casecourtreportcontext in the case that all fields are populated add test s to fill in test data for these fields and test them in file spec models case court report context spec rb expect context to eq todo test this expect context length to eq todo test this better expect context length to eq todo test this better expect context to eq expect context to eq nil todo test this better expect context to eq nil todo test this better questions join slack we highly recommend that you join us in slack casa channel to ask questions quickly and hear about office hours currently tuesday pacific stakeholder news and upcoming new issues
| 1
|
30,456
| 8,551,963,302
|
IssuesEvent
|
2018-11-07 19:35:56
|
general-language-syntax/GLS
|
https://api.github.com/repos/general-language-syntax/GLS
|
closed
|
Add an "--accept" equivalent for integration and end-to-end tests
|
build tooling testing utils
|
It can be annoying to update many integration and end-to-end test data files at once. If we know our change is right, or just want to see how it would affect the many test cases, it'd be nice to have a utility to override the `.cs`, `.java`, `.js`, etc. test files.
|
1.0
|
Add an "--accept" equivalent for integration and end-to-end tests - It can be annoying to update many integration and end-to-end test data files at once. If we know our change is right, or just want to see how it would affect the many test cases, it'd be nice to have a utility to override the `.cs`, `.java`, `.js`, etc. test files.
|
build
|
add an accept equivalent for integration and end to end tests it can be annoying to update many integration and end to end test data files at once if we know our change is right or just want to see how it would affect the many test cases it d be nice to have a utility to override the cs java js etc test files
| 1
|
114,745
| 14,630,736,353
|
IssuesEvent
|
2020-12-23 18:20:34
|
ufersa/plataforma-sabia
|
https://api.github.com/repos/ufersa/plataforma-sabia
|
closed
|
Funcionalidade de Banco de Editais
|
API UX/Design
|
## Feature Description
<!-- Descreva claramente o que será implementado -->
Do mesmo molde do Banco de Ideias o banco de editais será possível visualizar os variados editais que são publicados por meio de outras instituições. Queremos dar a oportunidade para os nossos usuários fiquem sabendo quais editais vigentes eles podem consultar e participar.
Essa funcionalidade terá:
**1 - LISTA DE EDITAIS**
- Listar todos os editais cadastrados em formato de card (similar ao banco de ideias)
- O conteúdo dessa lista será:
- Organização:
- Número do Edital:
- Título:
- Descritivo:
- Público-Alvo:
- Palavras-chave:
- Recursos financeiros:
- Período de inscrição:
- Observação:
- Link para a Página do edital: (abrir em uma nova janela)
- Somente serão publicos os registros com status PUBLISHED
- Mesmo o edital com data expirada continua visível nessa lista
**2 - REGISTRAR UM EDITAL**
- Qualquer usuário cadastrado poderá registrar um edital.
- Os campos serão:
- Organização (Dropdown):
- Número do Edital: (campo texto)
- Título: (campo texto)
- Descritivo: (textarea)
- Público-Alvo: (Lista de publico-alvo das taxonomias existentes)
- Palavras-chave: (Lista de palavras-chave das taxonomias existentes)
- Recursos financeiros: (Opcional) - campo de valor monetário
- Período de inscrição: (data de inicio e data de fim)
- Observação: (textarea)
- Link para a Página do edital: (URL)
- Ao cadastrar o registro terá um status de pendente
- Somente o ADMIN muda o status para PUBLISHED
- O ADMIN poderá cancelar o registro.
---------------
## Acceptance criteria
* <!-- Um ou mais pontos que descrevam os critérios de aceitação.-->
## Implementation Brief
* <!-- Um ou mais pontos que descrevem tecnicamente como implementar essa feature. Quanto mais detalhes melhor. -->
|
1.0
|
Funcionalidade de Banco de Editais - ## Feature Description
<!-- Descreva claramente o que será implementado -->
Do mesmo molde do Banco de Ideias o banco de editais será possível visualizar os variados editais que são publicados por meio de outras instituições. Queremos dar a oportunidade para os nossos usuários fiquem sabendo quais editais vigentes eles podem consultar e participar.
Essa funcionalidade terá:
**1 - LISTA DE EDITAIS**
- Listar todos os editais cadastrados em formato de card (similar ao banco de ideias)
- O conteúdo dessa lista será:
- Organização:
- Número do Edital:
- Título:
- Descritivo:
- Público-Alvo:
- Palavras-chave:
- Recursos financeiros:
- Período de inscrição:
- Observação:
- Link para a Página do edital: (abrir em uma nova janela)
- Somente serão publicos os registros com status PUBLISHED
- Mesmo o edital com data expirada continua visível nessa lista
**2 - REGISTRAR UM EDITAL**
- Qualquer usuário cadastrado poderá registrar um edital.
- Os campos serão:
- Organização (Dropdown):
- Número do Edital: (campo texto)
- Título: (campo texto)
- Descritivo: (textarea)
- Público-Alvo: (Lista de publico-alvo das taxonomias existentes)
- Palavras-chave: (Lista de palavras-chave das taxonomias existentes)
- Recursos financeiros: (Opcional) - campo de valor monetário
- Período de inscrição: (data de inicio e data de fim)
- Observação: (textarea)
- Link para a Página do edital: (URL)
- Ao cadastrar o registro terá um status de pendente
- Somente o ADMIN muda o status para PUBLISHED
- O ADMIN poderá cancelar o registro.
---------------
## Acceptance criteria
* <!-- Um ou mais pontos que descrevam os critérios de aceitação.-->
## Implementation Brief
* <!-- Um ou mais pontos que descrevem tecnicamente como implementar essa feature. Quanto mais detalhes melhor. -->
|
non_build
|
funcionalidade de banco de editais feature description do mesmo molde do banco de ideias o banco de editais será possível visualizar os variados editais que são publicados por meio de outras instituições queremos dar a oportunidade para os nossos usuários fiquem sabendo quais editais vigentes eles podem consultar e participar essa funcionalidade terá lista de editais listar todos os editais cadastrados em formato de card similar ao banco de ideias o conteúdo dessa lista será organização número do edital título descritivo público alvo palavras chave recursos financeiros período de inscrição observação link para a página do edital abrir em uma nova janela somente serão publicos os registros com status published mesmo o edital com data expirada continua visível nessa lista registrar um edital qualquer usuário cadastrado poderá registrar um edital os campos serão organização dropdown número do edital campo texto título campo texto descritivo textarea público alvo lista de publico alvo das taxonomias existentes palavras chave lista de palavras chave das taxonomias existentes recursos financeiros opcional campo de valor monetário período de inscrição data de inicio e data de fim observação textarea link para a página do edital url ao cadastrar o registro terá um status de pendente somente o admin muda o status para published o admin poderá cancelar o registro acceptance criteria implementation brief
| 0
|
775,654
| 27,234,942,148
|
IssuesEvent
|
2023-02-21 15:41:37
|
ascheid/itsg33-pbmm-issue-gen
|
https://api.github.com/repos/ascheid/itsg33-pbmm-issue-gen
|
closed
|
AC-9 PREVIOUS LOGON (ACCESS) NOTIFICATION
|
Priority: P2
|
PREVIOUS LOGON NOTIFICATION | SUCCESSFUL / UNSUCCESSFUL LOGONS
The information system notifies the user of the number of [Selection: successful logons/accesses; unsuccessful logon/access attempts; both] during [Assignment: organization-defined time period].
|
1.0
|
AC-9 PREVIOUS LOGON (ACCESS) NOTIFICATION - PREVIOUS LOGON NOTIFICATION | SUCCESSFUL / UNSUCCESSFUL LOGONS
The information system notifies the user of the number of [Selection: successful logons/accesses; unsuccessful logon/access attempts; both] during [Assignment: organization-defined time period].
|
non_build
|
ac previous logon access notification previous logon notification successful unsuccessful logons the information system notifies the user of the number of during
| 0
|
147,085
| 11,771,140,627
|
IssuesEvent
|
2020-03-15 22:30:01
|
Gregeg/Bezier-Curves-Processing
|
https://api.github.com/repos/Gregeg/Bezier-Curves-Processing
|
opened
|
Robot rotates correctly from bezier program
|
test
|
run a test to see if the robot rotates as instructed by the program
|
1.0
|
Robot rotates correctly from bezier program - run a test to see if the robot rotates as instructed by the program
|
non_build
|
robot rotates correctly from bezier program run a test to see if the robot rotates as instructed by the program
| 0
|
642,108
| 20,867,551,739
|
IssuesEvent
|
2022-03-22 08:53:19
|
googleapis/java-spanner
|
https://api.github.com/repos/googleapis/java-spanner
|
closed
|
spanner.it.ITDatabaseAdminDialectAwareTest: testCreateDatabaseWithDialect[dialect = POSTGRESQL] failed
|
type: bug priority: p1 api: spanner flakybot: issue
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 2b54949ec5082f1aab4b3b5b46bf0bef94f73d9e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/b3b15dbc-a52f-4e43-a3a1-2982135e348f), [Sponge](http://sponge2/b3b15dbc-a52f-4e43-a3a1-2982135e348f)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Unable to create database 'testdb_1985482779_0002' because the instance 'projects/gcloud-devel/instances/spanner-testing-east1' has already reached the maximum database limit (100). Please delete a database in the instance and try again, or choose a different instance.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:439)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:100)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:73)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:133)
at com.google.cloud.spanner.it.ITDatabaseAdminDialectAwareTest.testCreateDatabaseWithDialect(ITDatabaseAdminDialectAwareTest.java:105)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Unable to create database 'testdb_1985482779_0002' because the instance 'projects/gcloud-devel/instances/spanner-testing-east1' has already reached the maximum database limit (100). Please delete a database in the instance and try again, or choose a different instance.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:284)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:61)
at com.google.cloud.spanner.SpannerExceptionFactory.fromApiException(SpannerExceptionFactory.java:299)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:174)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:110)
at com.google.cloud.spanner.DatabaseAdminClientImpl.lambda$createDatabase$5(DatabaseAdminClientImpl.java:310)
at com.google.api.core.ApiFutures$ApiFunctionToGuavaFunction.apply(ApiFutures.java:240)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:234)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:222)
at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:133)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:761)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.addListener(FluentFuture.java:115)
at com.google.common.util.concurrent.ForwardingListenableFuture.addListener(ForwardingListenableFuture.java:47)
at com.google.api.core.ApiFutureToListenableFuture.addListener(ApiFutureToListenableFuture.java:52)
at com.google.common.util.concurrent.AbstractCatchingFuture.create(AbstractCatchingFuture.java:46)
at com.google.common.util.concurrent.Futures.catching(Futures.java:305)
at com.google.api.core.ApiFutures.catching(ApiFutures.java:99)
at com.google.api.gax.longrunning.OperationFutureImpl.<init>(OperationFutureImpl.java:97)
at com.google.cloud.spanner.DatabaseAdminClientImpl.createDatabase(DatabaseAdminClientImpl.java:308)
at com.google.cloud.spanner.it.ITDatabaseAdminDialectAwareTest.testCreateDatabaseWithDialect(ITDatabaseAdminDialectAwareTest.java:104)
... 51 more
Caused by: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Unable to create database 'testdb_1985482779_0002' because the instance 'projects/gcloud-devel/instances/spanner-testing-east1' has already reached the maximum database limit (100). Please delete a database in the instance and try again, or choose a different instance.
at io.grpc.Status.asRuntimeException(Status.java:535)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:534)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:455)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
</pre></details>
|
1.0
|
spanner.it.ITDatabaseAdminDialectAwareTest: testCreateDatabaseWithDialect[dialect = POSTGRESQL] failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 2b54949ec5082f1aab4b3b5b46bf0bef94f73d9e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/b3b15dbc-a52f-4e43-a3a1-2982135e348f), [Sponge](http://sponge2/b3b15dbc-a52f-4e43-a3a1-2982135e348f)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Unable to create database 'testdb_1985482779_0002' because the instance 'projects/gcloud-devel/instances/spanner-testing-east1' has already reached the maximum database limit (100). Please delete a database in the instance and try again, or choose a different instance.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:439)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:100)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:73)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:133)
at com.google.cloud.spanner.it.ITDatabaseAdminDialectAwareTest.testCreateDatabaseWithDialect(ITDatabaseAdminDialectAwareTest.java:105)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: com.google.cloud.spanner.SpannerException: RESOURCE_EXHAUSTED: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Unable to create database 'testdb_1985482779_0002' because the instance 'projects/gcloud-devel/instances/spanner-testing-east1' has already reached the maximum database limit (100). Please delete a database in the instance and try again, or choose a different instance.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:284)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:61)
at com.google.cloud.spanner.SpannerExceptionFactory.fromApiException(SpannerExceptionFactory.java:299)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:174)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:110)
at com.google.cloud.spanner.DatabaseAdminClientImpl.lambda$createDatabase$5(DatabaseAdminClientImpl.java:310)
at com.google.api.core.ApiFutures$ApiFunctionToGuavaFunction.apply(ApiFutures.java:240)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:234)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:222)
at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:133)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:761)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.addListener(FluentFuture.java:115)
at com.google.common.util.concurrent.ForwardingListenableFuture.addListener(ForwardingListenableFuture.java:47)
at com.google.api.core.ApiFutureToListenableFuture.addListener(ApiFutureToListenableFuture.java:52)
at com.google.common.util.concurrent.AbstractCatchingFuture.create(AbstractCatchingFuture.java:46)
at com.google.common.util.concurrent.Futures.catching(Futures.java:305)
at com.google.api.core.ApiFutures.catching(ApiFutures.java:99)
at com.google.api.gax.longrunning.OperationFutureImpl.<init>(OperationFutureImpl.java:97)
at com.google.cloud.spanner.DatabaseAdminClientImpl.createDatabase(DatabaseAdminClientImpl.java:308)
at com.google.cloud.spanner.it.ITDatabaseAdminDialectAwareTest.testCreateDatabaseWithDialect(ITDatabaseAdminDialectAwareTest.java:104)
... 51 more
Caused by: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Unable to create database 'testdb_1985482779_0002' because the instance 'projects/gcloud-devel/instances/spanner-testing-east1' has already reached the maximum database limit (100). Please delete a database in the instance and try again, or choose a different instance.
at io.grpc.Status.asRuntimeException(Status.java:535)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:534)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:455)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
</pre></details>
|
non_build
|
spanner it itdatabaseadmindialectawaretest testcreatedatabasewithdialect failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output java util concurrent executionexception com google cloud spanner spannerexception resource exhausted io grpc statusruntimeexception resource exhausted unable to create database testdb because the instance projects gcloud devel instances spanner testing has already reached the maximum database limit please delete a database in the instance and try again or choose a different instance at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at com google cloud spanner it itdatabaseadmindialectawaretest testcreatedatabasewithdialect itdatabaseadmindialectawaretest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit rules externalresource evaluate externalresource java at org junit rules runrules evaluate runrules java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executelazy junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by com google cloud spanner spannerexception resource exhausted io grpc statusruntimeexception resource exhausted unable to create database testdb because the instance projects gcloud devel instances spanner testing has already reached the maximum database limit please delete a database in the instance and try again or choose a different instance at com google cloud spanner spannerexceptionfactory newspannerexceptionpreformatted spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory fromapiexception spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner spannerexceptionfactory newspannerexception spannerexceptionfactory java at com google cloud spanner databaseadminclientimpl lambda createdatabase databaseadminclientimpl java at com google api core apifutures apifunctiontoguavafunction apply apifutures java at com google common util concurrent abstractcatchingfuture catchingfuture dofallback abstractcatchingfuture java at com google common util concurrent abstractcatchingfuture catchingfuture dofallback abstractcatchingfuture java at com google common util concurrent abstractcatchingfuture run abstractcatchingfuture java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture addlistener abstractfuture java at com google common util concurrent fluentfuture trustedfuture addlistener fluentfuture java at com google common util concurrent forwardinglistenablefuture addlistener forwardinglistenablefuture java at com google api core apifuturetolistenablefuture addlistener apifuturetolistenablefuture java at com google common util concurrent abstractcatchingfuture create abstractcatchingfuture java at com google common util concurrent futures catching futures java at com google api core apifutures catching apifutures java at com google api gax longrunning operationfutureimpl operationfutureimpl java at com google cloud spanner databaseadminclientimpl createdatabase databaseadminclientimpl java at com google cloud spanner it itdatabaseadmindialectawaretest testcreatedatabasewithdialect itdatabaseadmindialectawaretest java more caused by io grpc statusruntimeexception resource exhausted unable to create database testdb because the instance projects gcloud devel instances spanner testing has already reached the maximum database limit please delete a database in the instance and try again or choose a different instance at io grpc status asruntimeexception status java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at com google api gax grpc channelpool releasingclientcall onclose channelpool java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at com google cloud spanner spi spannererrorinterceptor onclose spannererrorinterceptor java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java base java util concurrent executors runnableadapter call executors java at java base java util concurrent futuretask run futuretask java at java base java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java
| 0
|
1,989
| 2,869,084,920
|
IssuesEvent
|
2015-06-05 23:12:41
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
be more robust when resolution is not complete
|
Area-Pkg Pkg-PolymerBuild PolymerMilestone-Next Priority-High Triaged Type-Defect
|
For example, if people have broken code that doesn't resolve, smoke/recorder.dart might not be able to copy correctly the annotation. Right now this gives a terrible error message.
|
1.0
|
be more robust when resolution is not complete - For example, if people have broken code that doesn't resolve, smoke/recorder.dart might not be able to copy correctly the annotation. Right now this gives a terrible error message.
|
build
|
be more robust when resolution is not complete for example if people have broken code that doesn t resolve smoke recorder dart might not be able to copy correctly the annotation right now this gives a terrible error message
| 1
|
461,662
| 13,233,991,143
|
IssuesEvent
|
2020-08-18 15:36:04
|
status-im/status-react
|
https://api.github.com/repos/status-im/status-react
|
closed
|
Status is crashing on login when Passcode is turned off on device, but "Save password" is turned on in app
|
bug high-priority ios
|
# Bug Report
## Problem
Crash on login to Status app when Passcode is turned off on device, but before this "Save password" with biometric authentication was enabled.
Reproducible on phones with Touch and Face IDs
NOTE: after enabling Passcode back can login in the app
#### Expected behavior
Can login
#### Actual behavior
Crash
### Reproduction
- Open Status
- Create account and login
- Turn on "Save password", enable biometric authentication
- Relogin
- On your device: go to Settings > Touch ID and Passcode > select "Turn passcode off"
- open Status and try to login
### Additional Information
- Status version: release 1.5
- Operating System: iOS 13
#### Logs
```
Date/Time: 2020-08-11 12:41:42 +0200
End time: 2020-08-11 12:42:38 +0200
OS Version: iPhone OS 13.5.1 (Build 17F80)
Architecture: arm64e
Report Version: 29
Incident Identifier: 2A035BE8-6460-4DB1-A754-B85A5732F6FD
Data Source: Microstackshots
Shared Cache: 0xd1d0000 E214C012-579E-3370-BCAC-0DDC4817369B
Command: StatusIm
Path: /private/var/containers/Bundle/Application/6F9D88CE-490F-4326-AB58-968BCB94FF57/StatusIm.app/StatusIm
Identifier: im.status.ethereum
Version: 1.5.0 (20200811035931)
Beta Identifier: 8802AC27-316D-430A-95EF-A2456B0BB41B
PID: 572
Event: wakeups
Action taken: none
Wakeups: 45001 wakeups over the last 55 seconds (811 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit: 45000
Limit duration: 300s
Wakeups caused: 45001
Wakeups duration: 55s
Duration: 55.49s
Duration Sampled: 21.88s
Steps: 11
Hardware model: iPhone12,8
Active cpus: 6
Heaviest stack for the target process:
4 ??? (libsystem_pthread.dylib + 6396) [0x18d3418fc]
2 ??? (JavaScriptCore + 598596) [0x19c94c244]
2 ??? (JavaScriptCore + 593868) [0x19c94afcc]
2 ??? (JavaScriptCore + 594764) [0x19c94b34c]
2 ??? (JavaScriptCore + 593312) [0x19c94ada0]
2 ??? (libsystem_kernel.dylib + 166536) [0x18d424a88]
Powerstats for: StatusIm [572]
Bundle ID: im.status.ethereum
Adam ID: 0
Is first party: No
App version: 1.5.0
Build version: 20200811035931
Is Beta: No
Share with Devs: Yes
UUID: F735D99F-882E-378A-BC2C-4148BE5F26B0
Path: /private/var/containers/Bundle/Application/6F9D88CE-490F-4326-AB58-968BCB94FF57/StatusIm.app/StatusIm
Architecture: arm64
Footprint: 116.42 MB -> 135.44 MB (+19.02 MB)
Start time: 2020-08-11 12:42:15 +0200
End time: 2020-08-11 12:42:37 +0200
Num samples: 11 (100%)
CPU Time: 0.401s
Primary state: 4 samples Frontmost App, Non-Suppressed, Kernel mode, Effective Thread QoS Default, Requested Thread QoS Default, Override Thread QoS Unspecified
User Activity: 0 samples Idle, 11 samples Active
Power Source: 11 samples on Battery, 0 samples on AC
4 ??? (libsystem_pthread.dylib + 6396) [0x18d3418fc]
2 ??? (JavaScriptCore + 598596) [0x19c94c244]
2 ??? (JavaScriptCore + 593868) [0x19c94afcc]
2 ??? (JavaScriptCore + 594764) [0x19c94b34c]
2 ??? (JavaScriptCore + 593312) [0x19c94ada0]
2 ??? (libsystem_kernel.dylib + 166536) [0x18d424a88]
1 <User mode, Effective Thread QoS User Initiated, Requested Thread QoS User Initiated>
1 <Effective Thread QoS User Initiated, Requested Thread QoS User Initiated>
1 ??? (JavaScriptCore + 330644) [0x19c90ab94]
1 ??? (JavaScriptCore + 322160) [0x19c908a70]
1 ??? (JavaScriptCore + 48888) [0x19c8c5ef8]
1 ??? (JavaScriptCore + 217064) [0x19c8eefe8]
1 ??? (JavaScriptCore + 214120) [0x19c8ee468]
1 ??? (JavaScriptCore + 7865504) [0x19d03a4a0]
1 ??? (JavaScriptCore + 8005532) [0x19d05c79c]
1 ??? (JavaScriptCore + 8002876) [0x19d05bd3c]
1 ??? (JavaScriptCore + 8014260) [0x19d05e9b4]
1 ??? (JavaScriptCore + 4654492) [0x19cd2a59c]
1 ??? (JavaScriptCore + 4569844) [0x19cd15af4]
1 ??? (JavaScriptCore + 4569976) [0x19cd15b78]
1 <User mode, Effective Thread QoS User Initiated, Requested Thread QoS User Initiated>
1 ??? (Foundation + 1317904) [0x18da34c10]
1 ??? (StatusIm + 793520) [0x104b91bb0]
1 ??? (CoreFoundation + 686324) [0x18d5b18f4]
1 ??? (CoreFoundation + 689164) [0x18d5b240c]
1 ??? (CoreFoundation + 708716) [0x18d5b706c]
1 ??? (CoreFoundation + 710928) [0x18d5b7910]
1 ??? (StatusIm + 888508) [0x104ba8ebc]
1 ??? (StatusIm + 836748) [0x104b9c48c]
1 ??? (StatusIm + 1551660) [0x104c4ad2c]
1 ??? (StatusIm + 1592252) [0x104c54bbc]
1 ??? (StatusIm + 722316) [0x104b8058c]
1 ??? (StatusIm + 1600272) [0x104c56b10]
1 ??? (StatusIm + 1600688) [0x104c56cb0]
1 ??? (StatusIm + 1566368) [0x104c4e6a0]
1 ??? (JavaScriptCore + 3083576) [0x19cbaad38]
1 ??? (JavaScriptCore + 10812788) [0x19d309d74]
1 ??? (JavaScriptCore + 8691856) [0x19d104090]
1 ??? (JavaScriptCore + 2518816) [0x19cb20f20]
1 ??? (JavaScriptCore + 11455316) [0x19d3a6b54]
1 ??? (JavaScriptCore + 8691776) [0x19d104040]
1 ??? (JavaScriptCore + 2518420) [0x19cb20d94]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2657608) [0x19cb42d48]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2662976) [0x19cb44240]
1 ??? (JavaScriptCore + 11455316) [0x19d3a6b54]
1 ??? (JavaScriptCore + 8691776) [0x19d104040]
1 ??? (JavaScriptCore + 2518420) [0x19cb20d94]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2662976) [0x19cb44240]
1 ??? (JavaScriptCore + 11455316) [0x19d3a6b54]
1 ??? (JavaScriptCore + 8691776) [0x19d104040]
1 ??? (JavaScriptCore + 2518420) [0x19cb20d94]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2656380) [0x19cb4287c]
1 ??? (JavaScriptCore + 2663352) [0x19cb443b8]
1 ??? (JavaScriptCore + 12650564) [0x19d4ca844]
1 ??? (JavaScriptCore + 7318912) [0x19cfb4d80]
1 <User mode, Effective Thread QoS User Interactive, Requested Thread QoS User Interactive>
2 ??? (libsystem_kernel.dylib + 166096) [0x18d4248d0]
1 ??? (libsystem_kernel.dylib + 158656) [0x18d422bc0]
1 ??? (StatusIm + 3189988) [0x104ddace4]
1 ??? (libsystem_kernel.dylib + 158616) [0x18d422b98]
1 ??? (StatusIm + 3116160) [0x104dc8c80]
1 <User mode>
1 ??? (StatusIm + 3064744) [0x104dbc3a8]
1 <User mode>
1 ??? (StatusIm + 2839384) [0x104d85358]
1 <User mode>
Binary Images:
0x104ad0000 - ??? im.status.ethereum 1.5.0 (20200811035931) <F735D99F-882E-378A-BC2C-4148BE5F26B0> /private/var/containers/Bundle/Application/6F9D88CE-490F-4326-AB58-968BCB94FF57/StatusIm.app/StatusIm
0x18d340000 - 0x18d34afff libsystem_pthread.dylib <4CF76CD7-DC5B-37CF-83D7-46153D0D3962> /usr/lib/system/libsystem_pthread.dylib
0x18d3fc000 - 0x18d42bfff libsystem_kernel.dylib <42BDCD41-02A6-3529-A271-0E6402154A44> /usr/lib/system/libsystem_kernel.dylib
0x18d50a000 - 0x18d887fff CoreFoundation <AF42303F-57B6-3C11-8F18-8E80ABF7D886> /System/Library/Frameworks/CoreFoundation.framework/CoreFoundation
0x18d8f3000 - 0x18dbbdfff Foundation <19FAB59F-6527-3245-85BB-905FD4255CDE> /System/Library/Frameworks/Foundation.framework/Foundation
0x19c8ba000 - 0x19d7defff JavaScriptCore <03568D30-96EC-314B-9C89-7F0A73DA18C6> /System/Library/Frameworks/JavaScriptCore.framework/JavaScriptCore
```
|
1.0
|
Status is crashing on login when Passcode is turned off on device, but "Save password" is turned on in app - # Bug Report
## Problem
Crash on login to Status app when Passcode is turned off on device, but before this "Save password" with biometric authentication was enabled.
Reproducible on phones with Touch and Face IDs
NOTE: after enabling Passcode back can login in the app
#### Expected behavior
Can login
#### Actual behavior
Crash
### Reproduction
- Open Status
- Create account and login
- Turn on "Save password", enable biometric authentication
- Relogin
- On your device: go to Settings > Touch ID and Passcode > select "Turn passcode off"
- open Status and try to login
### Additional Information
- Status version: release 1.5
- Operating System: iOS 13
#### Logs
```
Date/Time: 2020-08-11 12:41:42 +0200
End time: 2020-08-11 12:42:38 +0200
OS Version: iPhone OS 13.5.1 (Build 17F80)
Architecture: arm64e
Report Version: 29
Incident Identifier: 2A035BE8-6460-4DB1-A754-B85A5732F6FD
Data Source: Microstackshots
Shared Cache: 0xd1d0000 E214C012-579E-3370-BCAC-0DDC4817369B
Command: StatusIm
Path: /private/var/containers/Bundle/Application/6F9D88CE-490F-4326-AB58-968BCB94FF57/StatusIm.app/StatusIm
Identifier: im.status.ethereum
Version: 1.5.0 (20200811035931)
Beta Identifier: 8802AC27-316D-430A-95EF-A2456B0BB41B
PID: 572
Event: wakeups
Action taken: none
Wakeups: 45001 wakeups over the last 55 seconds (811 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit: 45000
Limit duration: 300s
Wakeups caused: 45001
Wakeups duration: 55s
Duration: 55.49s
Duration Sampled: 21.88s
Steps: 11
Hardware model: iPhone12,8
Active cpus: 6
Heaviest stack for the target process:
4 ??? (libsystem_pthread.dylib + 6396) [0x18d3418fc]
2 ??? (JavaScriptCore + 598596) [0x19c94c244]
2 ??? (JavaScriptCore + 593868) [0x19c94afcc]
2 ??? (JavaScriptCore + 594764) [0x19c94b34c]
2 ??? (JavaScriptCore + 593312) [0x19c94ada0]
2 ??? (libsystem_kernel.dylib + 166536) [0x18d424a88]
Powerstats for: StatusIm [572]
Bundle ID: im.status.ethereum
Adam ID: 0
Is first party: No
App version: 1.5.0
Build version: 20200811035931
Is Beta: No
Share with Devs: Yes
UUID: F735D99F-882E-378A-BC2C-4148BE5F26B0
Path: /private/var/containers/Bundle/Application/6F9D88CE-490F-4326-AB58-968BCB94FF57/StatusIm.app/StatusIm
Architecture: arm64
Footprint: 116.42 MB -> 135.44 MB (+19.02 MB)
Start time: 2020-08-11 12:42:15 +0200
End time: 2020-08-11 12:42:37 +0200
Num samples: 11 (100%)
CPU Time: 0.401s
Primary state: 4 samples Frontmost App, Non-Suppressed, Kernel mode, Effective Thread QoS Default, Requested Thread QoS Default, Override Thread QoS Unspecified
User Activity: 0 samples Idle, 11 samples Active
Power Source: 11 samples on Battery, 0 samples on AC
4 ??? (libsystem_pthread.dylib + 6396) [0x18d3418fc]
2 ??? (JavaScriptCore + 598596) [0x19c94c244]
2 ??? (JavaScriptCore + 593868) [0x19c94afcc]
2 ??? (JavaScriptCore + 594764) [0x19c94b34c]
2 ??? (JavaScriptCore + 593312) [0x19c94ada0]
2 ??? (libsystem_kernel.dylib + 166536) [0x18d424a88]
1 <User mode, Effective Thread QoS User Initiated, Requested Thread QoS User Initiated>
1 <Effective Thread QoS User Initiated, Requested Thread QoS User Initiated>
1 ??? (JavaScriptCore + 330644) [0x19c90ab94]
1 ??? (JavaScriptCore + 322160) [0x19c908a70]
1 ??? (JavaScriptCore + 48888) [0x19c8c5ef8]
1 ??? (JavaScriptCore + 217064) [0x19c8eefe8]
1 ??? (JavaScriptCore + 214120) [0x19c8ee468]
1 ??? (JavaScriptCore + 7865504) [0x19d03a4a0]
1 ??? (JavaScriptCore + 8005532) [0x19d05c79c]
1 ??? (JavaScriptCore + 8002876) [0x19d05bd3c]
1 ??? (JavaScriptCore + 8014260) [0x19d05e9b4]
1 ??? (JavaScriptCore + 4654492) [0x19cd2a59c]
1 ??? (JavaScriptCore + 4569844) [0x19cd15af4]
1 ??? (JavaScriptCore + 4569976) [0x19cd15b78]
1 <User mode, Effective Thread QoS User Initiated, Requested Thread QoS User Initiated>
1 ??? (Foundation + 1317904) [0x18da34c10]
1 ??? (StatusIm + 793520) [0x104b91bb0]
1 ??? (CoreFoundation + 686324) [0x18d5b18f4]
1 ??? (CoreFoundation + 689164) [0x18d5b240c]
1 ??? (CoreFoundation + 708716) [0x18d5b706c]
1 ??? (CoreFoundation + 710928) [0x18d5b7910]
1 ??? (StatusIm + 888508) [0x104ba8ebc]
1 ??? (StatusIm + 836748) [0x104b9c48c]
1 ??? (StatusIm + 1551660) [0x104c4ad2c]
1 ??? (StatusIm + 1592252) [0x104c54bbc]
1 ??? (StatusIm + 722316) [0x104b8058c]
1 ??? (StatusIm + 1600272) [0x104c56b10]
1 ??? (StatusIm + 1600688) [0x104c56cb0]
1 ??? (StatusIm + 1566368) [0x104c4e6a0]
1 ??? (JavaScriptCore + 3083576) [0x19cbaad38]
1 ??? (JavaScriptCore + 10812788) [0x19d309d74]
1 ??? (JavaScriptCore + 8691856) [0x19d104090]
1 ??? (JavaScriptCore + 2518816) [0x19cb20f20]
1 ??? (JavaScriptCore + 11455316) [0x19d3a6b54]
1 ??? (JavaScriptCore + 8691776) [0x19d104040]
1 ??? (JavaScriptCore + 2518420) [0x19cb20d94]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2657608) [0x19cb42d48]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2662976) [0x19cb44240]
1 ??? (JavaScriptCore + 11455316) [0x19d3a6b54]
1 ??? (JavaScriptCore + 8691776) [0x19d104040]
1 ??? (JavaScriptCore + 2518420) [0x19cb20d94]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2651936) [0x19cb41720]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2662976) [0x19cb44240]
1 ??? (JavaScriptCore + 11455316) [0x19d3a6b54]
1 ??? (JavaScriptCore + 8691776) [0x19d104040]
1 ??? (JavaScriptCore + 2518420) [0x19cb20d94]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2652116) [0x19cb417d4]
1 ??? (JavaScriptCore + 2656380) [0x19cb4287c]
1 ??? (JavaScriptCore + 2663352) [0x19cb443b8]
1 ??? (JavaScriptCore + 12650564) [0x19d4ca844]
1 ??? (JavaScriptCore + 7318912) [0x19cfb4d80]
1 <User mode, Effective Thread QoS User Interactive, Requested Thread QoS User Interactive>
2 ??? (libsystem_kernel.dylib + 166096) [0x18d4248d0]
1 ??? (libsystem_kernel.dylib + 158656) [0x18d422bc0]
1 ??? (StatusIm + 3189988) [0x104ddace4]
1 ??? (libsystem_kernel.dylib + 158616) [0x18d422b98]
1 ??? (StatusIm + 3116160) [0x104dc8c80]
1 <User mode>
1 ??? (StatusIm + 3064744) [0x104dbc3a8]
1 <User mode>
1 ??? (StatusIm + 2839384) [0x104d85358]
1 <User mode>
Binary Images:
0x104ad0000 - ??? im.status.ethereum 1.5.0 (20200811035931) <F735D99F-882E-378A-BC2C-4148BE5F26B0> /private/var/containers/Bundle/Application/6F9D88CE-490F-4326-AB58-968BCB94FF57/StatusIm.app/StatusIm
0x18d340000 - 0x18d34afff libsystem_pthread.dylib <4CF76CD7-DC5B-37CF-83D7-46153D0D3962> /usr/lib/system/libsystem_pthread.dylib
0x18d3fc000 - 0x18d42bfff libsystem_kernel.dylib <42BDCD41-02A6-3529-A271-0E6402154A44> /usr/lib/system/libsystem_kernel.dylib
0x18d50a000 - 0x18d887fff CoreFoundation <AF42303F-57B6-3C11-8F18-8E80ABF7D886> /System/Library/Frameworks/CoreFoundation.framework/CoreFoundation
0x18d8f3000 - 0x18dbbdfff Foundation <19FAB59F-6527-3245-85BB-905FD4255CDE> /System/Library/Frameworks/Foundation.framework/Foundation
0x19c8ba000 - 0x19d7defff JavaScriptCore <03568D30-96EC-314B-9C89-7F0A73DA18C6> /System/Library/Frameworks/JavaScriptCore.framework/JavaScriptCore
```
|
non_build
|
status is crashing on login when passcode is turned off on device but save password is turned on in app bug report problem crash on login to status app when passcode is turned off on device but before this save password with biometric authentication was enabled reproducible on phones with touch and face ids note after enabling passcode back can login in the app expected behavior can login actual behavior crash reproduction open status create account and login turn on save password enable biometric authentication relogin on your device go to settings touch id and passcode select turn passcode off open status and try to login additional information status version release operating system ios logs date time end time os version iphone os build architecture report version incident identifier data source microstackshots shared cache bcac command statusim path private var containers bundle application statusim app statusim identifier im status ethereum version beta identifier pid event wakeups action taken none wakeups wakeups over the last seconds wakeups per second average exceeding limit of wakeups per second over seconds wakeups limit limit duration wakeups caused wakeups duration duration duration sampled steps hardware model active cpus heaviest stack for the target process libsystem pthread dylib javascriptcore javascriptcore javascriptcore javascriptcore libsystem kernel dylib powerstats for statusim bundle id im status ethereum adam id is first party no app version build version is beta no share with devs yes uuid path private var containers bundle application statusim app statusim architecture footprint mb mb mb start time end time num samples cpu time primary state samples frontmost app non suppressed kernel mode effective thread qos default requested thread qos default override thread qos unspecified user activity samples idle samples active power source samples on battery samples on ac libsystem pthread dylib javascriptcore javascriptcore javascriptcore javascriptcore libsystem kernel dylib javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore foundation statusim corefoundation corefoundation corefoundation corefoundation statusim statusim statusim statusim statusim statusim statusim statusim javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore javascriptcore libsystem kernel dylib libsystem kernel dylib statusim libsystem kernel dylib statusim statusim statusim binary images im status ethereum private var containers bundle application statusim app statusim libsystem pthread dylib usr lib system libsystem pthread dylib libsystem kernel dylib usr lib system libsystem kernel dylib corefoundation system library frameworks corefoundation framework corefoundation foundation system library frameworks foundation framework foundation javascriptcore system library frameworks javascriptcore framework javascriptcore
| 0
|
23,223
| 7,299,793,022
|
IssuesEvent
|
2018-02-26 21:18:36
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Azure builder not generating deployable Windows managed images
|
builder/azure
|
When using the packer script listed below, an Azure managed image is successfully generated, but when creating a VM, it gets stuck in the "creating" phase. The boot diagnostics show the lockscreen. If I delete the VM in the "creating" phase, attach the managed disk to another VM as a data disk and look at the sysprep log, here is the last entry: https://gist.github.com/NateB2/d11bba23e91d9ee3c56635177b1d4078
Running the same packer script except saving the image as a VHD instead of a managed image works successfully. I was able to successfully generate a VHD, save it as a managed image, and then successfully create and launch a VM.
Packer version: 1.2.0
Host Platform: Windows Server 2016 Datacenter
Debug log output (as piped through VSTS): https://gist.github.com/NateB2/a740250cecb9b1cff0ed32c14aa9c87f
Packer script that creates a usable VHD: https://gist.github.com/NateB2/660748343cbf77d03fb135aa7a656b9e
Packer script that creates an unusable managed image: https://gist.github.com/NateB2/f497f5c7a7d44cb618aa42f104300f43
For now, I can work around the issue by creating the VHD and creating an image from the VHD, but it would be nice to have the ability to create a managed disk directly.
|
1.0
|
Azure builder not generating deployable Windows managed images - When using the packer script listed below, an Azure managed image is successfully generated, but when creating a VM, it gets stuck in the "creating" phase. The boot diagnostics show the lockscreen. If I delete the VM in the "creating" phase, attach the managed disk to another VM as a data disk and look at the sysprep log, here is the last entry: https://gist.github.com/NateB2/d11bba23e91d9ee3c56635177b1d4078
Running the same packer script except saving the image as a VHD instead of a managed image works successfully. I was able to successfully generate a VHD, save it as a managed image, and then successfully create and launch a VM.
Packer version: 1.2.0
Host Platform: Windows Server 2016 Datacenter
Debug log output (as piped through VSTS): https://gist.github.com/NateB2/a740250cecb9b1cff0ed32c14aa9c87f
Packer script that creates a usable VHD: https://gist.github.com/NateB2/660748343cbf77d03fb135aa7a656b9e
Packer script that creates an unusable managed image: https://gist.github.com/NateB2/f497f5c7a7d44cb618aa42f104300f43
For now, I can work around the issue by creating the VHD and creating an image from the VHD, but it would be nice to have the ability to create a managed disk directly.
|
build
|
azure builder not generating deployable windows managed images when using the packer script listed below an azure managed image is successfully generated but when creating a vm it gets stuck in the creating phase the boot diagnostics show the lockscreen if i delete the vm in the creating phase attach the managed disk to another vm as a data disk and look at the sysprep log here is the last entry running the same packer script except saving the image as a vhd instead of a managed image works successfully i was able to successfully generate a vhd save it as a managed image and then successfully create and launch a vm packer version host platform windows server datacenter debug log output as piped through vsts packer script that creates a usable vhd packer script that creates an unusable managed image for now i can work around the issue by creating the vhd and creating an image from the vhd but it would be nice to have the ability to create a managed disk directly
| 1
|
34,855
| 4,561,910,130
|
IssuesEvent
|
2016-09-14 13:23:45
|
vector-im/vector-web
|
https://api.github.com/repos/vector-im/vector-web
|
closed
|
Create room 3 and 4 (invite people to room) screen: User invite flow to be improved on
|
design-signed-off rs2 ui/ux
|
Create room 3 and 4 (invite people to room) screen:
**Original requirements from Amandine on basecamp**
Like for room creation we believe and have been supported by the users, that the invite flow is not right.
Today we have a single invite field to search and invite by email or by ID. People are completely missing the fact they can invite by email (Trevor’s question yesterday proves it again...).
Our proposal was to display an “invite by email” button when starting typing in the room (basically replace the current first item of the suggestion list), which would open a modal popup where people can type emails or drop a list of them. We’re not entirely convinced that it will solve the email invite discovery problem (i.e. before you start typing into the box) but that could be a quick fix: any other suggestions?
**(design only):** needs more design work as per comments (see how Dropbox handle this)
|
1.0
|
Create room 3 and 4 (invite people to room) screen: User invite flow to be improved on - Create room 3 and 4 (invite people to room) screen:
**Original requirements from Amandine on basecamp**
Like for room creation we believe and have been supported by the users, that the invite flow is not right.
Today we have a single invite field to search and invite by email or by ID. People are completely missing the fact they can invite by email (Trevor’s question yesterday proves it again...).
Our proposal was to display an “invite by email” button when starting typing in the room (basically replace the current first item of the suggestion list), which would open a modal popup where people can type emails or drop a list of them. We’re not entirely convinced that it will solve the email invite discovery problem (i.e. before you start typing into the box) but that could be a quick fix: any other suggestions?
**(design only):** needs more design work as per comments (see how Dropbox handle this)
|
non_build
|
create room and invite people to room screen user invite flow to be improved on create room and invite people to room screen original requirements from amandine on basecamp like for room creation we believe and have been supported by the users that the invite flow is not right today we have a single invite field to search and invite by email or by id people are completely missing the fact they can invite by email trevor’s question yesterday proves it again our proposal was to display an “invite by email” button when starting typing in the room basically replace the current first item of the suggestion list which would open a modal popup where people can type emails or drop a list of them we’re not entirely convinced that it will solve the email invite discovery problem i e before you start typing into the box but that could be a quick fix any other suggestions design only needs more design work as per comments see how dropbox handle this
| 0
|
15,472
| 5,967,400,526
|
IssuesEvent
|
2017-05-30 15:52:34
|
curl/curl
|
https://api.github.com/repos/curl/curl
|
closed
|
OS400 fails to build 7.52.1
|
build
|
[ There are collections of known issues to be aware of:
https://curl.haxx.se/docs/knownbugs.html https://curl.haxx.se/docs/todo.html ]
### I did this
Build 7.52.1 release on OS/400 V7R1M0
### I expected the following
Clean build
### curl/libcurl version
7.52.1
[curl -V output perhaps?]
### operating system
OS/400 V7R1M0
The build is failing because the definition of CURLOPT_SOCKS_PROXY isn't defined (appears to have been renamed CURLOPT_PRE_PROXY) is still in the OS/400 specific build files (curl.inc.in, ccsidcurl.c and README.OS400).
Secondly, there are a number of assert() calls in http2.c, memdebug.c, mprintf.c and rand.c that should be behind a #ifdef HAVE_ASSERT_H condition, these result in an unresolved external when linking the service program.
|
1.0
|
OS400 fails to build 7.52.1 - [ There are collections of known issues to be aware of:
https://curl.haxx.se/docs/knownbugs.html https://curl.haxx.se/docs/todo.html ]
### I did this
Build 7.52.1 release on OS/400 V7R1M0
### I expected the following
Clean build
### curl/libcurl version
7.52.1
[curl -V output perhaps?]
### operating system
OS/400 V7R1M0
The build is failing because the definition of CURLOPT_SOCKS_PROXY isn't defined (appears to have been renamed CURLOPT_PRE_PROXY) is still in the OS/400 specific build files (curl.inc.in, ccsidcurl.c and README.OS400).
Secondly, there are a number of assert() calls in http2.c, memdebug.c, mprintf.c and rand.c that should be behind a #ifdef HAVE_ASSERT_H condition, these result in an unresolved external when linking the service program.
|
build
|
fails to build there are collections of known issues to be aware of i did this build release on os i expected the following clean build curl libcurl version operating system os the build is failing because the definition of curlopt socks proxy isn t defined appears to have been renamed curlopt pre proxy is still in the os specific build files curl inc in ccsidcurl c and readme secondly there are a number of assert calls in c memdebug c mprintf c and rand c that should be behind a ifdef have assert h condition these result in an unresolved external when linking the service program
| 1
|
855
| 2,648,779,841
|
IssuesEvent
|
2015-03-14 07:42:05
|
Jasig/cas
|
https://api.github.com/repos/Jasig/cas
|
closed
|
Inclusion of clearpass dependency causes crash
|
Bug Build ClearPass Major
|
Similiar to what @leleuj reports here:
http://jasig.275507.n4.nabble.com/CAS-4-1-issue-with-the-webflow-stored-on-client-side-td4664313.html
I realized that including clearpass as a dependency in the webapp, which will retrieve cas-client-core and then opensaml causes transitive dependencies for `bcprov` to be included in the lib directory. This causes a runtime crash as duplicates are found.
`bcprov-jdk15on-1.50.jar` should be the required jar. Others must be excluded.
|
1.0
|
Inclusion of clearpass dependency causes crash - Similiar to what @leleuj reports here:
http://jasig.275507.n4.nabble.com/CAS-4-1-issue-with-the-webflow-stored-on-client-side-td4664313.html
I realized that including clearpass as a dependency in the webapp, which will retrieve cas-client-core and then opensaml causes transitive dependencies for `bcprov` to be included in the lib directory. This causes a runtime crash as duplicates are found.
`bcprov-jdk15on-1.50.jar` should be the required jar. Others must be excluded.
|
build
|
inclusion of clearpass dependency causes crash similiar to what leleuj reports here i realized that including clearpass as a dependency in the webapp which will retrieve cas client core and then opensaml causes transitive dependencies for bcprov to be included in the lib directory this causes a runtime crash as duplicates are found bcprov jar should be the required jar others must be excluded
| 1
|
68,339
| 17,258,061,639
|
IssuesEvent
|
2021-07-22 00:42:26
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
opened
|
MacOS failure to compile dylib with metal delegate
|
type:build/install
|
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): MacOS Big Sur (11.4)
- TensorFlow installed from (source or binary): source
- TensorFlow version: 2.5.0
- Python version: Python 3.9.1
- Bazel version (if compiling from source): bazel 3.7.2-homebrew
- GCC/Compiler version (if compiling from source): clang version 11.0.0
**Describe the problem**
Building the default bazel target is fine and generates a dylib but when I add the metal delegate target to the file _tensorflow/lite/BUILD_:
```
tflite_cc_shared_object(
name = "tensorflowlite",
# Until we have more granular symbol export for the C++ API on Windows,
# export all symbols.
features = ["windows_export_all_symbols"],
linkopts = select({
"//tensorflow:macos": [
"-Wl,-exported_symbols_list,$(location //tensorflow/lite:tflite_exported_symbols.lds)",
],
"//tensorflow:windows": [],
"//conditions:default": [
"-Wl,-z,defs",
"-Wl,--version-script,$(location //tensorflow/lite:tflite_version_script.lds)",
],
}),
per_os_targets = True,
deps = [
":framework",
":tflite_exported_symbols.lds",
":tflite_version_script.lds",
"//tensorflow/lite/tools/evaluation:utils",
"//tensorflow/lite/delegates/gpu:metal_delegate", # adding this makes it fail
"//tensorflow/lite/kernels:builtin_ops_all_linked",
],
)
```
It fails and gives me
```
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=183
INFO: Reading rc options for 'build' from /Users/mng/Repositories/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /Users/mng/Repositories/tensorflow/.bazelrc:
'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2
INFO: Reading rc options for 'build' from /Users/mng/Repositories/tensorflow/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 --action_env PYTHON_LIB_PATH=/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages --python_path=/Library/Frameworks/Python.framework/Versions/3.8/bin/python3
INFO: Found applicable config definition build:short_logs in file /Users/mng/Repositories/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /Users/mng/Repositories/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:macos in file /Users/mng/Repositories/tensorflow/.bazelrc: --apple_platform_type=macos --copt=-DGRPC_BAZEL_BUILD --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14
ERROR: /private/var/tmp/_bazel_mng/a01251391e7b28e63e36a7eb9c920e09/external/cpuinfo/BUILD.bazel:96:11: Configurable attribute "srcs" doesn't match this configuration (would a default condition help?).
Conditions checked:
@cpuinfo//:linux_x86_64
@cpuinfo//:linux_arm
@cpuinfo//:linux_armhf
@cpuinfo//:linux_armv7a
@cpuinfo//:linux_armeabi
@cpuinfo//:linux_aarch64
@cpuinfo//:linux_mips64
@cpuinfo//:linux_riscv64
@cpuinfo//:linux_s390x
@cpuinfo//:macos_x86_64
@cpuinfo//:macos_arm64
@cpuinfo//:windows_x86_64
@cpuinfo//:android_armv7
@cpuinfo//:android_arm64
@cpuinfo//:android_x86
@cpuinfo//:android_x86_64
@cpuinfo//:ios_x86_64
@cpuinfo//:ios_x86
@cpuinfo//:ios_armv7
@cpuinfo//:ios_arm64
@cpuinfo//:ios_arm64e
@cpuinfo//:watchos_x86_64
@cpuinfo//:watchos_x86
@cpuinfo//:watchos_armv7k
@cpuinfo//:watchos_arm64_32
@cpuinfo//:tvos_x86_64
@cpuinfo//:tvos_arm64
ERROR: Analysis of target '//tensorflow/lite:tensorflowlite' failed; build aborted: /private/var/tmp/_bazel_mng/a01251391e7b28e63e36a7eb9c920e09/external/cpuinfo/BUILD.bazel:96:11: Configurable attribute "srcs" doesn't match this configuration (would a default condition help?).
Conditions checked:
@cpuinfo//:linux_x86_64
@cpuinfo//:linux_arm
@cpuinfo//:linux_armhf
@cpuinfo//:linux_armv7a
@cpuinfo//:linux_armeabi
@cpuinfo//:linux_aarch64
@cpuinfo//:linux_mips64
@cpuinfo//:linux_riscv64
@cpuinfo//:linux_s390x
@cpuinfo//:macos_x86_64
@cpuinfo//:macos_arm64
@cpuinfo//:windows_x86_64
@cpuinfo//:android_armv7
@cpuinfo//:android_arm64
@cpuinfo//:android_x86
@cpuinfo//:android_x86_64
@cpuinfo//:ios_x86_64
@cpuinfo//:ios_x86
@cpuinfo//:ios_armv7
@cpuinfo//:ios_arm64
@cpuinfo//:ios_arm64e
@cpuinfo//:watchos_x86_64
@cpuinfo//:watchos_x86
@cpuinfo//:watchos_armv7k
@cpuinfo//:watchos_arm64_32
@cpuinfo//:tvos_x86_64
@cpuinfo//:tvos_arm64
INFO: Elapsed time: 0.104s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 526 targets configured)
```
I assume when adding the metal delegate it changes the select cpu type (im not familiar with bazel enough to know). Unless there is a better way of adding the metal target.
I also tried to add a target myself such as
```
macos_dylib (
name = "tensorflowlite2",
minimum_os_version = "10.12",
deps = [
"//tensorflow/lite/kernels:builtin_ops_all_linked",
"//tensorflow/lite/tools/evaluation:utils",
"//tensorflow/lite/delegates/gpu:metal_delegate",
],
)
```
First I had to ch am
**Provide the exact sequence of commands / steps that you executed before running into the problem**
**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
|
1.0
|
MacOS failure to compile dylib with metal delegate - **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): MacOS Big Sur (11.4)
- TensorFlow installed from (source or binary): source
- TensorFlow version: 2.5.0
- Python version: Python 3.9.1
- Bazel version (if compiling from source): bazel 3.7.2-homebrew
- GCC/Compiler version (if compiling from source): clang version 11.0.0
**Describe the problem**
Building the default bazel target is fine and generates a dylib but when I add the metal delegate target to the file _tensorflow/lite/BUILD_:
```
tflite_cc_shared_object(
name = "tensorflowlite",
# Until we have more granular symbol export for the C++ API on Windows,
# export all symbols.
features = ["windows_export_all_symbols"],
linkopts = select({
"//tensorflow:macos": [
"-Wl,-exported_symbols_list,$(location //tensorflow/lite:tflite_exported_symbols.lds)",
],
"//tensorflow:windows": [],
"//conditions:default": [
"-Wl,-z,defs",
"-Wl,--version-script,$(location //tensorflow/lite:tflite_version_script.lds)",
],
}),
per_os_targets = True,
deps = [
":framework",
":tflite_exported_symbols.lds",
":tflite_version_script.lds",
"//tensorflow/lite/tools/evaluation:utils",
"//tensorflow/lite/delegates/gpu:metal_delegate", # adding this makes it fail
"//tensorflow/lite/kernels:builtin_ops_all_linked",
],
)
```
It fails and gives me
```
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=183
INFO: Reading rc options for 'build' from /Users/mng/Repositories/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /Users/mng/Repositories/tensorflow/.bazelrc:
'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2
INFO: Reading rc options for 'build' from /Users/mng/Repositories/tensorflow/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 --action_env PYTHON_LIB_PATH=/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages --python_path=/Library/Frameworks/Python.framework/Versions/3.8/bin/python3
INFO: Found applicable config definition build:short_logs in file /Users/mng/Repositories/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /Users/mng/Repositories/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:macos in file /Users/mng/Repositories/tensorflow/.bazelrc: --apple_platform_type=macos --copt=-DGRPC_BAZEL_BUILD --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14
ERROR: /private/var/tmp/_bazel_mng/a01251391e7b28e63e36a7eb9c920e09/external/cpuinfo/BUILD.bazel:96:11: Configurable attribute "srcs" doesn't match this configuration (would a default condition help?).
Conditions checked:
@cpuinfo//:linux_x86_64
@cpuinfo//:linux_arm
@cpuinfo//:linux_armhf
@cpuinfo//:linux_armv7a
@cpuinfo//:linux_armeabi
@cpuinfo//:linux_aarch64
@cpuinfo//:linux_mips64
@cpuinfo//:linux_riscv64
@cpuinfo//:linux_s390x
@cpuinfo//:macos_x86_64
@cpuinfo//:macos_arm64
@cpuinfo//:windows_x86_64
@cpuinfo//:android_armv7
@cpuinfo//:android_arm64
@cpuinfo//:android_x86
@cpuinfo//:android_x86_64
@cpuinfo//:ios_x86_64
@cpuinfo//:ios_x86
@cpuinfo//:ios_armv7
@cpuinfo//:ios_arm64
@cpuinfo//:ios_arm64e
@cpuinfo//:watchos_x86_64
@cpuinfo//:watchos_x86
@cpuinfo//:watchos_armv7k
@cpuinfo//:watchos_arm64_32
@cpuinfo//:tvos_x86_64
@cpuinfo//:tvos_arm64
ERROR: Analysis of target '//tensorflow/lite:tensorflowlite' failed; build aborted: /private/var/tmp/_bazel_mng/a01251391e7b28e63e36a7eb9c920e09/external/cpuinfo/BUILD.bazel:96:11: Configurable attribute "srcs" doesn't match this configuration (would a default condition help?).
Conditions checked:
@cpuinfo//:linux_x86_64
@cpuinfo//:linux_arm
@cpuinfo//:linux_armhf
@cpuinfo//:linux_armv7a
@cpuinfo//:linux_armeabi
@cpuinfo//:linux_aarch64
@cpuinfo//:linux_mips64
@cpuinfo//:linux_riscv64
@cpuinfo//:linux_s390x
@cpuinfo//:macos_x86_64
@cpuinfo//:macos_arm64
@cpuinfo//:windows_x86_64
@cpuinfo//:android_armv7
@cpuinfo//:android_arm64
@cpuinfo//:android_x86
@cpuinfo//:android_x86_64
@cpuinfo//:ios_x86_64
@cpuinfo//:ios_x86
@cpuinfo//:ios_armv7
@cpuinfo//:ios_arm64
@cpuinfo//:ios_arm64e
@cpuinfo//:watchos_x86_64
@cpuinfo//:watchos_x86
@cpuinfo//:watchos_armv7k
@cpuinfo//:watchos_arm64_32
@cpuinfo//:tvos_x86_64
@cpuinfo//:tvos_arm64
INFO: Elapsed time: 0.104s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 526 targets configured)
```
I assume when adding the metal delegate it changes the select cpu type (im not familiar with bazel enough to know). Unless there is a better way of adding the metal target.
I also tried to add a target myself such as
```
macos_dylib (
name = "tensorflowlite2",
minimum_os_version = "10.12",
deps = [
"//tensorflow/lite/kernels:builtin_ops_all_linked",
"//tensorflow/lite/tools/evaluation:utils",
"//tensorflow/lite/delegates/gpu:metal_delegate",
],
)
```
First I had to ch am
**Provide the exact sequence of commands / steps that you executed before running into the problem**
**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
|
build
|
macos failure to compile dylib with metal delegate system information os platform and distribution e g linux ubuntu macos big sur tensorflow installed from source or binary source tensorflow version python version python bazel version if compiling from source bazel homebrew gcc compiler version if compiling from source clang version describe the problem building the default bazel target is fine and generates a dylib but when i add the metal delegate target to the file tensorflow lite build tflite cc shared object name tensorflowlite until we have more granular symbol export for the c api on windows export all symbols features linkopts select tensorflow macos wl exported symbols list location tensorflow lite tflite exported symbols lds tensorflow windows conditions default wl z defs wl version script location tensorflow lite tflite version script lds per os targets true deps framework tflite exported symbols lds tflite version script lds tensorflow lite tools evaluation utils tensorflow lite delegates gpu metal delegate adding this makes it fail tensorflow lite kernels builtin ops all linked it fails and gives me info options provided by the client inherited common options isatty terminal columns info reading rc options for build from users mng repositories tensorflow bazelrc inherited common options experimental repo remote exec info reading rc options for build from users mng repositories tensorflow bazelrc build options define framework shared object true java toolchain tf toolchains toolchains java tf java toolchain host java toolchain tf toolchains toolchains java tf java toolchain define use fast cpp protos true define allow oversize protos true spawn strategy standalone c opt announce rc define grpc no ares true noincompatible remove legacy whole archive noincompatible prohibit enable platform specific config define with xla support true config short logs config info reading rc options for build from users mng repositories tensorflow tf configure bazelrc build options action env python bin path library frameworks python framework versions bin action env python lib path library frameworks python framework versions lib site packages python path library frameworks python framework versions bin info found applicable config definition build short logs in file users mng repositories tensorflow bazelrc output filter dont match anything info found applicable config definition build in file users mng repositories tensorflow bazelrc define tf api version action env behavior info found applicable config definition build macos in file users mng repositories tensorflow bazelrc apple platform type macos copt dgrpc bazel build copt w define prefix usr define libdir prefix lib define includedir prefix include define protobuf include path prefix include cxxopt std c host cxxopt std c error private var tmp bazel mng external cpuinfo build bazel configurable attribute srcs doesn t match this configuration would a default condition help conditions checked cpuinfo linux cpuinfo linux arm cpuinfo linux armhf cpuinfo linux cpuinfo linux armeabi cpuinfo linux cpuinfo linux cpuinfo linux cpuinfo linux cpuinfo macos cpuinfo macos cpuinfo windows cpuinfo android cpuinfo android cpuinfo android cpuinfo android cpuinfo ios cpuinfo ios cpuinfo ios cpuinfo ios cpuinfo ios cpuinfo watchos cpuinfo watchos cpuinfo watchos cpuinfo watchos cpuinfo tvos cpuinfo tvos error analysis of target tensorflow lite tensorflowlite failed build aborted private var tmp bazel mng external cpuinfo build bazel configurable attribute srcs doesn t match this configuration would a default condition help conditions checked cpuinfo linux cpuinfo linux arm cpuinfo linux armhf cpuinfo linux cpuinfo linux armeabi cpuinfo linux cpuinfo linux cpuinfo linux cpuinfo linux cpuinfo macos cpuinfo macos cpuinfo windows cpuinfo android cpuinfo android cpuinfo android cpuinfo android cpuinfo ios cpuinfo ios cpuinfo ios cpuinfo ios cpuinfo ios cpuinfo watchos cpuinfo watchos cpuinfo watchos cpuinfo watchos cpuinfo tvos cpuinfo tvos info elapsed time info processes failed build did not complete successfully packages loaded targets configured i assume when adding the metal delegate it changes the select cpu type im not familiar with bazel enough to know unless there is a better way of adding the metal target i also tried to add a target myself such as macos dylib name minimum os version deps tensorflow lite kernels builtin ops all linked tensorflow lite tools evaluation utils tensorflow lite delegates gpu metal delegate first i had to ch am provide the exact sequence of commands steps that you executed before running into the problem any other info logs include any logs or source code that would be helpful to diagnose the problem if including tracebacks please include the full traceback large logs and files should be attached
| 1
|
101,476
| 4,118,528,052
|
IssuesEvent
|
2016-06-08 11:52:35
|
Esri/solutions-webappbuilder-widgets
|
https://api.github.com/repos/Esri/solutions-webappbuilder-widgets
|
closed
|
Smart Editor - Test webmap with additional layer types
|
Mid Priority Smart Editor
|
### Widget
Smart Editor
### Version of widget
Alpha
### Bug or Enhancement
Bug
### Repo Steps or Enhancement details
We need to test to ensure other layer types do not break the widget
#### Layer Types
Feature Collections with one layer
FC with more than one layer
Map Service
Tile Maps
Vector Maps
Tables
#### Field Types for presets
GUID
GlobalID
Range Domains
Date
DateTime(set in pop up config of layer)
|
1.0
|
Smart Editor - Test webmap with additional layer types - ### Widget
Smart Editor
### Version of widget
Alpha
### Bug or Enhancement
Bug
### Repo Steps or Enhancement details
We need to test to ensure other layer types do not break the widget
#### Layer Types
Feature Collections with one layer
FC with more than one layer
Map Service
Tile Maps
Vector Maps
Tables
#### Field Types for presets
GUID
GlobalID
Range Domains
Date
DateTime(set in pop up config of layer)
|
non_build
|
smart editor test webmap with additional layer types widget smart editor version of widget alpha bug or enhancement bug repo steps or enhancement details we need to test to ensure other layer types do not break the widget layer types feature collections with one layer fc with more than one layer map service tile maps vector maps tables field types for presets guid globalid range domains date datetime set in pop up config of layer
| 0
|
38,927
| 10,267,656,498
|
IssuesEvent
|
2019-08-23 02:38:51
|
openndr/ndr-build-env
|
https://api.github.com/repos/openndr/ndr-build-env
|
closed
|
Isolate DPDK-related build dependents from generic build result paths.
|
build dpdk enhancement nbh
|
Currently, we use '/<target>/lib' & '/<target>/include' paths to store DPDK-related build dependents.
If a user wants to compile libraries, the above situation can cause confusion to the developer finding the result of compilations.
We must isolate DPDK-related build dependents from generic build result paths.
|
1.0
|
Isolate DPDK-related build dependents from generic build result paths. - Currently, we use '/<target>/lib' & '/<target>/include' paths to store DPDK-related build dependents.
If a user wants to compile libraries, the above situation can cause confusion to the developer finding the result of compilations.
We must isolate DPDK-related build dependents from generic build result paths.
|
build
|
isolate dpdk related build dependents from generic build result paths currently we use lib include paths to store dpdk related build dependents if a user wants to compile libraries the above situation can cause confusion to the developer finding the result of compilations we must isolate dpdk related build dependents from generic build result paths
| 1
|
72,073
| 31,152,939,809
|
IssuesEvent
|
2023-08-16 11:14:26
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
This deployment never works if I follow your instructions to deploy an ASP.NET CORE 6 website
|
app-service/svc triaged assigned-to-author product-question Pri1
|
[Enter feedback here]
---
#### Document Details
I created a sample ASP.NET Core 6 project, use dotnet publish command to build it, and package the publish folder into a zip file, then I run the command suggested by this page
`az webapp deploy --resource-group {resourceGroupName} --name {appServiceName} --src-path "zip file path" --debug`
Turns out it do uploads my zip file to wwwroot, and unpacked the zip package directly under the wwwroot, which means now i got this

This is never going to work
I'm also wondering by simply upload a zip file, how does the app service know how to launch my asp.net core apps?
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4ce674e6-fcb7-a867-2a74-f28f6a1b0204
* Version Independent ID: 461fc0b5-ac32-54b8-cdb6-76a2c8e6052f
* Content: [Deploy files to App Service - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/deploy-zip?tabs=cli)
* Content Source: [articles/app-service/deploy-zip.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/deploy-zip.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**
|
1.0
|
This deployment never works if I follow your instructions to deploy an ASP.NET CORE 6 website -
[Enter feedback here]
---
#### Document Details
I created a sample ASP.NET Core 6 project, use dotnet publish command to build it, and package the publish folder into a zip file, then I run the command suggested by this page
`az webapp deploy --resource-group {resourceGroupName} --name {appServiceName} --src-path "zip file path" --debug`
Turns out it do uploads my zip file to wwwroot, and unpacked the zip package directly under the wwwroot, which means now i got this

This is never going to work
I'm also wondering by simply upload a zip file, how does the app service know how to launch my asp.net core apps?
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4ce674e6-fcb7-a867-2a74-f28f6a1b0204
* Version Independent ID: 461fc0b5-ac32-54b8-cdb6-76a2c8e6052f
* Content: [Deploy files to App Service - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/deploy-zip?tabs=cli)
* Content Source: [articles/app-service/deploy-zip.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/deploy-zip.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**
|
non_build
|
this deployment never works if i follow your instructions to deploy an asp net core website document details i created a sample asp net core project use dotnet publish command to build it and package the publish folder into a zip file then i run the command suggested by this page az webapp deploy resource group resourcegroupname name appservicename src path zip file path debug turns out it do uploads my zip file to wwwroot and unpacked the zip package directly under the wwwroot which means now i got this this is never going to work i m also wondering by simply upload a zip file how does the app service know how to launch my asp net core apps ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin
| 0
|
26,356
| 7,817,903,934
|
IssuesEvent
|
2018-06-13 10:29:03
|
ShaikASK/Testing
|
https://api.github.com/repos/ShaikASK/Testing
|
closed
|
HR Admin /HR Users : Edit Hires : Application displays an error message when user try to edit "New Hire"
|
Defect HR Admin Module HR User Module New Hire P1 Release#2 Build#3
|
Steps :
1.Launch the URL
2.Sign in as HR admin /HR Users
3.Go to New Hires
4.Create a New Hire and Save it
5.Edit the above created New Hire and save the changes
Experienced Behaviour : Observed that error message is displayed as "ERROR: Could not update New Hire when user try to edit the New Hire
Expected Behaviour : Ensure that application should not display any error message when user try to edit "New Hire"
|
1.0
|
HR Admin /HR Users : Edit Hires : Application displays an error message when user try to edit "New Hire" - Steps :
1.Launch the URL
2.Sign in as HR admin /HR Users
3.Go to New Hires
4.Create a New Hire and Save it
5.Edit the above created New Hire and save the changes
Experienced Behaviour : Observed that error message is displayed as "ERROR: Could not update New Hire when user try to edit the New Hire
Expected Behaviour : Ensure that application should not display any error message when user try to edit "New Hire"
|
build
|
hr admin hr users edit hires application displays an error message when user try to edit new hire steps launch the url sign in as hr admin hr users go to new hires create a new hire and save it edit the above created new hire and save the changes experienced behaviour observed that error message is displayed as error could not update new hire when user try to edit the new hire expected behaviour ensure that application should not display any error message when user try to edit new hire
| 1
|
8,295
| 4,216,910,442
|
IssuesEvent
|
2016-06-30 11:01:29
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
opened
|
Builder has troubles with gettting src files from github.
|
area/build-release
|
```
Fetching upstream changes from https://github.com/kubernetes/kubernetes
> /usr/bin/git -c core.askpass=true fetch --tags --progress https://github.com/kubernetes/kubernetes +refs/pull/28279/merge:refs/remotes/origin/pr/28279/merge # timeout=20
ERROR: Error fetching remote repo 'remote1'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/kubernetes/kubernetes
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
at hudson.scm.SCM.checkout(SCM.java:485)
... skipping 6 lines ...
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "/usr/bin/git -c core.askpass=true fetch --tags --progress https://github.com/kubernetes/kubernetes +refs/pull/28279/merge:refs/remotes/origin/pr/28279/merge" returned status code 128:
stdout:
```
More available here:
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/28279/kubernetes-pull-verify-all/2669/
|
1.0
|
Builder has troubles with gettting src files from github. - ```
Fetching upstream changes from https://github.com/kubernetes/kubernetes
> /usr/bin/git -c core.askpass=true fetch --tags --progress https://github.com/kubernetes/kubernetes +refs/pull/28279/merge:refs/remotes/origin/pr/28279/merge # timeout=20
ERROR: Error fetching remote repo 'remote1'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/kubernetes/kubernetes
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
at hudson.scm.SCM.checkout(SCM.java:485)
... skipping 6 lines ...
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "/usr/bin/git -c core.askpass=true fetch --tags --progress https://github.com/kubernetes/kubernetes +refs/pull/28279/merge:refs/remotes/origin/pr/28279/merge" returned status code 128:
stdout:
```
More available here:
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/28279/kubernetes-pull-verify-all/2669/
|
build
|
builder has troubles with gettting src files from github fetching upstream changes from usr bin git c core askpass true fetch tags progress refs pull merge refs remotes origin pr merge timeout error error fetching remote repo hudson plugins git gitexception failed to fetch from at hudson plugins git gitscm fetchfrom gitscm java at hudson plugins git gitscm retrievechanges gitscm java at hudson plugins git gitscm checkout gitscm java at hudson scm scm checkout scm java skipping lines at hudson model resourcecontroller execute resourcecontroller java at hudson model executor run executor java caused by hudson plugins git gitexception command usr bin git c core askpass true fetch tags progress refs pull merge refs remotes origin pr merge returned status code stdout more available here
| 1
|
80,447
| 23,208,473,318
|
IssuesEvent
|
2022-08-02 08:03:27
|
llvm/llvm-project
|
https://api.github.com/repos/llvm/llvm-project
|
closed
|
LLVM 15.0.0-rc1 fails to build: wrong member name in IntelJITEventListener
|
build-problem
|
Trying out the build of llvm 15.0.0-rc1 (as in: the folder `llvm/` in this repo) in https://github.com/conda-forge/llvmdev-feedstock/pull/163 runs into:
```console
[...]
[1561/3294] Building C object lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/jitprofiling.c.o
[1562/3294] Building CXX object lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o
FAILED: lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o
$BUILD_PREFIX/bin/x86_64-conda-linux-gnu-c++ -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -I$SRC_DIR/build/lib/ExecutionEngine/IntelJITEvents -I$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents -I$SRC_DIR/build/include -I$SRC_DIR/llvm/include -I$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/.. -I$SRC_DIR/build/ittapi/include -fvisibility-inlines-hidden -std=c++17 -fmessage-length=0 -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem $PREFIX/include -fdebug-prefix-map=$SRC_DIR=/usr/local/src/conda/llvm-package-15.0.0.rc1 -fdebug-prefix-map=$PREFIX=/usr/local/src/conda-prefix -fPIC -fno-semantic-interposition -fvisibility-inlines-hidden -Werror=date-time -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wno-missing-field-initializers -pedantic -Wno-long-long -Wimplicit-fallthrough -Wno-maybe-uninitialized -Wno-class-memaccess -Wno-redundant-move -Wno-pessimizing-move -Wno-noexcept-type -Wdelete-non-virtual-dtor -Wsuggest-override -Wno-comment -Wmisleading-indentation -fdiagnostics-color -ffunction-sections -fdata-sections -O3 -DNDEBUG -std=c++14 -fno-exceptions -MD -MT lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o -MF lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o.d -o lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o -c $SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp
$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp: In member function 'int {anonymous}::IntelIttnotifyInfo::fillSectionInformation(const llvm::object::ObjectFile&, const llvm::RuntimeDyld::LoadedObjectInfo&)':
$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp:81:17: error: 'ELFSectionRef' is not a member of 'llvm::object'; did you mean 'SectionRef'?
81 | object::ELFSectionRef ElfSection(Section);
| ^~~~~~~~~~~~~
| SectionRef
$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp:86:35: error: 'ElfSection' was not declared in this scope; did you mean 'Section'?
86 | SectionInfo.file_offset = ElfSection.getOffset();
| ^~~~~~~~~~
| Section
```
and similarly on windows:
```console
[1585/3371] Building CXX object lib\Target\CMakeFiles\LLVMTarget.dir\TargetIntrinsicInfo.cpp.obj
[1586/3371] Building CXX object lib\ExecutionEngine\IntelJITEvents\CMakeFiles\LLVMIntelJITEvents.dir\IntelJITEventListener.cpp.obj
FAILED: lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.obj
C:\PROGRA~2\MICROS~1\2019\ENTERP~1\VC\Tools\MSVC\1429~1.301\bin\Hostx64\x64\cl.exe /nologo /TP -DUNICODE -D_CRT_NONSTDC_NO_DEPRECATE -D_CRT_NONSTDC_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE -D_CRT_SECURE_NO_WARNINGS -D_HAS_EXCEPTIONS=0 -D_SCL_SECURE_NO_DEPRECATE -D_SCL_SECURE_NO_WARNINGS -D_UNICODE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -I%SRC_DIR%\build\lib\ExecutionEngine\IntelJITEvents -I%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents -I%SRC_DIR%\build\include -I%SRC_DIR%\llvm\include -I%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\.. -I%SRC_DIR%\build\ittapi\include -external:I%PREFIX%\Library\include -external:W0 -MD /DWIN32 /D_WINDOWS /Zc:inline /Zc:__cplusplus /Oi /bigobj /permissive- /W4 -wd4141 -wd4146 -wd4244 -wd4267 -wd4291 -wd4351 -wd4456 -wd4457 -wd4458 -wd4459 -wd4503 -wd4624 -wd4722 -wd4100 -wd4127 -wd4512 -wd4505 -wd4610 -wd4510 -wd4702 -wd4245 -wd4706 -wd4310 -wd4701 -wd4703 -wd4389 -wd4611 -wd4805 -wd4204 -wd4577 -wd4091 -wd4592 -wd4319 -wd4709 -wd4324 -w14062 -we4238 /Gw /MD /O2 /Ob2 /DNDEBUG -std:c++14 /EHs-c- /GR /showIncludes /Folib\ExecutionEngine\IntelJITEvents\CMakeFiles\LLVMIntelJITEvents.dir\IntelJITEventListener.cpp.obj /Fdlib\ExecutionEngine\IntelJITEvents\CMakeFiles\LLVMIntelJITEvents.dir\LLVMIntelJITEvents.pdb /FS -c %SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(81): error C2039: 'ELFSectionRef': is not a member of 'llvm::object'
%SRC_DIR%\llvm\include\llvm/Object/SymbolSize.h(16): note: see declaration of 'llvm::object'
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(81): error C2065: 'ELFSectionRef': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(81): error C2146: syntax error: missing ';' before identifier 'ElfSection'
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(81): error C3861: 'ElfSection': identifier not found
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(86): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(87): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(90): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(91): error C3536: 'SectionNameOrError': cannot be used before it is initialized
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(92): error C2100: illegal indirection
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(95): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(99): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(101): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(103): error C2065: 'ElfSection': undeclared identifier
```
Build instructions are unchanged from LLVM 14. Happy to detail them (or change them, if necessary), but this looks like a code problem...?
|
1.0
|
LLVM 15.0.0-rc1 fails to build: wrong member name in IntelJITEventListener - Trying out the build of llvm 15.0.0-rc1 (as in: the folder `llvm/` in this repo) in https://github.com/conda-forge/llvmdev-feedstock/pull/163 runs into:
```console
[...]
[1561/3294] Building C object lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/jitprofiling.c.o
[1562/3294] Building CXX object lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o
FAILED: lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o
$BUILD_PREFIX/bin/x86_64-conda-linux-gnu-c++ -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -I$SRC_DIR/build/lib/ExecutionEngine/IntelJITEvents -I$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents -I$SRC_DIR/build/include -I$SRC_DIR/llvm/include -I$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/.. -I$SRC_DIR/build/ittapi/include -fvisibility-inlines-hidden -std=c++17 -fmessage-length=0 -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem $PREFIX/include -fdebug-prefix-map=$SRC_DIR=/usr/local/src/conda/llvm-package-15.0.0.rc1 -fdebug-prefix-map=$PREFIX=/usr/local/src/conda-prefix -fPIC -fno-semantic-interposition -fvisibility-inlines-hidden -Werror=date-time -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wno-missing-field-initializers -pedantic -Wno-long-long -Wimplicit-fallthrough -Wno-maybe-uninitialized -Wno-class-memaccess -Wno-redundant-move -Wno-pessimizing-move -Wno-noexcept-type -Wdelete-non-virtual-dtor -Wsuggest-override -Wno-comment -Wmisleading-indentation -fdiagnostics-color -ffunction-sections -fdata-sections -O3 -DNDEBUG -std=c++14 -fno-exceptions -MD -MT lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o -MF lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o.d -o lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.o -c $SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp
$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp: In member function 'int {anonymous}::IntelIttnotifyInfo::fillSectionInformation(const llvm::object::ObjectFile&, const llvm::RuntimeDyld::LoadedObjectInfo&)':
$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp:81:17: error: 'ELFSectionRef' is not a member of 'llvm::object'; did you mean 'SectionRef'?
81 | object::ELFSectionRef ElfSection(Section);
| ^~~~~~~~~~~~~
| SectionRef
$SRC_DIR/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp:86:35: error: 'ElfSection' was not declared in this scope; did you mean 'Section'?
86 | SectionInfo.file_offset = ElfSection.getOffset();
| ^~~~~~~~~~
| Section
```
and similarly on windows:
```console
[1585/3371] Building CXX object lib\Target\CMakeFiles\LLVMTarget.dir\TargetIntrinsicInfo.cpp.obj
[1586/3371] Building CXX object lib\ExecutionEngine\IntelJITEvents\CMakeFiles\LLVMIntelJITEvents.dir\IntelJITEventListener.cpp.obj
FAILED: lib/ExecutionEngine/IntelJITEvents/CMakeFiles/LLVMIntelJITEvents.dir/IntelJITEventListener.cpp.obj
C:\PROGRA~2\MICROS~1\2019\ENTERP~1\VC\Tools\MSVC\1429~1.301\bin\Hostx64\x64\cl.exe /nologo /TP -DUNICODE -D_CRT_NONSTDC_NO_DEPRECATE -D_CRT_NONSTDC_NO_WARNINGS -D_CRT_SECURE_NO_DEPRECATE -D_CRT_SECURE_NO_WARNINGS -D_HAS_EXCEPTIONS=0 -D_SCL_SECURE_NO_DEPRECATE -D_SCL_SECURE_NO_WARNINGS -D_UNICODE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -I%SRC_DIR%\build\lib\ExecutionEngine\IntelJITEvents -I%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents -I%SRC_DIR%\build\include -I%SRC_DIR%\llvm\include -I%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\.. -I%SRC_DIR%\build\ittapi\include -external:I%PREFIX%\Library\include -external:W0 -MD /DWIN32 /D_WINDOWS /Zc:inline /Zc:__cplusplus /Oi /bigobj /permissive- /W4 -wd4141 -wd4146 -wd4244 -wd4267 -wd4291 -wd4351 -wd4456 -wd4457 -wd4458 -wd4459 -wd4503 -wd4624 -wd4722 -wd4100 -wd4127 -wd4512 -wd4505 -wd4610 -wd4510 -wd4702 -wd4245 -wd4706 -wd4310 -wd4701 -wd4703 -wd4389 -wd4611 -wd4805 -wd4204 -wd4577 -wd4091 -wd4592 -wd4319 -wd4709 -wd4324 -w14062 -we4238 /Gw /MD /O2 /Ob2 /DNDEBUG -std:c++14 /EHs-c- /GR /showIncludes /Folib\ExecutionEngine\IntelJITEvents\CMakeFiles\LLVMIntelJITEvents.dir\IntelJITEventListener.cpp.obj /Fdlib\ExecutionEngine\IntelJITEvents\CMakeFiles\LLVMIntelJITEvents.dir\LLVMIntelJITEvents.pdb /FS -c %SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(81): error C2039: 'ELFSectionRef': is not a member of 'llvm::object'
%SRC_DIR%\llvm\include\llvm/Object/SymbolSize.h(16): note: see declaration of 'llvm::object'
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(81): error C2065: 'ELFSectionRef': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(81): error C2146: syntax error: missing ';' before identifier 'ElfSection'
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(81): error C3861: 'ElfSection': identifier not found
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(86): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(87): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(90): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(91): error C3536: 'SectionNameOrError': cannot be used before it is initialized
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(92): error C2100: illegal indirection
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(95): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(99): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(101): error C2065: 'ElfSection': undeclared identifier
%SRC_DIR%\llvm\lib\ExecutionEngine\IntelJITEvents\IntelJITEventListener.cpp(103): error C2065: 'ElfSection': undeclared identifier
```
Build instructions are unchanged from LLVM 14. Happy to detail them (or change them, if necessary), but this looks like a code problem...?
|
build
|
llvm fails to build wrong member name in inteljiteventlistener trying out the build of llvm as in the folder llvm in this repo in runs into console building c object lib executionengine inteljitevents cmakefiles llvminteljitevents dir jitprofiling c o building cxx object lib executionengine inteljitevents cmakefiles llvminteljitevents dir inteljiteventlistener cpp o failed lib executionengine inteljitevents cmakefiles llvminteljitevents dir inteljiteventlistener cpp o build prefix bin conda linux gnu c d gnu source d stdc constant macros d stdc format macros d stdc limit macros i src dir build lib executionengine inteljitevents i src dir llvm lib executionengine inteljitevents i src dir build include i src dir llvm include i src dir llvm lib executionengine inteljitevents i src dir build ittapi include fvisibility inlines hidden std c fmessage length march nocona mtune haswell ftree vectorize fpic fstack protector strong fno plt ffunction sections pipe isystem prefix include fdebug prefix map src dir usr local src conda llvm package fdebug prefix map prefix usr local src conda prefix fpic fno semantic interposition fvisibility inlines hidden werror date time wall wextra wno unused parameter wwrite strings wcast qual wno missing field initializers pedantic wno long long wimplicit fallthrough wno maybe uninitialized wno class memaccess wno redundant move wno pessimizing move wno noexcept type wdelete non virtual dtor wsuggest override wno comment wmisleading indentation fdiagnostics color ffunction sections fdata sections dndebug std c fno exceptions md mt lib executionengine inteljitevents cmakefiles llvminteljitevents dir inteljiteventlistener cpp o mf lib executionengine inteljitevents cmakefiles llvminteljitevents dir inteljiteventlistener cpp o d o lib executionengine inteljitevents cmakefiles llvminteljitevents dir inteljiteventlistener cpp o c src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp in member function int anonymous intelittnotifyinfo fillsectioninformation const llvm object objectfile const llvm runtimedyld loadedobjectinfo src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsectionref is not a member of llvm object did you mean sectionref object elfsectionref elfsection section sectionref src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection was not declared in this scope did you mean section sectioninfo file offset elfsection getoffset section and similarly on windows console building cxx object lib target cmakefiles llvmtarget dir targetintrinsicinfo cpp obj building cxx object lib executionengine inteljitevents cmakefiles llvminteljitevents dir inteljiteventlistener cpp obj failed lib executionengine inteljitevents cmakefiles llvminteljitevents dir inteljiteventlistener cpp obj c progra micros enterp vc tools msvc bin cl exe nologo tp dunicode d crt nonstdc no deprecate d crt nonstdc no warnings d crt secure no deprecate d crt secure no warnings d has exceptions d scl secure no deprecate d scl secure no warnings d unicode d stdc constant macros d stdc format macros d stdc limit macros i src dir build lib executionengine inteljitevents i src dir llvm lib executionengine inteljitevents i src dir build include i src dir llvm include i src dir llvm lib executionengine inteljitevents i src dir build ittapi include external i prefix library include external md d windows zc inline zc cplusplus oi bigobj permissive gw md dndebug std c ehs c gr showincludes folib executionengine inteljitevents cmakefiles llvminteljitevents dir inteljiteventlistener cpp obj fdlib executionengine inteljitevents cmakefiles llvminteljitevents dir llvminteljitevents pdb fs c src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsectionref is not a member of llvm object src dir llvm include llvm object symbolsize h note see declaration of llvm object src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsectionref undeclared identifier src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error syntax error missing before identifier elfsection src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection identifier not found src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection undeclared identifier src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection undeclared identifier src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection undeclared identifier src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error sectionnameorerror cannot be used before it is initialized src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error illegal indirection src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection undeclared identifier src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection undeclared identifier src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection undeclared identifier src dir llvm lib executionengine inteljitevents inteljiteventlistener cpp error elfsection undeclared identifier build instructions are unchanged from llvm happy to detail them or change them if necessary but this looks like a code problem
| 1
|
10,854
| 4,834,152,562
|
IssuesEvent
|
2016-11-08 13:30:49
|
liteguard/liteguard
|
https://api.github.com/repos/liteguard/liteguard
|
closed
|
1.0.0 release
|
build docs in-progress
|
**Ready** when all other issues forming part of the release are **Done**.
- [x] run code analysis in VS in _Release_ mode and address violations (send a regular PR which must be merged before continuing)
- [x] check build, create release in GitHub UI including releaseNotes, mentioning non-owner contributors, if any
- [x] push nuget package
- [x] tweet, mentioning contributors and post link as comment here for easy retweeting ;-)
- [x] post tweet in https://gitter.im/liteguard/liteguard
- [x] post links to the Tweet, NuGet and GitHub release in each issue in this milestone, with thanks to contributors
- [x] increment minor version
- [x] push to origin branch, create PR to upstream master
- [x] check build, merge PR
- [x] create a new milestone for the next release
- [x] create new issue based on this one for next release, adding it to the new milestone
- [x] close all issues on this milestone
- [x] close this milestone
|
1.0
|
1.0.0 release - **Ready** when all other issues forming part of the release are **Done**.
- [x] run code analysis in VS in _Release_ mode and address violations (send a regular PR which must be merged before continuing)
- [x] check build, create release in GitHub UI including releaseNotes, mentioning non-owner contributors, if any
- [x] push nuget package
- [x] tweet, mentioning contributors and post link as comment here for easy retweeting ;-)
- [x] post tweet in https://gitter.im/liteguard/liteguard
- [x] post links to the Tweet, NuGet and GitHub release in each issue in this milestone, with thanks to contributors
- [x] increment minor version
- [x] push to origin branch, create PR to upstream master
- [x] check build, merge PR
- [x] create a new milestone for the next release
- [x] create new issue based on this one for next release, adding it to the new milestone
- [x] close all issues on this milestone
- [x] close this milestone
|
build
|
release ready when all other issues forming part of the release are done run code analysis in vs in release mode and address violations send a regular pr which must be merged before continuing check build create release in github ui including releasenotes mentioning non owner contributors if any push nuget package tweet mentioning contributors and post link as comment here for easy retweeting post tweet in post links to the tweet nuget and github release in each issue in this milestone with thanks to contributors increment minor version push to origin branch create pr to upstream master check build merge pr create a new milestone for the next release create new issue based on this one for next release adding it to the new milestone close all issues on this milestone close this milestone
| 1
|
76,021
| 21,104,304,961
|
IssuesEvent
|
2022-04-04 17:11:00
|
hvdwolf/jExifToolGUI
|
https://api.github.com/repos/hvdwolf/jExifToolGUI
|
closed
|
Translatable text (?)
|
(Fixed) in beta; no release build yet
|
hi,
I can't find in Weblate this string to translate

MacOS
JTG v1.10.0
|
1.0
|
Translatable text (?) - hi,
I can't find in Weblate this string to translate

MacOS
JTG v1.10.0
|
build
|
translatable text hi i can t find in weblate this string to translate macos jtg
| 1
|
35,246
| 14,655,666,915
|
IssuesEvent
|
2020-12-28 11:33:33
|
microsoft/vscode-cpptools
|
https://api.github.com/repos/microsoft/vscode-cpptools
|
closed
|
IntelliSense within remote ssh broken
|
Language Service more info needed remote
|
**Type:** IntelliSense
**Describe the bug**
- OS and Version: Mac Mojave 10.14.6, remote with Linux 4.12.14-lp151.28.36-default x86_64
- VS Code Version: 1.42.1
- C/C++ Extension Version: 0.26.3
- Other extensions you installed (and if the issue persists after disabling them): C/C++ GNU Global (0.3.2), Visual Studio IntelliCode (1.2.5), Git History (0.6.0)
When using the [remote ssh extension](https://code.visualstudio.com/blogs/2019/07/25/remote-ssh), the IntelliSense works for the first couple of seconds but then has no suggestions for the rest of the session.
**To Reproduce**
<!-- Steps to reproduce the behavior: -->
<!-- *The most actionable issue reports include a code sample including configuration files such as c_cpp_properties.json* -->
1. Get the remote ssh extension
2. Remote into a Linus server
3. Start writing some C code.
4. See the lack of suggestions and autocomplete after a couple of minutes of usage. (error)
**Expected behavior**
I expect there to be IntelliSense working in full functionality in the remote ssh window.
|
1.0
|
IntelliSense within remote ssh broken - **Type:** IntelliSense
**Describe the bug**
- OS and Version: Mac Mojave 10.14.6, remote with Linux 4.12.14-lp151.28.36-default x86_64
- VS Code Version: 1.42.1
- C/C++ Extension Version: 0.26.3
- Other extensions you installed (and if the issue persists after disabling them): C/C++ GNU Global (0.3.2), Visual Studio IntelliCode (1.2.5), Git History (0.6.0)
When using the [remote ssh extension](https://code.visualstudio.com/blogs/2019/07/25/remote-ssh), the IntelliSense works for the first couple of seconds but then has no suggestions for the rest of the session.
**To Reproduce**
<!-- Steps to reproduce the behavior: -->
<!-- *The most actionable issue reports include a code sample including configuration files such as c_cpp_properties.json* -->
1. Get the remote ssh extension
2. Remote into a Linus server
3. Start writing some C code.
4. See the lack of suggestions and autocomplete after a couple of minutes of usage. (error)
**Expected behavior**
I expect there to be IntelliSense working in full functionality in the remote ssh window.
|
non_build
|
intellisense within remote ssh broken type intellisense describe the bug os and version mac mojave remote with linux default vs code version c c extension version other extensions you installed and if the issue persists after disabling them c c gnu global visual studio intellicode git history when using the the intellisense works for the first couple of seconds but then has no suggestions for the rest of the session to reproduce get the remote ssh extension remote into a linus server start writing some c code see the lack of suggestions and autocomplete after a couple of minutes of usage error expected behavior i expect there to be intellisense working in full functionality in the remote ssh window
| 0
|
78,159
| 22,153,611,298
|
IssuesEvent
|
2022-06-03 19:43:39
|
golang/go
|
https://api.github.com/repos/golang/go
|
closed
|
x/build, crypto/x509: macOS flakes due to "certificate is not standards compliant"
|
OS-Darwin Builders NeedsInvestigation release-blocker
|
Splitting the builder flakiness off from #51991.
https://github.com/golang/go/issues/51991#issuecomment-1104063192
@bcmills
> This is showing up on the darwin-amd64-10_15 builder as well, though curiously not on any of the other darwin builders.
>
> Marking as release-blocker for Go 1.19 because darwin/amd64 is a [first class port](https://go.dev/wiki/PortingPolicy#first-class-ports).
>
> greplogs --dashboard -md -l -e 'certificate is not standards compliant'
>
> [2022-04-19T23:20:21-8b900b4-104742f/darwin-amd64-10_15](https://build.golang.org/log/12de4fdc6d3c2439d4fac1f2d3dc1f657b44b99d)
[2022-04-14T22:52:29-7bdebbc-cc43e19/darwin-amd64-10_15](https://build.golang.org/log/24959aa9186fa147c01b169eb03a7996093f45eb)
[2022-01-20T14:59:17-3ed4219-9284279/darwin-amd64-10_15](https://build.golang.org/log/6ace0aa5824150508846ca3231df7dd8410d6481)
[2022-01-19T15:34:05-a71de3f-ca33b34/darwin-amd64-10_15](https://build.golang.org/log/fd6df8216f7326fc6bc7a384337c6a86f3516ee4)
[2022-01-18T23:19:04-03fcf44-626f13d/darwin-amd64-10_15](https://build.golang.org/log/d22b48c2f47d6ebd4ae6748ff5a0567454a7547f)
[2022-01-18T21:43:02-8066ee9-cf5d73e/darwin-amd64-10_15](https://build.golang.org/log/833d567076d9bfa73d0aaaf812a1c062a832a9bf)
[2022-01-13T21:34:46-4e31bde-6891d07/darwin-amd64-10_15](https://build.golang.org/log/8f067dab9e5f8dcf2cdb61bc3f9c8eed5c10c90c)
[2022-01-13T20:43:56-03fcf44-6891d07/darwin-amd64-10_15](https://build.golang.org/log/c6c0ab1d0fabca8be76aa44e4b0f5db610b10081)
[2022-01-11T16:48:27-62f0361-1cc3c73/darwin-amd64-10_15](https://build.golang.org/log/a4935b32015f2077c5c22ee8624d3bb1d8879a31)
[2022-01-06T17:36:12-b511507-f009910/darwin-amd64-10_15](https://build.golang.org/log/0ff15a8d1529c462da38b699611ada06611f93b4)
[2021-12-15T23:51:57-598f1b0-6e7c691/darwin-amd64-10_15](https://build.golang.org/log/7eaf4b0551fe07b7d2f0a9d8a6850dbd201375a0)
[2021-12-15T00:33:55-18bc0f9-9d0ca26/darwin-amd64-10_15](https://build.golang.org/log/42bb74ec850da7d844dee23ab358e9712a66e4e9)
[2021-12-13T18:48:44-d71ffde-2580d0e/darwin-amd64-10_15](https://build.golang.org/log/54125089e4730fa5b46bd57098dcd1d6a34802f2)
@rolandshoemaker
> As far as I can tell this seems, possibly, (this is unbearably painful to diagnose) to be an issue with 10.15.1, which is what the the darwin-amd64-10_15 builder is running. I suspect that updating the builder to use 10.15.6 would fix this, but I have absolutely no clue how viable that is.
|
1.0
|
x/build, crypto/x509: macOS flakes due to "certificate is not standards compliant" - Splitting the builder flakiness off from #51991.
https://github.com/golang/go/issues/51991#issuecomment-1104063192
@bcmills
> This is showing up on the darwin-amd64-10_15 builder as well, though curiously not on any of the other darwin builders.
>
> Marking as release-blocker for Go 1.19 because darwin/amd64 is a [first class port](https://go.dev/wiki/PortingPolicy#first-class-ports).
>
> greplogs --dashboard -md -l -e 'certificate is not standards compliant'
>
> [2022-04-19T23:20:21-8b900b4-104742f/darwin-amd64-10_15](https://build.golang.org/log/12de4fdc6d3c2439d4fac1f2d3dc1f657b44b99d)
[2022-04-14T22:52:29-7bdebbc-cc43e19/darwin-amd64-10_15](https://build.golang.org/log/24959aa9186fa147c01b169eb03a7996093f45eb)
[2022-01-20T14:59:17-3ed4219-9284279/darwin-amd64-10_15](https://build.golang.org/log/6ace0aa5824150508846ca3231df7dd8410d6481)
[2022-01-19T15:34:05-a71de3f-ca33b34/darwin-amd64-10_15](https://build.golang.org/log/fd6df8216f7326fc6bc7a384337c6a86f3516ee4)
[2022-01-18T23:19:04-03fcf44-626f13d/darwin-amd64-10_15](https://build.golang.org/log/d22b48c2f47d6ebd4ae6748ff5a0567454a7547f)
[2022-01-18T21:43:02-8066ee9-cf5d73e/darwin-amd64-10_15](https://build.golang.org/log/833d567076d9bfa73d0aaaf812a1c062a832a9bf)
[2022-01-13T21:34:46-4e31bde-6891d07/darwin-amd64-10_15](https://build.golang.org/log/8f067dab9e5f8dcf2cdb61bc3f9c8eed5c10c90c)
[2022-01-13T20:43:56-03fcf44-6891d07/darwin-amd64-10_15](https://build.golang.org/log/c6c0ab1d0fabca8be76aa44e4b0f5db610b10081)
[2022-01-11T16:48:27-62f0361-1cc3c73/darwin-amd64-10_15](https://build.golang.org/log/a4935b32015f2077c5c22ee8624d3bb1d8879a31)
[2022-01-06T17:36:12-b511507-f009910/darwin-amd64-10_15](https://build.golang.org/log/0ff15a8d1529c462da38b699611ada06611f93b4)
[2021-12-15T23:51:57-598f1b0-6e7c691/darwin-amd64-10_15](https://build.golang.org/log/7eaf4b0551fe07b7d2f0a9d8a6850dbd201375a0)
[2021-12-15T00:33:55-18bc0f9-9d0ca26/darwin-amd64-10_15](https://build.golang.org/log/42bb74ec850da7d844dee23ab358e9712a66e4e9)
[2021-12-13T18:48:44-d71ffde-2580d0e/darwin-amd64-10_15](https://build.golang.org/log/54125089e4730fa5b46bd57098dcd1d6a34802f2)
@rolandshoemaker
> As far as I can tell this seems, possibly, (this is unbearably painful to diagnose) to be an issue with 10.15.1, which is what the the darwin-amd64-10_15 builder is running. I suspect that updating the builder to use 10.15.6 would fix this, but I have absolutely no clue how viable that is.
|
build
|
x build crypto macos flakes due to certificate is not standards compliant splitting the builder flakiness off from bcmills this is showing up on the darwin builder as well though curiously not on any of the other darwin builders marking as release blocker for go because darwin is a greplogs dashboard md l e certificate is not standards compliant rolandshoemaker as far as i can tell this seems possibly this is unbearably painful to diagnose to be an issue with which is what the the darwin builder is running i suspect that updating the builder to use would fix this but i have absolutely no clue how viable that is
| 1
|
241,810
| 7,834,716,420
|
IssuesEvent
|
2018-06-16 17:35:23
|
knowmetools/km-api
|
https://api.github.com/repos/knowmetools/km-api
|
closed
|
Account Image
|
Priority: Low Status: In Progress Type: Bug
|
### Bug Report
The user list displays the hero image for the logged in user. This should be displaying the account image.
|
1.0
|
Account Image - ### Bug Report
The user list displays the hero image for the logged in user. This should be displaying the account image.
|
non_build
|
account image bug report the user list displays the hero image for the logged in user this should be displaying the account image
| 0
|
97,027
| 12,197,750,652
|
IssuesEvent
|
2020-04-29 21:21:51
|
phetsims/natural-selection
|
https://api.github.com/repos/phetsims/natural-selection
|
closed
|
should "Add a Mate", "Play", and "Start Over" buttons unpause the sim?
|
design:general status:ready-for-review
|
If the user had paused the sim, should the "Add a Mate", "Play", and "Start Over" buttons automatically unpause the sim? Or should we never automatically unpause the sim, and leave it to the user to figure out why pressing "Play" doesn't do anything?
I think the former (unpause the sim) is the least confusing, since we have 2 independent time-based things that can be "playing" - the sim and the generation clock.
|
1.0
|
should "Add a Mate", "Play", and "Start Over" buttons unpause the sim? - If the user had paused the sim, should the "Add a Mate", "Play", and "Start Over" buttons automatically unpause the sim? Or should we never automatically unpause the sim, and leave it to the user to figure out why pressing "Play" doesn't do anything?
I think the former (unpause the sim) is the least confusing, since we have 2 independent time-based things that can be "playing" - the sim and the generation clock.
|
non_build
|
should add a mate play and start over buttons unpause the sim if the user had paused the sim should the add a mate play and start over buttons automatically unpause the sim or should we never automatically unpause the sim and leave it to the user to figure out why pressing play doesn t do anything i think the former unpause the sim is the least confusing since we have independent time based things that can be playing the sim and the generation clock
| 0
|
223,978
| 24,760,210,446
|
IssuesEvent
|
2022-10-21 22:39:52
|
BrianMcDonaldWS/deck.gl
|
https://api.github.com/repos/BrianMcDonaldWS/deck.gl
|
opened
|
CVE-2022-37598 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2022-37598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>uglify-js-3.4.9.tgz</b>, <b>uglify-js-3.4.10.tgz</b>, <b>uglify-js-3.8.0.tgz</b></p></summary>
<p>
<details><summary><b>uglify-js-3.4.9.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.9.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.9.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- ocular-dev-tools-0.1.8.tgz (Root Library)
- nyc-13.3.0.tgz
- istanbul-reports-2.1.1.tgz
- handlebars-4.1.0.tgz
- :x: **uglify-js-3.4.9.tgz** (Vulnerable Library)
</details>
<details><summary><b>uglify-js-3.4.10.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-minifier/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- ocular-dev-tools-0.1.8.tgz (Root Library)
- html-webpack-plugin-3.2.0.tgz
- html-minifier-3.5.21.tgz
- :x: **uglify-js-3.4.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>uglify-js-3.8.0.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.8.0.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.8.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- ocular-dev-tools-0.1.8.tgz (Root Library)
- handlebars-4.7.3.tgz
- :x: **uglify-js-3.8.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/BrianMcDonaldWS/deck.gl/commit/67e433f207a0fc9c0fb2b8f7a2906f254c8c4b87">67e433f207a0fc9c0fb2b8f7a2906f254c8c4b87</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function DEFNODE in ast.js in mishoo UglifyJS 3.13.2 via the name variable in ast.js.
<p>Publish Date: 2022-10-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37598>CVE-2022-37598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-20</p>
<p>Fix Resolution: uglify-js - 3.13.10</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-37598 (High) detected in multiple libraries - ## CVE-2022-37598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>uglify-js-3.4.9.tgz</b>, <b>uglify-js-3.4.10.tgz</b>, <b>uglify-js-3.8.0.tgz</b></p></summary>
<p>
<details><summary><b>uglify-js-3.4.9.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.9.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.9.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- ocular-dev-tools-0.1.8.tgz (Root Library)
- nyc-13.3.0.tgz
- istanbul-reports-2.1.1.tgz
- handlebars-4.1.0.tgz
- :x: **uglify-js-3.4.9.tgz** (Vulnerable Library)
</details>
<details><summary><b>uglify-js-3.4.10.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-minifier/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- ocular-dev-tools-0.1.8.tgz (Root Library)
- html-webpack-plugin-3.2.0.tgz
- html-minifier-3.5.21.tgz
- :x: **uglify-js-3.4.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>uglify-js-3.8.0.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.8.0.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.8.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- ocular-dev-tools-0.1.8.tgz (Root Library)
- handlebars-4.7.3.tgz
- :x: **uglify-js-3.8.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/BrianMcDonaldWS/deck.gl/commit/67e433f207a0fc9c0fb2b8f7a2906f254c8c4b87">67e433f207a0fc9c0fb2b8f7a2906f254c8c4b87</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function DEFNODE in ast.js in mishoo UglifyJS 3.13.2 via the name variable in ast.js.
<p>Publish Date: 2022-10-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37598>CVE-2022-37598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-20</p>
<p>Fix Resolution: uglify-js - 3.13.10</p>
</p>
</details>
<p></p>
|
non_build
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries uglify js tgz uglify js tgz uglify js tgz uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file package json path to vulnerable library node modules nyc node modules uglify js package json dependency hierarchy ocular dev tools tgz root library nyc tgz istanbul reports tgz handlebars tgz x uglify js tgz vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file package json path to vulnerable library node modules html minifier node modules uglify js package json dependency hierarchy ocular dev tools tgz root library html webpack plugin tgz html minifier tgz x uglify js tgz vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file package json path to vulnerable library node modules uglify js package json dependency hierarchy ocular dev tools tgz root library handlebars tgz x uglify js tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability in function defnode in ast js in mishoo uglifyjs via the name variable in ast js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution uglify js
| 0
|
63,518
| 15,613,757,855
|
IssuesEvent
|
2021-03-19 16:52:02
|
aws/aws-cdk
|
https://api.github.com/repos/aws/aws-cdk
|
closed
|
[cdk-pipelines]: Typescript pipeline fails with module missing errors on code build
|
@aws-cdk/aws-codebuild @aws-cdk/pipelines bug needs-triage
|
It exactly the same issue as mentioned here on stackoverflow
https://stackoverflow.com/questions/66590492/aws-codebuild-tsc-error-ts2307-cannot-find-module
If we follow https://cdkworkshop.com/20-typescript/70-advanced-topics/200-pipelines.html.
Essentially, npm i (or npm ci) + tsc works fine locally, but when done over CodeBuild, it appears my dependencies don't have their dependencies installed, causing tsc to break.
### Reproduction Steps
<!--
minimal amount of code that causes the bug (if possible) or a reference:
-->
### What did you expect to happen?
To build successfully on code build
### What actually happened?
Build fails with missing module errors
### Environment
- **CDK CLI Version :** 1.94.1
- **Framework Version:**
- **Node.js Version:** v12.19.1
- **OS :** Linux
- **Language (Version):** TypeScript
### Other
https://stackoverflow.com/questions/66590492/aws-codebuild-tsc-error-ts2307-cannot-find-module
---
This is :bug: Bug Report
|
1.0
|
[cdk-pipelines]: Typescript pipeline fails with module missing errors on code build - It exactly the same issue as mentioned here on stackoverflow
https://stackoverflow.com/questions/66590492/aws-codebuild-tsc-error-ts2307-cannot-find-module
If we follow https://cdkworkshop.com/20-typescript/70-advanced-topics/200-pipelines.html.
Essentially, npm i (or npm ci) + tsc works fine locally, but when done over CodeBuild, it appears my dependencies don't have their dependencies installed, causing tsc to break.
### Reproduction Steps
<!--
minimal amount of code that causes the bug (if possible) or a reference:
-->
### What did you expect to happen?
To build successfully on code build
### What actually happened?
Build fails with missing module errors
### Environment
- **CDK CLI Version :** 1.94.1
- **Framework Version:**
- **Node.js Version:** v12.19.1
- **OS :** Linux
- **Language (Version):** TypeScript
### Other
https://stackoverflow.com/questions/66590492/aws-codebuild-tsc-error-ts2307-cannot-find-module
---
This is :bug: Bug Report
|
build
|
typescript pipeline fails with module missing errors on code build it exactly the same issue as mentioned here on stackoverflow if we follow essentially npm i or npm ci tsc works fine locally but when done over codebuild it appears my dependencies don t have their dependencies installed causing tsc to break reproduction steps minimal amount of code that causes the bug if possible or a reference what did you expect to happen to build successfully on code build what actually happened build fails with missing module errors environment cdk cli version framework version node js version os linux language version typescript other this is bug bug report
| 1
|
10,503
| 4,779,643,511
|
IssuesEvent
|
2016-10-27 23:20:31
|
Unidata/thredds
|
https://api.github.com/repos/Unidata/thredds
|
opened
|
Investigate Gradle composite builds
|
Area: Build / Release Area: NetCDF-Java Type: Cleanup Type: Enhancement Type: Feature
|
https://blog.gradle.org/introducing-composite-builds
> Splitting Monoliths
>
> Organizations that want to avoid the integration pains of multiple repositories tend to use a “monorepo”—a repository containing all projects, often including their dependencies and necessary tools. The upside is that all code is in one place and downstream breakages become visible immediately. But this convenience can come at the cost of productivity: a given developer will usually work only on a small part of a monorepo, but will still be forced to build all upstream projects, and that can mean a lot of waiting and wasted time. Likewise, importing large monorepo projects into an IDE often results in an unresponsive and overwhelming experience.
>
> With composite builds, you can break your monorepo up into several independent builds within the same repository. Developers can work with the individual builds to get fast turnarounds or work with the whole composite when they want to ensure that everything still plays well together.
An important step of reducing the technical debt of the thredds project is breaking up its monolithic structure. Composite builds would allow us to split some of the subprojects off into their own discrete projects, and then be able to work on them without the typical pain you'd have if you did that. Good candidates are `bufr`, `grib`, `dap4`, `ui`, etc.
|
1.0
|
Investigate Gradle composite builds - https://blog.gradle.org/introducing-composite-builds
> Splitting Monoliths
>
> Organizations that want to avoid the integration pains of multiple repositories tend to use a “monorepo”—a repository containing all projects, often including their dependencies and necessary tools. The upside is that all code is in one place and downstream breakages become visible immediately. But this convenience can come at the cost of productivity: a given developer will usually work only on a small part of a monorepo, but will still be forced to build all upstream projects, and that can mean a lot of waiting and wasted time. Likewise, importing large monorepo projects into an IDE often results in an unresponsive and overwhelming experience.
>
> With composite builds, you can break your monorepo up into several independent builds within the same repository. Developers can work with the individual builds to get fast turnarounds or work with the whole composite when they want to ensure that everything still plays well together.
An important step of reducing the technical debt of the thredds project is breaking up its monolithic structure. Composite builds would allow us to split some of the subprojects off into their own discrete projects, and then be able to work on them without the typical pain you'd have if you did that. Good candidates are `bufr`, `grib`, `dap4`, `ui`, etc.
|
build
|
investigate gradle composite builds splitting monoliths organizations that want to avoid the integration pains of multiple repositories tend to use a “monorepo”—a repository containing all projects often including their dependencies and necessary tools the upside is that all code is in one place and downstream breakages become visible immediately but this convenience can come at the cost of productivity a given developer will usually work only on a small part of a monorepo but will still be forced to build all upstream projects and that can mean a lot of waiting and wasted time likewise importing large monorepo projects into an ide often results in an unresponsive and overwhelming experience with composite builds you can break your monorepo up into several independent builds within the same repository developers can work with the individual builds to get fast turnarounds or work with the whole composite when they want to ensure that everything still plays well together an important step of reducing the technical debt of the thredds project is breaking up its monolithic structure composite builds would allow us to split some of the subprojects off into their own discrete projects and then be able to work on them without the typical pain you d have if you did that good candidates are bufr grib ui etc
| 1
|
52,849
| 13,065,198,037
|
IssuesEvent
|
2020-07-30 19:23:44
|
GoogleCloudPlatform/golang-samples
|
https://api.github.com/repos/GoogleCloudPlatform/golang-samples
|
closed
|
container_registry/container_analysis: TestPubSub failed
|
buildcop: flaky buildcop: issue priority: p1 sample type: bug
|
This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: 9d203dfe6a2bf97041383afc7c74880cdfbec364
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/caf61114-3815-4157-8ae2-3c4446c60622), [Sponge](http://sponge2/caf61114-3815-4157-8ae2-3c4446c60622)
status: failed
<details><summary>Test output</summary><br><pre>samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
retry.go:44: FAILED after 20 attempts:
samples_test.go:254: invalid occurrence count: -1; want: 3</pre></details>
|
2.0
|
container_registry/container_analysis: TestPubSub failed - This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: 9d203dfe6a2bf97041383afc7c74880cdfbec364
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/caf61114-3815-4157-8ae2-3c4446c60622), [Sponge](http://sponge2/caf61114-3815-4157-8ae2-3c4446c60622)
status: failed
<details><summary>Test output</summary><br><pre>samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
samples_test.go:237: occurrencePubsub(occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a): sub.Receive: rpc error: code = NotFound desc = Resource not found (resource=occurrences-78f67151-43c0-4113-bab2-3e5a6328e11a).
retry.go:44: FAILED after 20 attempts:
samples_test.go:254: invalid occurrence count: -1; want: 3</pre></details>
|
build
|
container registry container analysis testpubsub failed this test failed to configure my behavior see if i m commenting on this issue too often add the buildcop quiet label and i will stop commenting commit buildurl status failed test output samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences samples test go occurrencepubsub occurrences sub receive rpc error code notfound desc resource not found resource occurrences retry go failed after attempts samples test go invalid occurrence count want
| 1
|
58,018
| 14,265,004,642
|
IssuesEvent
|
2020-11-20 16:30:19
|
root-project/root
|
https://api.github.com/repos/root-project/root
|
closed
|
macOS packaging broken in master
|
affects:master bug in:Build System
|
# Describe the bug
https://lcgapp-services.cern.ch/root-jenkins/job/root-release-master/ shows all mac build are broken with similar errrors:
```
07:02:10 CPack: - Building component package: /build/jenkins/ws/BUILDTYPE/Release/LABEL/mac1014/V/master/build/_CPack_Packages/Darwin/productbuild/root_v6.23.01.macosx64-10.14-clang100RELEASE/Contents/Packages/root_v6.23.01.macosx64-10.14-clang100RELEASE-tests.pkg
07:02:11 CPack Error: Bad file extension specified: .md. Currently only .rtfd, .rtf, .html, and .txt files allowed.
07:02:11 CPack Error: Problem copying the License, ReadMe and Welcome files
07:02:11 CPack Error: Problem compressing the directory
07:02:11 CPack Error: Error when generating package: ROOT
```
```
03:20:55 CPack: - Building component package: /Users/sftnight/build/ws/BUILDTYPE/Debug/LABEL/mac1015/V/master/build/_CPack_Packages/Darwin/productbuild/root_v6.23.01.macosx64-10.15-clang120.debug/Contents/Packages/root_v6.23.01.macosx64-10.15-clang120.debug-tests.pkg
03:20:56 CPack Error: Cannot find ReadMe resource file: /README.html
03:20:56 CPack Error: Problem copying the License, ReadMe and Welcome files
03:20:56 CPack Error: Problem compressing the directory
03:20:56 CPack Error: Error when generating package: ROOT
03:20:56 make: *** [package] Error 1
```
|
1.0
|
macOS packaging broken in master - # Describe the bug
https://lcgapp-services.cern.ch/root-jenkins/job/root-release-master/ shows all mac build are broken with similar errrors:
```
07:02:10 CPack: - Building component package: /build/jenkins/ws/BUILDTYPE/Release/LABEL/mac1014/V/master/build/_CPack_Packages/Darwin/productbuild/root_v6.23.01.macosx64-10.14-clang100RELEASE/Contents/Packages/root_v6.23.01.macosx64-10.14-clang100RELEASE-tests.pkg
07:02:11 CPack Error: Bad file extension specified: .md. Currently only .rtfd, .rtf, .html, and .txt files allowed.
07:02:11 CPack Error: Problem copying the License, ReadMe and Welcome files
07:02:11 CPack Error: Problem compressing the directory
07:02:11 CPack Error: Error when generating package: ROOT
```
```
03:20:55 CPack: - Building component package: /Users/sftnight/build/ws/BUILDTYPE/Debug/LABEL/mac1015/V/master/build/_CPack_Packages/Darwin/productbuild/root_v6.23.01.macosx64-10.15-clang120.debug/Contents/Packages/root_v6.23.01.macosx64-10.15-clang120.debug-tests.pkg
03:20:56 CPack Error: Cannot find ReadMe resource file: /README.html
03:20:56 CPack Error: Problem copying the License, ReadMe and Welcome files
03:20:56 CPack Error: Problem compressing the directory
03:20:56 CPack Error: Error when generating package: ROOT
03:20:56 make: *** [package] Error 1
```
|
build
|
macos packaging broken in master describe the bug shows all mac build are broken with similar errrors cpack building component package build jenkins ws buildtype release label v master build cpack packages darwin productbuild root contents packages root tests pkg cpack error bad file extension specified md currently only rtfd rtf html and txt files allowed cpack error problem copying the license readme and welcome files cpack error problem compressing the directory cpack error error when generating package root cpack building component package users sftnight build ws buildtype debug label v master build cpack packages darwin productbuild root debug contents packages root debug tests pkg cpack error cannot find readme resource file readme html cpack error problem copying the license readme and welcome files cpack error problem compressing the directory cpack error error when generating package root make error
| 1
|
599,431
| 18,273,360,465
|
IssuesEvent
|
2021-10-04 15:55:22
|
cloudnativedaysjp/reviewapp-operator
|
https://api.github.com/repos/cloudnativedaysjp/reviewapp-operator
|
closed
|
Application Template から作成される ArgoCD Application が status まで明記されてしまっているのを直す
|
bug medium priority
|
* https://github.com/ShotaKitazawa/reviewapp-operator-demo-infra/blob/4b494d3aeff5c4ca639e8044656afa302a4d95cc/.apps/dev/sample-2.yaml#L27-L35
* expected は、 status フィールドの記述が無いもの
|
1.0
|
Application Template から作成される ArgoCD Application が status まで明記されてしまっているのを直す - * https://github.com/ShotaKitazawa/reviewapp-operator-demo-infra/blob/4b494d3aeff5c4ca639e8044656afa302a4d95cc/.apps/dev/sample-2.yaml#L27-L35
* expected は、 status フィールドの記述が無いもの
|
non_build
|
application template から作成される argocd application が status まで明記されてしまっているのを直す expected は、 status フィールドの記述が無いもの
| 0
|
283,700
| 24,559,275,610
|
IssuesEvent
|
2022-10-12 18:41:28
|
gravitational/teleport
|
https://api.github.com/repos/gravitational/teleport
|
closed
|
[v11] Enhanced recording doesn't work
|
bug testplan bpf server-access
|
The enhanced recording is broken on v11. The problem doesn't appear on v10.
Current behavior:
Server logs:
```
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] Launching session 7c1ee1e7-c27b-4189-ba06-d19f2763cc90. srv/sess.go:870
2022-10-08T13:04:07-04:00 ERRO [NODE] Failed to open enhanced recording (interactive) session: 7c1ee1e7-c27b-4189-ba06-d19f2763cc90: write /cgroup2/teleport/c7fc63c3-dc6f-4830-880a-16f350f58dc2/7c1ee1e7-c27b-4189-ba06-d19f2763cc90/cgroup.procs: invalid argument. id:2 local:[::1]:3022 login:jnyckowski remote:127.0.0.1:36968 teleportUser:bob srv/sess.go:954
2022-10-08T13:04:07-04:00 INFO [SESSION:N] Stopping session 7c1ee1e7-c27b-4189-ba06-d19f2763cc90. srv/sess.go:622
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] Copying from reader to PTY completed with error <nil>. srv/sess.go:924
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] Copying from PTY to writer completed with error read /dev/ptmx: input/output error. srv/sess.go:916
2022-10-08T13:04:07-04:00 INFO [SESSION:N] Closing session 7c1ee1e7-c27b-4189-ba06-d19f2763cc90. srv/sess.go:651
2022-10-08T13:04:07-04:00 INFO [SESSION:N] Closing party f9b7cd73-2315-4d60-9153-f4c2cb0ae7de srv/sess.go:1586
2022-10-08T13:04:07-04:00 INFO [SESSION:N] Removing party ServerContext(127.0.0.1:36968->[::1]:3022, user=jnyckowski, id=2) party(id=f9b7cd73-2315-4d60-9153-f4c2cb0ae7de) from session 7c1ee1e7-c27b-4189-ba06-d19f2763cc90 srv/sess.go:1246
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] No longer tracking participant: f9b7cd73-2315-4d60-9153-f4c2cb0ae7de srv/sess.go:1254
2022-10-08T13:04:07-04:00 INFO [AUDIT] session.leave cluster_name:ubuntu code:T2003I ei:1 event:session.leave login:jnyckowski namespace:default server_addr:[::]:3022 server_hostname:localhost server_id:dffe30aa-cc94-47fa-b461-ae8e08cfacf9 server_labels:map[env:example] sid:7c1ee1e7-c27b-4189-ba06-d19f2763cc90 time:2022-10-08T17:04:07.191Z uid:8f2f79e9-ea20-4cfd-ae46-4e2178591be2 user:bob events/emitter.go:263
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] Copying from Party f9b7cd73-2315-4d60-9153-f4c2cb0ae7de to session writer completed with error <nil>. srv/sess.go:1450
2022-10-08T13:04:07-04:00 DEBU [TERM:LOCA] Closed PTY srv/term.go:299
2022-10-08T13:04:07-04:00 DEBU [PROXY] Client 127.0.0.1:36968 disconnected. id:1 local:127.0.0.1:3080 login:admin remote:127.0.0.1:36968 teleportUser:bob regular/sshserver.go:1553
2022-10-08T13:04:07-04:00 ERRO write /cgroup2/teleport/c7fc63c3-dc6f-4830-880a-16f350f58dc2/7c1ee1e7-c27b-4189-ba06-d19f2763cc90/cgroup.procs: invalid argument regular/sshserver.go:2060
2022-10-08T13:04:07-04:00 WARN [NODE] Failed writing to ssh.Channel.Stderr(): EOF regular/sshserver.go:2103
2022-10-08T13:04:07-04:00 WARN Failed to reply to "shell" request: EOF regular/sshserver.go:2069
2022-10-08T13:04:07-04:00 DEBU [SSH:NODE] Closed connection 127.0.0.1:36968. sshutils/server.go:490
```
and the client side:
```
tsh ssh node
ERROR: EOF
```
Bug details:
- Teleport version v11
- Recreation steps
- Enable `enhanced_recording`
- SSH into the node
|
1.0
|
[v11] Enhanced recording doesn't work - The enhanced recording is broken on v11. The problem doesn't appear on v10.
Current behavior:
Server logs:
```
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] Launching session 7c1ee1e7-c27b-4189-ba06-d19f2763cc90. srv/sess.go:870
2022-10-08T13:04:07-04:00 ERRO [NODE] Failed to open enhanced recording (interactive) session: 7c1ee1e7-c27b-4189-ba06-d19f2763cc90: write /cgroup2/teleport/c7fc63c3-dc6f-4830-880a-16f350f58dc2/7c1ee1e7-c27b-4189-ba06-d19f2763cc90/cgroup.procs: invalid argument. id:2 local:[::1]:3022 login:jnyckowski remote:127.0.0.1:36968 teleportUser:bob srv/sess.go:954
2022-10-08T13:04:07-04:00 INFO [SESSION:N] Stopping session 7c1ee1e7-c27b-4189-ba06-d19f2763cc90. srv/sess.go:622
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] Copying from reader to PTY completed with error <nil>. srv/sess.go:924
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] Copying from PTY to writer completed with error read /dev/ptmx: input/output error. srv/sess.go:916
2022-10-08T13:04:07-04:00 INFO [SESSION:N] Closing session 7c1ee1e7-c27b-4189-ba06-d19f2763cc90. srv/sess.go:651
2022-10-08T13:04:07-04:00 INFO [SESSION:N] Closing party f9b7cd73-2315-4d60-9153-f4c2cb0ae7de srv/sess.go:1586
2022-10-08T13:04:07-04:00 INFO [SESSION:N] Removing party ServerContext(127.0.0.1:36968->[::1]:3022, user=jnyckowski, id=2) party(id=f9b7cd73-2315-4d60-9153-f4c2cb0ae7de) from session 7c1ee1e7-c27b-4189-ba06-d19f2763cc90 srv/sess.go:1246
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] No longer tracking participant: f9b7cd73-2315-4d60-9153-f4c2cb0ae7de srv/sess.go:1254
2022-10-08T13:04:07-04:00 INFO [AUDIT] session.leave cluster_name:ubuntu code:T2003I ei:1 event:session.leave login:jnyckowski namespace:default server_addr:[::]:3022 server_hostname:localhost server_id:dffe30aa-cc94-47fa-b461-ae8e08cfacf9 server_labels:map[env:example] sid:7c1ee1e7-c27b-4189-ba06-d19f2763cc90 time:2022-10-08T17:04:07.191Z uid:8f2f79e9-ea20-4cfd-ae46-4e2178591be2 user:bob events/emitter.go:263
2022-10-08T13:04:07-04:00 DEBU [SESSION:N] Copying from Party f9b7cd73-2315-4d60-9153-f4c2cb0ae7de to session writer completed with error <nil>. srv/sess.go:1450
2022-10-08T13:04:07-04:00 DEBU [TERM:LOCA] Closed PTY srv/term.go:299
2022-10-08T13:04:07-04:00 DEBU [PROXY] Client 127.0.0.1:36968 disconnected. id:1 local:127.0.0.1:3080 login:admin remote:127.0.0.1:36968 teleportUser:bob regular/sshserver.go:1553
2022-10-08T13:04:07-04:00 ERRO write /cgroup2/teleport/c7fc63c3-dc6f-4830-880a-16f350f58dc2/7c1ee1e7-c27b-4189-ba06-d19f2763cc90/cgroup.procs: invalid argument regular/sshserver.go:2060
2022-10-08T13:04:07-04:00 WARN [NODE] Failed writing to ssh.Channel.Stderr(): EOF regular/sshserver.go:2103
2022-10-08T13:04:07-04:00 WARN Failed to reply to "shell" request: EOF regular/sshserver.go:2069
2022-10-08T13:04:07-04:00 DEBU [SSH:NODE] Closed connection 127.0.0.1:36968. sshutils/server.go:490
```
and the client side:
```
tsh ssh node
ERROR: EOF
```
Bug details:
- Teleport version v11
- Recreation steps
- Enable `enhanced_recording`
- SSH into the node
|
non_build
|
enhanced recording doesn t work the enhanced recording is broken on the problem doesn t appear on current behavior server logs debu launching session srv sess go erro failed to open enhanced recording interactive session write teleport cgroup procs invalid argument id local login jnyckowski remote teleportuser bob srv sess go info stopping session srv sess go debu copying from reader to pty completed with error srv sess go debu copying from pty to writer completed with error read dev ptmx input output error srv sess go info closing session srv sess go info closing party srv sess go info removing party servercontext user jnyckowski id party id from session srv sess go debu no longer tracking participant srv sess go info session leave cluster name ubuntu code ei event session leave login jnyckowski namespace default server addr server hostname localhost server id server labels map sid time uid user bob events emitter go debu copying from party to session writer completed with error srv sess go debu closed pty srv term go debu client disconnected id local login admin remote teleportuser bob regular sshserver go erro write teleport cgroup procs invalid argument regular sshserver go warn failed writing to ssh channel stderr eof regular sshserver go warn failed to reply to shell request eof regular sshserver go debu closed connection sshutils server go and the client side tsh ssh node error eof bug details teleport version recreation steps enable enhanced recording ssh into the node
| 0
|
116,058
| 11,899,408,957
|
IssuesEvent
|
2020-03-30 08:57:42
|
OpenEnergyPlatform/oeplatform
|
https://api.github.com/repos/OpenEnergyPlatform/oeplatform
|
closed
|
Dataview: Add Graph / Map view not self-explanatory
|
SzenarienDB data view / modification documentation enhancement
|
The menu that is displayed after clicking the above is not self-explanatory. My suggestion would be to add some explanatory paragraph (yes more text), or delete these interaction possibilities (for the moment).
Map view:

Graph view:

|
1.0
|
Dataview: Add Graph / Map view not self-explanatory - The menu that is displayed after clicking the above is not self-explanatory. My suggestion would be to add some explanatory paragraph (yes more text), or delete these interaction possibilities (for the moment).
Map view:

Graph view:

|
non_build
|
dataview add graph map view not self explanatory the menu that is displayed after clicking the above is not self explanatory my suggestion would be to add some explanatory paragraph yes more text or delete these interaction possibilities for the moment map view graph view
| 0
|
49,055
| 12,270,671,566
|
IssuesEvent
|
2020-05-07 15:51:14
|
USDA-FSA/fsa-design-system
|
https://api.github.com/repos/USDA-FSA/fsa-design-system
|
closed
|
Soft launch fsa-style@2.5.0 dependency
|
Category: Site Design & Build P1 Type: Style Base
|
Once https://github.com/USDA-FSA/fsa-style/issues/430 is complete, do the minimum to get `fsa-style` dependency updated to latest.
### Steps
- [x] Bump **`package.json`** version
- [x] Validate dependencies, most especially `fsa-style`
- [x] Validate all Issues in this Project/Milestone are complete and sitting on **`gh-pages-dev`** branch.
- [x] Pull **`gh-pages-dev`** from **`gh-pages`** to make sure you've got them all locally and they're identical.
- [x] Merge **`gh-pages-dev`** to **`gh-pages`** (effectively master)
- [x] Git Tag **`gh-pages`** as version, e.g. `1.0.0`
- [x] Push tag
- [x] Validate on https://github.com/USDA-FSA/fsa-design-system/tags
- [x] Push **`gh-pages`**
- [x] Validate at http://usda-fsa.github.io/fsa-design-system
- [x] Publish tag as a [Release](https://github.com/USDA-FSA/fsa-design-system/releases)
- [x] Prune local and remote branches
- [x] Close
- [ ] associated GitHub [Project](https://github.com/orgs/USDA-FSA/projects)
- [ ] associated GitHub [Milestone](https://github.com/USDA-FSA/fsa-design-system/milestones)
|
1.0
|
Soft launch fsa-style@2.5.0 dependency - Once https://github.com/USDA-FSA/fsa-style/issues/430 is complete, do the minimum to get `fsa-style` dependency updated to latest.
### Steps
- [x] Bump **`package.json`** version
- [x] Validate dependencies, most especially `fsa-style`
- [x] Validate all Issues in this Project/Milestone are complete and sitting on **`gh-pages-dev`** branch.
- [x] Pull **`gh-pages-dev`** from **`gh-pages`** to make sure you've got them all locally and they're identical.
- [x] Merge **`gh-pages-dev`** to **`gh-pages`** (effectively master)
- [x] Git Tag **`gh-pages`** as version, e.g. `1.0.0`
- [x] Push tag
- [x] Validate on https://github.com/USDA-FSA/fsa-design-system/tags
- [x] Push **`gh-pages`**
- [x] Validate at http://usda-fsa.github.io/fsa-design-system
- [x] Publish tag as a [Release](https://github.com/USDA-FSA/fsa-design-system/releases)
- [x] Prune local and remote branches
- [x] Close
- [ ] associated GitHub [Project](https://github.com/orgs/USDA-FSA/projects)
- [ ] associated GitHub [Milestone](https://github.com/USDA-FSA/fsa-design-system/milestones)
|
build
|
soft launch fsa style dependency once is complete do the minimum to get fsa style dependency updated to latest steps bump package json version validate dependencies most especially fsa style validate all issues in this project milestone are complete and sitting on gh pages dev branch pull gh pages dev from gh pages to make sure you ve got them all locally and they re identical merge gh pages dev to gh pages effectively master git tag gh pages as version e g push tag validate on push gh pages validate at publish tag as a prune local and remote branches close associated github associated github
| 1
|
2,616
| 5,394,392,534
|
IssuesEvent
|
2017-02-27 02:59:04
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
closed
|
OVF Tool update changes default hashing algorithm
|
enhancement post-processor/vsphere
|
Hello,
This issue is about the latest version of OVF Tool used in the VMware ESX builder. It is now in version 4.2 and the default algorithm changed from SHA1 to SHA256. It is currently not possible to change Packer args used with OVF Tool as far as I know.
The SHA256 is not supported by the vSphere Client thus, it is not possible to had an OVF VM to an ESX with this method. (It is still possible to add them using vSphere web client as I've read but I didn't tested it).
I think it would be a good idea to let users change the parameters they used.
For more details : http://www.virtuallyghetto.com/2016/11/default-hashing-algorithm-changed-in-ovftool-4-2-preventing-ovfova-import-using-vsphere-c-client.html
|
1.0
|
OVF Tool update changes default hashing algorithm - Hello,
This issue is about the latest version of OVF Tool used in the VMware ESX builder. It is now in version 4.2 and the default algorithm changed from SHA1 to SHA256. It is currently not possible to change Packer args used with OVF Tool as far as I know.
The SHA256 is not supported by the vSphere Client thus, it is not possible to had an OVF VM to an ESX with this method. (It is still possible to add them using vSphere web client as I've read but I didn't tested it).
I think it would be a good idea to let users change the parameters they used.
For more details : http://www.virtuallyghetto.com/2016/11/default-hashing-algorithm-changed-in-ovftool-4-2-preventing-ovfova-import-using-vsphere-c-client.html
|
non_build
|
ovf tool update changes default hashing algorithm hello this issue is about the latest version of ovf tool used in the vmware esx builder it is now in version and the default algorithm changed from to it is currently not possible to change packer args used with ovf tool as far as i know the is not supported by the vsphere client thus it is not possible to had an ovf vm to an esx with this method it is still possible to add them using vsphere web client as i ve read but i didn t tested it i think it would be a good idea to let users change the parameters they used for more details
| 0
|
27,365
| 7,941,126,544
|
IssuesEvent
|
2018-07-10 02:37:31
|
openshiftio/openshift.io
|
https://api.github.com/repos/openshiftio/openshift.io
|
closed
|
A diagnose pipeline to capture tenant status and metadata to help diagnose tenant issues
|
area/user/tenant status/analyzing team/build-cd team/platform team/service-delivery type/SDD-feature
|
a pipeline job may be the easiest place to do nightly diagnosis reports on whats happening in the tenant space; what pods are running, restarts, memory usage and whatnot.
ultimately we'll have nice prometheus & elasticsearch to help; but a job we can trigger on a per user basis on demand might be the simplest thing to do.
Being able to login to tenants Jenkins might be handy too so we can noodle the diagnosis reports and historic ones
|
1.0
|
A diagnose pipeline to capture tenant status and metadata to help diagnose tenant issues - a pipeline job may be the easiest place to do nightly diagnosis reports on whats happening in the tenant space; what pods are running, restarts, memory usage and whatnot.
ultimately we'll have nice prometheus & elasticsearch to help; but a job we can trigger on a per user basis on demand might be the simplest thing to do.
Being able to login to tenants Jenkins might be handy too so we can noodle the diagnosis reports and historic ones
|
build
|
a diagnose pipeline to capture tenant status and metadata to help diagnose tenant issues a pipeline job may be the easiest place to do nightly diagnosis reports on whats happening in the tenant space what pods are running restarts memory usage and whatnot ultimately we ll have nice prometheus elasticsearch to help but a job we can trigger on a per user basis on demand might be the simplest thing to do being able to login to tenants jenkins might be handy too so we can noodle the diagnosis reports and historic ones
| 1
|
146,846
| 23,132,294,049
|
IssuesEvent
|
2022-07-28 11:31:00
|
vaadin/docs
|
https://api.github.com/repos/vaadin/docs
|
closed
|
Basic layout example in documentation confusing
|
design system documentation vaadin-ordered-layout
|
### Description
At https://vaadin.com/docs/latest/components/basic-layouts examples show code like:
```
<vaadin-vertical-layout theme="spacing padding">
<layout-item>Item 1</layout-item>
<layout-item>Item 2</layout-item>
<layout-item>Item 3</layout-item>
<layout-item>Item 4</layout-item>
</vaadin-vertical-layout>
```
Our developers copy the code and expect layout-item to be defined "out of the box" as a Vaadin concept which is not as it is imported relatively in the example. It would be better just to use a standard HTML element like a div in the example so it is less confusing.
### Expected outcome
Less confusing documentation.
### Minimal reproducible example
See description
### Steps to reproduce
See description
### Environment
Vaadin version(s): all
### Browsers
Issue is not browser related
|
1.0
|
Basic layout example in documentation confusing - ### Description
At https://vaadin.com/docs/latest/components/basic-layouts examples show code like:
```
<vaadin-vertical-layout theme="spacing padding">
<layout-item>Item 1</layout-item>
<layout-item>Item 2</layout-item>
<layout-item>Item 3</layout-item>
<layout-item>Item 4</layout-item>
</vaadin-vertical-layout>
```
Our developers copy the code and expect layout-item to be defined "out of the box" as a Vaadin concept which is not as it is imported relatively in the example. It would be better just to use a standard HTML element like a div in the example so it is less confusing.
### Expected outcome
Less confusing documentation.
### Minimal reproducible example
See description
### Steps to reproduce
See description
### Environment
Vaadin version(s): all
### Browsers
Issue is not browser related
|
non_build
|
basic layout example in documentation confusing description at examples show code like item item item item our developers copy the code and expect layout item to be defined out of the box as a vaadin concept which is not as it is imported relatively in the example it would be better just to use a standard html element like a div in the example so it is less confusing expected outcome less confusing documentation minimal reproducible example see description steps to reproduce see description environment vaadin version s all browsers issue is not browser related
| 0
|
7,103
| 3,934,534,587
|
IssuesEvent
|
2016-04-25 23:08:59
|
jens-maus/yam
|
https://api.github.com/repos/jens-maus/yam
|
closed
|
yam nightly grim reaper when jumping to next new mail
|
#major @undecided bug fixed nightly build
|
**Originally by _Michael.Merkel@gmx.net_ on 2013-09-21 20:30:51 +0200**
___
## Summary
reading new mails i used the "shift + right" key combination to jump to the next unread mail. that jumped from the inbox to a subfolder with that mail.
then the grim reaper appeared.
i was not able so far to reproduce it :-( sorry.
nevertheless here is the grim reaper - maybe it helps.
## Steps to reproduce
unfortunately not able so far :-(
## Expected results
## Actual results
## Regression
## Notes
20.6038 (svn r2770)
|
1.0
|
yam nightly grim reaper when jumping to next new mail - **Originally by _Michael.Merkel@gmx.net_ on 2013-09-21 20:30:51 +0200**
___
## Summary
reading new mails i used the "shift + right" key combination to jump to the next unread mail. that jumped from the inbox to a subfolder with that mail.
then the grim reaper appeared.
i was not able so far to reproduce it :-( sorry.
nevertheless here is the grim reaper - maybe it helps.
## Steps to reproduce
unfortunately not able so far :-(
## Expected results
## Actual results
## Regression
## Notes
20.6038 (svn r2770)
|
build
|
yam nightly grim reaper when jumping to next new mail originally by michael merkel gmx net on summary reading new mails i used the shift right key combination to jump to the next unread mail that jumped from the inbox to a subfolder with that mail then the grim reaper appeared i was not able so far to reproduce it sorry nevertheless here is the grim reaper maybe it helps steps to reproduce unfortunately not able so far expected results actual results regression notes svn
| 1
|
678,543
| 23,201,229,626
|
IssuesEvent
|
2022-08-01 21:44:45
|
thoth-station/core
|
https://api.github.com/repos/thoth-station/core
|
closed
|
provide Python 3.9 based toolchain
|
kind/feature priority/important-soon lifecycle/rotten triage/accepted
|
**Is your feature request related to a problem? Please describe.**
As the upcoming RHEL9 will use Python 3.9, we need to prepare our toolchain for this.
**Describe the solution you'd like**
s2i-thoth and thoth-infra need to be updated. pytest and mypy toolchain needs to be provided for py39
**Describe alternatives you've considered**
n/a
**Additional context**
tbd
**Acceptance Criteria**
- [ ] container images are provided on quay
- [ ] .prow and .thoth files are updated on all repos
**References**
* https://chat.google.com/room/AAAAVjnVXFk/Fu3w7uwPjW8
|
1.0
|
provide Python 3.9 based toolchain - **Is your feature request related to a problem? Please describe.**
As the upcoming RHEL9 will use Python 3.9, we need to prepare our toolchain for this.
**Describe the solution you'd like**
s2i-thoth and thoth-infra need to be updated. pytest and mypy toolchain needs to be provided for py39
**Describe alternatives you've considered**
n/a
**Additional context**
tbd
**Acceptance Criteria**
- [ ] container images are provided on quay
- [ ] .prow and .thoth files are updated on all repos
**References**
* https://chat.google.com/room/AAAAVjnVXFk/Fu3w7uwPjW8
|
non_build
|
provide python based toolchain is your feature request related to a problem please describe as the upcoming will use python we need to prepare our toolchain for this describe the solution you d like thoth and thoth infra need to be updated pytest and mypy toolchain needs to be provided for describe alternatives you ve considered n a additional context tbd acceptance criteria container images are provided on quay prow and thoth files are updated on all repos references
| 0
|
52,562
| 13,224,836,971
|
IssuesEvent
|
2020-08-17 19:57:02
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[clsim] I3CLSimLightSourceToStepConverterFlasher hangs (Trac #2426)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2426">https://code.icecube.wisc.edu/projects/icecube/ticket/2426</a>, reported by jvansantenand owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:32:17",
"_ts": "1593001937450890",
"description": "Steps to reproduce:\n\n{{{\nclsim/resources/scripts/flasher/generateTestFlashes.py\nclsim/resources/scripts/flasher/applyCLSim.py -i test_flashes.i3\n}}}\n\nand observe the second process not exiting, and also not using any CPU time.\n\nThis happens `I3CLSimLightSourceToStepConverterFlasher` can only run in the main thread, as it attempts to call `I3CLSimRandomValueIceCubeFlasherTimeProfile.SampleFromDistribution()` and waits forever trying to acquire the GIL that the main thread never releases. This clashes with its use within I3CLSimLightSourceToStepConverterAsync.\n\nThere are two solutions: either drop the GIL in `I3Tray::Execute()` (deja vu, anyone?) or port I3CLSimRandomValueIceCubeFlasherTimeProfile to C++. The latter is probably easier in the short term.",
"reporter": "jvansanten",
"cc": "fiedl",
"resolution": "fixed",
"time": "2020-04-23T19:42:44",
"component": "combo simulation",
"summary": "[clsim] I3CLSimLightSourceToStepConverterFlasher hangs",
"priority": "major",
"keywords": "",
"milestone": "Autumnal Equinox 2020",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[clsim] I3CLSimLightSourceToStepConverterFlasher hangs (Trac #2426) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2426">https://code.icecube.wisc.edu/projects/icecube/ticket/2426</a>, reported by jvansantenand owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:32:17",
"_ts": "1593001937450890",
"description": "Steps to reproduce:\n\n{{{\nclsim/resources/scripts/flasher/generateTestFlashes.py\nclsim/resources/scripts/flasher/applyCLSim.py -i test_flashes.i3\n}}}\n\nand observe the second process not exiting, and also not using any CPU time.\n\nThis happens `I3CLSimLightSourceToStepConverterFlasher` can only run in the main thread, as it attempts to call `I3CLSimRandomValueIceCubeFlasherTimeProfile.SampleFromDistribution()` and waits forever trying to acquire the GIL that the main thread never releases. This clashes with its use within I3CLSimLightSourceToStepConverterAsync.\n\nThere are two solutions: either drop the GIL in `I3Tray::Execute()` (deja vu, anyone?) or port I3CLSimRandomValueIceCubeFlasherTimeProfile to C++. The latter is probably easier in the short term.",
"reporter": "jvansanten",
"cc": "fiedl",
"resolution": "fixed",
"time": "2020-04-23T19:42:44",
"component": "combo simulation",
"summary": "[clsim] I3CLSimLightSourceToStepConverterFlasher hangs",
"priority": "major",
"keywords": "",
"milestone": "Autumnal Equinox 2020",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
|
non_build
|
hangs trac migrated from json status closed changetime ts description steps to reproduce n n nclsim resources scripts flasher generatetestflashes py nclsim resources scripts flasher applyclsim py i test flashes n n nand observe the second process not exiting and also not using any cpu time n nthis happens can only run in the main thread as it attempts to call samplefromdistribution and waits forever trying to acquire the gil that the main thread never releases this clashes with its use within n nthere are two solutions either drop the gil in execute deja vu anyone or port to c the latter is probably easier in the short term reporter jvansanten cc fiedl resolution fixed time component combo simulation summary hangs priority major keywords milestone autumnal equinox owner jvansanten type defect
| 0
|
108,225
| 11,583,543,197
|
IssuesEvent
|
2020-02-22 11:46:43
|
ampproject/amp-wp
|
https://api.github.com/repos/ampproject/amp-wp
|
opened
|
PM: Updating Project Management guidelines for AMP Plugin
|
Type: Documentation Type: Project Management
|
## Feature description
Update and adapt the PM guidelines to match the current process for AMP Plugin.
The specifications are documented in `<root>/contributing/project-management.md` and don't apply for this project after the codebase was split.
---------------
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance criteria
* <!-- One or more bullet points for acceptance criteria. -->
## Implementation brief
* <!-- One or more bullet points for how to technically resolve the issue. For significant Implementation Design, it is ok use a Google document **accessible by anyone**. -->
## QA testing instructions
* <!-- One or more bullet points to describe how to test the implementation in QA. -->
## Demo
* <!-- A video or screenshots demoing the implementation. -->
## Changelog entry
* <!-- One sentence summarizing the PR, to be used in the changelog. -->
|
1.0
|
PM: Updating Project Management guidelines for AMP Plugin - ## Feature description
Update and adapt the PM guidelines to match the current process for AMP Plugin.
The specifications are documented in `<root>/contributing/project-management.md` and don't apply for this project after the codebase was split.
---------------
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance criteria
* <!-- One or more bullet points for acceptance criteria. -->
## Implementation brief
* <!-- One or more bullet points for how to technically resolve the issue. For significant Implementation Design, it is ok use a Google document **accessible by anyone**. -->
## QA testing instructions
* <!-- One or more bullet points to describe how to test the implementation in QA. -->
## Demo
* <!-- A video or screenshots demoing the implementation. -->
## Changelog entry
* <!-- One sentence summarizing the PR, to be used in the changelog. -->
|
non_build
|
pm updating project management guidelines for amp plugin feature description update and adapt the pm guidelines to match the current process for amp plugin the specifications are documented in contributing project management md and don t apply for this project after the codebase was split do not alter or remove anything below the following sections will be managed by moderators only acceptance criteria implementation brief qa testing instructions demo changelog entry
| 0
|
49,852
| 12,420,599,208
|
IssuesEvent
|
2020-05-23 12:51:58
|
inspireui/support
|
https://api.github.com/repos/inspireui/support
|
closed
|
[ Fluxstore Pro 1.7.4 ]- Sort categories in the side menu doesn't work
|
FluxBuilder FluxStore feature-request 🆕
|
- Hello, I noticed that you added sorting category by category id number in the config_xx.json in the pre-releases 1.7.4 it works in the categories screen but not effecting the side menu categories still sort randomly, The second problem in the category screen it sorts it correctly but when you click in the category it doesn't open the subcategory like before it opens the category directly.
screenshots :

the old category submenu is gone! it opens directly :

- I verified my purchase code and upload the Config Files through - http://private.inspireui.com
|
1.0
|
[ Fluxstore Pro 1.7.4 ]- Sort categories in the side menu doesn't work - - Hello, I noticed that you added sorting category by category id number in the config_xx.json in the pre-releases 1.7.4 it works in the categories screen but not effecting the side menu categories still sort randomly, The second problem in the category screen it sorts it correctly but when you click in the category it doesn't open the subcategory like before it opens the category directly.
screenshots :

the old category submenu is gone! it opens directly :

- I verified my purchase code and upload the Config Files through - http://private.inspireui.com
|
build
|
sort categories in the side menu doesn t work hello i noticed that you added sorting category by category id number in the config xx json in the pre releases it works in the categories screen but not effecting the side menu categories still sort randomly the second problem in the category screen it sorts it correctly but when you click in the category it doesn t open the subcategory like before it opens the category directly screenshots the old category submenu is gone it opens directly i verified my purchase code and upload the config files through
| 1
|
236,860
| 19,580,126,813
|
IssuesEvent
|
2022-01-04 20:06:44
|
eXist-db/exist
|
https://api.github.com/repos/eXist-db/exist
|
closed
|
[feature] add int test for mail module
|
enhancement needs XQSuite test
|
the mail module is currently broken on J11, but we don't have any test revealing this. We should run add a simple xqs test to catch these kinds of problems
|
1.0
|
[feature] add int test for mail module - the mail module is currently broken on J11, but we don't have any test revealing this. We should run add a simple xqs test to catch these kinds of problems
|
non_build
|
add int test for mail module the mail module is currently broken on but we don t have any test revealing this we should run add a simple xqs test to catch these kinds of problems
| 0
|
102,482
| 32,017,946,854
|
IssuesEvent
|
2023-09-22 00:20:09
|
CrayLabs/SmartSim
|
https://api.github.com/repos/CrayLabs/SmartSim
|
opened
|
Upgrade SmartSim's ML Deps for Python 3.11 Support
|
area: build area: ML
|
## Description
Currently, SmartSim does not have support for the latest versions of Python. This is because SmartSim relies on RedisAI in order to fetch and build ML backends. The latest version of RedisAI at time of writing (v1.2.7) officially supports ML backends for PyTorch v1.11.0, Tensorflow v2.8.0, and ORT v1.11.1, and as such provides a `get_deps.sh` script that is used to fetch these specific versions of these backends.
During the `smart build` phase of the SmartSim installation process, SmartSim will use the `get_deps.sh` script provided by RedisAI to fetch the ML backends at the versions its preferred versions. This unfortunately means that the python ML dependencies of provided through the optional `[ml]` dependencies, must also be pinned at these versions.
Unfortunately, [PyTorch v1.11.0](https://download.pytorch.org/whl/torch/) and [Tensorflow v2.8.0](https://pypi.org/project/tensorflow/2.8.0/#files) only officially have wheels available through Python 3.10. [Onnx v1.11.0](https://pypi.org/project/onnx/1.11.0/#files) is actually already unavailable for Python 3.10 as it only provides a wheels through Python 3.9.
SmartSim has already been built by hand in a number of different environments with updated Torch backends ([Frontier](https://www.craylabs.org/docs/installation_instructions/platform.html#summary), [Spack](https://spack.readthedocs.io/en/latest/package_list.html#py-smartsim)) with no known issues. A similar fix is likely possible for the TF and ONNX backends as well (In my personal experience I have got TF 2.13 CPU to pass the unit tests). If we can update our `smart build` CLI to use newer versions of the ML dependencies over the versions shipped by RedisAI, SmartSim would become compatible with Python 3.11 (and presumably beyond).
## Justification
Python 3.11 has been released for about a year now and has already seen adoption by all of SmartSim's supported backends. Furthermore, Python 3.12 is planned to be released next month, and it is unlikely that old versions of ML libraries will see continued support as the Python ecosystem continues to advance.
Additionally, with newer versions of Python, users are likely to run into confusing error messages when attempting to install SmartSim with newer (currently unsupported) versions of Python (#373). This is due to the fact that past versions of SmartSim that did support the (at the time) newest versions of Python where published without a supported Python upper bound. By providing version SmartSim that does support the newest versions of Python, we are likely to cut some of these confusing errors off at the head, as users will simply receive the newer version of SmartSim that _does_ support the latest version of Python
## Implementation Strategy
As mentioned in the description, SmartSim has already been built by hand with newer versions of ML backends. We just need to automate this process by integrating into `smart build`.
Currently SmartSim uses the `get_deps.sh` script shipped by RedisAI directly to fetch the ML backends. If, instead, SmartSim were to fetch the updated versions of the dependencies required by RedisAI, presumably the remainder of the `smart build` process would be able to continue without issue and without being any the wiser of the change. SmartSim could do this by updating the `smartsim._core._install.builder.RedisAIBuilder.build_from_git` method to instead fetch the backends it needs directly, or instead using its own forked and vendored version of the `get_deps.sh` script with updated dependency numbers that is then shipped alongside SmartSim.
Additionally, this will also likely require some shifting of version numbers of other python packages (particularly `protobuf`) depended on by python in order to avoid conflicts, thought I don't suspect there will be any problems bumping these as needed by the ML deps.
|
1.0
|
Upgrade SmartSim's ML Deps for Python 3.11 Support - ## Description
Currently, SmartSim does not have support for the latest versions of Python. This is because SmartSim relies on RedisAI in order to fetch and build ML backends. The latest version of RedisAI at time of writing (v1.2.7) officially supports ML backends for PyTorch v1.11.0, Tensorflow v2.8.0, and ORT v1.11.1, and as such provides a `get_deps.sh` script that is used to fetch these specific versions of these backends.
During the `smart build` phase of the SmartSim installation process, SmartSim will use the `get_deps.sh` script provided by RedisAI to fetch the ML backends at the versions its preferred versions. This unfortunately means that the python ML dependencies of provided through the optional `[ml]` dependencies, must also be pinned at these versions.
Unfortunately, [PyTorch v1.11.0](https://download.pytorch.org/whl/torch/) and [Tensorflow v2.8.0](https://pypi.org/project/tensorflow/2.8.0/#files) only officially have wheels available through Python 3.10. [Onnx v1.11.0](https://pypi.org/project/onnx/1.11.0/#files) is actually already unavailable for Python 3.10 as it only provides a wheels through Python 3.9.
SmartSim has already been built by hand in a number of different environments with updated Torch backends ([Frontier](https://www.craylabs.org/docs/installation_instructions/platform.html#summary), [Spack](https://spack.readthedocs.io/en/latest/package_list.html#py-smartsim)) with no known issues. A similar fix is likely possible for the TF and ONNX backends as well (In my personal experience I have got TF 2.13 CPU to pass the unit tests). If we can update our `smart build` CLI to use newer versions of the ML dependencies over the versions shipped by RedisAI, SmartSim would become compatible with Python 3.11 (and presumably beyond).
## Justification
Python 3.11 has been released for about a year now and has already seen adoption by all of SmartSim's supported backends. Furthermore, Python 3.12 is planned to be released next month, and it is unlikely that old versions of ML libraries will see continued support as the Python ecosystem continues to advance.
Additionally, with newer versions of Python, users are likely to run into confusing error messages when attempting to install SmartSim with newer (currently unsupported) versions of Python (#373). This is due to the fact that past versions of SmartSim that did support the (at the time) newest versions of Python where published without a supported Python upper bound. By providing version SmartSim that does support the newest versions of Python, we are likely to cut some of these confusing errors off at the head, as users will simply receive the newer version of SmartSim that _does_ support the latest version of Python
## Implementation Strategy
As mentioned in the description, SmartSim has already been built by hand with newer versions of ML backends. We just need to automate this process by integrating into `smart build`.
Currently SmartSim uses the `get_deps.sh` script shipped by RedisAI directly to fetch the ML backends. If, instead, SmartSim were to fetch the updated versions of the dependencies required by RedisAI, presumably the remainder of the `smart build` process would be able to continue without issue and without being any the wiser of the change. SmartSim could do this by updating the `smartsim._core._install.builder.RedisAIBuilder.build_from_git` method to instead fetch the backends it needs directly, or instead using its own forked and vendored version of the `get_deps.sh` script with updated dependency numbers that is then shipped alongside SmartSim.
Additionally, this will also likely require some shifting of version numbers of other python packages (particularly `protobuf`) depended on by python in order to avoid conflicts, thought I don't suspect there will be any problems bumping these as needed by the ML deps.
|
build
|
upgrade smartsim s ml deps for python support description currently smartsim does not have support for the latest versions of python this is because smartsim relies on redisai in order to fetch and build ml backends the latest version of redisai at time of writing officially supports ml backends for pytorch tensorflow and ort and as such provides a get deps sh script that is used to fetch these specific versions of these backends during the smart build phase of the smartsim installation process smartsim will use the get deps sh script provided by redisai to fetch the ml backends at the versions its preferred versions this unfortunately means that the python ml dependencies of provided through the optional dependencies must also be pinned at these versions unfortunately and only officially have wheels available through python is actually already unavailable for python as it only provides a wheels through python smartsim has already been built by hand in a number of different environments with updated torch backends with no known issues a similar fix is likely possible for the tf and onnx backends as well in my personal experience i have got tf cpu to pass the unit tests if we can update our smart build cli to use newer versions of the ml dependencies over the versions shipped by redisai smartsim would become compatible with python and presumably beyond justification python has been released for about a year now and has already seen adoption by all of smartsim s supported backends furthermore python is planned to be released next month and it is unlikely that old versions of ml libraries will see continued support as the python ecosystem continues to advance additionally with newer versions of python users are likely to run into confusing error messages when attempting to install smartsim with newer currently unsupported versions of python this is due to the fact that past versions of smartsim that did support the at the time newest versions of python where published without a supported python upper bound by providing version smartsim that does support the newest versions of python we are likely to cut some of these confusing errors off at the head as users will simply receive the newer version of smartsim that does support the latest version of python implementation strategy as mentioned in the description smartsim has already been built by hand with newer versions of ml backends we just need to automate this process by integrating into smart build currently smartsim uses the get deps sh script shipped by redisai directly to fetch the ml backends if instead smartsim were to fetch the updated versions of the dependencies required by redisai presumably the remainder of the smart build process would be able to continue without issue and without being any the wiser of the change smartsim could do this by updating the smartsim core install builder redisaibuilder build from git method to instead fetch the backends it needs directly or instead using its own forked and vendored version of the get deps sh script with updated dependency numbers that is then shipped alongside smartsim additionally this will also likely require some shifting of version numbers of other python packages particularly protobuf depended on by python in order to avoid conflicts thought i don t suspect there will be any problems bumping these as needed by the ml deps
| 1
|
10,844
| 4,833,239,106
|
IssuesEvent
|
2016-11-08 10:20:16
|
CartoDB/cartodb
|
https://api.github.com/repos/CartoDB/cartodb
|
closed
|
context menu position is not consistent
|
Builder enhancement
|
I've been struggling for a while with the context menu, I already noticed it didn't position correctly between layers panel and layer panel, and it is still more evident in the edit geometry panel

the markup for the `ContextMenuFactory` is different for the markup in the header, and while being the same in layers panel and edit geometry panel it position the menu vertically centered so it varies depending on the height of the header
ideally we should use the same markup for this element, as well as the same markup for the three headers
involved templates
https://github.com/CartoDB/cartodb/blob/master/lib/assets/javascripts/cartodb3/editor/layers/layer-header.tpl
https://github.com/CartoDB/cartodb/blob/master/lib/assets/javascripts/cartodb3/components/context-menu-factory-view.js
https://github.com/CartoDB/cartodb/blob/master/lib/assets/javascripts/cartodb3/editor/editor-header.tpl
|
1.0
|
context menu position is not consistent - I've been struggling for a while with the context menu, I already noticed it didn't position correctly between layers panel and layer panel, and it is still more evident in the edit geometry panel

the markup for the `ContextMenuFactory` is different for the markup in the header, and while being the same in layers panel and edit geometry panel it position the menu vertically centered so it varies depending on the height of the header
ideally we should use the same markup for this element, as well as the same markup for the three headers
involved templates
https://github.com/CartoDB/cartodb/blob/master/lib/assets/javascripts/cartodb3/editor/layers/layer-header.tpl
https://github.com/CartoDB/cartodb/blob/master/lib/assets/javascripts/cartodb3/components/context-menu-factory-view.js
https://github.com/CartoDB/cartodb/blob/master/lib/assets/javascripts/cartodb3/editor/editor-header.tpl
|
build
|
context menu position is not consistent i ve been struggling for a while with the context menu i already noticed it didn t position correctly between layers panel and layer panel and it is still more evident in the edit geometry panel the markup for the contextmenufactory is different for the markup in the header and while being the same in layers panel and edit geometry panel it position the menu vertically centered so it varies depending on the height of the header ideally we should use the same markup for this element as well as the same markup for the three headers involved templates
| 1
|
4,983
| 7,495,726,399
|
IssuesEvent
|
2018-04-08 00:29:49
|
SIU-CS/Red_Dusk-Production
|
https://api.github.com/repos/SIU-CS/Red_Dusk-Production
|
closed
|
Syntax Highlighting
|
Functional Requirement Product Backlog
|
Goal:
The editor window contains syntax highlighting for the user
Description:
The website will have syntax and error highlighting, along with line numbers and other miscellaneous properties of an IDE, to help make it more user-friendly
Origin: Sprint 2 Planning meeting
Version: 1.0 3/22/18 Priority 2
|
1.0
|
Syntax Highlighting - Goal:
The editor window contains syntax highlighting for the user
Description:
The website will have syntax and error highlighting, along with line numbers and other miscellaneous properties of an IDE, to help make it more user-friendly
Origin: Sprint 2 Planning meeting
Version: 1.0 3/22/18 Priority 2
|
non_build
|
syntax highlighting goal the editor window contains syntax highlighting for the user description the website will have syntax and error highlighting along with line numbers and other miscellaneous properties of an ide to help make it more user friendly origin sprint planning meeting version priority
| 0
|
71,265
| 9,487,680,710
|
IssuesEvent
|
2019-04-22 17:33:53
|
sio2sio2/lobaton
|
https://api.github.com/repos/sio2sio2/lobaton
|
closed
|
Documentación de leafext.js
|
documentation
|
Es preciso documentar el la API de ``leafext.js``, y en general, de todo el proyecto. Sería interante tantear la posibilidad de usar [sphinx-js](https://hacks.mozilla.org/2017/07/introducing-sphinx-js-a-better-way-to-document-large-javascript-projects/).
|
1.0
|
Documentación de leafext.js - Es preciso documentar el la API de ``leafext.js``, y en general, de todo el proyecto. Sería interante tantear la posibilidad de usar [sphinx-js](https://hacks.mozilla.org/2017/07/introducing-sphinx-js-a-better-way-to-document-large-javascript-projects/).
|
non_build
|
documentación de leafext js es preciso documentar el la api de leafext js y en general de todo el proyecto sería interante tantear la posibilidad de usar
| 0
|
349,567
| 31,812,984,732
|
IssuesEvent
|
2023-09-13 18:13:43
|
gravitational/teleport
|
https://api.github.com/repos/gravitational/teleport
|
closed
|
`tsh db connect <rds-primary-endpoint>` throws `matches multiple databases` error
|
bug ux test-plan-problem tsh regression database-access
|
Expected behavior:
`tsh db connect <rds-primary-endpoint>` should connect to db
Current behavior:
```
$ tsh db logout
$ tsh db ls --search steve-mysql
Name Description Allowed Users Labels Connect
---------------------- ------------------------------------------------ ------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------- -------
steve-mysql-custom-one Aurora cluster in ca-central-1 (custom endpoint) [*] Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=custom,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1
steve-mysql-custom-two Aurora cluster in ca-central-1 (custom endpoint) [*] Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=custom,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1
steve-mysql Aurora cluster in ca-central-1 [*] Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=primary,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1
steve-mysql-reader Aurora cluster in ca-central-1 (reader endpoint) [*] Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=reader,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1
$ tsh db connect --db-user alice --db-name test steve-mysql
ERROR: database "steve-mysql" matches multiple databases:
Name Description Protocol Type URI Allowed Users Labels
Connect
----------------------------------------------------------- ------------------------------------------------ -------- ---- ----------------------------------------------------------------------- ------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------- -------
steve-mysql-custom-one-rds-aurora-ca-central-1-123123123123 Aurora cluster in ca-central-1 (custom endpoint) mysql rds one.cluster-custom-aabbccddeeff.ca-central-1.rds.amazonaws.com:3306 (unknown) Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=custom,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1,teleport.dev/cloud=AWS,teleport.dev/origin=cloud,tel
eport.internal/discovered-name=steve-mysql-custom-one
steve-mysql-custom-two-rds-aurora-ca-central-1-123123123123 Aurora cluster in ca-central-1 (custom endpoint) mysql rds two.cluster-custom-aabbccddeeff.ca-central-1.rds.amazonaws.com:3306 (unknown) Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=custom,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1,teleport.dev/cloud=AWS,teleport.dev/origin=cloud,tel
eport.internal/discovered-name=steve-mysql-custom-two
steve-mysql-rds-aurora-ca-central-1-123123123123 Aurora cluster in ca-central-1 mysql rds steve-mysql.cluster-aabbccddeeff.ca-central-1.rds.amazonaws.com:3306 (unknown) Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=primary,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1,teleport.dev/cloud=AWS,teleport.dev/origin=cloud,te
leport.internal/discovered-name=steve-mysql
steve-mysql-reader-rds-aurora-ca-central-1-123123123123 Aurora cluster in ca-central-1 (reader endpoint) mysql rds steve-mysql.cluster-ro-aabbccddeeff.ca-central-1.rds.amazonaws.com:3306 (unknown) Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=reader,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1,teleport.dev/cloud=AWS,teleport.dev/origin=cloud,tel
eport.internal/discovered-name=steve-mysql-reader
Hint: use 'tsh db ls -v' or 'tsh db ls --format=[json|yaml]' to list all databases with full details.
Hint: try selecting the database with a more specific name (ex: tsh db connect steve-mysql-custom-one-rds-aurora-ca-central-1-123123123123).
Hint: try selecting the database with additional --labels or --query predicate.
```
Bug details:
- Teleport version: `"server_version": "14.0.0-alpha.2",`
Workaround:
`tsh db connect --db-user alice --db-name test steve-mysql-rds-aurora-ca-central-1-123123123123`
Feels like a regression but it could also be "as designed" from the renaming feature.
Besides `tsh db connect` throws an error, the table dumped from the error is not very readable when there are lots of tags and the row running over to next line.
Test Plan: https://github.com/gravitational/teleport/issues/31122
|
1.0
|
`tsh db connect <rds-primary-endpoint>` throws `matches multiple databases` error - Expected behavior:
`tsh db connect <rds-primary-endpoint>` should connect to db
Current behavior:
```
$ tsh db logout
$ tsh db ls --search steve-mysql
Name Description Allowed Users Labels Connect
---------------------- ------------------------------------------------ ------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------- -------
steve-mysql-custom-one Aurora cluster in ca-central-1 (custom endpoint) [*] Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=custom,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1
steve-mysql-custom-two Aurora cluster in ca-central-1 (custom endpoint) [*] Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=custom,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1
steve-mysql Aurora cluster in ca-central-1 [*] Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=primary,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1
steve-mysql-reader Aurora cluster in ca-central-1 (reader endpoint) [*] Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=reader,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1
$ tsh db connect --db-user alice --db-name test steve-mysql
ERROR: database "steve-mysql" matches multiple databases:
Name Description Protocol Type URI Allowed Users Labels
Connect
----------------------------------------------------------- ------------------------------------------------ -------- ---- ----------------------------------------------------------------------- ------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------- -------
steve-mysql-custom-one-rds-aurora-ca-central-1-123123123123 Aurora cluster in ca-central-1 (custom endpoint) mysql rds one.cluster-custom-aabbccddeeff.ca-central-1.rds.amazonaws.com:3306 (unknown) Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=custom,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1,teleport.dev/cloud=AWS,teleport.dev/origin=cloud,tel
eport.internal/discovered-name=steve-mysql-custom-one
steve-mysql-custom-two-rds-aurora-ca-central-1-123123123123 Aurora cluster in ca-central-1 (custom endpoint) mysql rds two.cluster-custom-aabbccddeeff.ca-central-1.rds.amazonaws.com:3306 (unknown) Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=custom,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1,teleport.dev/cloud=AWS,teleport.dev/origin=cloud,tel
eport.internal/discovered-name=steve-mysql-custom-two
steve-mysql-rds-aurora-ca-central-1-123123123123 Aurora cluster in ca-central-1 mysql rds steve-mysql.cluster-aabbccddeeff.ca-central-1.rds.amazonaws.com:3306 (unknown) Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=primary,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1,teleport.dev/cloud=AWS,teleport.dev/origin=cloud,te
leport.internal/discovered-name=steve-mysql
steve-mysql-reader-rds-aurora-ca-central-1-123123123123 Aurora cluster in ca-central-1 (reader endpoint) mysql rds steve-mysql.cluster-ro-aabbccddeeff.ca-central-1.rds.amazonaws.com:3306 (unknown) Env=dev,Name=steve-mysql,Owner=STeve,account-id=123123123123,endpoint-type=reader,engine-version=8.0.mysql_aurora.3.02.2,engine=aurora-mysql,region=ca-central-1,teleport.dev/cloud=AWS,teleport.dev/origin=cloud,tel
eport.internal/discovered-name=steve-mysql-reader
Hint: use 'tsh db ls -v' or 'tsh db ls --format=[json|yaml]' to list all databases with full details.
Hint: try selecting the database with a more specific name (ex: tsh db connect steve-mysql-custom-one-rds-aurora-ca-central-1-123123123123).
Hint: try selecting the database with additional --labels or --query predicate.
```
Bug details:
- Teleport version: `"server_version": "14.0.0-alpha.2",`
Workaround:
`tsh db connect --db-user alice --db-name test steve-mysql-rds-aurora-ca-central-1-123123123123`
Feels like a regression but it could also be "as designed" from the renaming feature.
Besides `tsh db connect` throws an error, the table dumped from the error is not very readable when there are lots of tags and the row running over to next line.
Test Plan: https://github.com/gravitational/teleport/issues/31122
|
non_build
|
tsh db connect throws matches multiple databases error expected behavior tsh db connect should connect to db current behavior tsh db logout tsh db ls search steve mysql name description allowed users labels connect steve mysql custom one aurora cluster in ca central custom endpoint env dev name steve mysql owner steve account id endpoint type custom engine version mysql aurora engine aurora mysql region ca central steve mysql custom two aurora cluster in ca central custom endpoint env dev name steve mysql owner steve account id endpoint type custom engine version mysql aurora engine aurora mysql region ca central steve mysql aurora cluster in ca central env dev name steve mysql owner steve account id endpoint type primary engine version mysql aurora engine aurora mysql region ca central steve mysql reader aurora cluster in ca central reader endpoint env dev name steve mysql owner steve account id endpoint type reader engine version mysql aurora engine aurora mysql region ca central tsh db connect db user alice db name test steve mysql error database steve mysql matches multiple databases name description protocol type uri allowed users labels connect steve mysql custom one rds aurora ca central aurora cluster in ca central custom endpoint mysql rds one cluster custom aabbccddeeff ca central rds amazonaws com unknown env dev name steve mysql owner steve account id endpoint type custom engine version mysql aurora engine aurora mysql region ca central teleport dev cloud aws teleport dev origin cloud tel eport internal discovered name steve mysql custom one steve mysql custom two rds aurora ca central aurora cluster in ca central custom endpoint mysql rds two cluster custom aabbccddeeff ca central rds amazonaws com unknown env dev name steve mysql owner steve account id endpoint type custom engine version mysql aurora engine aurora mysql region ca central teleport dev cloud aws teleport dev origin cloud tel eport internal discovered name steve mysql custom two steve mysql rds aurora ca central aurora cluster in ca central mysql rds steve mysql cluster aabbccddeeff ca central rds amazonaws com unknown env dev name steve mysql owner steve account id endpoint type primary engine version mysql aurora engine aurora mysql region ca central teleport dev cloud aws teleport dev origin cloud te leport internal discovered name steve mysql steve mysql reader rds aurora ca central aurora cluster in ca central reader endpoint mysql rds steve mysql cluster ro aabbccddeeff ca central rds amazonaws com unknown env dev name steve mysql owner steve account id endpoint type reader engine version mysql aurora engine aurora mysql region ca central teleport dev cloud aws teleport dev origin cloud tel eport internal discovered name steve mysql reader hint use tsh db ls v or tsh db ls format to list all databases with full details hint try selecting the database with a more specific name ex tsh db connect steve mysql custom one rds aurora ca central hint try selecting the database with additional labels or query predicate bug details teleport version server version alpha workaround tsh db connect db user alice db name test steve mysql rds aurora ca central feels like a regression but it could also be as designed from the renaming feature besides tsh db connect throws an error the table dumped from the error is not very readable when there are lots of tags and the row running over to next line test plan
| 0
|
84,228
| 24,262,273,992
|
IssuesEvent
|
2022-09-28 00:42:38
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Allow dependency injection with GoRoute generator
|
new feature p: first party package proposal p: go_router_builder
|
As far as I've tried it today, the go router generator, only recognize injected params that should be part of the url, that is you cannot inject anything else like into your route.
Something more lenient around the lines of this could be used:
```dart
@TypedRoute()
class ProductsRoute extends GoRoute {
final ServiceContainer services;
final AppRouteFactory routeFactory;
@override
final String path = '/products';
ProductsRoute(this.services);
@override
Widget build(BuildContext context) => Container();
@override
void redirect() => services.authentication.isUserAuthenticated ? null : routeFactory.login();
}
@TypedRoute(params: {'id': int })
class ProductDetailsRoute extends GoRoute {
final ServiceContainer services;
@override
final String path = '/products/:id';
ProductDetailsRoute(this.services);
@override
Widget build(BuildContext context) => Container();
}
```
The generator is not even required here, it's just to auto generate a `go` function, which can be hand written.
```dart
class ProductsRoute extends GoRoute {
final ServiceContainer services;
final AppRouteFactory routeFactory;
@override
final String path = '/products';
ProductDetailsRoute(this.services);
@override
Widget build(BuildContext context) => Container();
Future<void> go(BuildContext context) => context.go(path);
@override
void redirect() => services.authentication.isUserAuthenticated ? null : routeFactory.login();
}
class ProductDetailsRoute extends GoRoute {
final ServiceContainer services;
@override
final String path = '/products/:id';
ProductsRoute(this.services);
@override
Widget build(BuildContext context) => Container();
Future<void> go(BuildContext context, int id) => context.go(path.replace(':id', id.toString()));
}
```
To know which route is part of the stack you can use the fact that this is just a function of the path really. `/products/:id` contains 2 screens in the stack because the URL says so. Which is how a directory structure works, which conveniently, is also what an url is (supposed) to be, ie a path. Or you can use a stack property or whatever.
Further more the distinction between GoRoute and GoRouteData seems to be an implementation details, with the only reason for it to be split in the public API, to accomodate the internals.
|
1.0
|
Allow dependency injection with GoRoute generator - As far as I've tried it today, the go router generator, only recognize injected params that should be part of the url, that is you cannot inject anything else like into your route.
Something more lenient around the lines of this could be used:
```dart
@TypedRoute()
class ProductsRoute extends GoRoute {
final ServiceContainer services;
final AppRouteFactory routeFactory;
@override
final String path = '/products';
ProductsRoute(this.services);
@override
Widget build(BuildContext context) => Container();
@override
void redirect() => services.authentication.isUserAuthenticated ? null : routeFactory.login();
}
@TypedRoute(params: {'id': int })
class ProductDetailsRoute extends GoRoute {
final ServiceContainer services;
@override
final String path = '/products/:id';
ProductDetailsRoute(this.services);
@override
Widget build(BuildContext context) => Container();
}
```
The generator is not even required here, it's just to auto generate a `go` function, which can be hand written.
```dart
class ProductsRoute extends GoRoute {
final ServiceContainer services;
final AppRouteFactory routeFactory;
@override
final String path = '/products';
ProductDetailsRoute(this.services);
@override
Widget build(BuildContext context) => Container();
Future<void> go(BuildContext context) => context.go(path);
@override
void redirect() => services.authentication.isUserAuthenticated ? null : routeFactory.login();
}
class ProductDetailsRoute extends GoRoute {
final ServiceContainer services;
@override
final String path = '/products/:id';
ProductsRoute(this.services);
@override
Widget build(BuildContext context) => Container();
Future<void> go(BuildContext context, int id) => context.go(path.replace(':id', id.toString()));
}
```
To know which route is part of the stack you can use the fact that this is just a function of the path really. `/products/:id` contains 2 screens in the stack because the URL says so. Which is how a directory structure works, which conveniently, is also what an url is (supposed) to be, ie a path. Or you can use a stack property or whatever.
Further more the distinction between GoRoute and GoRouteData seems to be an implementation details, with the only reason for it to be split in the public API, to accomodate the internals.
|
build
|
allow dependency injection with goroute generator as far as i ve tried it today the go router generator only recognize injected params that should be part of the url that is you cannot inject anything else like into your route something more lenient around the lines of this could be used dart typedroute class productsroute extends goroute final servicecontainer services final approutefactory routefactory override final string path products productsroute this services override widget build buildcontext context container override void redirect services authentication isuserauthenticated null routefactory login typedroute params id int class productdetailsroute extends goroute final servicecontainer services override final string path products id productdetailsroute this services override widget build buildcontext context container the generator is not even required here it s just to auto generate a go function which can be hand written dart class productsroute extends goroute final servicecontainer services final approutefactory routefactory override final string path products productdetailsroute this services override widget build buildcontext context container future go buildcontext context context go path override void redirect services authentication isuserauthenticated null routefactory login class productdetailsroute extends goroute final servicecontainer services override final string path products id productsroute this services override widget build buildcontext context container future go buildcontext context int id context go path replace id id tostring to know which route is part of the stack you can use the fact that this is just a function of the path really products id contains screens in the stack because the url says so which is how a directory structure works which conveniently is also what an url is supposed to be ie a path or you can use a stack property or whatever further more the distinction between goroute and goroutedata seems to be an implementation details with the only reason for it to be split in the public api to accomodate the internals
| 1
|
472,958
| 13,634,010,942
|
IssuesEvent
|
2020-09-24 22:42:15
|
RobotLocomotion/drake
|
https://api.github.com/repos/RobotLocomotion/drake
|
closed
|
spatial_algebra: `Class::dot` implementations cause dependency cycles
|
component: multibody plant priority: low team: dynamics
|
From merge-base 9b7cae4c56e, when writing bindings for some of the `SpatialVector` derived classes, I got the following runtime error when running the Python unittest:
```
from pydrake.multibody.math import (
ImportError: ...runfiles/drake/bindings/pydrake/multibody/math.so: undefined symbol: _ZNK5drake9multibody15SpatialVelocityIdE3dotERKNS0_12SpatialForceIdEE
```
Looking into it, it's because the `::dot` definitions are defined in `spatial_algebra.h`???
https://github.com/RobotLocomotion/drake/blob/19a15cd364efe6ef8d23ece115375948b546810c/multibody/math/spatial_algebra.h
EDIT (2020-09-22):
Two issues:
- [x] #14113 The symbols are not compiled into `//:drake_shared_library`
- [ ] The dependency cycles may mean that partial includes could happen (compile time) or some symbols may not be linked (linking time)
FYI @jwnimmer-tri
|
1.0
|
spatial_algebra: `Class::dot` implementations cause dependency cycles - From merge-base 9b7cae4c56e, when writing bindings for some of the `SpatialVector` derived classes, I got the following runtime error when running the Python unittest:
```
from pydrake.multibody.math import (
ImportError: ...runfiles/drake/bindings/pydrake/multibody/math.so: undefined symbol: _ZNK5drake9multibody15SpatialVelocityIdE3dotERKNS0_12SpatialForceIdEE
```
Looking into it, it's because the `::dot` definitions are defined in `spatial_algebra.h`???
https://github.com/RobotLocomotion/drake/blob/19a15cd364efe6ef8d23ece115375948b546810c/multibody/math/spatial_algebra.h
EDIT (2020-09-22):
Two issues:
- [x] #14113 The symbols are not compiled into `//:drake_shared_library`
- [ ] The dependency cycles may mean that partial includes could happen (compile time) or some symbols may not be linked (linking time)
FYI @jwnimmer-tri
|
non_build
|
spatial algebra class dot implementations cause dependency cycles from merge base when writing bindings for some of the spatialvector derived classes i got the following runtime error when running the python unittest from pydrake multibody math import importerror runfiles drake bindings pydrake multibody math so undefined symbol looking into it it s because the dot definitions are defined in spatial algebra h edit two issues the symbols are not compiled into drake shared library the dependency cycles may mean that partial includes could happen compile time or some symbols may not be linked linking time fyi jwnimmer tri
| 0
|
450,751
| 31,990,683,587
|
IssuesEvent
|
2023-09-21 05:27:56
|
gak112/DearjobTesting2
|
https://api.github.com/repos/gak112/DearjobTesting2
|
opened
|
BUG ; DEARJOB ADMIN ; Masters > Company Category > Add category :;
|
bug documentation
|
Action :- In add category it is only taking 1 alphabet with out warning
Expected Output :- it should not accept 1 alphabet and it should give the warning to use maximum characters
Actual Output :- It is accepting single alphabet without giving warning

|
1.0
|
BUG ; DEARJOB ADMIN ; Masters > Company Category > Add category :; - Action :- In add category it is only taking 1 alphabet with out warning
Expected Output :- it should not accept 1 alphabet and it should give the warning to use maximum characters
Actual Output :- It is accepting single alphabet without giving warning

|
non_build
|
bug dearjob admin masters company category add category action in add category it is only taking alphabet with out warning expected output it should not accept alphabet and it should give the warning to use maximum characters actual output it is accepting single alphabet without giving warning
| 0
|
548,392
| 16,063,360,605
|
IssuesEvent
|
2021-04-23 15:22:34
|
dmwm/CRABServer
|
https://api.github.com/repos/dmwm/CRABServer
|
closed
|
Make dag_bootstrap.sh send output to console when debugging
|
Priority: TOP Type: Enhancement
|
this is the needed follow up to #6475
which I forgot to open when that was close
Currently dab_bootstrap.sh creates a log file and reports proper exit code (to condor), but behavior is unchanged when running interactively and there is no output to console when e.g. running PostJob interactively for debug.
Pasting here from #6475
look for possible alternatives, and test them well, e.g. along what indicated here
https://stackoverflow.com/questions/1221833/pipe-output-and-capture-exit-status-in-bash
|
1.0
|
Make dag_bootstrap.sh send output to console when debugging - this is the needed follow up to #6475
which I forgot to open when that was close
Currently dab_bootstrap.sh creates a log file and reports proper exit code (to condor), but behavior is unchanged when running interactively and there is no output to console when e.g. running PostJob interactively for debug.
Pasting here from #6475
look for possible alternatives, and test them well, e.g. along what indicated here
https://stackoverflow.com/questions/1221833/pipe-output-and-capture-exit-status-in-bash
|
non_build
|
make dag bootstrap sh send output to console when debugging this is the needed follow up to which i forgot to open when that was close currently dab bootstrap sh creates a log file and reports proper exit code to condor but behavior is unchanged when running interactively and there is no output to console when e g running postjob interactively for debug pasting here from look for possible alternatives and test them well e g along what indicated here
| 0
|
2,881
| 3,025,823,970
|
IssuesEvent
|
2015-08-03 11:24:03
|
Starcounter/Starcounter
|
https://api.github.com/repos/Starcounter/Starcounter
|
closed
|
Can not build Level1
|
Build System Infrastructure
|

Trying Develop and have been pulling down the latest bits early this morning, after being back from vaccation. Restoring packages seem not to help. Any one with an idea?
|
1.0
|
Can not build Level1 - 
Trying Develop and have been pulling down the latest bits early this morning, after being back from vaccation. Restoring packages seem not to help. Any one with an idea?
|
build
|
can not build trying develop and have been pulling down the latest bits early this morning after being back from vaccation restoring packages seem not to help any one with an idea
| 1
|
569,306
| 17,011,421,878
|
IssuesEvent
|
2021-07-02 05:32:16
|
geolonia/embed
|
https://api.github.com/repos/geolonia/embed
|
closed
|
`data-simple-vector` 対応
|
Priority: High
|
tiles.json URL を指定します。中には simplestyle のスキーマと対応したベクトルタイルが指定されています。
embed では、このURLをsourcesに追加し、合わせて GeoJSON と同様に simplestyle のスタイルも追加します
|
1.0
|
`data-simple-vector` 対応 - tiles.json URL を指定します。中には simplestyle のスキーマと対応したベクトルタイルが指定されています。
embed では、このURLをsourcesに追加し、合わせて GeoJSON と同様に simplestyle のスタイルも追加します
|
non_build
|
data simple vector 対応 tiles json url を指定します。中には simplestyle のスキーマと対応したベクトルタイルが指定されています。 embed では、このurlをsourcesに追加し、合わせて geojson と同様に simplestyle のスタイルも追加します
| 0
|
15,550
| 5,997,135,395
|
IssuesEvent
|
2017-06-03 20:49:41
|
apitrace/apitrace
|
https://api.github.com/repos/apitrace/apitrace
|
closed
|
Error copying file "apitrace/libs/arm64-v8a/libgnustl_shared.so" to "apitrace/build/android-build/libs/arm64-v8a/".
|
Android Build
|
Hello,
I am trying to build apitrace on Android 5.1.1 64bit emulator and get some error message. I am confuse that I don't have such directory or file.
Here cmake commands are.
`cmake -H. -Bbuild -DCMAKE_TOOLCHAIN_FILE=android.toolchain.cmake -DANDROID_NDK=~/android-ndk-r10c -DANDROID_ABI=arm64-v8a -DANDROID_NATIVE_API_LEVEL=android-21 -DANDROID_TOOLCHAIN_NAME=aarch64-linux-android-4.9 -DANDROID_STL=gnustl_shared -DANDROID_API_LEVEL=21 -DANDROID_SDK=~/android-sdk-linux`
`make -C build retraceAPK`
error message like that
**Error copying file "/home/ken/emulator/apitrace/libs/arm64-v8a/libgnustl_shared.so" to "/home/ken/emulator/apitrace/build/android-build/libs/arm64-v8a/".**
I have no idea about that, and I still try many different cmake commands, but always fail at `make retraceAPK`.
|
1.0
|
Error copying file "apitrace/libs/arm64-v8a/libgnustl_shared.so" to "apitrace/build/android-build/libs/arm64-v8a/". - Hello,
I am trying to build apitrace on Android 5.1.1 64bit emulator and get some error message. I am confuse that I don't have such directory or file.
Here cmake commands are.
`cmake -H. -Bbuild -DCMAKE_TOOLCHAIN_FILE=android.toolchain.cmake -DANDROID_NDK=~/android-ndk-r10c -DANDROID_ABI=arm64-v8a -DANDROID_NATIVE_API_LEVEL=android-21 -DANDROID_TOOLCHAIN_NAME=aarch64-linux-android-4.9 -DANDROID_STL=gnustl_shared -DANDROID_API_LEVEL=21 -DANDROID_SDK=~/android-sdk-linux`
`make -C build retraceAPK`
error message like that
**Error copying file "/home/ken/emulator/apitrace/libs/arm64-v8a/libgnustl_shared.so" to "/home/ken/emulator/apitrace/build/android-build/libs/arm64-v8a/".**
I have no idea about that, and I still try many different cmake commands, but always fail at `make retraceAPK`.
|
build
|
error copying file apitrace libs libgnustl shared so to apitrace build android build libs hello i am trying to build apitrace on android emulator and get some error message i am confuse that i don t have such directory or file here cmake commands are cmake h bbuild dcmake toolchain file android toolchain cmake dandroid ndk android ndk dandroid abi dandroid native api level android dandroid toolchain name linux android dandroid stl gnustl shared dandroid api level dandroid sdk android sdk linux make c build retraceapk error message like that error copying file home ken emulator apitrace libs libgnustl shared so to home ken emulator apitrace build android build libs i have no idea about that and i still try many different cmake commands but always fail at make retraceapk
| 1
|
407,234
| 11,908,537,603
|
IssuesEvent
|
2020-03-31 01:15:25
|
TerriaJS/de-australia-map
|
https://api.github.com/repos/TerriaJS/de-australia-map
|
closed
|
Onboarding UI changes - designs
|
high priority
|
- onboarding designs based on discussions with DE Australia team
[Notes live here](https://docs.google.com/document/d/1cENE8rKUDknTd5EY9NzplAghaV4FetJij-UwIgX2AtU/edit?folder=17hIu2s1LGi2semARtA6h79yBsls3HtxM)
|
1.0
|
Onboarding UI changes - designs - - onboarding designs based on discussions with DE Australia team
[Notes live here](https://docs.google.com/document/d/1cENE8rKUDknTd5EY9NzplAghaV4FetJij-UwIgX2AtU/edit?folder=17hIu2s1LGi2semARtA6h79yBsls3HtxM)
|
non_build
|
onboarding ui changes designs onboarding designs based on discussions with de australia team
| 0
|
47,961
| 13,067,332,766
|
IssuesEvent
|
2020-07-31 00:07:36
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
cmake trunk sets CMAKE_INSTALL_PREFIX:PATH=/usr/local which cannot be written to (Trac #1524)
|
Migrated from Trac cmake defect
|
In trying `make tarball` using trunk of simulation meta-project it fails with the following error:
CMake Error at cmake_install.cmake:36 (FILE):
file cannot create directory: /usr/local/lib/icecube. Maybe need
administrative privileges.
In looking in the CMakeCache.txt file the CMAKE_INSTALL_PREFIX:PATH is incorrectly set to /usr/local
Migrated from https://code.icecube.wisc.edu/ticket/1524
```json
{
"status": "closed",
"changetime": "2019-01-12T00:12:46",
"description": "In trying `make tarball` using trunk of simulation meta-project it fails with the following error:\n\nCMake Error at cmake_install.cmake:36 (FILE):\nfile cannot create directory: /usr/local/lib/icecube. Maybe need\nadministrative privileges.\n\nIn looking in the CMakeCache.txt file the CMAKE_INSTALL_PREFIX:PATH is incorrectly set to /usr/local ",
"reporter": "melanie.day",
"cc": "",
"resolution": "fixed",
"_ts": "1547251966149934",
"component": "cmake",
"summary": "cmake trunk sets CMAKE_INSTALL_PREFIX:PATH=/usr/local which cannot be written to",
"priority": "normal",
"keywords": "",
"time": "2016-01-22T17:22:22",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
1.0
|
cmake trunk sets CMAKE_INSTALL_PREFIX:PATH=/usr/local which cannot be written to (Trac #1524) - In trying `make tarball` using trunk of simulation meta-project it fails with the following error:
CMake Error at cmake_install.cmake:36 (FILE):
file cannot create directory: /usr/local/lib/icecube. Maybe need
administrative privileges.
In looking in the CMakeCache.txt file the CMAKE_INSTALL_PREFIX:PATH is incorrectly set to /usr/local
Migrated from https://code.icecube.wisc.edu/ticket/1524
```json
{
"status": "closed",
"changetime": "2019-01-12T00:12:46",
"description": "In trying `make tarball` using trunk of simulation meta-project it fails with the following error:\n\nCMake Error at cmake_install.cmake:36 (FILE):\nfile cannot create directory: /usr/local/lib/icecube. Maybe need\nadministrative privileges.\n\nIn looking in the CMakeCache.txt file the CMAKE_INSTALL_PREFIX:PATH is incorrectly set to /usr/local ",
"reporter": "melanie.day",
"cc": "",
"resolution": "fixed",
"_ts": "1547251966149934",
"component": "cmake",
"summary": "cmake trunk sets CMAKE_INSTALL_PREFIX:PATH=/usr/local which cannot be written to",
"priority": "normal",
"keywords": "",
"time": "2016-01-22T17:22:22",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
non_build
|
cmake trunk sets cmake install prefix path usr local which cannot be written to trac in trying make tarball using trunk of simulation meta project it fails with the following error cmake error at cmake install cmake file file cannot create directory usr local lib icecube maybe need administrative privileges in looking in the cmakecache txt file the cmake install prefix path is incorrectly set to usr local migrated from json status closed changetime description in trying make tarball using trunk of simulation meta project it fails with the following error n ncmake error at cmake install cmake file nfile cannot create directory usr local lib icecube maybe need nadministrative privileges n nin looking in the cmakecache txt file the cmake install prefix path is incorrectly set to usr local reporter melanie day cc resolution fixed ts component cmake summary cmake trunk sets cmake install prefix path usr local which cannot be written to priority normal keywords time milestone owner nega type defect
| 0
|
86,463
| 24,855,406,270
|
IssuesEvent
|
2022-10-27 01:27:13
|
expo/expo-cli
|
https://api.github.com/repos/expo/expo-cli
|
closed
|
Feature request: please fix the problematic expo build:ios -c procedure
|
expo build stale
|
## Description
I have many issues with the expo command build:ios -c.
1) Why does the command `expo build:ios -c` have the answer "Y" as default for their questions? The build -c command is a destructive command, and it's very easy to slightly press the Enter key twice and answer yes to a question that you'd rather answer no instead.
2) since a few versions now, the -c command will ask questions about clearing app certificates, keys ect. before signing in to the apple account. This means that, if you don't have access to the apple account that was set, you now cleared everything, and cannot build an app with the previously used certificates, keys, ect. Why? Please move the clearing questions back after signing in to apple's account. You can't even do much without signing in to an apple account (like creating a new distribution certificate, or key), so the current behavior doesn't actually make sense.
3) The build:ios -c command assumes that you want to build appstore builds and will only suggest those; instead, most of the builds I make (and I think I can assume that it's the same for other developers as well) are ad-hoc testing builds.
So, when it comes to selecting an app profile, expo would lovely suggest the AppStore profile in the apple account, but not the AdHoc profile. So I have to answer no to using the suggested profile, then answer that I want to upload a new profile, and then that I want to upload my own file instead of letting expo handle it. Can't this be done any better? For example, list the profiles available on the specified apple account and some extra options at the end:
```
1) AppStore profile (25 Mar 2020)
2) Ad-Hoc profile (23 Feb 2020)
3) Upload a new profile
4) Let Expo create a new profile
```
4) Again, on the improper behavior for ios app profiles: Why is it that switching the app profile requires me to issue the -c switch? This should be as simple as the build:android command where you are presented with the apk and appbundle options: can't expo ask which mobileprovision profile to use when I issue build:ios (without -c)? This should also prevent the issues described before in this list, and would be much appreciated.
5) Apple team: how likely is that you will want to switch the apple team used for an app, more than once or twice during development? Can't expo simply save the apple team preference for an app? I have mistakenly selected the wrong apple team (I have 4 in my account) in numerous builds, with not so nice consequences, and I have never understood why I was always asked this question. Can't you include this preference together with the certificates, keys, and profile settings? so if ever needed, expo build:ios -c would clear this preference and allow to choose a different team ?
So, please, fix the build:ios procedure to make it actually simple and meaningful to use. The current procedure is so frustrating and prone to user error, I hate it... I am sure many other developers feel the same...
|
1.0
|
Feature request: please fix the problematic expo build:ios -c procedure - ## Description
I have many issues with the expo command build:ios -c.
1) Why does the command `expo build:ios -c` have the answer "Y" as default for their questions? The build -c command is a destructive command, and it's very easy to slightly press the Enter key twice and answer yes to a question that you'd rather answer no instead.
2) since a few versions now, the -c command will ask questions about clearing app certificates, keys ect. before signing in to the apple account. This means that, if you don't have access to the apple account that was set, you now cleared everything, and cannot build an app with the previously used certificates, keys, ect. Why? Please move the clearing questions back after signing in to apple's account. You can't even do much without signing in to an apple account (like creating a new distribution certificate, or key), so the current behavior doesn't actually make sense.
3) The build:ios -c command assumes that you want to build appstore builds and will only suggest those; instead, most of the builds I make (and I think I can assume that it's the same for other developers as well) are ad-hoc testing builds.
So, when it comes to selecting an app profile, expo would lovely suggest the AppStore profile in the apple account, but not the AdHoc profile. So I have to answer no to using the suggested profile, then answer that I want to upload a new profile, and then that I want to upload my own file instead of letting expo handle it. Can't this be done any better? For example, list the profiles available on the specified apple account and some extra options at the end:
```
1) AppStore profile (25 Mar 2020)
2) Ad-Hoc profile (23 Feb 2020)
3) Upload a new profile
4) Let Expo create a new profile
```
4) Again, on the improper behavior for ios app profiles: Why is it that switching the app profile requires me to issue the -c switch? This should be as simple as the build:android command where you are presented with the apk and appbundle options: can't expo ask which mobileprovision profile to use when I issue build:ios (without -c)? This should also prevent the issues described before in this list, and would be much appreciated.
5) Apple team: how likely is that you will want to switch the apple team used for an app, more than once or twice during development? Can't expo simply save the apple team preference for an app? I have mistakenly selected the wrong apple team (I have 4 in my account) in numerous builds, with not so nice consequences, and I have never understood why I was always asked this question. Can't you include this preference together with the certificates, keys, and profile settings? so if ever needed, expo build:ios -c would clear this preference and allow to choose a different team ?
So, please, fix the build:ios procedure to make it actually simple and meaningful to use. The current procedure is so frustrating and prone to user error, I hate it... I am sure many other developers feel the same...
|
build
|
feature request please fix the problematic expo build ios c procedure description i have many issues with the expo command build ios c why does the command expo build ios c have the answer y as default for their questions the build c command is a destructive command and it s very easy to slightly press the enter key twice and answer yes to a question that you d rather answer no instead since a few versions now the c command will ask questions about clearing app certificates keys ect before signing in to the apple account this means that if you don t have access to the apple account that was set you now cleared everything and cannot build an app with the previously used certificates keys ect why please move the clearing questions back after signing in to apple s account you can t even do much without signing in to an apple account like creating a new distribution certificate or key so the current behavior doesn t actually make sense the build ios c command assumes that you want to build appstore builds and will only suggest those instead most of the builds i make and i think i can assume that it s the same for other developers as well are ad hoc testing builds so when it comes to selecting an app profile expo would lovely suggest the appstore profile in the apple account but not the adhoc profile so i have to answer no to using the suggested profile then answer that i want to upload a new profile and then that i want to upload my own file instead of letting expo handle it can t this be done any better for example list the profiles available on the specified apple account and some extra options at the end appstore profile mar ad hoc profile feb upload a new profile let expo create a new profile again on the improper behavior for ios app profiles why is it that switching the app profile requires me to issue the c switch this should be as simple as the build android command where you are presented with the apk and appbundle options can t expo ask which mobileprovision profile to use when i issue build ios without c this should also prevent the issues described before in this list and would be much appreciated apple team how likely is that you will want to switch the apple team used for an app more than once or twice during development can t expo simply save the apple team preference for an app i have mistakenly selected the wrong apple team i have in my account in numerous builds with not so nice consequences and i have never understood why i was always asked this question can t you include this preference together with the certificates keys and profile settings so if ever needed expo build ios c would clear this preference and allow to choose a different team so please fix the build ios procedure to make it actually simple and meaningful to use the current procedure is so frustrating and prone to user error i hate it i am sure many other developers feel the same
| 1
|
148,587
| 23,360,604,115
|
IssuesEvent
|
2022-08-10 11:21:15
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
opened
|
[Bug]: Using internal functions inside a callback function causes UI to change and include fields of the internal function
|
Bug UI Improvement Needs Design Production Needs Triaging medium
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Using internal functions inside a callback function causes UI to change and include fields of the internal function
### Steps To Reproduce
1. Add a setInterval action on a checkbox widget or any widget that has an action field
2. In the callback field add a `showalert` or a `resetWidget` function and observe the action selector change and display additional fields. These field appear to be at the same **level** as the parent function

### Public Sample App
_No response_
### Version
Cloud
|
1.0
|
[Bug]: Using internal functions inside a callback function causes UI to change and include fields of the internal function - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Using internal functions inside a callback function causes UI to change and include fields of the internal function
### Steps To Reproduce
1. Add a setInterval action on a checkbox widget or any widget that has an action field
2. In the callback field add a `showalert` or a `resetWidget` function and observe the action selector change and display additional fields. These field appear to be at the same **level** as the parent function

### Public Sample App
_No response_
### Version
Cloud
|
non_build
|
using internal functions inside a callback function causes ui to change and include fields of the internal function is there an existing issue for this i have searched the existing issues description using internal functions inside a callback function causes ui to change and include fields of the internal function steps to reproduce add a setinterval action on a checkbox widget or any widget that has an action field in the callback field add a showalert or a resetwidget function and observe the action selector change and display additional fields these field appear to be at the same level as the parent function public sample app no response version cloud
| 0
|
73,872
| 19,849,395,167
|
IssuesEvent
|
2022-01-21 10:32:41
|
dotnet/efcore
|
https://api.github.com/repos/dotnet/efcore
|
closed
|
EF Core 6 - IsTemporal adds an unnecessary migration when DefaultSchema is configured at the DbContext Level
|
type-bug closed-fixed Servicing-consider customer-reported area-model-building area-sqlserver
|
I am seeing this incorrect behavior with EF Core 6 in relation to TemporalTables.
Consider the below code.
```
public class MyDbContext : DbContext
{
private const string DEFAULT_SCHEMA = "app";
public DbSet<Student> Students { get; set; }
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder
.UseSqlServer("<ConnectionString>")
.EnableSensitiveDataLogging()
.LogTo(Console.WriteLine, LogLevel.Information);
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.HasDefaultSchema(DEFAULT_SCHEMA);
modelBuilder.Entity<Student>(builder =>
{
builder.ToTable(nameof(Student), x => x.IsTemporal());
});
}
}
public class Student
{
public int Id { get; set; }
public string Name { get; set; }
}
```
Now to reproduce the issue,
1. Add a migration
2. Add just another migration

**Look at the highlighted line. It drops the annotation for the schema in History table.**
I can get around this issue by removing the DefaultSchema configuration from the DbCotext level and introducing it into the table level. But then that has to be done for each table :(
```
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Student>(builder =>
{
builder.ToTable(nameof(Student), DEFAULT_SCHEMA, x => x.IsTemporal());
});
}
```
The sample executable code is attached herewith.
[EfCore6-TemporalTables.zip](https://github.com/dotnet/efcore/files/7534380/EfCore6-TemporalTables.zip)
Environment Info:
```
EF Core version: 6.0.0
Database provider: Microsoft.EntityFrameworkCore.SqlServer
Target framework: .NET 6.0
```
|
1.0
|
EF Core 6 - IsTemporal adds an unnecessary migration when DefaultSchema is configured at the DbContext Level - I am seeing this incorrect behavior with EF Core 6 in relation to TemporalTables.
Consider the below code.
```
public class MyDbContext : DbContext
{
private const string DEFAULT_SCHEMA = "app";
public DbSet<Student> Students { get; set; }
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder
.UseSqlServer("<ConnectionString>")
.EnableSensitiveDataLogging()
.LogTo(Console.WriteLine, LogLevel.Information);
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.HasDefaultSchema(DEFAULT_SCHEMA);
modelBuilder.Entity<Student>(builder =>
{
builder.ToTable(nameof(Student), x => x.IsTemporal());
});
}
}
public class Student
{
public int Id { get; set; }
public string Name { get; set; }
}
```
Now to reproduce the issue,
1. Add a migration
2. Add just another migration

**Look at the highlighted line. It drops the annotation for the schema in History table.**
I can get around this issue by removing the DefaultSchema configuration from the DbCotext level and introducing it into the table level. But then that has to be done for each table :(
```
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Student>(builder =>
{
builder.ToTable(nameof(Student), DEFAULT_SCHEMA, x => x.IsTemporal());
});
}
```
The sample executable code is attached herewith.
[EfCore6-TemporalTables.zip](https://github.com/dotnet/efcore/files/7534380/EfCore6-TemporalTables.zip)
Environment Info:
```
EF Core version: 6.0.0
Database provider: Microsoft.EntityFrameworkCore.SqlServer
Target framework: .NET 6.0
```
|
build
|
ef core istemporal adds an unnecessary migration when defaultschema is configured at the dbcontext level i am seeing this incorrect behavior with ef core in relation to temporaltables consider the below code public class mydbcontext dbcontext private const string default schema app public dbset students get set protected override void onconfiguring dbcontextoptionsbuilder optionsbuilder optionsbuilder usesqlserver enablesensitivedatalogging logto console writeline loglevel information protected override void onmodelcreating modelbuilder modelbuilder modelbuilder hasdefaultschema default schema modelbuilder entity builder builder totable nameof student x x istemporal public class student public int id get set public string name get set now to reproduce the issue add a migration add just another migration look at the highlighted line it drops the annotation for the schema in history table i can get around this issue by removing the defaultschema configuration from the dbcotext level and introducing it into the table level but then that has to be done for each table protected override void onmodelcreating modelbuilder modelbuilder modelbuilder entity builder builder totable nameof student default schema x x istemporal the sample executable code is attached herewith environment info ef core version database provider microsoft entityframeworkcore sqlserver target framework net
| 1
|
34,603
| 9,419,791,181
|
IssuesEvent
|
2019-04-10 23:17:09
|
iotile/coretools
|
https://api.github.com/repos/iotile/coretools
|
opened
|
There needs to be a way to set the entry point name for a product in iotile build
|
iotile-build iotile-core type:bug
|
Currently, when you build a component using `iotile build`, it generates a support package and support wheel if any of the listed products in `module_settings.json` are specified as python products. Each of those products get an entry point created in the support distribution but the name of the entry point is hardcoded to always be `os.path.basename()` of the module containing the product.
For some products like `build_step`, the name of the entry point is the key that is used to invoke that product so it is useful to be able to specify it rather than have it always be the module name.
The code that does entry point assignment is:
https://github.com/iotile/coretools/blob/master/iotilebuild/iotile/build/config/site_scons/pythondist.py#L101
The place where we define what products are what is:
https://github.com/iotile/coretools/blob/master/iotilecore/iotile/core/dev/iotileobj.py#L58
We need to:
- [ ] Add a new attribute into the definition of a python product that says how its entry point should be named. Options are "module" (the default) or "class". If "class" is chosen then the product listed in `module_settings.json` must be in the form of `path/to/file.py:SomeObject` and the entry point will be `SomeObject`.
- [ ] Update `load_extensions` to pass a flag to `load_extension` based on the type of python extension loaded to know how the resulting psuedo-entrypoint should be named.
See: https://github.com/iotile/coretools/blob/master/iotilecore/iotile/core/dev/registry.py#L145
It needs to get information from the IOTile object and pass that to `load_extension` so that `load_extension` does not hardcode the name of the extension to the module name:
https://github.com/iotile/coretools/blob/master/iotilecore/iotile/core/dev/registry.py#L284
- [ ] Update the `setup.py` generation process so that we pull in the information about whether the entry point name should be module or class here:
https://github.com/iotile/coretools/blob/master/iotilebuild/iotile/build/config/site_scons/pythondist.py#L101
|
1.0
|
There needs to be a way to set the entry point name for a product in iotile build - Currently, when you build a component using `iotile build`, it generates a support package and support wheel if any of the listed products in `module_settings.json` are specified as python products. Each of those products get an entry point created in the support distribution but the name of the entry point is hardcoded to always be `os.path.basename()` of the module containing the product.
For some products like `build_step`, the name of the entry point is the key that is used to invoke that product so it is useful to be able to specify it rather than have it always be the module name.
The code that does entry point assignment is:
https://github.com/iotile/coretools/blob/master/iotilebuild/iotile/build/config/site_scons/pythondist.py#L101
The place where we define what products are what is:
https://github.com/iotile/coretools/blob/master/iotilecore/iotile/core/dev/iotileobj.py#L58
We need to:
- [ ] Add a new attribute into the definition of a python product that says how its entry point should be named. Options are "module" (the default) or "class". If "class" is chosen then the product listed in `module_settings.json` must be in the form of `path/to/file.py:SomeObject` and the entry point will be `SomeObject`.
- [ ] Update `load_extensions` to pass a flag to `load_extension` based on the type of python extension loaded to know how the resulting psuedo-entrypoint should be named.
See: https://github.com/iotile/coretools/blob/master/iotilecore/iotile/core/dev/registry.py#L145
It needs to get information from the IOTile object and pass that to `load_extension` so that `load_extension` does not hardcode the name of the extension to the module name:
https://github.com/iotile/coretools/blob/master/iotilecore/iotile/core/dev/registry.py#L284
- [ ] Update the `setup.py` generation process so that we pull in the information about whether the entry point name should be module or class here:
https://github.com/iotile/coretools/blob/master/iotilebuild/iotile/build/config/site_scons/pythondist.py#L101
|
build
|
there needs to be a way to set the entry point name for a product in iotile build currently when you build a component using iotile build it generates a support package and support wheel if any of the listed products in module settings json are specified as python products each of those products get an entry point created in the support distribution but the name of the entry point is hardcoded to always be os path basename of the module containing the product for some products like build step the name of the entry point is the key that is used to invoke that product so it is useful to be able to specify it rather than have it always be the module name the code that does entry point assignment is the place where we define what products are what is we need to add a new attribute into the definition of a python product that says how its entry point should be named options are module the default or class if class is chosen then the product listed in module settings json must be in the form of path to file py someobject and the entry point will be someobject update load extensions to pass a flag to load extension based on the type of python extension loaded to know how the resulting psuedo entrypoint should be named see it needs to get information from the iotile object and pass that to load extension so that load extension does not hardcode the name of the extension to the module name update the setup py generation process so that we pull in the information about whether the entry point name should be module or class here
| 1
|
55,122
| 13,526,583,953
|
IssuesEvent
|
2020-09-15 14:25:04
|
weaveworks/eksctl
|
https://api.github.com/repos/weaveworks/eksctl
|
closed
|
Proactively track AMIs and supported regions
|
area/build-and-release help wanted kind/improvement
|
We currently updates AMIs reactively (`make update-ami`), and same goes for region updates (but we don't even have a tool for that).
We could automate AMI updates via daily/weekly CI job, or even subscribe a Lambda to the SNS topic.
Checking supported regions would be a little different though, we could consider that AMIs become available in regions as EKS gets rolled out, but it maybe not the most reliable way to do it. One option that comes to mind is to try calling the API in each of the regions and use response code to decide if that region supports the API or not.
|
1.0
|
Proactively track AMIs and supported regions - We currently updates AMIs reactively (`make update-ami`), and same goes for region updates (but we don't even have a tool for that).
We could automate AMI updates via daily/weekly CI job, or even subscribe a Lambda to the SNS topic.
Checking supported regions would be a little different though, we could consider that AMIs become available in regions as EKS gets rolled out, but it maybe not the most reliable way to do it. One option that comes to mind is to try calling the API in each of the regions and use response code to decide if that region supports the API or not.
|
build
|
proactively track amis and supported regions we currently updates amis reactively make update ami and same goes for region updates but we don t even have a tool for that we could automate ami updates via daily weekly ci job or even subscribe a lambda to the sns topic checking supported regions would be a little different though we could consider that amis become available in regions as eks gets rolled out but it maybe not the most reliable way to do it one option that comes to mind is to try calling the api in each of the regions and use response code to decide if that region supports the api or not
| 1
|
1,663
| 23,984,195,002
|
IssuesEvent
|
2022-09-13 17:32:03
|
golang/vulndb
|
https://api.github.com/repos/golang/vulndb
|
closed
|
x/vulndb: potential Go vuln in github.com/fluxcd/flux2: CVE-2022-36049
|
excluded: NOT_IMPORTABLE
|
CVE-2022-36049 references [github.com/fluxcd/flux2](https://github.com/fluxcd/flux2), which may be a Go module.
Description:
Flux2 is a tool for keeping Kubernetes clusters in sync with sources of configuration, and Flux's helm-controller is a Kubernetes operator that allows one to declaratively manage Helm chart releases. Helm controller is tightly integrated with the Helm SDK. A vulnerability found in the Helm SDK that affects flux2 v0.0.17 until v0.32.0 and helm-controller v0.0.4 until v0.23.0 allows for specific data inputs to cause high memory consumption. In some platforms, this could cause the controller to panic and stop processing reconciliations. In a shared cluster multi-tenancy environment, a tenant could create a HelmRelease that makes the controller panic, denying all other tenants from their Helm releases being reconciled. Patches are available in flux2 v0.32.0 and helm-controller v0.23.0.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2022-36049
- JSON: https://github.com/CVEProject/cvelist/tree/d4f8f9cf738a1b242e98bb02639446745836a678/2022/36xxx/CVE-2022-36049.json
- web: https://github.com/fluxcd/flux2/security/advisories/GHSA-p2g7-xwvr-rrw3
- web: https://github.com/helm/helm/security/advisories/GHSA-7hfp-qfw3-5jxh
- web: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=44996
- web: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=48360
- Imported by: https://pkg.go.dev/github.com/fluxcd/flux2?tab=importedby
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/fluxcd/flux2
packages:
- package: flux2
description: |
Flux2 is a tool for keeping Kubernetes clusters in sync with sources of configuration, and Flux's helm-controller is a Kubernetes operator that allows one to declaratively manage Helm chart releases. Helm controller is tightly integrated with the Helm SDK. A vulnerability found in the Helm SDK that affects flux2 v0.0.17 until v0.32.0 and helm-controller v0.0.4 until v0.23.0 allows for specific data inputs to cause high memory consumption. In some platforms, this could cause the controller to panic and stop processing reconciliations. In a shared cluster multi-tenancy environment, a tenant could create a HelmRelease that makes the controller panic, denying all other tenants from their Helm releases being reconciled. Patches are available in flux2 v0.32.0 and helm-controller v0.23.0.
cves:
- CVE-2022-36049
references:
- web: https://github.com/fluxcd/flux2/security/advisories/GHSA-p2g7-xwvr-rrw3
- web: https://github.com/helm/helm/security/advisories/GHSA-7hfp-qfw3-5jxh
- web: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=44996
- web: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=48360
```
|
True
|
x/vulndb: potential Go vuln in github.com/fluxcd/flux2: CVE-2022-36049 - CVE-2022-36049 references [github.com/fluxcd/flux2](https://github.com/fluxcd/flux2), which may be a Go module.
Description:
Flux2 is a tool for keeping Kubernetes clusters in sync with sources of configuration, and Flux's helm-controller is a Kubernetes operator that allows one to declaratively manage Helm chart releases. Helm controller is tightly integrated with the Helm SDK. A vulnerability found in the Helm SDK that affects flux2 v0.0.17 until v0.32.0 and helm-controller v0.0.4 until v0.23.0 allows for specific data inputs to cause high memory consumption. In some platforms, this could cause the controller to panic and stop processing reconciliations. In a shared cluster multi-tenancy environment, a tenant could create a HelmRelease that makes the controller panic, denying all other tenants from their Helm releases being reconciled. Patches are available in flux2 v0.32.0 and helm-controller v0.23.0.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2022-36049
- JSON: https://github.com/CVEProject/cvelist/tree/d4f8f9cf738a1b242e98bb02639446745836a678/2022/36xxx/CVE-2022-36049.json
- web: https://github.com/fluxcd/flux2/security/advisories/GHSA-p2g7-xwvr-rrw3
- web: https://github.com/helm/helm/security/advisories/GHSA-7hfp-qfw3-5jxh
- web: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=44996
- web: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=48360
- Imported by: https://pkg.go.dev/github.com/fluxcd/flux2?tab=importedby
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/fluxcd/flux2
packages:
- package: flux2
description: |
Flux2 is a tool for keeping Kubernetes clusters in sync with sources of configuration, and Flux's helm-controller is a Kubernetes operator that allows one to declaratively manage Helm chart releases. Helm controller is tightly integrated with the Helm SDK. A vulnerability found in the Helm SDK that affects flux2 v0.0.17 until v0.32.0 and helm-controller v0.0.4 until v0.23.0 allows for specific data inputs to cause high memory consumption. In some platforms, this could cause the controller to panic and stop processing reconciliations. In a shared cluster multi-tenancy environment, a tenant could create a HelmRelease that makes the controller panic, denying all other tenants from their Helm releases being reconciled. Patches are available in flux2 v0.32.0 and helm-controller v0.23.0.
cves:
- CVE-2022-36049
references:
- web: https://github.com/fluxcd/flux2/security/advisories/GHSA-p2g7-xwvr-rrw3
- web: https://github.com/helm/helm/security/advisories/GHSA-7hfp-qfw3-5jxh
- web: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=44996
- web: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=48360
```
|
non_build
|
x vulndb potential go vuln in github com fluxcd cve cve references which may be a go module description is a tool for keeping kubernetes clusters in sync with sources of configuration and flux s helm controller is a kubernetes operator that allows one to declaratively manage helm chart releases helm controller is tightly integrated with the helm sdk a vulnerability found in the helm sdk that affects until and helm controller until allows for specific data inputs to cause high memory consumption in some platforms this could cause the controller to panic and stop processing reconciliations in a shared cluster multi tenancy environment a tenant could create a helmrelease that makes the controller panic denying all other tenants from their helm releases being reconciled patches are available in and helm controller references nist json web web web web imported by see for instructions on how to triage this report modules module github com fluxcd packages package description is a tool for keeping kubernetes clusters in sync with sources of configuration and flux s helm controller is a kubernetes operator that allows one to declaratively manage helm chart releases helm controller is tightly integrated with the helm sdk a vulnerability found in the helm sdk that affects until and helm controller until allows for specific data inputs to cause high memory consumption in some platforms this could cause the controller to panic and stop processing reconciliations in a shared cluster multi tenancy environment a tenant could create a helmrelease that makes the controller panic denying all other tenants from their helm releases being reconciled patches are available in and helm controller cves cve references web web web web
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.