Dataset columns (name, dtype, observed range):

| column | dtype | observed values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 value |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 values |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 values |
| text_combine | string | length 96 – 211k |
| label | string | 2 values |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
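The last four columns appear to be derived from `title`, `body`, and `label`: `text_combine` concatenates title and body, `text` looks like a lowercased, punctuation- and digit-stripped version of it, and `binary_label` encodes `label` ("process" → 1, "non_process" → 0). A minimal Python sketch of that apparent derivation (inferred from the sample rows below; the `derive_fields` helper is hypothetical, not the dataset authors' actual pipeline):

```python
import re

def derive_fields(title: str, body: str, label: str):
    """Hypothetical reconstruction of the derived columns, inferred
    from the sample rows; not the original preprocessing code."""
    text_combine = f"{title} - {body}"
    # 'text' looks like text_combine lowercased, with URLs, digits, and
    # punctuation replaced by spaces, then whitespace collapsed.
    t = text_combine.lower()
    t = re.sub(r"https?://\S+", " ", t)   # drop URLs
    t = re.sub(r"[^a-z\s]", " ", t)       # keep ASCII letters only
    text = re.sub(r"\s+", " ", t).strip()
    binary_label = 1 if label == "process" else 0
    return text_combine, text, binary_label
```

Run against the first sample row, this reproduces its `text` cell; rows with non-ASCII characters (e.g. the `´` in row 22,262) suggest the real pipeline kept some marks this sketch strips.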
---
Row 6,117
id: 8,989,680,609
type: IssuesEvent
created_at: 2019-02-01 00:58:43
repo: hashicorp/packer
repo_url: https://api.github.com/repos/hashicorp/packer
action: closed
title: GCE import post-processor doesn't support service accounts
labels: enhancement post-processor/googlecompute-import
body:
Documentation for the GCE builder clearly states that service accounts are supported, however this does not appear to be true for the `googlecompute-import` post-processor.
Our use cases require that we build images locally (from credentialed GCE instances) before importing them into GCP as GCE images. Security requirements prevent us from using an account file in these cases.
index: 1.0
text_combine:
GCE import post-processor doesn't support service accounts - Documentation for the GCE builder clearly states that service accounts are supported, however this does not appear to be true for the `googlecompute-import` post-processor.
Our use cases require that we build images locally (from credentialed GCE instances) before importing them into GCP as GCE images. Security requirements prevent us from using an account file in these cases.
label: process
text:
gce import post processor doesn t support service accounts documentation for the gce builder clearly states that service accounts are supported however this does not appear to be true for the googlecompute import post processor our use cases require that we build images locally from credentialed gce instances before importing them into gcp as gce images security requirements prevent us from using an account file in these cases
binary_label: 1
---
Row 22,262
id: 3,619,690,517
type: IssuesEvent
created_at: 2016-02-08 16:56:52
repo: miracle091/transmission-remote-dotnet
repo_url: https://api.github.com/repos/miracle091/transmission-remote-dotnet
action: closed
title: Disconned in a web request
labels: Priority-Medium Type-Defect
body:
```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
I have added a public torrent and transmission give me this problem:
Error Say: exception in a web request
What version of the products are you using?
OS: Vista
Transmission: 3.24 (build 3)
Remote: 2.42 (rev XXX)
I lost the connection, i try reconnect and or last only few seconds or isen´t
possible connect. This is not for transmission-daemon because i can connect
with others Guis at the same time. I have this problem today(15/11). I attach
the log file and other file from other day (29/10) for other problem (this i
think is not as important because only was that day), but i attach too...
I attach too 2 images of the program messagge, one is previous to lost
connection an say "Failed request 3:exception in a web request", the other is
later and say "Disconnected. Exceed maximun number of failed request".
All this is because (i think) there is a tracker that say response code
401(Unauthorized).
```
Original issue reported on code.google.com by `H3rm...@gmail.com` on 15 Nov 2011 at 6:28
Attachments:
* [trdcrash_20111114_171428.log](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-421/comment-0/trdcrash_20111114_171428.log)
* [trdcrash_20111029_160701.log](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-421/comment-0/trdcrash_20111029_160701.log)
* [disconnected-previous.jpg](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-421/comment-0/disconnected-previous.jpg)
* [disconnected.jpg](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-421/comment-0/disconnected.jpg)
index: 1.0
text_combine:
Disconned in a web request - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
I have added a public torrent and transmission give me this problem:
Error Say: exception in a web request
What version of the products are you using?
OS: Vista
Transmission: 3.24 (build 3)
Remote: 2.42 (rev XXX)
I lost the connection, i try reconnect and or last only few seconds or isen´t
possible connect. This is not for transmission-daemon because i can connect
with others Guis at the same time. I have this problem today(15/11). I attach
the log file and other file from other day (29/10) for other problem (this i
think is not as important because only was that day), but i attach too...
I attach too 2 images of the program messagge, one is previous to lost
connection an say "Failed request 3:exception in a web request", the other is
later and say "Disconnected. Exceed maximun number of failed request".
All this is because (i think) there is a tracker that say response code
401(Unauthorized).
```
Original issue reported on code.google.com by `H3rm...@gmail.com` on 15 Nov 2011 at 6:28
Attachments:
* [trdcrash_20111114_171428.log](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-421/comment-0/trdcrash_20111114_171428.log)
* [trdcrash_20111029_160701.log](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-421/comment-0/trdcrash_20111029_160701.log)
* [disconnected-previous.jpg](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-421/comment-0/disconnected-previous.jpg)
* [disconnected.jpg](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-421/comment-0/disconnected.jpg)
label: non_process
text:
disconned in a web request what steps will reproduce the problem what is the expected output what do you see instead i have added a public torrent and transmission give me this problem error say exception in a web request what version of the products are you using os vista transmission build remote rev xxx i lost the connection i try reconnect and or last only few seconds or isen´t possible connect this is not for transmission daemon because i can connect with others guis at the same time i have this problem today i attach the log file and other file from other day for other problem this i think is not as important because only was that day but i attach too i attach too images of the program messagge one is previous to lost connection an say failed request exception in a web request the other is later and say disconnected exceed maximun number of failed request all this is because i think there is a tracker that say response code unauthorized original issue reported on code google com by gmail com on nov at attachments
binary_label: 0
---
Row 15,003
id: 18,718,962,794
type: IssuesEvent
created_at: 2021-11-03 09:33:55
repo: opensafely-core/job-server
repo_url: https://api.github.com/repos/opensafely-core/job-server
action: closed
title: Allow staff users to set a status on application forms
labels: application-process
body:
As an admin user, I want to set form submissions to various "states" in the admin interface, so that I can easily see what part of the process submitted forms are in.
## Status
- Approved fully
- Approved subject to
- Rejected
- Completed
- Ongoing
## To do
- [ ] Admin can set a submission to a single state
- [ ] Admin can add a comment on the current state
index: 1.0
text_combine:
Allow staff users to set a status on application forms - As an admin user, I want to set form submissions to various "states" in the admin interface, so that I can easily see what part of the process submitted forms are in.
## Status
- Approved fully
- Approved subject to
- Rejected
- Completed
- Ongoing
## To do
- [ ] Admin can set a submission to a single state
- [ ] Admin can add a comment on the current state
label: process
text:
allow staff users to set a status on application forms as an admin user i want to set form submissions to various states in the admin interface so that i can easily see what part of the process submitted forms are in status approved fully approved subject to rejected completed ongoing to do admin can set a submission to a single state admin can add a comment on the current state
binary_label: 1
---
Row 6,714
id: 9,819,523,135
type: IssuesEvent
created_at: 2019-06-13 22:19:12
repo: openopps/openopps-platform
repo_url: https://api.github.com/repos/openopps/openopps-platform
action: closed
title: Transcript: Add instructional modal
labels: Apply Process Approved Requirements Ready State Dept.
body:
Who: Internship applicants
What: Display a new transcript modal
Why: To explain the upload process
Acceptance Criteria:
- When an internship applicant selects, "Upload transcript", a modal will appear.
- The text will read:
To upload a transcript, first you need to add it to your USAJOBS profile. Follow these steps:
1. Click **Continue**—we’ll send you to your profile in USAJOBS. This will be a new tab in your browser.
2. Click **Upload document**—you’ll get another pop-up window.
3. Choose the transcript you want to upload and change the name if needed.
4. Select **Transcript** as the document type.
5. Click **Complete Upload**—you’ll see the new transcript in your USAJOB profile.
6. Go back to the browser tab that says **Open Opportunities**.
7. Go to the **Transcript** section and click **Refresh transcripts**—you will see the new transcript you uploaded.
8. Select the transcript you want to include in your application.
9. Click **Save and continue** to finish your application.
- There will be a **Continue** button on the modal that will take the user to their USAJOBS profile (This will open in a new window)
index: 1.0
text_combine:
Transcript: Add instructional modal - Who: Internship applicants
What: Display a new transcript modal
Why: To explain the upload process
Acceptance Criteria:
- When an internship applicant selects, "Upload transcript", a modal will appear.
- The text will read:
To upload a transcript, first you need to add it to your USAJOBS profile. Follow these steps:
1. Click **Continue**—we’ll send you to your profile in USAJOBS. This will be a new tab in your browser.
2. Click **Upload document**—you’ll get another pop-up window.
3. Choose the transcript you want to upload and change the name if needed.
4. Select **Transcript** as the document type.
5. Click **Complete Upload**—you’ll see the new transcript in your USAJOB profile.
6. Go back to the browser tab that says **Open Opportunities**.
7. Go to the **Transcript** section and click **Refresh transcripts**—you will see the new transcript you uploaded.
8. Select the transcript you want to include in your application.
9. Click **Save and continue** to finish your application.
- There will be a **Continue** button on the modal that will take the user to their USAJOBS profile (This will open in a new window)
label: process
text:
transcript add instructional modal who internship applicants what display a new transcript modal why to explain the upload process acceptance criteria when an internship applicant selects upload transcript a modal will appear the text will read to upload a transcript first you need to add it to your usajobs profile follow these steps click continue —we’ll send you to your profile in usajobs this will be a new tab in your browser click upload document —you’ll get another pop up window choose the transcript you want to upload and change the name if needed select transcript as the document type click complete upload —you’ll see the new transcript in your usajob profile go back to the browser tab that says open opportunities go to the transcript section and click refresh transcripts —you will see the new transcript you uploaded select the transcript you want to include in your application click save and continue to finish your application there will be a continue button on the modal that will take the user to their usajobs profile this will open in a new window
binary_label: 1
---
Row 68,458
id: 8,290,564,871
type: IssuesEvent
created_at: 2018-09-19 17:45:40
repo: phetsims/gas-properties
repo_url: https://api.github.com/repos/phetsims/gas-properties
action: closed
title: Representing the work done
labels: design:general type:user-feedback
body:
For #8. In [Unfuddle Ticket #18](https://phet.unfuddle.com/a#/projects/9404/tickets/by_number/18), a user reported:
> Let me first congratulate you for the numerous very user-friendly and didactical PhET simulations.
>
>I have been using many of them (in electricity, waves and modern
physics) and I am trying to convince my students that they can learn a
lot by playing with them.
>
>I was also running your simulation called Gas Properties and I found it
very instructive too (especially with the histograms showing the
distribution of speed of the molecules). A very instructive simulation
is the one where some work /W/ is put into the system (by pushing the
wall to the right) while maintaining the temperature constant. The
animation correctly shows the ice-cube removing some heat /Q/ from the
system in order to maintain the temperature constant.
>
>But when I tried to put some work into the system (again by pushing the
piston to the right) while maintaining the *pressure* constant, the
simulation didn't show the ice-cube removing some heat (like it should
according to the First Principle). In fact when I ask to maintain the
pressure constant, there is no way to displace the wall (i.e. to put
some work into the system). Why?
>
>Would it be possible to fix this problem?
>
>Also interesting would be to display (quantitatively) how much work is
exchanged and how much heat is removed.
>
>Thank you for considering my remarks and again: congratulations for your
work.
Self-assigning for further review.
index: 1.0
text_combine:
Representing the work done - For #8. In [Unfuddle Ticket #18](https://phet.unfuddle.com/a#/projects/9404/tickets/by_number/18), a user reported:
> Let me first congratulate you for the numerous very user-friendly and didactical PhET simulations.
>
>I have been using many of them (in electricity, waves and modern
physics) and I am trying to convince my students that they can learn a
lot by playing with them.
>
>I was also running your simulation called Gas Properties and I found it
very instructive too (especially with the histograms showing the
distribution of speed of the molecules). A very instructive simulation
is the one where some work /W/ is put into the system (by pushing the
wall to the right) while maintaining the temperature constant. The
animation correctly shows the ice-cube removing some heat /Q/ from the
system in order to maintain the temperature constant.
>
>But when I tried to put some work into the system (again by pushing the
piston to the right) while maintaining the *pressure* constant, the
simulation didn't show the ice-cube removing some heat (like it should
according to the First Principle). In fact when I ask to maintain the
pressure constant, there is no way to displace the wall (i.e. to put
some work into the system). Why?
>
>Would it be possible to fix this problem?
>
>Also interesting would be to display (quantitatively) how much work is
exchanged and how much heat is removed.
>
>Thank you for considering my remarks and again: congratulations for your
work.
Self-assigning for further review.
label: non_process
text:
representing the work done for in a user reported let me first congratulate you for the numerous very user friendly and didactical phet simulations i have been using many of them in electricity waves and modern physics and i am trying to convince my students that they can learn a lot by playing with them i was also running your simulation called gas properties and i found it very instructive too especially with the histograms showing the distribution of speed of the molecules a very instructive simulation is the one where some work w is put into the system by pushing the wall to the right while maintaining the temperature constant the animation correctly shows the ice cube removing some heat q from the system in order to maintain the temperature constant but when i tried to put some work into the system again by pushing the piston to the right while maintaining the pressure constant the simulation didn t show the ice cube removing some heat like it should according to the first principle in fact when i ask to maintain the pressure constant there is no way to displace the wall i e to put some work into the system why would it be possible to fix this problem also interesting would be to display quantitatively how much work is exchanged and how much heat is removed thank you for considering my remarks and again congratulations for your work self assigning for further review
binary_label: 0
---
Row 61,653
id: 17,023,749,186
type: IssuesEvent
created_at: 2021-07-03 03:38:18
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: closed
title: compile error on systemed tree with flex sdk 4.5.0
labels: Component: potlatch2 Priority: major Resolution: fixed Type: defect
body:
**[Submitted to the original trac issue database at 1.11am, Saturday, 1st October 2011]**
$ ant
Buildfile: /home/miurahr/projects/osm/potlatch2/build.xml
git-buildnumber:
svn-buildnumber:
buildLocales:
[echo] Building Localization .swf's
BUILD FAILED
/home/miurahr/projects/osm/potlatch2/build.xml:262: The following error occurred while executing this line:
/home/miurahr/projects/osm/potlatch2/build.xml:233: The class not found in jar file: mxmlc.jar
Total time: 1 second
to fix this, please replace lib/flexTask.jar with one in sdk 4.5
refer http://builddeploy.blogspot.com/2010/04/class-not-found-in-jar-file-mxmlcjar.html
I can fix it with this way.
https://github.com/osmfj/potlatch2/commit/d11e48ddf185bfce41c70b951dc110778221125c
index: 1.0
text_combine:
compile error on systemed tree with flex sdk 4.5.0 - **[Submitted to the original trac issue database at 1.11am, Saturday, 1st October 2011]**
$ ant
Buildfile: /home/miurahr/projects/osm/potlatch2/build.xml
git-buildnumber:
svn-buildnumber:
buildLocales:
[echo] Building Localization .swf's
BUILD FAILED
/home/miurahr/projects/osm/potlatch2/build.xml:262: The following error occurred while executing this line:
/home/miurahr/projects/osm/potlatch2/build.xml:233: The class not found in jar file: mxmlc.jar
Total time: 1 second
to fix this, please replace lib/flexTask.jar with one in sdk 4.5
refer http://builddeploy.blogspot.com/2010/04/class-not-found-in-jar-file-mxmlcjar.html
I can fix it with this way.
https://github.com/osmfj/potlatch2/commit/d11e48ddf185bfce41c70b951dc110778221125c
label: non_process
text:
compile error on systemed tree with flex sdk ant buildfile home miurahr projects osm build xml git buildnumber svn buildnumber buildlocales building localization swf s build failed home miurahr projects osm build xml the following error occurred while executing this line home miurahr projects osm build xml the class not found in jar file mxmlc jar total time second to fix this please replace lib flextask jar with one in sdk refer i can fix it with this way
binary_label: 0
---
Row 88,579
id: 8,165,767,280
type: IssuesEvent
created_at: 2018-08-25 00:09:53
repo: aspnet/KestrelHttpServer
repo_url: https://api.github.com/repos/aspnet/KestrelHttpServer
action: closed
title: RegisterAddresses_IPv4Port5000Default_Success
labels: bug flaky test
body:
This was a local test failure that hints at a product bug. In 2.1 preview1 we added support to bind to https://localhost:5001/ by default if the dev cert was available. However we don't check if the port is available. Note Kestrel doesn't check if port 5000 is available either, but when doing a localhost bind it will only fail if both IPv4 and IPv6 fail.
At a minimum this test needs to add [PortSupportedCondition(5001)].
We may also consider having kestrel catch this failure for the default port scenario and start only on HTTP.
```
[xUnit.net 00:01:31.6372836] RegisterAddresses_IPv4Port5000Default_Success(addressInput: null, testUrl: "http://127.0.0.1:5000/") [FAIL]
Failed RegisterAddresses_IPv4Port5000Default_Success(addressInput: null, testUrl: "http://127.0.0.1:5000/")
Error Message:
System.AggregateException : An error occurred while writing to logger(s).
---- System.Exception : Unexpected critical error. Log Critical[0]: Unable to start Kestrel. System.IO.IOException: Failed to bind to address https://127.0.0.1:5001: address already in use. ---> Microsoft.AspNetCore.Connections.AddressInUseException: Error -4091 EADDRINUSE address already in use ---> Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvException: Error -4091 EADDRINUSE address already in use
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowError(Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowIfErrored(Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.tcp_getsockname(UvTcpHandle handle, SockAddr& addr, Int32& namelen)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvTcpHandle.GetSockIPEndPoint()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.ListenTcp(Boolean useFileHandle)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.CreateListenSocket()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.<>c.<StartAsync>b__8_0(Listener listener)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvThread.CallbackAdapter`1.<>c.<.cctor>b__3_1(Object callback, Object state)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvThread.DoPostWork()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.ListenerPrimary.<StartAsync>d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvTransport.<BindAsync>d__20.MoveNext()
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvTransport.<BindAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass22_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindEndpointAsync>d__3.MoveNext()
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindEndpointAsync>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.LocalhostListenOptions.<BindAsync>d__2.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.DefaultAddressStrategy.<BindAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<StartAsync>d__22`1.MoveNext()
-------- System.IO.IOException : Failed to bind to address https://127.0.0.1:5001: address already in use.
------------ Microsoft.AspNetCore.Connections.AddressInUseException : Error -4091 EADDRINUSE address already in use
---------------- Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvException : Error -4091 EADDRINUSE address already in use
Stack Trace:
at Microsoft.Extensions.Logging.Logger.Log[TState](LogLevel logLevel, EventId eventId, TState state, Exception exception, Func`3 formatter)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.KestrelTrace.Log[TState](LogLevel logLevel, EventId eventId, TState state, Exception exception, Func`3 formatter)
at Microsoft.Extensions.Logging.LoggerExtensions.LogCritical(ILogger logger, EventId eventId, Exception exception, String message, Object[] args)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<StartAsync>d__22`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.<StartAsync>d__26.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.Start()
at Microsoft.AspNetCore.Server.Kestrel.FunctionalTests.AddressRegistrationTests.<RegisterAddresses_Success>d__17.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.FunctionalTests.AddressRegistrationTests.<RegisterAddresses_IPv4Port5000Default_Success>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
----- Inner Stack Trace -----
at Microsoft.AspNetCore.Testing.TestApplicationErrorLogger.Log[TState](LogLevel logLevel, EventId eventId, TState state, Exception exception, Func`3 formatter)
at Microsoft.Extensions.Logging.Logger.Log[TState](LogLevel logLevel, EventId eventId, TState state, Exception exception, Func`3 formatter)
----- Inner Stack Trace -----
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindEndpointAsync>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.LocalhostListenOptions.<BindAsync>d__2.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.DefaultAddressStrategy.<BindAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<StartAsync>d__22`1.MoveNext()
----- Inner Stack Trace -----
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvTransport.<BindAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass22_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindEndpointAsync>d__3.MoveNext()
----- Inner Stack Trace -----
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowError(Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowIfErrored(Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.tcp_getsockname(UvTcpHandle handle, SockAddr& addr, Int32& namelen)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvTcpHandle.GetSockIPEndPoint()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.ListenTcp(Boolean useFileHandle)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.CreateListenSocket()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.<>c.<StartAsync>b__8_0(Listener listener)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvThread.CallbackAdapter`1.<>c.<.cctor>b__3_1(Object callback, Object state)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvThread.DoPostWork()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.ListenerPrimary.<StartAsync>d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvTransport.<BindAsync>d__20.MoveNext()
```
|
1.0
|
RegisterAddresses_IPv4Port5000Default_Success - This was a local test failure that hints at a product bug. In 2.1 preview1 we added support to bind to https://localhost:5001/ by default if the dev cert was available. However, we don't check whether the port is available. Note that Kestrel doesn't check whether port 5000 is available either, but when doing a localhost bind it will only fail if both the IPv4 and IPv6 binds fail.
At a minimum this test needs to add [PortSupportedCondition(5001)].
We may also consider having Kestrel catch this failure for the default port scenario and start only on HTTP.
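The actual fix would be a C# test attribute; purely as an illustration of what such a port precondition checks (a hypothetical helper, not code from the Kestrel repository), a minimal Python sketch of a port-availability probe:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if we can bind to (host, port) right now.

    Note: this check is inherently racy -- the port can be taken between
    the probe and the real bind, which is why a precondition attribute
    like [PortSupportedCondition] only reduces, not eliminates, flakiness.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```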
```
[xUnit.net 00:01:31.6372836] RegisterAddresses_IPv4Port5000Default_Success(addressInput: null, testUrl: "http://127.0.0.1:5000/") [FAIL]
Failed RegisterAddresses_IPv4Port5000Default_Success(addressInput: null, testUrl: "http://127.0.0.1:5000/")
Error Message:
System.AggregateException : An error occurred while writing to logger(s).
---- System.Exception : Unexpected critical error. Log Critical[0]: Unable to start Kestrel. System.IO.IOException: Failed to bind to address https://127.0.0.1:5001: address already in use. ---> Microsoft.AspNetCore.Connections.AddressInUseException: Error -4091 EADDRINUSE address already in use ---> Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvException: Error -4091 EADDRINUSE address already in use
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowError(Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowIfErrored(Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.tcp_getsockname(UvTcpHandle handle, SockAddr& addr, Int32& namelen)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvTcpHandle.GetSockIPEndPoint()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.ListenTcp(Boolean useFileHandle)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.CreateListenSocket()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.<>c.<StartAsync>b__8_0(Listener listener)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvThread.CallbackAdapter`1.<>c.<.cctor>b__3_1(Object callback, Object state)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvThread.DoPostWork()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.ListenerPrimary.<StartAsync>d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvTransport.<BindAsync>d__20.MoveNext()
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvTransport.<BindAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass22_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindEndpointAsync>d__3.MoveNext()
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindEndpointAsync>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.LocalhostListenOptions.<BindAsync>d__2.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.DefaultAddressStrategy.<BindAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<StartAsync>d__22`1.MoveNext()
-------- System.IO.IOException : Failed to bind to address https://127.0.0.1:5001: address already in use.
------------ Microsoft.AspNetCore.Connections.AddressInUseException : Error -4091 EADDRINUSE address already in use
---------------- Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvException : Error -4091 EADDRINUSE address already in use
Stack Trace:
at Microsoft.Extensions.Logging.Logger.Log[TState](LogLevel logLevel, EventId eventId, TState state, Exception exception, Func`3 formatter)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.KestrelTrace.Log[TState](LogLevel logLevel, EventId eventId, TState state, Exception exception, Func`3 formatter)
at Microsoft.Extensions.Logging.LoggerExtensions.LogCritical(ILogger logger, EventId eventId, Exception exception, String message, Object[] args)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<StartAsync>d__22`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.<StartAsync>d__26.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.Start()
at Microsoft.AspNetCore.Server.Kestrel.FunctionalTests.AddressRegistrationTests.<RegisterAddresses_Success>d__17.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.FunctionalTests.AddressRegistrationTests.<RegisterAddresses_IPv4Port5000Default_Success>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
----- Inner Stack Trace -----
at Microsoft.AspNetCore.Testing.TestApplicationErrorLogger.Log[TState](LogLevel logLevel, EventId eventId, TState state, Exception exception, Func`3 formatter)
at Microsoft.Extensions.Logging.Logger.Log[TState](LogLevel logLevel, EventId eventId, TState state, Exception exception, Func`3 formatter)
----- Inner Stack Trace -----
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindEndpointAsync>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.LocalhostListenOptions.<BindAsync>d__2.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.DefaultAddressStrategy.<BindAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<StartAsync>d__22`1.MoveNext()
----- Inner Stack Trace -----
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvTransport.<BindAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass22_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.<BindEndpointAsync>d__3.MoveNext()
----- Inner Stack Trace -----
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowError(Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowIfErrored(Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.tcp_getsockname(UvTcpHandle handle, SockAddr& addr, Int32& namelen)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvTcpHandle.GetSockIPEndPoint()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.ListenTcp(Boolean useFileHandle)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.CreateListenSocket()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Listener.<>c.<StartAsync>b__8_0(Listener listener)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvThread.CallbackAdapter`1.<>c.<.cctor>b__3_1(Object callback, Object state)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvThread.DoPostWork()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.ListenerPrimary.<StartAsync>d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.LibuvTransport.<BindAsync>d__20.MoveNext()
```
|
non_process
|
registeraddresses success this was a local test failure that hints at a product bug in we added support to bind to by default if the dev cert was available however we don t check if the port is available note kestrel doesn t check if port is available either but when doing a localhost bind it will only fail if both and fail at a minimum this test needs to add we may also consider having kestrel catch this failure for the default port scenario and start only on http registeraddresses success addressinput null testurl failed registeraddresses success addressinput null testurl error message system aggregateexception an error occurred while writing to logger s system exception unexpected critical error log critical unable to start kestrel system io ioexception failed to bind to address address already in use microsoft aspnetcore connections addressinuseexception error eaddrinuse address already in use microsoft aspnetcore server kestrel transport libuv internal networking uvexception error eaddrinuse address already in use at microsoft aspnetcore server kestrel transport libuv internal networking libuvfunctions throwerror statuscode at microsoft aspnetcore server kestrel transport libuv internal networking libuvfunctions throwiferrored statuscode at microsoft aspnetcore server kestrel transport libuv internal networking libuvfunctions tcp getsockname uvtcphandle handle sockaddr addr namelen at microsoft aspnetcore server kestrel transport libuv internal networking uvtcphandle getsockipendpoint at microsoft aspnetcore server kestrel transport libuv internal listener listentcp boolean usefilehandle at microsoft aspnetcore server kestrel transport libuv internal listener createlistensocket at microsoft aspnetcore server kestrel transport libuv internal listener c b listener listener at microsoft aspnetcore server kestrel transport libuv internal libuvthread callbackadapter c b object callback object state at microsoft aspnetcore server kestrel transport libuv internal 
libuvthread dopostwork end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel transport libuv internal listenerprimary d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel transport libuv internal libuvtransport d movenext end of inner exception stack trace at microsoft aspnetcore server kestrel transport libuv internal libuvtransport d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core kestrelserver c g onbind d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core internal addressbinder d movenext end of inner exception stack trace at microsoft aspnetcore server kestrel core internal addressbinder d movenext end of stack trace from previous location where exception was 
thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core localhostlistenoptions d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core internal addressbinder defaultaddressstrategy d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core internal addressbinder d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core kestrelserver d movenext system io ioexception failed to bind to address address already in use microsoft aspnetcore connections addressinuseexception error eaddrinuse address already in use microsoft aspnetcore server kestrel transport libuv internal networking uvexception error eaddrinuse address already in use stack trace at microsoft extensions logging logger log loglevel loglevel eventid eventid tstate state exception 
exception func formatter at microsoft aspnetcore server kestrel core internal kestreltrace log loglevel loglevel eventid eventid tstate state exception exception func formatter at microsoft extensions logging loggerextensions logcritical ilogger logger eventid eventid exception exception string message object args at microsoft aspnetcore server kestrel core kestrelserver d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft aspnetcore hosting internal webhost d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft aspnetcore hosting internal webhost start at microsoft aspnetcore server kestrel functionaltests addressregistrationtests d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices taskawaiter getresult at microsoft aspnetcore server kestrel functionaltests addressregistrationtests d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where 
exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task inner stack trace at microsoft aspnetcore testing testapplicationerrorlogger log loglevel loglevel eventid eventid tstate state exception exception func formatter at microsoft extensions logging logger log loglevel loglevel eventid eventid tstate state exception exception func formatter inner stack trace at microsoft aspnetcore server kestrel core internal addressbinder d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core localhostlistenoptions d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core internal addressbinder defaultaddressstrategy d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core internal addressbinder d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime 
compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core kestrelserver d movenext inner stack trace at microsoft aspnetcore server kestrel transport libuv internal libuvtransport d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core kestrelserver c g onbind d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel core internal addressbinder d movenext inner stack trace at microsoft aspnetcore server kestrel transport libuv internal networking libuvfunctions throwerror statuscode at microsoft aspnetcore server kestrel transport libuv internal networking libuvfunctions throwiferrored statuscode at microsoft aspnetcore server kestrel transport libuv internal networking libuvfunctions tcp getsockname uvtcphandle handle sockaddr addr namelen at microsoft aspnetcore server kestrel transport libuv internal networking uvtcphandle getsockipendpoint at microsoft aspnetcore server kestrel transport libuv internal listener listentcp boolean usefilehandle at microsoft aspnetcore server kestrel transport libuv internal listener createlistensocket at microsoft aspnetcore server kestrel transport libuv internal listener c b listener listener at microsoft aspnetcore server kestrel 
transport libuv internal libuvthread callbackadapter c b object callback object state at microsoft aspnetcore server kestrel transport libuv internal libuvthread dopostwork end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel transport libuv internal listenerprimary d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft aspnetcore server kestrel transport libuv internal libuvtransport d movenext
| 0
|
198,865
| 15,726,625,685
|
IssuesEvent
|
2021-03-29 11:34:58
|
buildout/buildout
|
https://api.github.com/repos/buildout/buildout
|
closed
|
Document meaning of "Unused options for.."
|
documentation
|
I encountered the above message when trying to run buildout on the current head of Zope: https://github.com/zopefoundation/Zope/commit/c9871f5fc2cefacf1839e09934eb0c5fadc9a2da
Searching for "unused" in the buildout documentation brought up zero hits.
http://www.buildout.org/en/latest/search.html?q=unused&check_keywords=yes&area=default
Complete message:
```
Installing requirements.
requirements: Running '/home/jugmac00/Tests/Zope/bin/zopepy util.py'
Unused options for requirements: 'update-command'.
```
The problematic line would be the one starting with update-command:
https://github.com/zopefoundation/Zope/blob/c9871f5fc2cefacf1839e09934eb0c5fadc9a2da/buildout.cfg#L172-L177
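As background on what the message means, here is a simplified sketch of the mechanism (a toy model, not the real zc.buildout implementation): buildout tracks which options of a part the recipe actually reads, and anything set in the `.cfg` section but never read is reported as unused.

```python
class Options(dict):
    """Toy model of per-part option tracking in a buildout-like tool.

    Every option the recipe reads via item access is recorded; options
    present in the section but never read end up in unused(), which is
    roughly what "Unused options for <part>: '<name>'." reports.
    """
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._accessed = set()

    def __getitem__(self, key):
        self._accessed.add(key)
        return super().__getitem__(key)

    def unused(self):
        # Options declared in the section but never read by the recipe.
        return sorted(set(self) - self._accessed)
```

So in the report above, the `requirements` part declared `update-command`, but its recipe never looked that option up.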
|
1.0
|
Document meaning of "Unused options for.." - I encountered above message when trying to run buildout on the current head of Zope https://github.com/zopefoundation/Zope/commit/c9871f5fc2cefacf1839e09934eb0c5fadc9a2da
Searching for "unused" in the buildout documentation brought up zero hits.
http://www.buildout.org/en/latest/search.html?q=unused&check_keywords=yes&area=default
Complete message:
```
Installing requirements.
requirements: Running '/home/jugmac00/Tests/Zope/bin/zopepy util.py'
Unused options for requirements: 'update-command'.
```
The problematic line would be the one starting with update-command:
https://github.com/zopefoundation/Zope/blob/c9871f5fc2cefacf1839e09934eb0c5fadc9a2da/buildout.cfg#L172-L177
|
non_process
|
document meaning of unused options for i encountered above message when trying to run buildout on the current head of zope searching for unused in the buildout documentation brought up zero hits complete message installing requirements requirements running home tests zope bin zopepy util py unused options for requirements update command the problematic line would be the one starting with update command
| 0
|
17,905
| 23,884,136,375
|
IssuesEvent
|
2022-09-08 05:56:58
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
add `prebuild` option for our TypeScript setup like we have in the npm ecosystem
|
type: support / not a bug (process) untriaged
|
### Description of the feature request:
In our Yarn-workspace TypeScript monorepo we have the option, before building anything, to mangle some files. I would like to have the same feature in our Bazel build file, or, if that is not possible, inside for example `js_library` (from `load("@build_bazel_rules_nodejs//:index.bzl", "js_library")`), to mangle our package.json before/after creating its package.
### What underlying problem are you trying to solve with this feature?
During development we have other pointers in our package(s) to point the module (and its typings) to the sources, and have TypeScript path mapping point to the packages' sources. The package.json of each module is our contract.
So in an npm ecosystem (jest/webpack/vite/whatever) everything works as implemented: you have the option to look at sources or at packaged/compiled files. This is based on the TypeScript `path mapping` feature being there (sources) or not (compiled). **And during the creation of the package you have the option, during the pre-build stage, to adjust things inside its package.json** (for example remove dev dependencies and alter the module location).
In a Bazel setup this is not possible, by design. You can't use sed/jq/etc. to mangle a file's content and write the result back to the same file before you inject it; that throws a cycle error. So `genrule` isn't helpful for this use case.
I tried the `post_install_patches` option with our workspace yarn install, ~~but then it seems we have to create a patch per package, and we have lots of them~~; that's not scalable (so I didn't look further). Edit: it also didn't seem to work for local packages (the file content stayed the same).
But otherwise I'm completely out of options. I also tried to first archive the adjusted file and then strip the path and use it as a source, but that is also detected as a cycle (so really good design of Bazel!)
### Which operating system are you running Bazel on?
osx and linux
### What is the output of `bazel info release`?
release 5.3.0
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
```text
private
```
### Have you found anything relevant by searching the web?
github issue: https://github.com/bazelbuild/bazel/issues/7311
### Any other information, logs, or outputs that you want to share?
We have a yarn workspace monorepo
Packages currently have the following setup, which always exports the package's content from an src/index; example package:
```
{
"name": "@mpth-ui/page",
"version": "1.0.0",
"main": "./src/index.ts",
"description": "Generic page component",
```
This index file just exports what can be consumed by consumers, so for this example it's:
```
export { Page } from './Page'
export { PageWithRefresh } from './PageWithRefresh'
export type { PageMaxWidth, PageOptions, PageWithRefreshOptions } from './types'
```
During local development we spin up a webpack server for the application we are working on (also part of the monorepo) and use workspace path mappings. So everything works lightning fast. When we had lerna as tooling we just used the npm prebuild hook for every package. Every component extends its tsconfig from the root one.
Our basic workspace ./tsconfig.json looks like:
```
{
"compilerOptions": {
"baseUrl": "./packages",
"paths": {
"@mpth/page": ["ui-components/page/src/*"],
```
This file is not used in our Bazel workflow (an empty file is copied from our WORKSPACE), so package references will always look at their compiled (so cached) versions. That's what makes bazel (and any distributed cache implementation) fast. The reason to have this root config is also to let our editor understand how to find/import stuff.
The things I tried were using genrule, or first archiving and replacing the prefix, but they all led to the circular dependency which bazel prevents.
```
genrule(
name = "transform",
srcs = ["package.json"],
outs = ["tmp/package.json"],
local = 1,
cmd = '''
sed "s|./src/index.ts|./esm/index.ts|" "$<" > "$@"
'''
)
```
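For what it's worth, the sed transform itself is fine as long as the output path differs from the input; outside Bazel the same rewrite can be exercised standalone. A minimal sketch (all `/tmp` paths and file contents here are hypothetical):

```shell
# Create a throwaway package.json to rewrite (contents are illustrative).
mkdir -p /tmp/pkgdemo
cat > /tmp/pkgdemo/package.json <<'EOF'
{
  "name": "@mpth-ui/page",
  "main": "./src/index.ts"
}
EOF

# Write the mangled copy to a distinct path instead of over the input --
# the same shape genrule requires (outs must not shadow srcs).
sed 's|./src/index.ts|./esm/index.ts|' /tmp/pkgdemo/package.json > /tmp/pkgdemo/package.out.json
cat /tmp/pkgdemo/package.out.json
```

Here the explicit input and output paths play the roles of `$<` and `$@` in the genrule's `cmd`.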
I know by design you don't want two processes mutating stuff (because who will win/dictate the truth); that makes it non-deterministic. But the approach followed in a pure typescript project is a common pattern (using path references and mutating a package's package.json in the prebuild stage). So I'm really hoping we can have a feature with an option **I know what I'm doing, do not cycle it** :)
Mangling is only meant for CI, not for anything local, so I can't use a generic postinstall script for it either.
|
1.0
|
add `prebuild` option for our typescript setup like we have in the npm ecosystem - ### Description of the feature request:
In our yarn workspace Typescript monorepo we have the option, before building anything, to mangle some files. I would like to have the same feature in our bazel build file, or if that's not possible, inside for example `js_library` (from `load("@build_bazel_rules_nodejs//:index.bzl", "js_library")`) to mangle our package.json before/after creating its package.
### What underlying problem are you trying to solve with this feature?
During development we have other pointers in our package(s) to point the module (and its typings) to the sources, and have Typescript path mapping point to the packages' sources. Our package.json per module is our contract.
So in an npm ecosystem (jest/webpack/vite/whatever) everything works as implemented: you have the option to look at sources or packaged/compiled files. This is based on the Typescript `path mapping` feature being there (sources) or not (compiled). **And during the creation of the package you have the option, during the pre-build stage, to adjust things inside its package.json** (for example remove dev dependencies, and alter the module location).
In a bazel setup this is not possible, by design. So you can't use sed/jq/etc to mangle a file's content and output it to the same file before you inject it. That will throw a cycle error. So genrule isn't helpful for this use case.
I tried the `post_install_patches` option with our workspace yarn install, ~~but then it seems we have to create a patch per package and we have lots of them~~, that's not scalable (so I didn't look further). Edit: it also didn't seem to work for local packages (the file content stayed the same).
But otherwise I'm completely lost for options. I also tried to first archive the adjusted file and then strip the path and use it as a source, but that's also detected as a cycle (so really good design by Bazel!)
### Which operating system are you running Bazel on?
osx and linux
### What is the output of `bazel info release`?
release 5.3.0
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
```text
private
```
### Have you found anything relevant by searching the web?
github issue: https://github.com/bazelbuild/bazel/issues/7311
### Any other information, logs, or outputs that you want to share?
We have a yarn workspace monorepo
Packages currently have the following setup, which always exports the package's content from an src/index; example package:
```
{
"name": "@mpth-ui/page",
"version": "1.0.0",
"main": "./src/index.ts",
"description": "Generic page component",
```
This index file just exports what can be consumed by consumers, so for this example it's:
```
export { Page } from './Page'
export { PageWithRefresh } from './PageWithRefresh'
export type { PageMaxWidth, PageOptions, PageWithRefreshOptions } from './types'
```
During local development we spin up a webpack server for the application we are working on (also part of the monorepo) and use workspace path mappings. So everything works lightning fast. When we had lerna as tooling we just used the npm prebuild hook for every package. Every component extends its tsconfig from the root one.
Our basic workspace ./tsconfig.json looks like:
```
{
"compilerOptions": {
"baseUrl": "./packages",
"paths": {
"@mpth/page": ["ui-components/page/src/*"],
```
This file is not used in our Bazel workflow (an empty file is copied from our WORKSPACE), so package references will always look at their compiled (so cached) versions. That's what makes bazel (and any distributed cache implementation) fast. The reason to have this root config is also to let our editor understand how to find/import stuff.
The things I tried were using genrule, or first archiving and replacing the prefix, but they all led to the circular dependency which bazel prevents.
```
genrule(
name = "transform",
srcs = ["package.json"],
outs = ["tmp/package.json"],
local = 1,
cmd = '''
sed "s|./src/index.ts|./esm/index.ts|" "$<" > "$@"
'''
)
```
I know by design you don't want two processes mutating stuff (because who will win/dictate the truth); that makes it non-deterministic. But the approach followed in a pure typescript project is a common pattern (using path references and mutating a package's package.json in the prebuild stage). So I'm really hoping we can have a feature with an option **I know what I'm doing, do not cycle it** :)
Mangling is only meant for CI, not for anything local, so I can't use a generic postinstall script for it either.
|
process
|
add prebuild option for our typescript setup like we have in npm eco system description of the feature request in our yarn workspace typescript monorepo we have the option before building anything to mangle some files i would like to have the same feature in our bazel build file or if not possible inside for example js library from load build bazel rules nodejs index bzl js library to mangle our package json before after creating its package what underlying problem are you trying to solve with this feature during development we have other pointers in our package s to point the module and its typings to the sources and have typescript path mapping point to the packages its sources our package json per module is our contract so in an npm ecosystem jest webpack vite whetever everything works as implemented you have the option to look at sources or packaged compiled files this is based on the typescript feature path mapping to be there sources or not compiled and during the creation of the package you have the option during pre build stage to adjust things inside its package json for example remove dev dependencies and alter module location in bazel setup this is not possible by design so you cant use sed jq etc to mangle a file its content and output it to the same file before you inject it that will throw a cycle error so genrule isn t helpful for this usecase i tried post install patches option with our workspace yarn install but then it seems we have to create a patch per own package and we have lots of them thats not scalable so didnt look further edit and didnt seem to work for local packages file content stayed the same but otherwise im completely lost in options i also tried to first archive the adjusted file and then strip the path and use it as source but thats also detected as a cycle so really good design of bazel which operating system are you running bazel on osx and linux what is the output of bazel info release release if bazel info release returns 
development version or non git tell us how you built bazel no response what s the output of git remote get url origin git rev parse master git rev parse head text private have you found anything relevant by searching the web github issue any other information logs or outputs that you want to share we have a yarn workspace monorepo packages currently have the following setup which always exports the package its content from an src index example package name mpth ui page version main src index ts description generic page component this index file just exports what can be consumed by consumers so for this example it s export page from page export pagewithrefresh from pagewithrefresh export type pagemaxwidth pageoptions pagewithrefreshoptions from types during local development we spinup a webpack server for the application we are working on also part of the monorepo and use workspace path mappings so everything works lightning fast when we had lerna as tooling we just used the npm prebuild hook for every package every components extends its tsconfig on the root s one our basic workspace tsconfig json looks like compileroptions baseurl packages paths mpth page this file is not used in our bazel workflow empty file is copied from our workspace so package references will always look at their compiled so cached versions thats the whole thing what makes bazel and any distributed cache implementation fast the reason to have this root config is also to let our editor understand how to find import stuff the things i tried was using genrule or first archive and replace prefix but they all led to the circulair workflow which bazel prevents genrule name transform srcs outs local cmd sed s src index ts esm index ts i know by design you don t want two processes mutating stuff because who will win dictate the truth that makes it not deterministic but the approach followed in a pure typescript project is a common pattern using path references and mutating a package its package json 
in prebuild stage so i m really hoping we can have a feature with an option i know what im doing and do not cycle it mangling is only meant for ci not for anything local so i can t use generic postinstall script for it as well
| 1
|
5,764
| 8,601,539,802
|
IssuesEvent
|
2018-11-16 11:10:02
|
shirou/gopsutil
|
https://api.github.com/repos/shirou/gopsutil
|
closed
|
Process.Cmdline() is rising memory on windows
|
os:windows package:process
|
Here is my code:
```
processArray, err := process.Processes()
for _, item := range processArray {
cmd,_ := item.Cmdline() //rising memory
//do something
}
```
I checked the memory; it rises very fast, and item.Cmdline() is the cause, as you can see in the picture:

|
1.0
|
Process.Cmdline() is rising memory on windows - Here is my code:
```
processArray, err := process.Processes()
for _, item := range processArray {
cmd,_ := item.Cmdline() //rising memory
//do something
}
```
I checked the memory; it rises very fast, and item.Cmdline() is the cause, as you can see in the picture:

|
process
|
process cmdline is rising memory on windows there is my code processarray err process processes for item range processarray cmd item cmdline rising memory do something i check memory it rising memory so fast and item cmdline is the cause you can see in the picture
| 1
|
151
| 2,580,989,856
|
IssuesEvent
|
2015-02-13 21:27:58
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
closed
|
[proposal] Blow out the TraversalEngine concept
|
enhancement process
|
`TraversalEngine` is currently an enum of `STANDARD` (OLTP) and `COMPUTER` (OLAP). I think it needs to be an interface with the following methods.
```java
public enum Type { STANDARD, COMPUTER }
public void applyStrategies(final Traversal traversal)
public TraversalStrategies getTraversalStrategies()
public Graph getGraph()
```
`TinkerGraph` will provide `TinkerGraphComputerTraversalEngine`. This guy will:
```java
public void applyStrategies(final Traversal traversal) {
this.getTraversalStrategies().apply(traversal);
if(traversal.isRootTraversal())
traversal.addStep(new ComputerResultStep(this.getGraph().compute().workers(workers).program(TraversalVertexProgram.class).traversal(traversal)));
}
```
When the `Traversal` is created (e.g. `g.V` or `v.out`), the `TraversalEngine` is provided to the traversal by, e.g., `TinkerGraph`, `TinkerVertex`. The user desired execution source is stated as such:
```java
tinkerGraph.setTraversalEngine(TinkerGraphComputerTraversalEngine.create().workers(2).build()) // OLAP
tinkerGraph.setTraversalEngine(StandardTraversalEngine.instance()) // OLTP
tinkerGraph.setTraversalEngine(GremlinServerTraversalEngine.create().host(1.2.3).port(23).remoteEngine(StandardTraversalEngine.instance()).build()) // GremlinServer
```
The first time a Traversal is `next()'d` or `hasNext()'d`, the `TraversalEngine` of the `Traversal` is executed. Note that `ComputeResultStep()` is added by `TinkerGraphComputerTraversalEngine` (see above). This means, at `next()` -- `TinkerGraphComputer` is submitted a `TraversalVertexProgram` and executed. The first `next()` is the result of the `TraversalVertexProgram`. This is sort of how we have it now, though `ComputerResultStep()` "wraps" the submitted traversal.
....there are still lots of holes -- e.g. do we need a `TraversalStrategiesCache`? How does `GremlinServer` come into play? Can different traversals within a traversal have different `TraversalEngines`.......
anywho....just trying to get ideas out.
|
1.0
|
[proposal] Blow out the TraversalEngine concept - `TraversalEngine` is currently an enum of `STANDARD` (OLTP) and `COMPUTER` (OLAP). I think it needs to be an interface with the following methods.
```java
public enum Type { STANDARD, COMPUTER }
public void applyStrategies(final Traversal traversal)
public TraversalStrategies getTraversalStrategies()
public Graph getGraph()
```
`TinkerGraph` will provide `TinkerGraphComputerTraversalEngine`. This guy will:
```java
public void applyStrategies(final Traversal traversal) {
this.getTraversalStrategies().apply(traversal);
if(traversal.isRootTraversal())
traversal.addStep(new ComputerResultStep(this.getGraph().compute().workers(workers).program(TraversalVertexProgram.class).traversal(traversal)));
}
```
When the `Traversal` is created (e.g. `g.V` or `v.out`), the `TraversalEngine` is provided to the traversal by, e.g., `TinkerGraph`, `TinkerVertex`. The user desired execution source is stated as such:
```java
tinkerGraph.setTraversalEngine(TinkerGraphComputerTraversalEngine.create().workers(2).build()) // OLAP
tinkerGraph.setTraversalEngine(StandardTraversalEngine.instance()) // OLTP
tinkerGraph.setTraversalEngine(GremlinServerTraversalEngine.create().host(1.2.3).port(23).remoteEngine(StandardTraversalEngine.instance()).build()) // GremlinServer
```
The first time a Traversal is `next()'d` or `hasNext()'d`, the `TraversalEngine` of the `Traversal` is executed. Note that `ComputeResultStep()` is added by `TinkerGraphComputerTraversalEngine` (see above). This means, at `next()` -- `TinkerGraphComputer` is submitted a `TraversalVertexProgram` and executed. The first `next()` is the result of the `TraversalVertexProgram`. This is sort of how we have it now, though `ComputerResultStep()` "wraps" the submitted traversal.
....there are still lots of holes -- e.g. do we need a `TraversalStrategiesCache`? How does `GremlinServer` come into play? Can different traversals within a traversal have different `TraversalEngines`.......
anywho....just trying to get ideas out.
|
process
|
blow out the traversalengine concept traversalengine is currently an enum of standard oltp and computer olap i think it needs to be an interface with the following methods java public enum type standard computer public void applystrategies final traversal traversal public traversalstrategies gettraversalstrategies public graph getgraph tinkergraph will provide tinkergraphcomputertraversalengine this guy will java public void applystrategies final traversal traversal this gettraversalstrategies apply traversal if traversal isroottraversal traversal addstep new computerresultstep this getgraph compute workers workers program traversalvertexprogram class traversal traversal when the traversal is created e g g v or v out the traversalengine is provided to the traversal by e g tinkergraph tinkervertex the user desired execution source is stated as such java tinkergraph settraversalengine tinkergraphcomputertraversalengine create workers build olap tinkergraph settraversalengine standardtraversalengine instance oltp tinkergraph settraversalengine gremlinservertraversalengine create host port remoteengine standardtraversalengine instance build gremlinserver the first time a traversal is next d or hasnext d the traversalengine of the traversal is executed note that computeresultstep is added by tinkergraphcomputertraversalengine see above this means at next tinkergraphcomputer is submitted a traversalvertexprogram and executed the first next is the result of the traversalvertexprogram this is sort of how we have it now though computerresultstep wraps the submitted traversal there are still lots of holes e g do we need a traversalstrategiescache how does gremlinserver come into play can different traversals within a traversal have different traversalengines anywho just trying to get ideas out
| 1
|
268,639
| 8,409,768,115
|
IssuesEvent
|
2018-10-12 08:30:07
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
hbr.org - video or audio doesn't play
|
browser-firefox priority-normal
|
<!-- @browser: Firefox 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:64.0) Gecko/20100101 Firefox/64.0 -->
<!-- @reported_with: web -->
**URL**: https://hbr.org/video/5582134272001/whiteboard-session-how-does-blockchain-work
**Browser / Version**: Firefox 64.0
**Operating System**: Mac OS X 10.12
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: HBR video player doesn't load or play in Fx Nightly 64
**Steps to Reproduce**:
tried it in Nightly, reloaded, tried it in Chrome.
[](https://webcompat.com/uploads/2018/10/1ebeae24-622f-4798-90a9-f97137232332.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
hbr.org - video or audio doesn't play - <!-- @browser: Firefox 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:64.0) Gecko/20100101 Firefox/64.0 -->
<!-- @reported_with: web -->
**URL**: https://hbr.org/video/5582134272001/whiteboard-session-how-does-blockchain-work
**Browser / Version**: Firefox 64.0
**Operating System**: Mac OS X 10.12
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: HBR video player doesn't load or play in Fx Nightly 64
**Steps to Reproduce**:
tried it in Nightly, reloaded, tried it in Chrome.
[](https://webcompat.com/uploads/2018/10/1ebeae24-622f-4798-90a9-f97137232332.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
hbr org video or audio doesn t play url browser version firefox operating system mac os x tested another browser yes problem type video or audio doesn t play description hbr video player doesn t load or play in fx nightly steps to reproduce tried it in nightly reloaded tried it in chrome from with ❤️
| 0
|
7,252
| 10,418,513,741
|
IssuesEvent
|
2019-09-15 09:17:47
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Add icons to options in Algorithm dialog
|
Feature Request Feedback Processing
|
Author Name: **Harrissou Santanna** (@DelazJ)
Original Redmine Issue: [19217](https://issues.qgis.org/issues/19217)
Redmine category:processing/gui
---
In QGIS dialogs, when e.g. creating a layer, options like geometry type or field type have an associated icon.
It could be nice to have them also shown in the algorithm dialog (see e.g. "Add field to attributes table" for field type - I unfortunately can no longer find the alg which has a drop-down parameter with options like polygon, line, point...)
|
1.0
|
Add icons to options in Algorithm dialog - Author Name: **Harrissou Santanna** (@DelazJ)
Original Redmine Issue: [19217](https://issues.qgis.org/issues/19217)
Redmine category:processing/gui
---
In QGIS dialogs, when e.g. creating a layer, options like geometry type or field type have an associated icon.
It could be nice to have them also shown in the algorithm dialog (see e.g. "Add field to attributes table" for field type - I unfortunately can no longer find the alg which has a drop-down parameter with options like polygon, line, point...)
|
process
|
add icons to options in algorithm dialog author name harrissou santanna delazj original redmine issue redmine category processing gui in qgis dialogs when eg creating a layer options like geometry type or field type have an associated icon it could be nice to have them also shown in algorithm dialog see eg add field to attributes table for field type i unfortunately can no longer find the alg which has a drop down parameter with option like polygon line point
| 1
|
226,179
| 17,970,686,989
|
IssuesEvent
|
2021-09-14 01:18:39
|
hoppscotch/hoppscotch
|
https://api.github.com/repos/hoppscotch/hoppscotch
|
closed
|
Error saving / loading request in graphql (Beta)
|
need testing
|
**Describe the bug**
When I want to make a request for graphql the headers do not work, so I get the error "Not authorized" because the authorization header I have set was not sent.
Also I can't save any request, it is saved incorrectly, it always gives error. As well as trying to load the saved request again.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Graphql'
2. Create a request
3. Click on 'Save'
4. See error
**Expected behavior**
A correct and functional request, since Graphql Playground works fine.
**Screenshots**
https://user-images.githubusercontent.com/56084970/133132933-1786561d-d44b-492d-9895-0bc27f4eba0a.mp4
**Desktop (please complete the following information):**
- OS: Windows
- Browser Edge
- Version 93.0.961.47
**Additional context**
Also the error is that the headers are not sent in the request to graphql.
|
1.0
|
Error saving / loading request in graphql (Beta) - **Describe the bug**
When I want to make a request for graphql the headers do not work, so I get the error "Not authorized" because the authorization header I have set was not sent.
Also I can't save any request, it is saved incorrectly, it always gives error. As well as trying to load the saved request again.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Graphql'
2. Create a request
3. Click on 'Save'
4. See error
**Expected behavior**
A correct and functional request, since Graphql Playground works fine.
**Screenshots**
https://user-images.githubusercontent.com/56084970/133132933-1786561d-d44b-492d-9895-0bc27f4eba0a.mp4
**Desktop (please complete the following information):**
- OS: Windows
- Browser Edge
- Version 93.0.961.47
**Additional context**
Also the error is that the headers are not sent in the request to graphql.
|
non_process
|
error saving loading request in graphql beta describe the bug when i want to make a request for graphql the headers do not work so i get the error not authorized because the authorization header i have set was not sent also i can t save any request it is saved incorrectly it always gives error as well as trying to load the saved request again to reproduce steps to reproduce the behavior go to graphql create petition click on save see error expected behavior a correct and functional request since graphql playground works fine screenshots desktop please complete the following information os windows browser edge version additional context also the error is that the headers are not sent in the request to graphql
| 0
|
19,652
| 26,009,902,796
|
IssuesEvent
|
2022-12-20 23:59:39
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
Replace semver-action in release automation workflow
|
enhancement process
|
### Problem
The `semver-action` causes a lot of warnings
```
Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
```
We need to find an alternative before `set-output` is disabled
### Solution
- Find an alternative github action with functionality parity
### Alternatives
fork the github `semver-action` action and make the recommended changes
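Until a drop-in replacement action turns up, the deprecated `::set-output` calls can also be migrated by hand to the Environment Files mechanism the warning links to. A minimal sketch (the `version` output name and the fallback path are illustrative):

```shell
# Deprecated form this replaces:
#   echo "::set-output name=version::1.2.3"
# New form: append key=value lines to the file named by $GITHUB_OUTPUT.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-/tmp/github_output.txt}"  # fallback for running outside Actions
: > "$GITHUB_OUTPUT"                                      # start from an empty file for this demo
echo "version=1.2.3" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```

In a real workflow step the runner provides `$GITHUB_OUTPUT`, and the value is read back as `steps.<id>.outputs.version`.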
|
1.0
|
Replace semver-action in release automation workflow - ### Problem
The `semver-action` causes a lot of warnings
```
Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
```
We need to find an alternative before `set-output` is disabled
### Solution
- Find an alternative github action with functionality parity
### Alternatives
fork the github `semver-action` action and make the recommended changes
|
process
|
replace semver action in release automation workflow problem the semver action causes a lot of warnings warning the set output command is deprecated and will be disabled soon please upgrade to using environment files for more information see we need to find an alternative before set output is disabled solution find an alternative github action with functionality parity alternatives fork the github semver action action and make the recommended changes
| 1
|
231,522
| 7,633,794,174
|
IssuesEvent
|
2018-05-06 10:18:33
|
citusdata/citus
|
https://api.github.com/repos/citusdata/citus
|
closed
|
Sub-SELECT from reference table in INSERT statement VALUES clause crashes
|
bug priority:high
|
Steps to reproduce:
```
CREATE TABLE test_reference(a int);
SELECT create_reference_table('test_reference');
CREATE TABLE test_distributed(b int);
SELECT create_distributed_table('test_distributed', 'b');
--- this produces a server crash:
INSERT INTO test_reference (a) VALUES ((SELECT b FROM test_distributed));
--- this works:
INSERT INTO test_reference (a) SELECT b FROM test_distributed;
```
Backtrace:
```
#0 pg_strtok (length=length@entry=0x7fffad00838c) at read.c:114
#1 0x000000000064f66a in nodeRead (token=token@entry=0x0, tok_len=<optimized out>, tok_len@entry=0) at read.c:285
#2 0x000000000064f9b8 in stringToNode (str=<optimized out>) at read.c:53
#3 0x00007f27ec73a728 in ColumnNameToColumn (relationId=884935453, columnNodeString=<optimized out>) at utils/distribution_column.c:193
#4 0x00007f27ec729d0c in ModifyQuerySupported (queryTree=<optimized out>, multiShardQuery=0 '\000') at planner/multi_router_planner.c:530
#5 0x00007f27ec72b1b4 in CreateModifyPlan (originalQuery=0x19144c8, query=0x1913c90, plannerRestrictionContext=0x19651b8) at planner/multi_router_planner.c:196
#6 0x00007f27ec728c0c in CreateDistributedPlan (plannerRestrictionContext=0x19651b8, boundParams=0x0, query=0x1913c90, originalQuery=0x19144c8, localPlan=0x19b5f98)
at planner/multi_planner.c:387
#7 multi_planner (parse=0x1913c90, cursorOptions=256, boundParams=0x0) at planner/multi_planner.c:128
#8 0x000000000070558c in pg_plan_query (querytree=querytree@entry=0x1913c90, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:796
#9 0x000000000070566e in pg_plan_queries (querytrees=<optimized out>, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:862
#10 0x0000000000705b0a in exec_simple_query (query_string=0x1912a48 "INSERT INTO test_reference (a) VALUES ((SELECT b FROM test_distributed));") at postgres.c:1027
#11 0x000000000070769a in PostgresMain (argc=<optimized out>, argv=argv@entry=0x189f3c8, dbname=<optimized out>, username=<optimized out>) at postgres.c:4088
#12 0x000000000047b83e in BackendRun (port=0x18a8f40) at postmaster.c:4357
#13 BackendStartup (port=0x18a8f40) at postmaster.c:4029
#14 ServerLoop () at postmaster.c:1753
#15 0x00000000006a11c2 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x183a250) at postmaster.c:1361
#16 0x000000000047cb05 in main (argc=3, argv=0x183a250) at main.c:228
```
Known versions that break: 7.1, 7.2, HEAD (likely more)
This seems to be due to ModifyQuerySupported not handling the reference table case correctly.
|
1.0
|
Sub-SELECT from reference table in INSERT statement VALUES clause crashes - Steps to reproduce:
```
CREATE TABLE test_reference(a int);
SELECT create_reference_table('test_reference');
CREATE TABLE test_distributed(b int);
SELECT create_distributed_table('test_distributed', 'b');
--- this produces a server crash:
INSERT INTO test_reference (a) VALUES ((SELECT b FROM test_distributed));
--- this works:
INSERT INTO test_reference (a) SELECT b FROM test_distributed;
```
Backtrace:
```
#0 pg_strtok (length=length@entry=0x7fffad00838c) at read.c:114
#1 0x000000000064f66a in nodeRead (token=token@entry=0x0, tok_len=<optimized out>, tok_len@entry=0) at read.c:285
#2 0x000000000064f9b8 in stringToNode (str=<optimized out>) at read.c:53
#3 0x00007f27ec73a728 in ColumnNameToColumn (relationId=884935453, columnNodeString=<optimized out>) at utils/distribution_column.c:193
#4 0x00007f27ec729d0c in ModifyQuerySupported (queryTree=<optimized out>, multiShardQuery=0 '\000') at planner/multi_router_planner.c:530
#5 0x00007f27ec72b1b4 in CreateModifyPlan (originalQuery=0x19144c8, query=0x1913c90, plannerRestrictionContext=0x19651b8) at planner/multi_router_planner.c:196
#6 0x00007f27ec728c0c in CreateDistributedPlan (plannerRestrictionContext=0x19651b8, boundParams=0x0, query=0x1913c90, originalQuery=0x19144c8, localPlan=0x19b5f98)
at planner/multi_planner.c:387
#7 multi_planner (parse=0x1913c90, cursorOptions=256, boundParams=0x0) at planner/multi_planner.c:128
#8 0x000000000070558c in pg_plan_query (querytree=querytree@entry=0x1913c90, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:796
#9 0x000000000070566e in pg_plan_queries (querytrees=<optimized out>, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:862
#10 0x0000000000705b0a in exec_simple_query (query_string=0x1912a48 "INSERT INTO test_reference (a) VALUES ((SELECT b FROM test_distributed));") at postgres.c:1027
#11 0x000000000070769a in PostgresMain (argc=<optimized out>, argv=argv@entry=0x189f3c8, dbname=<optimized out>, username=<optimized out>) at postgres.c:4088
#12 0x000000000047b83e in BackendRun (port=0x18a8f40) at postmaster.c:4357
#13 BackendStartup (port=0x18a8f40) at postmaster.c:4029
#14 ServerLoop () at postmaster.c:1753
#15 0x00000000006a11c2 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x183a250) at postmaster.c:1361
#16 0x000000000047cb05 in main (argc=3, argv=0x183a250) at main.c:228
```
Known versions that break: 7.1, 7.2, HEAD (likely more)
This seems to be due to ModifyQuerySupported not handling the reference table case correctly.
|
non_process
|
sub select from reference table in insert statement values clause crashes steps to reproduce create table test reference a int select create reference table test reference create table test distributed b int select create distributed table test distributed b this produces a server crash insert into test reference a values select b from test distributed this works insert into test reference a select b from test distributed backtrace pg strtok length length entry at read c in noderead token token entry tok len tok len entry at read c in stringtonode str at read c in columnnametocolumn relationid columnnodestring at utils distribution column c in modifyquerysupported querytree multishardquery at planner multi router planner c in createmodifyplan originalquery query plannerrestrictioncontext at planner multi router planner c in createdistributedplan plannerrestrictioncontext boundparams query originalquery localplan at planner multi planner c multi planner parse cursoroptions boundparams at planner multi planner c in pg plan query querytree querytree entry cursoroptions cursoroptions entry boundparams boundparams entry at postgres c in pg plan queries querytrees cursoroptions cursoroptions entry boundparams boundparams entry at postgres c in exec simple query query string insert into test reference a values select b from test distributed at postgres c in postgresmain argc argv argv entry dbname username at postgres c in backendrun port at postmaster c backendstartup port at postmaster c serverloop at postmaster c in postmastermain argc argc entry argv argv entry at postmaster c in main argc argv at main c known versions that break head likely more this seems to be due to modifyquerysupported not handling the reference table case correctly
| 0
|
21,574
| 29,929,127,843
|
IssuesEvent
|
2023-06-22 08:13:47
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
There was a resource authorization issue
|
product-feedback cba Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
|
I have an Azure DevOps job where I am using a template task. The pipeline fails with:
There was a resource authorization issue: "The pipeline is not valid. Job createResources: Step AzureCLI2 input connectedServiceNameARM references service connection $(serviceConnectionName) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz. Job createResources: Step AzureCLI3 input connectedServiceNameARM references service connection $(serviceConnectionName) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
If I click on "authorize resource" and rerun the pipeline, the same happens.
How do I fix this?
On further testing: If I have a pipeline > job template > task template, I get the error.
If I have a pipeline > job *no template* > task template, this works.
Is this a bug?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Define YAML resources for Azure Pipelines - Azure Pipelines](https://learn.microsoft.com/en-gb/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema#troubleshooting-authorization-for-a-yaml-pipeline)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/resources.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
There was a resource authorization issue - I have an Azure DevOps job where I am using a template task. The pipeline fails with:
There was a resource authorization issue: "The pipeline is not valid. Job createResources: Step AzureCLI2 input connectedServiceNameARM references service connection $(serviceConnectionName) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz. Job createResources: Step AzureCLI3 input connectedServiceNameARM references service connection $(serviceConnectionName) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
If I click on "authorize resource" and rerun the pipeline, the same happens.
How do I fix this?
On further testing: If I have a pipeline > job template > task template, I get the error.
If I have a pipeline > job *no template* > task template, this works.
Is this a bug?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Define YAML resources for Azure Pipelines - Azure Pipelines](https://learn.microsoft.com/en-gb/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema#troubleshooting-authorization-for-a-yaml-pipeline)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/resources.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
there was a resource authorization issue i have an azure devops job where i am using a template task the pipeline fails with there was a resource authorization issue the pipeline is not valid job createresources step input connectedservicenamearm references service connection serviceconnectionname which could not be found the service connection does not exist or has not been authorized for use for authorization details refer to job createresources step input connectedservicenamearm references service connection serviceconnectionname which could not be found the service connection does not exist or has not been authorized for use for authorization details refer to if i click on authorize resource and reurn the pipeline the same happens how do i fix this on further testing if i have a pipeline job template task template i get the error if i have a pipeline job no template task template this works is this a bug document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service azure devops pipelines sub service azure devops pipelines process github login juliakm microsoft alias jukullam
| 1
|
13,301
| 15,773,718,804
|
IssuesEvent
|
2021-03-31 23:47:53
|
googleapis/python-spanner-django
|
https://api.github.com/repos/googleapis/python-spanner-django
|
opened
|
`nox > * lint_setup_py: failed` in Kokoro
|
type: process
|
All Kokoro build seem to fail with this error. Example failure [1](https://source.cloud.google.com/results/invocations/18fbff30-c574-4ec4-b526-123c166f4a6b/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-spanner-django%2Fpresubmit%2Fworker_1/log), [2](https://source.cloud.google.com/results/invocations/7aeec0bd-70c1-4150-9d9f-46e4a412d908/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-spanner-django%2Fpresubmit%2Fworker_0/log).
Logs surrounding the failure:
```
nox > Session blacken was successful.
nox > Running session lint_setup_py
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint_setup_py
nox > python -m pip install docutils pygments
nox > python setup.py check --restructuredtext --strict
running check
warning: check: Duplicate implicit target name: "contributing".
error: Please correct your package.
nox > Command python setup.py check --restructuredtext --strict failed with exit code 1
nox > Session lint_setup_py failed.
nox > Running session docs
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/docs
nox > python -m pip install -e .
nox > python -m pip install sphinx<3.0.0 alabaster recommonmark
nox > sphinx-build -W -T -N -b html -d docs/_build/doctrees/ docs/ docs/_build/html/
```
Affecting merging of #604.
|
1.0
|
`nox > * lint_setup_py: failed` in Kokoro - All Kokoro build seem to fail with this error. Example failure [1](https://source.cloud.google.com/results/invocations/18fbff30-c574-4ec4-b526-123c166f4a6b/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-spanner-django%2Fpresubmit%2Fworker_1/log), [2](https://source.cloud.google.com/results/invocations/7aeec0bd-70c1-4150-9d9f-46e4a412d908/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-spanner-django%2Fpresubmit%2Fworker_0/log).
Logs surrounding the failure:
```
nox > Session blacken was successful.
nox > Running session lint_setup_py
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint_setup_py
nox > python -m pip install docutils pygments
nox > python setup.py check --restructuredtext --strict
running check
warning: check: Duplicate implicit target name: "contributing".
error: Please correct your package.
nox > Command python setup.py check --restructuredtext --strict failed with exit code 1
nox > Session lint_setup_py failed.
nox > Running session docs
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/docs
nox > python -m pip install -e .
nox > python -m pip install sphinx<3.0.0 alabaster recommonmark
nox > sphinx-build -W -T -N -b html -d docs/_build/doctrees/ docs/ docs/_build/html/
```
Affecting merging of #604.
|
process
|
nox lint setup py failed in kokoro all kokoro build seem to fail with this error example failure logs surrounding the failure nox session blacken was successful nox running session lint setup py nox creating virtual environment virtualenv using in nox lint setup py nox python m pip install docutils pygments nox python setup py check restructuredtext strict running check warning check duplicate implicit target name contributing error please correct your package nox command python setup py check restructuredtext strict failed with exit code nox session lint setup py failed nox running session docs nox creating virtual environment virtualenv using in nox docs nox python m pip install e nox python m pip install sphinx alabaster recommonmark nox sphinx build w t n b html d docs build doctrees docs docs build html affecting merging of
| 1
|
36,643
| 15,043,365,401
|
IssuesEvent
|
2021-02-03 00:33:13
|
microsoft/vscode-cpptools
|
https://api.github.com/repos/microsoft/vscode-cpptools
|
closed
|
Using compileCommands with VS 2017 still adds VS 2019 paths to the browse.path and IntelliSense uses VS 2019 for VS 2017 headers
|
Feature: Compile Commands Language Service bug fixed (release pending)
|
Use a compile_commands.json that references a cl.exe with VS 2017, but have VS 2019 installed too.
Bug1: The VS 2019 paths get added to the browse.path still (in addition to the VS 2017 ones). The workaround is to set compilerPath to be blank or the VS 2017 compilerPath.
Bug2: Also, when I open a VS 2017 header, it tries to use VS 2019 as the compilerPath instead of the compile_commands.json compiler (VS 2017):
```
[ C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023\include\xstring ]:
Process ID: 2092
Memory Usage: 17 MB
Compiler Path: C:/Program Files (x86)/Microsoft Visual Studio/2019/Preview/VC/Tools/MSVC/14.27.28826/bin/Hostx64/x64/cl.exe
Includes:
C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO\2019\PREVIEW\VC\TOOLS\MSVC\14.27.28826\INCLUDE
C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO\2019\PREVIEW\VC\TOOLS\MSVC\14.27.28826\ATLMFC\INCLUDE
```
which causes squiggles to appear.
Expected: It seems like it should behave as if the compilerPath is the one used in compile_commands.json if compilerPath isn't set.
|
1.0
|
Using compileCommands with VS 2017 still adds VS 2019 paths to the browse.path and IntelliSense uses VS 2019 for VS 2017 headers - Use a compile_commands.json that references a cl.exe with VS 2017, but have VS 2019 installed too.
Bug1: The VS 2019 paths get added to the browse.path still (in addition to the VS 2017 ones). The workaround is to set compilerPath to be blank or the VS 2017 compilerPath.
Bug2: Also, when I open a VS 2017 header, it tries to use VS 2019 as the compilerPath instead of the compile_commands.json compiler (VS 2017):
```
[ C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023\include\xstring ]:
Process ID: 2092
Memory Usage: 17 MB
Compiler Path: C:/Program Files (x86)/Microsoft Visual Studio/2019/Preview/VC/Tools/MSVC/14.27.28826/bin/Hostx64/x64/cl.exe
Includes:
C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO\2019\PREVIEW\VC\TOOLS\MSVC\14.27.28826\INCLUDE
C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO\2019\PREVIEW\VC\TOOLS\MSVC\14.27.28826\ATLMFC\INCLUDE
```
which causes squiggles to appear.
Expected: It seems like it should behave as if the compilerPath is the one used in compile_commands.json if compilerPath isn't set.
|
non_process
|
using compilecommands with vs still adds vs paths to the browse path and intellisense uses vs for vs headers use a compile commands json that references a cl exe with vs but have vs installed too the vs paths get added to the browse path still in addition to the vs ones the workaround is to set compilerpath to be blank or the vs compilerpath also when i open a vs header it tries to use vs as the compilerpath instead of the compile commands json compiler vs process id memory usage mb compiler path c program files microsoft visual studio preview vc tools msvc bin cl exe includes c program files microsoft visual studio preview vc tools msvc include c program files microsoft visual studio preview vc tools msvc atlmfc include which causes squiggles to appear expected it seems like it should behave as if the compilerpath is the one used in compile commands json if compilerpath isn t set
| 0
|
291,867
| 21,942,676,701
|
IssuesEvent
|
2022-05-23 19:54:39
|
spring-projects/spring-boot
|
https://api.github.com/repos/spring-projects/spring-boot
|
closed
|
For consistency with Spring Security's reference documentation, use Lambda-based APIs in Spring Security examples
|
type: documentation
|
This was raised on #29932 which added an example for securing H2's web console, but we should do a broader review as there may be other examples that should also be updated.
|
1.0
|
For consistency with Spring Security's reference documentation, use Lambda-based APIs in Spring Security examples - This was raised on #29932 which added an example for securing H2's web console, but we should do a broader review as there may be other examples that should also be updated.
|
non_process
|
for consistency with spring security s reference documentation use lambda based apis in spring security examples this was raised on which added an example for securing s web console but we should do a broader review as there may be other examples that should also be updated
| 0
|
23,196
| 6,388,901,810
|
IssuesEvent
|
2017-08-03 16:29:27
|
RSS-Bridge/rss-bridge
|
https://api.github.com/repos/RSS-Bridge/rss-bridge
|
closed
|
[request-pull] Ignore promoted tweets in TwitterBridge
|
code inclusion request
|
The following changes since commit a4b9611e66d3095c943a5c63306965d4e4cbf839:
[phpcs] Add missing rules (2017-07-29 19:55:12 +0200)
are available in the git repository at:
https://framagit.org/peetah/rss-bridge.git/ TwitterWithoutPromotedTweets
for you to fetch changes up to 485b465a2456bc5d10b7526cf94f9ae0439992ab:
[TwitterBridge] ignore promoted tweets (2017-08-03 00:44:21 +0200)
----------------------------------------------------------------
Pierre Mazière (1):
[TwitterBridge] ignore promoted tweets
bridges/TwitterBridge.php | 8 ++++++++
1 file changed, 8 insertions(+)
|
1.0
|
[request-pull] Ignore promoted tweets in TwitterBridge - The following changes since commit a4b9611e66d3095c943a5c63306965d4e4cbf839:
[phpcs] Add missing rules (2017-07-29 19:55:12 +0200)
are available in the git repository at:
https://framagit.org/peetah/rss-bridge.git/ TwitterWithoutPromotedTweets
for you to fetch changes up to 485b465a2456bc5d10b7526cf94f9ae0439992ab:
[TwitterBridge] ignore promoted tweets (2017-08-03 00:44:21 +0200)
----------------------------------------------------------------
Pierre Mazière (1):
[TwitterBridge] ignore promoted tweets
bridges/TwitterBridge.php | 8 ++++++++
1 file changed, 8 insertions(+)
|
non_process
|
ignore promoted tweets in twitterbridge the following changes since commit add missing rules are available in the git repository at twitterwithoutpromotedtweets for you to fetch changes up to ignore promoted tweets pierre mazière ignore promoted tweets bridges twitterbridge php file changed insertions
| 0
|
9,560
| 12,518,421,478
|
IssuesEvent
|
2020-06-03 12:55:35
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Postgres array_agg function
|
Database/Postgres Priority:P3 Querying/Processor Type:New Feature
|
Does anyone have any recommendations on using an aggregated array function with Metabase?
I tried using Postgres' "array_agg", but Metabase doesn't seem to recognize it.
|
1.0
|
Postgres array_agg function - Does anyone have any recommendations on using an aggregated array function with Metabase?
I tried using Postgres' "array_agg", but Metabase doesn't seem to recognize it.
|
process
|
postgres array agg function does anyone have any recommendations on using an aggregated array function with metabase i tried using postgres array agg but metabase doesn t seem to recognize it
| 1
|
11,521
| 14,401,383,918
|
IssuesEvent
|
2020-12-03 13:39:42
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
p_any_usernames and p_any_emails
|
enhancement p1 team:data processing
|
### Describe the ideal solution
New standard fields for usernames and email addresses to make writing rules quicker, primarily for SaaS logs.
### References
Some sample mappings below (not comprehensive):
GSuite.Reports => actor['email']
AWS.CloudTrail => userIdentity['userName']
OneLogin.Events => user_name
GitLab.Production and API => username
## -
### Additional context
Let's prioritize the SaaS-based logs, then move on to the Cloud/Host logs.
|
1.0
|
p_any_usernames and p_any_emails - ### Describe the ideal solution
New standard fields for usernames and email addresses to make writing rules quicker, primarily for SaaS logs.
### References
Some sample mappings below (not comprehensive):
GSuite.Reports => actor['email']
AWS.CloudTrail => userIdentity['userName']
OneLogin.Events => user_name
GitLab.Production and API => username
## -
### Additional context
Let's prioritize the SaaS-based logs, then move on to the Cloud/Host logs.
|
process
|
p any usernames and p any emails describe the ideal solution new standard fields for usernames and email addresses to make writing rules quicker primarily for saas logs references some sample mappings below not comprehensive gsuite reports actor aws cloudtrail useridentity onelogin events user name gitlab production and api username additional context let s prioritize the saas based logs then move on to the cloud host logs
| 1
|
80,687
| 30,485,212,546
|
IssuesEvent
|
2023-07-18 01:11:19
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Sessions manager's inline action buttons are not aligned
|
T-Defect S-Minor Help Wanted O-Frequent
|
### Steps to reproduce
1. Open `Sessions` user settings tab
2. Check `Sessions` on the filtered device list header
### Outcome
#### What did you expect?
The inline buttons should be aligned to the center

#### What happened instead?
They are aligned to the top

### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
develop.element.io
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Sessions manager's inline action buttons are not aligned - ### Steps to reproduce
1. Open `Sessions` user settings tab
2. Check `Sessions` on the filtered device list header
### Outcome
#### What did you expect?
The inline buttons should be aligned to the center

#### What happened instead?
They are aligned to the top

### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
develop.element.io
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
non_process
|
sessions manager s inline action buttons are not aligned steps to reproduce open sessions user settings tab check sessions on the filtered device list header outcome what did you expect the inline buttons should be aligned to the center what happened instead they are aligned to the top operating system no response browser information no response url for webapp develop element io application version no response homeserver no response will you send logs no
| 0
|
3,770
| 6,737,105,923
|
IssuesEvent
|
2017-10-19 08:09:11
|
openvstorage/volumedriver
|
https://api.github.com/repos/openvstorage/volumedriver
|
closed
|
Integrate our FUSE driver with the edge
|
process_wontfix state_verification
|
One (wild) idea to support OpenStack is to have a fuse mountpoint on each NOVA host. The fuse would in that case interact with the edge on the same host. The edge would in that case interact with the voldrv.
|
1.0
|
Integrate our FUSE driver with the edge - One (wild) idea to support OpenStack is to have a fuse mountpoint on each NOVA host. The fuse would in that case interact with the edge on the same host. The edge would in that case interact with the voldrv.
|
process
|
integrate our fuse driver with the edge one wild idea to support openstack is to have a fuse mountpoint on each nova host the fuse would in that case interact with the edge on the same host the edge would in that case interact with the voldrv
| 1
|
301,507
| 22,760,600,301
|
IssuesEvent
|
2022-07-07 20:44:37
|
tjhollan/ReinforcementLearning
|
https://api.github.com/repos/tjhollan/ReinforcementLearning
|
closed
|
Neural Networks notebook
|
documentation task
|
Create the notebook for the Neural Networks to explain the design process.
|
1.0
|
Neural Networks notebook - Create the notebook for the Neural Networks to explain the design process.
|
non_process
|
neural networks notebook create the notebook for the neural networks to explain the design process
| 0
|
11,815
| 14,630,759,698
|
IssuesEvent
|
2020-12-23 18:23:27
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
DT crash when generating thumbnails
|
bug: pending priority: high scope: image processing
|
<!-- IMPORTANT
Bug reports that do not make an effort to help the developers will be closed without notice.
Make sure that this bug has not already been opened and/or closed by searching the issues on GitHub, as duplicate bug reports will be closed.
A bug report simply stating that Darktable crashes is unhelpful, so please fill in most of the items below and provide detailed information.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Some files create a crash when generating thumbnails (at start of DT in lighttable).
But if DT is run with "-t 1" everything works normally.
**To Reproduce**
<!-- Provide detailed steps that can reproduce the behavior, such as:
1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error
-->
Launch DT go in some directory where cache thumbnails have not been generated. See result of valgrind output attached
[dt_crash.txt](https://github.com/darktable-org/darktable/files/5733936/dt_crash.txt)
**Platform (please complete the following information):**
- Darktable Version: 3.5.0+127~g54fc5b4dd <!-- [e.g. 2.6.0] -->
- OS: fedora 33<!-- [e.g. Windows 8.1, Gentoo Linux] -->
- same result with or without opencl <!-- OpenCL activated or no? -->
- <!-- Which graphics card and driver version -->
**Additional context**
<!-- Add any other context about the problem here, for example:
- Can you reproduce with another Darktable version?
- Can you reproduce with a RAW or Jpeg or both?
- Are the steps above reproduce with a fresh edit (removing history)?
- Attach an XMP if this is necessary
- Did you compile Darktable yourself? If so which compiler was used, with what options?
- Is the issue still present using an empty/new config-dir
-->
|
1.0
|
DT crash when generating thumbnails - <!-- IMPORTANT
Bug reports that do not make an effort to help the developers will be closed without notice.
Make sure that this bug has not already been opened and/or closed by searching the issues on GitHub, as duplicate bug reports will be closed.
A bug report simply stating that Darktable crashes is unhelpful, so please fill in most of the items below and provide detailed information.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Some files create a crash when generating thumbnails (at start of DT in lighttable).
But if DT is run with "-t 1" everything works normally.
**To Reproduce**
<!-- Provide detailed steps that can reproduce the behavior, such as:
1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error
-->
Launch DT go in some directory where cache thumbnails have not been generated. See result of valgrind output attached
[dt_crash.txt](https://github.com/darktable-org/darktable/files/5733936/dt_crash.txt)
**Platform (please complete the following information):**
- Darktable Version: 3.5.0+127~g54fc5b4dd <!-- [e.g. 2.6.0] -->
- OS: fedora 33<!-- [e.g. Windows 8.1, Gentoo Linux] -->
- same result with or without opencl <!-- OpenCL activated or no? -->
- <!-- Which graphics card and driver version -->
**Additional context**
<!-- Add any other context about the problem here, for example:
- Can you reproduce with another Darktable version?
- Can you reproduce with a RAW or Jpeg or both?
- Are the steps above reproduce with a fresh edit (removing history)?
- Attach an XMP if this is necessary
- Did you compile Darktable yourself? If so which compiler was used, with what options?
- Is the issue still present using an empty/new config-dir
-->
|
process
|
dt crash when generating thumbnails important bug reports that do not make an effort to help the developers will be closed without notice make sure that this bug has not already been opened and or closed by searching the issues on github as duplicate bug reports will be closed a bug report simply stating that darktable crashes is unhelpful so please fill in most of the items below and provide detailed information describe the bug some files create a crash when generating thumbnails at start of dt in lightable but if dt is run with t everything works normally to reproduce provide detailed steps that can reproduce the behavior such as go to click on scroll down to see error launch dt go in some directory where cache thumbnails have not been generated see result of valgrind output attached platform please complete the following information darktable version os fedora same result with or without opencl additional context add any other context about the problem here for example can you reproduce with another darktable version can you reproduce with a raw or jpeg or both are the steps above reproduce with a fresh edit removing history attach an xmp if this is necessary did you compile darktable yourself if so which compiler was used with what options is the issue still present using an empty new config dir
| 1
|
113,264
| 14,403,977,854
|
IssuesEvent
|
2020-12-03 16:42:39
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
Write up tasks for developers for Storybook pilot
|
design-system-team
|
## Issue Description
Write up the steps for developers who will be part of the storybook pilot
- Use a component — give user instructions on using a component
- Update a component — we give a specific component to update
- Fix responsiveness of a component — need to develop a table
---
## Tasks
- [ ] Write up instructions for the tasks with link
|
1.0
|
Write up tasks for developers for Storybook pilot - ## Issue Description
Write up the steps for developers who will be part of the storybook pilot
- Use a component — give user instructions on using a component
- Update a component — we give a specific component to update
- Fix responsiveness of a component — need to develop a table
---
## Tasks
- [ ] Write up instructions for the tasks with link
|
non_process
|
write up tasks for developers for storybook pilot issue description write up the steps for developers who will be part of the storybook pilot use a component — give user instructions on using a component update a component — we give a specific component to update fix responsiveness of a component — need to develop a table tasks write up instructions for the tasks with link
| 0
|
67,302
| 8,124,369,841
|
IssuesEvent
|
2018-08-16 17:20:01
|
MetaMask/metamask-extension
|
https://api.github.com/repos/MetaMask/metamask-extension
|
closed
|
Preview data and function
|
L03.0-oldUI N00-needsDesign P2-sooner T09-featureRequest T18-ConfirmableAction
|

I see "Data included: 36 bytes".
How can I inspect the 36 bytes that will be included?
|
1.0
|
Preview data and function - 
I see "Data included: 36 bytes".
How can I inspect the 36 bytes that will be included?
|
non_process
|
preview data and function i see data included bytes how can i inspect the bytes that will be included
| 0
|
110,606
| 16,981,711,235
|
IssuesEvent
|
2021-06-30 09:40:41
|
CatalystOne/library-quick-filter
|
https://api.github.com/repos/CatalystOne/library-quick-filter
|
closed
|
CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-31597 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: library-quick-filter/package.json</p>
<p>Path to vulnerable library: library-quick-filter/node_modules/xmlhttprequest-ssl</p>
<p>
Dependency Hierarchy:
- karma-5.0.9.tgz (Root Library)
- socket.io-2.3.0.tgz
- socket.io-client-2.3.0.tgz
- engine.io-client-3.4.3.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/CatalystOne/library-quick-filter/commits/63f1083d30374726be65e2d676335932e77224c6">63f1083d30374726be65e2d676335932e77224c6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
<p>Publish Date: 2021-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p>
<p>Release Date: 2021-04-23</p>
<p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - autoclosed - ## CVE-2021-31597 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: library-quick-filter/package.json</p>
<p>Path to vulnerable library: library-quick-filter/node_modules/xmlhttprequest-ssl</p>
<p>
Dependency Hierarchy:
- karma-5.0.9.tgz (Root Library)
- socket.io-2.3.0.tgz
- socket.io-client-2.3.0.tgz
- engine.io-client-3.4.3.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/CatalystOne/library-quick-filter/commits/63f1083d30374726be65e2d676335932e77224c6">63f1083d30374726be65e2d676335932e77224c6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
<p>Publish Date: 2021-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p>
<p>Release Date: 2021-04-23</p>
<p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in xmlhttprequest ssl tgz autoclosed cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file library quick filter package json path to vulnerable library library quick filter node modules xmlhttprequest ssl dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in head commit a href found in base branch master vulnerability details the xmlhttprequest ssl package before for node js disables ssl certificate validation by default because rejectunauthorized when the property exists but is undefined is considered to be false within the https request function of node js in other words no certificate is ever rejected publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest ssl step up your open source security game with whitesource
| 0
|
37,959
| 2,832,546,855
|
IssuesEvent
|
2015-05-25 09:02:46
|
HGustavs/LenaSYS
|
https://api.github.com/repos/HGustavs/LenaSYS
|
opened
|
Character in regular expression is unsafe
|
All highPriority question
|
The character - was added to the regular expression in dugga.js at some early point. The - character is a pretty basic character to try when performing SQL injections, especially seeing if -- works and is interpreted as a comment by the site, in which case SQL code could be injected. Is this a character that we want to or can allow without any other precautions?
|
1.0
|
Character in regular expression is unsafe - The character - was added to the regular expression in dugga.js at some early point. The - character is a pretty basic character to try when performing SQL injections, especially seeing if -- works and is interpreted as a comment by the site, in which case SQL code could be injected. Is this a character that we want to or can allow without any other precautions?
|
non_process
|
character in regular expression is unsafe the character was added to the regular expression in dugga js at some early point the character is a pretty basic character to try when performing sql injections especially seeing if works and is interpreted as a comment by the site in which case sql code could be injected is this a character that we want to or can allow without any other precautions
| 0
|
20,084
| 26,582,786,083
|
IssuesEvent
|
2023-01-22 17:09:22
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Rotate and Perspective doesn't update display
|
priority: high reproduce: confirmed scope: UI scope: image processing bug: pending
|
Rotating an image by right clicking and drawing, or adjusting the rotation slider, isn't applied until the module is minimized. When the module is opened again the rotation has been applied and the rectangle is displayed to adjust the portion of the image retained.
**To Reproduce**
1. Open an image in darkroom view
2. Open the rotate and perspective module
3. Right click on the image and draw a slanted line across the image. Release the right mouse button.
4. Notice the image doesn't update.
5. Minimize the module. Notice the image updates
6. Reopen the module. Notice the rectangle is displayed and adjustable to select the portion of the image.
**Expected behavior**
Image should update as rotation parameters are adjusted.
**Which commit introduced the error**
I looked at the recent changes to ashift.c and @dterrahe made a recent change to not recalculate crop when the module is focused. I wonder if that somehow interfered with the rotation updating. I'll try and bisect later today.
**Platform**
* darktable version : 4.3.0+277~g5890e107f
* OS : Linux
* Linux - Distro : Ubuntu 22.04
* Memory : 32G
* Graphics card : Nvidia 3070
* Graphics driver : Nvidia 525
* OpenCL installed : yes
* OpenCL activated : yes
* Xorg : yes
* Desktop : gnome
* GTK+ : 3.24.5
* gcc : 11
* cflags :
* CMAKE_BUILD_TYPE : Release
**Additional context**
_Please provide any additional information you think may be useful, for example:_
- Can you reproduce with another darktable version(s)? 4.2 works correctly
- Can you reproduce with a RAW or Jpeg or both? **both**
- Are the steps above reproducible with a fresh edit (i.e. after discarding history)? **yes**
|
1.0
|
Rotate and Perspective doesn't update display - Rotating an image by right clicking and drawing, or adjusting the rotation slider, isn't applied until the module is minimized. When the module is opened again the rotation has been applied and the rectangle is displayed to adjust the portion of the image retained.
**To Reproduce**
1. Open an image in darkroom view
2. Open the rotate and perspective module
3. Right click on the image and draw a slanted line across the image. Release the right mouse button.
4. Notice the image doesn't update.
5. Minimize the module. Notice the image updates
6. Reopen the module. Notice the rectangle is displayed and adjustable to select the portion of the image.
**Expected behavior**
Image should update as rotation parameters are adjusted.
**Which commit introduced the error**
I looked at the recent changes to ashift.c and @dterrahe made a recent change to not recalculate crop when the module is focused. I wonder if that somehow interfered with the rotation updating. I'll try and bisect later today.
**Platform**
* darktable version : 4.3.0+277~g5890e107f
* OS : Linux
* Linux - Distro : Ubuntu 22.04
* Memory : 32G
* Graphics card : Nvidia 3070
* Graphics driver : Nvidia 525
* OpenCL installed : yes
* OpenCL activated : yes
* Xorg : yes
* Desktop : gnome
* GTK+ : 3.24.5
* gcc : 11
* cflags :
* CMAKE_BUILD_TYPE : Release
**Additional context**
_Please provide any additional information you think may be useful, for example:_
- Can you reproduce with another darktable version(s)? 4.2 works correctly
- Can you reproduce with a RAW or Jpeg or both? **both**
- Are the steps above reproducible with a fresh edit (i.e. after discarding history)? **yes**
|
process
|
rotate and perspective doesn t update display rotating an image by right clicking and drawing or adjusting the rotation slider isn t applied until the module is minimized when the module is opened again the rotation has been applied and the rectangle is displayed to adjust the portion of the image retained to reproduce open an image in darkroom view open the rotate and perspective module right click on the image and draw a slanted line across the image release the right mouse button notice the image doesn t update minimize the module notice the image updates reopen the module notice the rectangle is displayed and adjustable to select the portion of the image expected behavior image should update as rotation parameters are adjusted which commit introduced the error i looked at the recent changes to ashift c and dterrahe made a recent change to not recalculate crop when the module is focused i wonder if that somehow interfered with the rotation updating i ll try and bisect later today platform darktable version os linux linux distro ubuntu memory graphics card nvidia graphics driver nvidia opencl installed yes opencl activated yes xorg yes desktop gnome gtk gcc cflags cmake build type release additional context please provide any additional information you think may be useful for example can you reproduce with another darktable version s works correctly can you reproduce with a raw or jpeg or both both are the steps above reproducible with a fresh edit i e after discarding history yes
| 1
|
8,816
| 2,612,901,360
|
IssuesEvent
|
2015-02-27 17:24:15
|
chrsmith/windows-package-manager
|
https://api.github.com/repos/chrsmith/windows-package-manager
|
closed
|
Firefox won't update or uninstall
|
auto-migrated Type-Defect
|
```
What steps will reproduce the problem?
1. install FF 3.6.13 (using Npackd 1.14)
2. try to update or uninstall FF
3. :
---------------------------
Error
---------------------------
Uninstalling Firefox 3.6.13: Process C:/Program Files
(x86)/Npackd/org.mozilla.Firefox-3.6.13\.WPM\Uninstall.bat exited with the code
1
---------------------------
What is the expected output? What do you see instead?
I want update or uninstall so I can install manually or outside npackd
What version of the product are you using? On what operating system?
latest (1.15.6) on win7 x64
Please provide any additional information below.
```
Original issue reported on code.google.com by `mr.d...@gmail.com` on 2 May 2011 at 5:23
|
1.0
|
Firefox won't update or uninstall - ```
What steps will reproduce the problem?
1. install FF 3.6.13 (using Npackd 1.14)
2. try to update or uninstall FF
3. :
---------------------------
Error
---------------------------
Uninstalling Firefox 3.6.13: Process C:/Program Files
(x86)/Npackd/org.mozilla.Firefox-3.6.13\.WPM\Uninstall.bat exited with the code
1
---------------------------
What is the expected output? What do you see instead?
I want update or uninstall so I can install manually or outside npackd
What version of the product are you using? On what operating system?
latest (1.15.6) on win7 x64
Please provide any additional information below.
```
Original issue reported on code.google.com by `mr.d...@gmail.com` on 2 May 2011 at 5:23
|
non_process
|
firefox won t update or uninstall what steps will reproduce the problem install ff using npackd try to update or uninstall ff error uninstalling firefox process c program files npackd org mozilla firefox wpm uninstall bat exited with the code what is the expected output what do you see instead i want update or uninstall so i can install manually or outside npackd what version of the product are you using on what operating system latest on please provide any additional information below original issue reported on code google com by mr d gmail com on may at
| 0
|
590,402
| 17,777,226,731
|
IssuesEvent
|
2021-08-30 20:56:30
|
o3de/o3de
|
https://api.github.com/repos/o3de/o3de
|
opened
|
Feedback: Add tagging system to Asset Browser
|
feature/asset-pipeline kind/feature needs-triage sig/core triage/needs-information priority/major status/investigation WF2
|
[Migrated from JIRA LYN-5329]
*Description:*
Allow user to add new tags or manage tags to their asset for easy search and browse of the assets.
*Details:*
We do have a tagging system today, but it's hard coded. We should allow user to add and customize their own tags in the asset pipeline.
Example: [https://github.com/abstractfactory/openmetadata]
* UX team needs to develop some UX workflows to cater to this request.
* Adding, removing and editing tags in the asset browser experience inside the editor.
*Acceptance Criteria:*
* As a user, I can click on an asset file and add, remove and edit tags for a file.
* This data should be shared in a way that all systems can access and read it. So that when a user does a search across any of the tools, when necessary, it can utilize the tags system created here.
*Additional Information:*
updated AC and details. Waiting on confirmation (8/30/2021)
|
1.0
|
Feedback: Add tagging system to Asset Browser - [Migrated from JIRA LYN-5329]
*Description:*
Allow user to add new tags or manage tags to their asset for easy search and browse of the assets.
*Details:*
We do have a tagging system today, but it's hard coded. We should allow user to add and customize their own tags in the asset pipeline.
Example: [https://github.com/abstractfactory/openmetadata]
* UX team needs to develop some UX workflows to cater to this request.
* Adding, removing and editing tags in the asset browser experience inside the editor.
*Acceptance Criteria:*
* As a user, I can click on an asset file and add, remove and edit tags for a file.
* This data should be shared in a way that all systems can access and read it. So that when a user does a search across any of the tools, when necessary, it can utilize the tags system created here.
*Additional Information:*
updated AC and details. Waiting on confirmation (8/30/2021)
|
non_process
|
feedback add tagging system to asset browser description allow user to add new tags or manage tags to their asset for easy search and browse of the assets details we do have a tagging system today but it s hard coded we should allow user to add and customize their own tags in the asset pipeline example ux team needs to develop some ux workflows to cater to this request adding removing and editing tags in the asset browser experience inside the editor acceptance criteria as a user i can click on an asset file and add remove and edit tags for a file this data should be shared in a way that all systems can access and read it so that when a user does a search across any of the tools when necessary it can utilize the tags system created here additional information updated ac and details waiting on confirmation
| 0
|
170,080
| 14,240,111,285
|
IssuesEvent
|
2020-11-18 21:12:33
|
ha-pu/globaltrends
|
https://api.github.com/repos/ha-pu/globaltrends
|
closed
|
Vignette update
|
documentation
|
# Google Trends
-> What do we compute?
- Location volume
- Global volume -> VOI
- Global distribution -> DOI
# Analyze internationalization of firms
## Setup and start database
## Download control data
- Integrate with text from ## Download data from Google Trends
## Download and compute local volume
-> Explain about local volume = local interest in topic, local exposure
-> download_local_volume = combine:
- download_object
- download_mapping
- compute_score
## Download and compute global volume
-> Explain about global volume = global interest in topic, volume of internationalization
-> Mention abbreviation "VOI"
- download_global_control
- download_global_volume
## Compute degree of internationalization
-> Explain that global distribution = degree of internationalization
-> Mention abbreviation "DOI"
## Exports and plots
- plot_score
- plot_voi_ts
- plot_voi_box
- plot_doi_ts = plot_ts
- plot_doi_box = plot_box
- plot_voi_doi = plot_trend
|
1.0
|
Vignette update - # Google Trends
-> What do we compute?
- Location volume
- Global volume -> VOI
- Global distribution -> DOI
# Analyze internationalization of firms
## Setup and start database
## Download control data
- Integrate with text from ## Download data from Google Trends
## Download and compute local volume
-> Explain about local volume = local interest in topic, local exposure
-> download_local_volume = combine:
- download_object
- download_mapping
- compute_score
## Download and compute global volume
-> Explain about global volume = global interest in topic, volume of internationalization
-> Mention abbreviation "VOI"
- download_global_control
- download_global_volume
## Compute degree of internationalization
-> Explain that global distribution = degree of internationalization
-> Mention abbreviation "DOI"
## Exports and plots
- plot_score
- plot_voi_ts
- plot_voi_box
- plot_doi_ts = plot_ts
- plot_doi_box = plot_box
- plot_voi_doi = plot_trend
|
non_process
|
vignette update google trends what do we compute location volume global volume voi global distribution doi analyze internationalization of firms setup and start database download control data integrate with text from download data from google trends download and compute local volume explain about local volume local interest in topic local exposure download local volume combine download object download mapping compute score download and compute global volume explain about global volume global interest in topic volume of internationalization mention abbreviation voi download global control download global volume compute degree of internationalization explain that global distribution degree of internationalization mention abbreviation doi exports and plots plot score plot voi ts plot voi box plot doi ts plot ts plot doi box plot box plot voi doi plot trend
| 0
|
118,652
| 9,997,588,055
|
IssuesEvent
|
2019-07-12 05:14:15
|
codeforcash/croupier
|
https://api.github.com/repos/codeforcash/croupier
|
closed
|
Big numbers in betting table should be rendered with comma separators
|
🧪testing
|
...for users with large ranges (e.g. 1 - 50000), the large number should be displayed with comma separators.
|
1.0
|
Big numbers in betting table should be rendered with comma separators - ...for users with large ranges (e.g. 1 - 50000), the large number should be displayed with comma separators.
|
non_process
|
big numbers in betting table should be rendered with comma separators for users with large ranges e g the large number should be displayed with comma separators
| 0
|
791,400
| 27,862,423,431
|
IssuesEvent
|
2023-03-21 07:46:14
|
telerik/kendo-ui-core
|
https://api.github.com/repos/telerik/kendo-ui-core
|
closed
|
ListView throws exception when no template handler is supplied
|
C: ListView SEV: High S: Wrappers (ASP.NET Core) Priority 5 Next LIB FP: Planned
|
### Bug report
ListView throws exception when no template handler is supplied
### Reproduction of the problem
Configure a ListView with template id and don't provide a template handler.
### Current behavior
An exception is thrown.
### Expected/desired behavior
No exception is thrown.
### Workaround
Set a ClientTemplateHandler with a dummy value.
```
.ClientTemplateHandler("template")
```
### Environment
* **Kendo UI version:** 2023.1.314
* **Browser:** all
|
1.0
|
ListView throws exception when no template handler is supplied - ### Bug report
ListView throws exception when no template handler is supplied
### Reproduction of the problem
Configure a ListView with template id and don't provide a template handler.
### Current behavior
An exception is thrown.
### Expected/desired behavior
No exception is thrown.
### Workaround
Set a ClientTemplateHandler with a dummy value.
```
.ClientTemplateHandler("template")
```
### Environment
* **Kendo UI version:** 2023.1.314
* **Browser:** all
|
non_process
|
listview throws exception when no template handler is supplied bug report listview throws exception when no template handler is supplied reproduction of the problem configure a listview with template id and don t provide a template handler current behavior an exception is thrown expected desired behavior no exception is thrown workaround set a clienttemplatehandler with a dummy value clienttemplatehandler template environment kendo ui version browser all
| 0
|
38,299
| 12,534,401,152
|
IssuesEvent
|
2020-06-04 19:21:48
|
RG4421/HackShack-Session-Landing-Page
|
https://api.github.com/repos/RG4421/HackShack-Session-Landing-Page
|
opened
|
CVE-2020-11022 (Medium) detected in jquery-1.7.1.min.js
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/HackShack-Session-Landing-Page/node_modules/sockjs/examples/multiplex/index.html</p>
<p>Path to vulnerable library: /HackShack-Session-Landing-Page/node_modules/sockjs/examples/multiplex/index.html,/HackShack-Session-Landing-Page/node_modules/sockjs/examples/hapi/html/index.html,/HackShack-Session-Landing-Page/node_modules/sockjs/examples/echo/index.html,/HackShack-Session-Landing-Page/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/HackShack-Session-Landing-Page/commit/07bd1498ae0f65f0e53050b22c0e2348289e620e">07bd1498ae0f65f0e53050b22c0e2348289e620e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.7.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11022","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-11022 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/HackShack-Session-Landing-Page/node_modules/sockjs/examples/multiplex/index.html</p>
<p>Path to vulnerable library: /HackShack-Session-Landing-Page/node_modules/sockjs/examples/multiplex/index.html,/HackShack-Session-Landing-Page/node_modules/sockjs/examples/hapi/html/index.html,/HackShack-Session-Landing-Page/node_modules/sockjs/examples/echo/index.html,/HackShack-Session-Landing-Page/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/HackShack-Session-Landing-Page/commit/07bd1498ae0f65f0e53050b22c0e2348289e620e">07bd1498ae0f65f0e53050b22c0e2348289e620e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.7.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11022","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm hackshack session landing page node modules sockjs examples multiplex index html path to vulnerable library hackshack session landing page node modules sockjs examples multiplex index html hackshack session landing page node modules sockjs examples hapi html index html hackshack session landing page node modules sockjs examples echo index html hackshack session landing page node modules sockjs examples express x index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery vulnerabilityurl
| 0
|
9,889
| 12,889,978,678
|
IssuesEvent
|
2020-07-13 15:18:02
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[needs-docs][processing] move providers actions into the processing
|
3.0 Automatic new feature Easy Processing ToDocOrNotToDoc?
|
Original commit: https://github.com/qgis/QGIS/commit/c5d9830db2e8bcc9d60bdc1451e3cd318edc9fd5 by web-flow
panel toolbar (#6150)
|
1.0
|
[needs-docs][processing] move providers actions into the processing - Original commit: https://github.com/qgis/QGIS/commit/c5d9830db2e8bcc9d60bdc1451e3cd318edc9fd5 by web-flow
panel toolbar (#6150)
|
process
|
move providers actions into the processing original commit by web flow panel toolbar
| 1
|
532,082
| 15,529,526,226
|
IssuesEvent
|
2021-03-13 15:32:25
|
BookStackApp/BookStack
|
https://api.github.com/repos/BookStackApp/BookStack
|
closed
|
Permission generation fails in certain cases
|
:bug: Bug :factory: Back-End :rocket: Priority
|
### Log
```
[2021-03-11 02:25:41] production.ERROR: Trying to get property 'restricted' of non-object {"userId":5,"exception":"[object] (ErrorException(code: 0): Trying to get property 'restricted' of non-object at /var/www/html/app/Auth/Permissions/PermissionService.php:466)
I also tried regenerating the permissions with the command, but that gave me another error message:
root@bookstack-57b6767f87-cfgnv:/var/www/html# php artisan bookstack:regenerate-permissions
ErrorException : Trying to get property 'restricted' of non-object
at /var/www/html/app/Auth/Permissions/PermissionService.php:466
462| // For pages with a chapter, Check if explicit permissions are set on the Chapter
463| if ($entity->isA('page') && $entity->chapter_id !== 0 && $entity->chapter_id !== '0') {
464| $chapter = $this->getChapter($entity->chapter_id);
465| $hasPermissiveAccessToParents = $hasPermissiveAccessToParents && !$chapter->restricted;
466| if ($chapter->restricted) {
467| $hasExplicitAccessToParents = $this->mapHasActiveRestriction($permissionMap, $chapter, $role, $restrictionAction);
468| }
469| }
470|
Exception trace:
1 Illuminate\Foundation\Bootstrap\HandleExceptions::handleError("Trying to get property 'restricted' of non-object", "/var/www/html/app/Auth/Permissions/PermissionService.php")
/var/www/html/app/Auth/Permissions/PermissionService.php:466
```
### Findings
Can replicate with the following:
- You delete a page within a chapter.
- Move the parent chapter of the deleted page to a different book. (Deleted page book_id and chapter book_id become mis-aligned at this point).
- Delete that chapter.
- Perform a book sort operation where the original deleted page is included and a change is made.
Primary issue is the mis-alignment of the chapter and page book_id, caused due to page book move actions not taking into account deleted pages.
The error point of the permission service could do with a little attention also, But any heavy refactoring should be done in #2633 so this can be safely part of a patch release.
|
1.0
|
Permission generation fails in certain cases - ### Log
```
[2021-03-11 02:25:41] production.ERROR: Trying to get property 'restricted' of non-object {"userId":5,"exception":"[object] (ErrorException(code: 0): Trying to get property 'restricted' of non-object at /var/www/html/app/Auth/Permissions/PermissionService.php:466)
I also tried regenerating the permissions with the command, but that gave me another error message:
root@bookstack-57b6767f87-cfgnv:/var/www/html# php artisan bookstack:regenerate-permissions
ErrorException : Trying to get property 'restricted' of non-object
at /var/www/html/app/Auth/Permissions/PermissionService.php:466
462| // For pages with a chapter, Check if explicit permissions are set on the Chapter
463| if ($entity->isA('page') && $entity->chapter_id !== 0 && $entity->chapter_id !== '0') {
464| $chapter = $this->getChapter($entity->chapter_id);
465| $hasPermissiveAccessToParents = $hasPermissiveAccessToParents && !$chapter->restricted;
466| if ($chapter->restricted) {
467| $hasExplicitAccessToParents = $this->mapHasActiveRestriction($permissionMap, $chapter, $role, $restrictionAction);
468| }
469| }
470|
Exception trace:
1 Illuminate\Foundation\Bootstrap\HandleExceptions::handleError("Trying to get property 'restricted' of non-object", "/var/www/html/app/Auth/Permissions/PermissionService.php")
/var/www/html/app/Auth/Permissions/PermissionService.php:466
```
### Findings
Can replicate with the following:
- You delete a page within a chapter.
- Move the parent chapter of the deleted page to a different book. (Deleted page book_id and chapter book_id become mis-aligned at this point).
- Delete that chapter.
- Perform a book sort operation where the original deleted page is included and a change is made.
Primary issue is the mis-alignment of the chapter and page book_id, caused due to page book move actions not taking into account deleted pages.
The error point of the permission service could do with a little attention also, But any heavy refactoring should be done in #2633 so this can be safely part of a patch release.
|
non_process
|
permission generation fails in certain cases log production error trying to get property restricted of non object userid exception errorexception code trying to get property restricted of non object at var www html app auth permissions permissionservice php i also tried regenerating the permissions with the command but that gave me another error message root bookstack cfgnv var www html php artisan bookstack regenerate permissions errorexception trying to get property restricted of non object at var www html app auth permissions permissionservice php for pages with a chapter check if explicit permissions are set on the chapter if entity isa page entity chapter id entity chapter id chapter this getchapter entity chapter id haspermissiveaccesstoparents haspermissiveaccesstoparents chapter restricted if chapter restricted hasexplicitaccesstoparents this maphasactiverestriction permissionmap chapter role restrictionaction exception trace illuminate foundation bootstrap handleexceptions handleerror trying to get property restricted of non object var www html app auth permissions permissionservice php var www html app auth permissions permissionservice php findings can replicate with the following you delete a page within a chapter move the parent chapter of the deleted page to a different book deleted page book id and chapter book id become mis aligned at this point delete that chapter perform a book sort operation where the original deleted page is included and a change is made primary issue is the mis alignment of the chapter and page book id caused due to page book move actions not taking into account deleted pages the error point of the permission service could do with a little attention also but any heavy refactoring should be done in so this can be safely part of a patch release
| 0
|
403
| 2,848,057,309
|
IssuesEvent
|
2015-05-29 20:31:41
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
closed
|
Packer deletes existing versions of Vagrant boxes
|
bug post-processor/atlas post-processor/vagrant
|
Yesterday, I published a new version of my box. Today, I wanted to update one of the providers, not knowing that Packer does not allow that. This would be fine, except that Packer, once it errored out, deleted the whole version, including other providers.
**Preferrably, the boxes should remain intact if nothing has been uploaded, or at least other providers should remain available.**
Here is the log that led to the deletion of my previously-published version:
```
==> virtualbox-ovf (vagrant-cloud): Verifying box is accessible: dreamscapes/archlinux
virtualbox-ovf (vagrant-cloud): Box accessible and matches tag
==> virtualbox-ovf (vagrant-cloud): Creating version: 2015.04.01
virtualbox-ovf (vagrant-cloud): Version exists, skipping creation
^^^^^^^^^^^^^^^^^
==> virtualbox-ovf (vagrant-cloud): Creating provider: virtualbox
==> virtualbox-ovf (vagrant-cloud): Cleaning up provider
virtualbox-ovf (vagrant-cloud): Provider was not created, not deleting
==> virtualbox-ovf (vagrant-cloud): Cleaning up version
virtualbox-ovf (vagrant-cloud): Deleting version: 2015.04.01
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
Packer version: b49d74d (master)
|
2.0
|
Packer deletes existing versions of Vagrant boxes - Yesterday, I published a new version of my box. Today, I wanted to update one of the providers, not knowing that Packer does not allow that. This would be fine, except that Packer, once it errored out, deleted the whole version, including other providers.
**Preferrably, the boxes should remain intact if nothing has been uploaded, or at least other providers should remain available.**
Here is the log that led to the deletion of my previously-published version:
```
==> virtualbox-ovf (vagrant-cloud): Verifying box is accessible: dreamscapes/archlinux
virtualbox-ovf (vagrant-cloud): Box accessible and matches tag
==> virtualbox-ovf (vagrant-cloud): Creating version: 2015.04.01
virtualbox-ovf (vagrant-cloud): Version exists, skipping creation
^^^^^^^^^^^^^^^^^
==> virtualbox-ovf (vagrant-cloud): Creating provider: virtualbox
==> virtualbox-ovf (vagrant-cloud): Cleaning up provider
virtualbox-ovf (vagrant-cloud): Provider was not created, not deleting
==> virtualbox-ovf (vagrant-cloud): Cleaning up version
virtualbox-ovf (vagrant-cloud): Deleting version: 2015.04.01
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
Packer version: b49d74d (master)
|
process
|
packer deletes existing versions of vagrant boxes yesterday i published a new version of my box today i wanted to update one of the providers not knowing that packer does not allow that this would be fine except that packer once it errored out deleted the whole version including other providers preferrably the boxes should remain intact if nothing has been uploaded or at least other providers should remain available here is the log that led to the deletion of my previously published version virtualbox ovf vagrant cloud verifying box is accessible dreamscapes archlinux virtualbox ovf vagrant cloud box accessible and matches tag virtualbox ovf vagrant cloud creating version virtualbox ovf vagrant cloud version exists skipping creation virtualbox ovf vagrant cloud creating provider virtualbox virtualbox ovf vagrant cloud cleaning up provider virtualbox ovf vagrant cloud provider was not created not deleting virtualbox ovf vagrant cloud cleaning up version virtualbox ovf vagrant cloud deleting version packer version master
| 1
|
12,963
| 15,341,635,617
|
IssuesEvent
|
2021-02-27 12:52:55
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
closed
|
Hidden Subcommunity filter and search bar must be displayed again when other buckets are selected
|
P2 ShapeupProcess challenge- recommender-tool
|
1. Select recommended challenges toggle on
2. subcommunity filter and search bar is hidden
3. switch to any other bucket like All Challenges bucket or My Challenges bucket or Past Challenges
expected: subcommunity filter and search bar must be displayed again, they must be again hidden when switched to open for registration bucket
actual: subcommunity filter and search bar are still hidden
https://user-images.githubusercontent.com/58783823/109332744-8fe38180-7884-11eb-840a-8e7ca620de2c.mov
|
1.0
|
Hidden Subcommunity filter and search bar must be displayed again when other buckets are selected - 1. Select recommended challenges toggle on
2. subcommunity filter and search bar is hidden
3. switch to any other bucket like All Challenges bucket or My Challenges bucket or Past Challenges
expected: subcommunity filter and search bar must be displayed again, they must be again hidden when switched to open for registration bucket
actual: subcommunity filter and search bar are still hidden
https://user-images.githubusercontent.com/58783823/109332744-8fe38180-7884-11eb-840a-8e7ca620de2c.mov
|
process
|
hidden subcommunity filter and search bar must be displayed again when other buckets are selected select recommended challenges toggle on subcommunity filter and search bar is hidden switch to any other bucket like all challenges bucket or my challenges bucket or past challenges expected subcommunity filter and search bar must be displayed again they must be again hidden when switched to open for registration bucket actual subcommunity filter and search bar are still hidden
| 1
|
22,410
| 31,142,292,948
|
IssuesEvent
|
2023-08-16 01:44:50
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Flaky test: cy.origin assertions #consoleProps .should() and .and()
|
OS: linux process: flaky test topic: flake ❄️ stage: flake stale
|
### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/41360/workflows/72d9b47d-a5ae-4a3e-96fe-85b1ec5f2674/jobs/1712793
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/e2e/origin/commands/assertions.cy.ts#L27
### Analysis
<img width="1230" alt="Screen Shot 2022-08-05 at 12 19 51 PM" src="https://user-images.githubusercontent.com/26726429/183146344-eb783f9e-c654-4363-b88a-458739781b66.png">
### Cypress Version
10.4.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
1.0
|
Flaky test: cy.origin assertions #consoleProps .should() and .and() - ### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/41360/workflows/72d9b47d-a5ae-4a3e-96fe-85b1ec5f2674/jobs/1712793
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/e2e/origin/commands/assertions.cy.ts#L27
### Analysis
<img width="1230" alt="Screen Shot 2022-08-05 at 12 19 51 PM" src="https://user-images.githubusercontent.com/26726429/183146344-eb783f9e-c654-4363-b88a-458739781b66.png">
### Cypress Version
10.4.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
process
|
flaky test cy origin assertions consoleprops should and and link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
| 1
|
9,548
| 12,512,317,702
|
IssuesEvent
|
2020-06-02 22:28:26
|
shirou/gopsutil
|
https://api.github.com/repos/shirou/gopsutil
|
closed
|
FAIL: Test_Children
|
os:linux package:process
|
Hello,
When I try to test the repo process, it fails with
```
process : pushd /builddir/build/BUILD/gopsutil-2.17.11/_build/src/github.com/shirou/gopsutil/process
+ go test -compiler gc -ldflags ' -extldflags '\''-Wl,-z,relro '\'''
--- FAIL: Test_Children (0.00s)
process_test.go:360: error process does not have children
```
Thank you for your help
|
1.0
|
FAIL: Test_Children - Hello,
When I try to test the repo process, it fails with
```
process : pushd /builddir/build/BUILD/gopsutil-2.17.11/_build/src/github.com/shirou/gopsutil/process
+ go test -compiler gc -ldflags ' -extldflags '\''-Wl,-z,relro '\'''
--- FAIL: Test_Children (0.00s)
process_test.go:360: error process does not have children
```
Thank you for your help
|
process
|
fail test children hello when i try to test the repo process it fails with process pushd builddir build build gopsutil build src github com shirou gopsutil process go test compiler gc ldflags extldflags wl z relro fail test children process test go error process does not have children thank you for your help
| 1
|
123,213
| 17,772,189,064
|
IssuesEvent
|
2021-08-30 14:50:12
|
kapseliboi/sqlpad
|
https://api.github.com/repos/kapseliboi/sqlpad
|
opened
|
CVE-2021-23337 (High) detected in lodash-4.17.19.tgz
|
security vulnerability
|
## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.19.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.19.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.19.tgz</a></p>
<p>Path to dependency file: sqlpad/server/package.json</p>
<p>Path to vulnerable library: sqlpad/server/node_modules/clickhouse/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- clickhouse-2.4.0.tgz (Root Library)
- :x: **lodash-4.17.19.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/sqlpad/commit/95024fc09fd71a1bc52f23bc0709ce5daa7e9f98">95024fc09fd71a1bc52f23bc0709ce5daa7e9f98</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23337 (High) detected in lodash-4.17.19.tgz - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.19.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.19.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.19.tgz</a></p>
<p>Path to dependency file: sqlpad/server/package.json</p>
<p>Path to vulnerable library: sqlpad/server/node_modules/clickhouse/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- clickhouse-2.4.0.tgz (Root Library)
- :x: **lodash-4.17.19.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/sqlpad/commit/95024fc09fd71a1bc52f23bc0709ce5daa7e9f98">95024fc09fd71a1bc52f23bc0709ce5daa7e9f98</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file sqlpad server package json path to vulnerable library sqlpad server node modules clickhouse node modules lodash package json dependency hierarchy clickhouse tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
2,161
| 5,006,786,534
|
IssuesEvent
|
2016-12-12 15:07:34
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Custom filter drop down menu having check boxes to allow the user to select multiple values in filter
|
inprocess
|
@Hi
I am trying to add a custom filter drop down menu with check boxes as below in custom filter.
<div class="dropdown">
<button class="btn btn-primary dropdown-toggle" type="button" data-toggle="dropdown">Dropdown Example
<span class="caret"></span></button>
<ul class="dropdown-menu">
<li><input type="checkbox" /><a href="#">HTML</a></li>
<li><input type="checkbox" /><a href="#">CSS</a></li>
<li><input type="checkbox" /><a href="#">JavaScript</a></li>
</ul>
</div>
In header the drop down menu is behind the table body when ever I click on drop down menu to open, so I am not able to filter the table.
I have tried with z-index and some other css but the list is not on top of table
See the below screen shot and give me a solution
Thanks in advance
|
1.0
|
Custom filter drop down menu having check boxes to allow the user to select multiple values in filter - @Hi
I am trying to add a custom filter drop down menu with check boxes as below in custom filter.
<div class="dropdown">
<button class="btn btn-primary dropdown-toggle" type="button" data-toggle="dropdown">Dropdown Example
<span class="caret"></span></button>
<ul class="dropdown-menu">
<li><input type="checkbox" /><a href="#">HTML</a></li>
<li><input type="checkbox" /><a href="#">CSS</a></li>
<li><input type="checkbox" /><a href="#">JavaScript</a></li>
</ul>
</div>
In header the drop down menu is behind the table body when ever I click on drop down menu to open, so I am not able to filter the table.
I have tried with z-index and some other css but the list is not on top of table
See the below screen shot and give me a solution
Thanks in advance
|
process
|
custom filter drop down menu having check boxes to allow the user to select multiple values in filter hi i am trying to add a custom filter drop down menu with check boxes as below in custom filter dropdown example html css javascript in header the drop down menu is behind the table body when ever i click on drop down menu to open so i am not able to filter the table i have tried with z index and some other css but the list is not on top of table see the below screen shot and give me a solution thanks in advance
| 1
|
1,606
| 4,225,868,492
|
IssuesEvent
|
2016-07-02 03:27:11
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Investigate flaky test-stdout-close-catch on FreeBSD
|
freebsd process stream test
|
<!--
Thank you for reporting an issue. Please fill in the template below. If unsure
about something, just do as best as you're able.
Version: usually output of `node -v`
Platform: either `uname -a` output, or if Windows, version and 32 or 64-bit
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: 7.0.0-pre (current master)
Example failure: https://ci.nodejs.org/job/node-test-commit-freebsd/2561/nodes=freebsd10-64/console
```
not ok 976 parallel/test-stdout-close-catch
# TIMEOUT
---
duration_ms: 60.37
```
<!-- Enter your issue details below this comment. -->
|
1.0
|
Investigate flaky test-stdout-close-catch on FreeBSD - <!--
Thank you for reporting an issue. Please fill in the template below. If unsure
about something, just do as best as you're able.
Version: usually output of `node -v`
Platform: either `uname -a` output, or if Windows, version and 32 or 64-bit
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: 7.0.0-pre (current master)
Example failure: https://ci.nodejs.org/job/node-test-commit-freebsd/2561/nodes=freebsd10-64/console
```
not ok 976 parallel/test-stdout-close-catch
# TIMEOUT
---
duration_ms: 60.37
```
<!-- Enter your issue details below this comment. -->
|
process
|
investigate flaky test stdout close catch on freebsd thank you for reporting an issue please fill in the template below if unsure about something just do as best as you re able version usually output of node v platform either uname a output or if windows version and or bit subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version pre current master example failure not ok parallel test stdout close catch timeout duration ms
| 1
|
1,328
| 3,875,452,529
|
IssuesEvent
|
2016-04-12 01:10:31
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
closed
|
Reimplement fast forward
|
ADMIN AUTHENTICATION CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR ROUTING
|
Following the implementation of fast forward, it was disabled after writing http://www.proxysql.com/2015/06/sql-load-balancing-benchmark-comparing.html and commenting `I would always use ProxySQL's query processing that implements important features` .
In other words, a fully featured ProxySQL is faster than other proxies out there with very limited features.
Although, for several reasons, fast forward mode is being reintroduced again:
* reach high performance when complex features aren't required , and benchmark
* draft support for unsupported part of the protocol (prepared statement, see #444)
Closes #256
Closes #264
Closes #265
Closes #300
|
1.0
|
Reimplement fast forward - Following the implementation of fast forward, it was disabled after writing http://www.proxysql.com/2015/06/sql-load-balancing-benchmark-comparing.html and commenting `I would always use ProxySQL's query processing that implements important features` .
In other words, a fully featured ProxySQL is faster than other proxies out there with very limited features.
Although, for several reasons, fast forward mode is being reintroduced again:
* reach high performance when complex features aren't required , and benchmark
* draft support for unsupported part of the protocol (prepared statement, see #444)
Closes #256
Closes #264
Closes #265
Closes #300
|
process
|
reimplement fast forward following the implementation of fast forward it was disabled after writing and commenting i would always use proxysql s query processing that implements important features in other words a fully featured proxysql is faster than other proxies out there with very limited features although for several reasons fast forward mode is being reintroduced again reach high performance when complex features aren t required and benchmark draft support for unsupported part of the protocol prepared statement see closes closes closes closes
| 1
|
683,018
| 23,365,880,957
|
IssuesEvent
|
2022-08-10 15:19:15
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.tumblr.com - site is not usable
|
browser-firefox priority-critical engine-gecko
|
<!-- @browser: Firefox 104.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/108845 -->
**URL**: https://www.tumblr.com/login?redirect_to=%2Fdashboard
**Browser / Version**: Firefox 104.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Site loads for a second, enough to see the login screen, but quickly changes to a blank white page with no way to interact with anything.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/8/cfd2339a-1917-4778-81e3-4ac42df02e54.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220807190148</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/8/0a4b8559-ca0c-4115-8d18-24359694fcb4)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.tumblr.com - site is not usable - <!-- @browser: Firefox 104.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/108845 -->
**URL**: https://www.tumblr.com/login?redirect_to=%2Fdashboard
**Browser / Version**: Firefox 104.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Site loads for a second, enough to see the login screen, but quickly changes to a blank white page with no way to interact with anything.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/8/cfd2339a-1917-4778-81e3-4ac42df02e54.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220807190148</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/8/0a4b8559-ca0c-4115-8d18-24359694fcb4)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce site loads for a second enough to see the login screen but quickly changes to a blank white page with no way to interact with anything view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
7,107
| 10,263,618,548
|
IssuesEvent
|
2019-08-22 14:42:24
|
usgs/libcomcat
|
https://api.github.com/repos/usgs/libcomcat
|
closed
|
Create issue templates for better issue management
|
process
|
Templates will automatically label the issue and list requirements for issue requests from external users. (This will also be used to test the new issue management system.)
|
1.0
|
Create issue templates for better issue management - Templates will automatically label the issue and list requirements for issue requests from external users. (This will also be used to test the new issue management system.)
|
process
|
create issue templates for better issue management templates will automatically label the issue and list requirements for issue requests from external users this will also be used to test the new issue management system
| 1
|
11,109
| 13,957,680,036
|
IssuesEvent
|
2020-10-24 08:07:01
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
DE: request for a new harvesting
|
DE - Germany Geoportal Harvesting process
|
Dear Geoportal Helpdesk,
As mentioned in Roberts Mail from 2020/03/02 we would like to initiate a new push of our metadata records to the EU Geoportal. For this reason we kindly ask you to start a new harvesting of our catalogue instance and publish them for us in the Geoportal harvesting "sandbox", please.
Thanks in advance and best regards,
Anja Litka (on behalf of SDI Germany)
|
1.0
|
DE: request for a new harvesting - Dear Geoportal Helpdesk,
As mentioned in Roberts Mail from 2020/03/02 we would like to initiate a new push of our metadata records to the EU Geoportal. For this reason we kindly ask you to start a new harvesting of our catalogue instance and publish them for us in the Geoportal harvesting "sandbox", please.
Thanks in advance and best regards,
Anja Litka (on behalf of SDI Germany)
|
process
|
de request for a new harvesting dear geoportal helpdesk as mentioned in roberts mail from we would like to initiate a new push of our metadata records to the eu geoportal for this reason we kindly ask you to start a new harvesting of our catalogue instance and publish them for us in the geoportal harvesting quot sandbox quot please thanks in advance and best regards anja litka on behalf of sdi germany
| 1
|
12,980
| 15,354,666,709
|
IssuesEvent
|
2021-03-01 10:05:20
|
ooi-data/CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record
|
https://api.github.com/repos/ooi-data/CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record
|
opened
|
🛑 Processing failed: OSError
|
process
|
## Overview
`OSError` found in `processing_task` task during run ended on 2021-03-01T10:05:19.950368.
## Details
Flow name: `CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record`
Task name: `processing_task`
Error type: `OSError`
Error message: [Errno 16] Please reduce your request rate.
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (SlowDown) when calling the DeleteObjects operation (reached max retries: 4): Please reduce your request rate.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 311, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1138, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1445, in rm
super().rm(path, recursive=recursive, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 196, in rm
maybe_sync(self._rm, self, path, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1404, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1396, in _bulk_delete
await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
OSError: [Errno 16] Please reduce your request rate.
```
</details>
|
1.0
|
🛑 Processing failed: OSError - ## Overview
`OSError` found in `processing_task` task during run ended on 2021-03-01T10:05:19.950368.
## Details
Flow name: `CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record`
Task name: `processing_task`
Error type: `OSError`
Error message: [Errno 16] Please reduce your request rate.
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (SlowDown) when calling the DeleteObjects operation (reached max retries: 4): Please reduce your request rate.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 311, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1138, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1445, in rm
super().rm(path, recursive=recursive, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 196, in rm
maybe_sync(self._rm, self, path, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1404, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1396, in _bulk_delete
await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
OSError: [Errno 16] Please reduce your request rate.
```
</details>
|
process
|
🛑 processing failed oserror overview oserror found in processing task task during run ended on details flow name streamed flort d data record task name processing task error type oserror error message please reduce your request rate traceback traceback most recent call last file srv conda envs notebook lib site packages core py line in call return await method additional kwargs file srv conda envs notebook lib site packages aiobotocore client py line in make api call raise error class parsed response operation name botocore exceptions clienterror an error occurred slowdown when calling the deleteobjects operation reached max retries please reduce your request rate the above exception was the direct cause of the following exception traceback most recent call last file usr share miniconda envs harvester lib site packages ooi harvester processor pipeline py line in processing task file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr source store fs delete source store root recursive true file srv conda envs notebook lib site packages fsspec spec py line in delete return self rm path recursive recursive maxdepth maxdepth file srv conda envs notebook lib site packages core py line in rm super rm path recursive recursive kwargs file srv conda envs notebook lib site packages fsspec asyn py line in rm maybe sync self rm self path kwargs file srv conda envs notebook lib site packages fsspec asyn py line in maybe sync return sync loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise exc with traceback tb file srv conda envs notebook lib site packages fsspec asyn py line in f result await future file srv conda envs notebook lib site packages core py line in rm await asyncio gather file srv conda envs notebook lib site packages core py line in bulk delete await self call file srv conda envs notebook lib site packages core py line in call raise translate boto error err from err 
oserror please reduce your request rate
| 1
|
73,549
| 24,679,754,158
|
IssuesEvent
|
2022-10-18 20:11:07
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
opened
|
can't share images via share dialogue
|
T-Defect
|
### Steps to reproduce
1. Open Gallery
2. Select an image
3. Click the three dots
4. Select Element dbg
5. Select a room
6. The image isn't sent, instead the normal timeline view appears
### Outcome
#### What did you expect?
The confirmation dialogue with the "original size" check box to appear.
#### What happened instead?
It showed the normal timeline view and didn't send the image.
### Your phone model
ONEPLUS A5010
### Operating system version
Android 12
### Application version and app store
1.5.4-dev [212758362] (F-b12082) develop
### Homeserver
imninja.net
### Will you send logs?
Yes
### Are you willing to provide a PR?
No
|
1.0
|
can't share images via share dialogue - ### Steps to reproduce
1. Open Gallery
2. Select an image
3. Click the three dots
4. Select Element dbg
5. Select a room
6. The image isn't sent, instead the normal timeline view appears
### Outcome
#### What did you expect?
The confirmation dialogue with the "original size" check box to appear.
#### What happened instead?
It showed the normal timeline view and didn't send the image.
### Your phone model
ONEPLUS A5010
### Operating system version
Android 12
### Application version and app store
1.5.4-dev [212758362] (F-b12082) develop
### Homeserver
imninja.net
### Will you send logs?
Yes
### Are you willing to provide a PR?
No
|
non_process
|
can t share images via share dialogue steps to reproduce open gallery select an image click the three dots select element dbg select a room the image isn t sent instead the normal timeline view appears outcome what did you expect the confirmation dialogue with the original size check box to appear what happened instead it showed the normal timeline view and didn t send the image your phone model oneplus operating system version android application version and app store dev f develop homeserver imninja net will you send logs yes are you willing to provide a pr no
| 0
|
118,684
| 25,349,097,832
|
IssuesEvent
|
2022-11-19 14:56:13
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
Missing test case coverage for include/misc/byteorder.h functions
|
Enhancement Code Coverage
|
According to LCOV, for include/misc/byteorder.h we only have 58.1% line coverage for this header file.
Need to create a testcase for these functions that prove that each one of them works with all conditional branches taken.
|
1.0
|
Missing test case coverage for include/misc/byteorder.h functions - According to LCOV, for include/misc/byteorder.h we only have 58.1% line coverage for this header file.
Need to create a testcase for these functions that prove that each one of them works with all conditional branches taken.
|
non_process
|
missing test case coverage for include misc byteorder h functions according to lcov for include misc byteorder h we only have line coverage for this header file need to create a testcase for these functions that prove that each one of them works with all conditional branches taken
| 0
|
86,659
| 15,755,776,175
|
IssuesEvent
|
2021-03-31 02:22:12
|
phunware/phuntoken
|
https://api.github.com/repos/phunware/phuntoken
|
opened
|
WS-2019-0427 (Medium) detected in multiple libraries
|
security vulnerability
|
## WS-2019-0427 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>elliptic-6.0.2.tgz</b>, <b>elliptic-6.5.0.tgz</b>, <b>elliptic-6.3.3.tgz</b></p></summary>
<p>
<details><summary><b>elliptic-6.0.2.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.0.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.0.2.tgz</a></p>
<p>Path to dependency file: phuntoken/package.json</p>
<p>Path to vulnerable library: phuntoken/node_modules/eccrypto/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- gsn-provider-0.1.6.tgz (Root Library)
- eth-crypto-1.4.0.tgz
- eccrypto-1.1.1.tgz
- :x: **elliptic-6.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>elliptic-6.5.0.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.0.tgz</a></p>
<p>Path to dependency file: phuntoken/package.json</p>
<p>Path to vulnerable library: phuntoken/node_modules/ganache-cli/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- gsn-provider-0.1.6.tgz (Root Library)
- eth-sig-util-2.3.0.tgz
- :x: **elliptic-6.5.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>elliptic-6.3.3.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.3.3.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.3.3.tgz</a></p>
<p>Path to dependency file: phuntoken/package.json</p>
<p>Path to vulnerable library: phuntoken/node_modules/ethers/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- openzeppelin-test-helpers-0.4.2.tgz (Root Library)
- truffle-contract-4.0.28.tgz
- truffle-interface-adapter-0.2.3.tgz
- ethers-4.0.33.tgz
- :x: **elliptic-6.3.3.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The function getNAF() in elliptic library has information leakage. This issue is mitigated in version 6.5.2
<p>Publish Date: 2019-11-22
<p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0427</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a">https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a</a></p>
<p>Release Date: 2020-05-24</p>
<p>Fix Resolution: v6.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0427 (Medium) detected in multiple libraries - ## WS-2019-0427 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>elliptic-6.0.2.tgz</b>, <b>elliptic-6.5.0.tgz</b>, <b>elliptic-6.3.3.tgz</b></p></summary>
<p>
<details><summary><b>elliptic-6.0.2.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.0.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.0.2.tgz</a></p>
<p>Path to dependency file: phuntoken/package.json</p>
<p>Path to vulnerable library: phuntoken/node_modules/eccrypto/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- gsn-provider-0.1.6.tgz (Root Library)
- eth-crypto-1.4.0.tgz
- eccrypto-1.1.1.tgz
- :x: **elliptic-6.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>elliptic-6.5.0.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.0.tgz</a></p>
<p>Path to dependency file: phuntoken/package.json</p>
<p>Path to vulnerable library: phuntoken/node_modules/ganache-cli/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- gsn-provider-0.1.6.tgz (Root Library)
- eth-sig-util-2.3.0.tgz
- :x: **elliptic-6.5.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>elliptic-6.3.3.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.3.3.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.3.3.tgz</a></p>
<p>Path to dependency file: phuntoken/package.json</p>
<p>Path to vulnerable library: phuntoken/node_modules/ethers/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- openzeppelin-test-helpers-0.4.2.tgz (Root Library)
- truffle-contract-4.0.28.tgz
- truffle-interface-adapter-0.2.3.tgz
- ethers-4.0.33.tgz
- :x: **elliptic-6.3.3.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The function getNAF() in elliptic library has information leakage. This issue is mitigated in version 6.5.2
<p>Publish Date: 2019-11-22
<p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0427</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a">https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a</a></p>
<p>Release Date: 2020-05-24</p>
<p>Fix Resolution: v6.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in multiple libraries ws medium severity vulnerability vulnerable libraries elliptic tgz elliptic tgz elliptic tgz elliptic tgz ec cryptography library home page a href path to dependency file phuntoken package json path to vulnerable library phuntoken node modules eccrypto node modules elliptic package json dependency hierarchy gsn provider tgz root library eth crypto tgz eccrypto tgz x elliptic tgz vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file phuntoken package json path to vulnerable library phuntoken node modules ganache cli node modules elliptic package json dependency hierarchy gsn provider tgz root library eth sig util tgz x elliptic tgz vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file phuntoken package json path to vulnerable library phuntoken node modules ethers node modules elliptic package json dependency hierarchy openzeppelin test helpers tgz root library truffle contract tgz truffle interface adapter tgz ethers tgz x elliptic tgz vulnerable library vulnerability details the function getnaf in elliptic library has information leakage this issue is mitigated in version publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
21,636
| 30,053,175,085
|
IssuesEvent
|
2023-06-28 03:27:18
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] Port legacy metrics to MLv2
|
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
People can use legacy metrics in aggregation clauses, and we need first-class support from them in MLv2:
- [x] we need a method to list available metrics
- [x] `display-info` should handle an opaque legacy metric object (required fields: display name, description)
- [x] if a metric is used in a clause, it has to have a `selected: true` property in a metric list
- [x] it should be possible to construct a clause with the `aggregation-clause` method and pass it to the `aggregate` method
|
1.0
|
[MLv2] Port legacy metrics to MLv2 - People can use legacy metrics in aggregation clauses, and we need first-class support from them in MLv2:
- [x] we need a method to list available metrics
- [x] `display-info` should handle an opaque legacy metric object (required fields: display name, description)
- [x] if a metric is used in a clause, it has to have a `selected: true` property in a metric list
- [x] it should be possible to construct a clause with the `aggregation-clause` method and pass it to the `aggregate` method
|
process
|
port legacy metrics to people can use legacy metrics in aggregation clauses and we need first class support from them in we need a method to list available metrics display info should handle an opaque legacy metric object required fields display name description if a metric is used in a clause it has to have a selected true property in a metric list it should be possible to construct a clause with the aggregation clause method and pass it to the aggregate method
| 1
|
81,970
| 31,837,100,183
|
IssuesEvent
|
2023-09-14 14:08:05
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: Python selenium msedge does not work with headless mode disabled. Message: unknown error: msedge failed to start: crashed
|
R-awaiting answer I-defect needs-triaging
|
### What happened?
I have a problem with a python selenium web scraper that uses microsoft edge as a webdriver. The msedgedriver works fine only when one of these is present:
edge_options.add_argument("--user-data-dir=C:\\Users\\xxx\\Desktop\\Edge_user_data")
edge_options.add_argument('--headless')
In case i want to use both, the program crashes, i have tried:
-giving msedge permissions
-reseting msedge browser
-duplicating the profile folder
-many other solutions from stackoverflow of git, but nothing has worked
### How can we reproduce the issue?
```shell
# Define Edge options
edge_options = webdriver.EdgeOptions()
edgepath = "C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe"
edge_options.setBinary = edgepath
service = webdriver.EdgeService(service_args=['--log-level=SEVERE', '--disable-build-check'])
# Using the user profile
edge_options.add_argument("--user-data-dir=C:\\Users\\xxx\\Desktop\\Edge_user_data")
# Options
logger = logging.getLogger()
logger.setLevel(logging.CRITICAL)
#logging.basicConfig(level=logging.WARNING)
edge_options.add_argument('--remote-debugging-port=0')
edge_options.add_argument('--no-first-run')
edge_options.add_argument('--no-default-browser-check')
edge_options.add_argument('--headless')
edge_options.add_argument('--log-level=3')
edge_options.add_argument("--disable-logging")
edge_options.add_argument('--start-maximized')
edge_options.add_argument('--disable-infobars')
edge_options.add_experimental_option('excludeSwitches', ['disable-popup-blocking'])
driver = webdriver.Edge(options=edge_options, service=service)
return driver
```
### Relevant log output
```shell
Traceback (most recent call last):
File "C:\Users\xxx\Desktop\opus-test17.py", line 3414, in <module>
slower_process(urls, result_file_path)
File "C:\Users\xxx\Desktop\opus-test17.py", line 3338, in slower_process
result = check_url(url)
^^^^^^^^^^^^^^
File "C:\Users\xxx\Desktop\opus-test17.py", line 42, in check_url
result = 123_check(url, username)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\Desktop\opus-test17.py", line 534, in 123_check
driver = initialize_driver5()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\Desktop\opus-test17.py", line 321, in initialize_driver5
driver = webdriver.Edge(options=edge_options, service=service)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\edge\webdriver.py", line 45, in __init__
super().__init__(
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 56, in __init__
super().__init__(
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 206, in __init__
self.start_session(capabilities)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 290, in start_session
response = self.execute(Command.NEW_SESSION, caps)["value"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 345, in execute
self.error_handler.check_response(response)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: msedge failed to start: crashed.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from msedge location C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe is no longer running, so msedgedriver is assuming that msedge has crashed.)
Stacktrace:
GetHandleVerifier [0x00007FF74D583DB2+61490]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF74D516002+740642]
(No symbol) [0x00007FF74D2EB8AE]
(No symbol) [0x00007FF74D31CBFE]
(No symbol) [0x00007FF74D317F13]
(No symbol) [0x00007FF74D35B7F8]
(No symbol) [0x00007FF74D3536E3]
(No symbol) [0x00007FF74D325EAA]
(No symbol) [0x00007FF74D32518B]
(No symbol) [0x00007FF74D326634]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00007FF74D748D69+1207369]
(No symbol) [0x00007FF74D3A5304]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF74D4690F1+32273]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF74D4619E9+1801]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00007FF74D747944+1202212]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x00007FF74D51E998+19784]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x00007FF74D51AE54+4612]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x00007FF74D51AF86+4918]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF74D50F451+713073]
BaseThreadInitThunk [0x00007FF87E917614+20]
RtlUserThreadStart [0x00007FF87FD226B1+33]
```
### Operating System
Windows 10 Pro
### Selenium version
Python 3.11.4
### What are the browser(s) and version(s) where you see this issue?
Microsoft Edge Version 116.0.1938.81 (Official build) (64-bit)
### What are the browser driver(s) and version(s) where you see this issue?
msedgedriver Version: 116.0.1938.81
### Are you using Selenium Grid?
_No response_
|
1.0
|
[🐛 Bug]: Python selenium msedge does not work with headless mode disabled. Message: unknown error: msedge failed to start: crashed - ### What happened?
I have a problem with a python selenium web scraper that uses microsoft edge as a webdriver. The msedgedriver works fine only when one of these is present:
edge_options.add_argument("--user-data-dir=C:\\Users\\xxx\\Desktop\\Edge_user_data")
edge_options.add_argument('--headless')
In case i want to use both, the program crashes, i have tried:
-giving msedge permissions
-reseting msedge browser
-duplicating the profile folder
-many other solutions from stackoverflow of git, but nothing has worked
### How can we reproduce the issue?
```shell
# Define Edge options
edge_options = webdriver.EdgeOptions()
edgepath = "C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe"
edge_options.setBinary = edgepath
service = webdriver.EdgeService(service_args=['--log-level=SEVERE', '--disable-build-check'])
# Using the user profile
edge_options.add_argument("--user-data-dir=C:\\Users\\xxx\\Desktop\\Edge_user_data")
# Options
logger = logging.getLogger()
logger.setLevel(logging.CRITICAL)
#logging.basicConfig(level=logging.WARNING)
edge_options.add_argument('--remote-debugging-port=0')
edge_options.add_argument('--no-first-run')
edge_options.add_argument('--no-default-browser-check')
edge_options.add_argument('--headless')
edge_options.add_argument('--log-level=3')
edge_options.add_argument("--disable-logging")
edge_options.add_argument('--start-maximized')
edge_options.add_argument('--disable-infobars')
edge_options.add_experimental_option('excludeSwitches', ['disable-popup-blocking'])
driver = webdriver.Edge(options=edge_options, service=service)
return driver
```
### Relevant log output
```shell
Traceback (most recent call last):
File "C:\Users\xxx\Desktop\opus-test17.py", line 3414, in <module>
slower_process(urls, result_file_path)
File "C:\Users\xxx\Desktop\opus-test17.py", line 3338, in slower_process
result = check_url(url)
^^^^^^^^^^^^^^
File "C:\Users\xxx\Desktop\opus-test17.py", line 42, in check_url
result = 123_check(url, username)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\Desktop\opus-test17.py", line 534, in 123_check
driver = initialize_driver5()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\Desktop\opus-test17.py", line 321, in initialize_driver5
driver = webdriver.Edge(options=edge_options, service=service)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\edge\webdriver.py", line 45, in __init__
super().__init__(
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 56, in __init__
super().__init__(
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 206, in __init__
self.start_session(capabilities)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 290, in start_session
response = self.execute(Command.NEW_SESSION, caps)["value"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 345, in execute
self.error_handler.check_response(response)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: msedge failed to start: crashed.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from msedge location C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe is no longer running, so msedgedriver is assuming that msedge has crashed.)
Stacktrace:
GetHandleVerifier [0x00007FF74D583DB2+61490]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF74D516002+740642]
(No symbol) [0x00007FF74D2EB8AE]
(No symbol) [0x00007FF74D31CBFE]
(No symbol) [0x00007FF74D317F13]
(No symbol) [0x00007FF74D35B7F8]
(No symbol) [0x00007FF74D3536E3]
(No symbol) [0x00007FF74D325EAA]
(No symbol) [0x00007FF74D32518B]
(No symbol) [0x00007FF74D326634]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00007FF74D748D69+1207369]
(No symbol) [0x00007FF74D3A5304]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF74D4690F1+32273]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF74D4619E9+1801]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00007FF74D747944+1202212]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x00007FF74D51E998+19784]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x00007FF74D51AE54+4612]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x00007FF74D51AF86+4918]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF74D50F451+713073]
BaseThreadInitThunk [0x00007FF87E917614+20]
RtlUserThreadStart [0x00007FF87FD226B1+33]
```
### Operating System
Windows 10 Pro
### Selenium version
Python 3.11.4
### What are the browser(s) and version(s) where you see this issue?
Microsoft Edge Version 116.0.1938.81 (Official build) (64-bit)
### What are the browser driver(s) and version(s) where you see this issue?
msedgedriver Version: 116.0.1938.81
### Are you using Selenium Grid?
_No response_
|
non_process
|
python selenium msedge does not work with headless mode disabled message unknown error msedge failed to start crashed what happened i have a problem with a python selenium web scraper that uses microsoft edge as a webdriver the msedgedriver works fine only when one of these is present edge options add argument user data dir c users xxx desktop edge user data edge options add argument headless in case i want to use both the program crashes i have tried giving msedge permissions reseting msedge browser duplicating the profile folder many other solutions from stackoverflow of git but nothing has worked how can we reproduce the issue shell define edge options edge options webdriver edgeoptions edgepath c program files microsoft edge application msedge exe edge options setbinary edgepath service webdriver edgeservice service args using the user profile edge options add argument user data dir c users xxx desktop edge user data options logger logging getlogger logger setlevel logging critical logging basicconfig level logging warning edge options add argument remote debugging port edge options add argument no first run edge options add argument no default browser check edge options add argument headless edge options add argument log level edge options add argument disable logging edge options add argument start maximized edge options add argument disable infobars edge options add experimental option excludeswitches driver webdriver edge options edge options service service return driver relevant log output shell traceback most recent call last file c users xxx desktop opus py line in slower process urls result file path file c users xxx desktop opus py line in slower process result check url url file c users xxx desktop opus py line in check url result check url username file c users xxx desktop opus py line in check driver initialize file c users xxx desktop opus py line in initialize driver webdriver edge options edge options service service file c users xxx appdata local programs python lib site packages selenium webdriver edge webdriver py line in init super init file c users xxx appdata local programs python lib site packages selenium webdriver chromium webdriver py line in init super init file c users xxx appdata local programs python lib site packages selenium webdriver remote webdriver py line in init self start session capabilities file c users xxx appdata local programs python lib site packages selenium webdriver remote webdriver py line in start session response self execute command new session caps file c users xxx appdata local programs python lib site packages selenium webdriver remote webdriver py line in execute self error handler check response response file c users xxx appdata local programs python lib site packages selenium webdriver remote errorhandler py line in check response raise exception class message screen stacktrace selenium common exceptions webdriverexception message unknown error msedge failed to start crashed unknown error devtoolsactiveport file doesn t exist the process started from msedge location c program files microsoft edge application msedge exe is no longer running so msedgedriver is assuming that msedge has crashed stacktrace gethandleverifier microsoft applications events eventproperty eventproperty no symbol no symbol no symbol no symbol no symbol no symbol no symbol no symbol microsoft applications events ilogmanager dispatcheventbroadcast no symbol microsoft applications events eventproperty eventproperty microsoft applications events eventproperty eventproperty microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events ilogconfiguration operator microsoft applications events ilogconfiguration operator microsoft applications events ilogconfiguration operator microsoft applications events eventproperty eventproperty basethreadinitthunk rtluserthreadstart operating system windows pro selenium version python what are the browser s and version s where you see this issue microsoft edge version official build bit what are the browser driver s and version s where you see this issue msedgedriver version are you using selenium grid no response
| 0
|
460,799
| 13,218,367,414
|
IssuesEvent
|
2020-08-17 08:36:56
|
DimensionDev/Maskbook
|
https://api.github.com/repos/DimensionDev/Maskbook
|
closed
|
[Bug] Auto paste (text and img based payload) on Twitter mobile failed
|
Network: twitter.com Priority: P2 (Important) Type: Bug
|
# Bug Report
https://imgur.com/mGLTMTT
- After testing on Gecko on Android, #869 is working well on Twitter mobile, the icon should up and can be clicked & functional well (should merge this into `released` etc with double check)
- However, Auto paste (text and img based payload) on Twitter mobile failed; might be due to some selector mismatch on mobile twitter
## Environment
### System
- OS:
- OS Version:
### Browser
- Browser:
- Browser Version:
### Maskbook
- Maskbook Version:
- Installation: /* May be "Store", "ZIP", or "Self-Complied" */
- Build Commit: Optionally attach a Commit ID, if it is from an pre-release branch head
## Bug Info
### Expected Behavior
/* Write the expected behavior here. */
### Actual Behavior
/* Write the actual behavior here. */
### How To Reproduce
/* Specify how it may be produced here. */
|
1.0
|
[Bug] Auto paste (text and img based payload) on Twitter mobile failed - # Bug Report
https://imgur.com/mGLTMTT
- After testing on Gecko on Android, #869 is working well on Twitter mobile, the icon should up and can be clicked & functional well (should merge this into `released` etc with double check)
- However, Auto paste (text and img based payload) on Twitter mobile failed; might be due to some selector mismatch on mobile twitter
## Environment
### System
- OS:
- OS Version:
### Browser
- Browser:
- Browser Version:
### Maskbook
- Maskbook Version:
- Installation: /* May be "Store", "ZIP", or "Self-Complied" */
- Build Commit: Optionally attach a Commit ID, if it is from an pre-release branch head
## Bug Info
### Expected Behavior
/* Write the expected behavior here. */
### Actual Behavior
/* Write the actual behavior here. */
### How To Reproduce
/* Specify how it may be produced here. */
|
non_process
|
auto paste text and img based payload on twitter mobile failed bug report after testing on gecko on android is working well on twitter mobile the icon should up and can be clicked functional well should merge this into released etc with double check however auto paste text and img based payload on twitter mobile failed might be due to some selector mismatch on mobile twitter environment system os os version browser browser browser version maskbook maskbook version installation may be store zip or self complied build commit optionally attach a commit id if it is from an pre release branch head bug info expected behavior write the expected behavior here actual behavior write the actual behavior here how to reproduce specify how it may be produced here
| 0
|
22,668
| 31,896,051,966
|
IssuesEvent
|
2023-09-18 01:52:52
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Provide flat list of DwC terms for all terms
|
Class - MeasurementOrFact Class - ResourceRelationship Docs - List of Terms non-normative Process - complete
|
See feature request at https://github.com/tdwg/dwc-qa/issues/193:
We are currently providing a vertical and horizontal list of Darwin Core terms that are part of [Simple Darwin Core](https://dwc.tdwg.org/simple/):
https://github.com/tdwg/dwc/blob/master/dist/simple_dwc_horizontal.csv
https://github.com/tdwg/dwc/blob/master/dist/simple_dwc_vertical.csv
These lists are used to "programmatically check datasets for darwin core naming conventions". For completeness, we could also provide a **vertical list of all Darwin Core terms** (not only simple ones), named `dwc_vertical.csv`. Note: I don't think it makes sense to provide a horizontal version of that list, since the resourceRelationship and measurementOrFact classes are not designed to be represented in a flat (i.e. column header) format.
Like the current lists, the list could be regenerated as part of the build process.
|
1.0
|
Provide flat list of DwC terms for all terms - See feature request at https://github.com/tdwg/dwc-qa/issues/193:
We are currently providing a vertical and horizontal list of Darwin Core terms that are part of [Simple Darwin Core](https://dwc.tdwg.org/simple/):
https://github.com/tdwg/dwc/blob/master/dist/simple_dwc_horizontal.csv
https://github.com/tdwg/dwc/blob/master/dist/simple_dwc_vertical.csv
These lists are used to "programmatically check datasets for darwin core naming conventions". For completeness, we could also provide a **vertical list of all Darwin Core terms** (not only simple ones), named `dwc_vertical.csv`. Note: I don't think it makes sense to provide a horizontal version of that list, since the resourceRelationship and measurementOrFact classes are not designed to be represented in a flat (i.e. column header) format.
Like the current lists, the list could be regenerated as part of the build process.
|
process
|
provide flat list of dwc terms for all terms see feature request at we are currently providing a vertical and horizontal list of darwin core terms that are part of these lists are used to programmatically check datasets for darwin core naming conventions for completeness we could also provide a vertical list of all darwin core terms not only simple ones named dwc vertical csv note i don t think it makes sense to provide a horizontal version of that list since the resourcerelationship and measurementorfact classes are not designed to be represented in a flat i e column header format like the current lists the list could be regenerated as part of the build process
| 1
|
35,406
| 14,684,650,013
|
IssuesEvent
|
2021-01-01 04:15:56
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
Support for incident configuration for azurerm_sentinel_alert_rule_scheduled
|
enhancement service/sentinel
|
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
With the [IncidentConfiguration](https://github.com/Azure/azure-sdk-for-go/blob/master/services/preview/securityinsight/mgmt/2019-01-01-preview/securityinsight/models.go#L10886) for scheduled alert rules Azure Sentinel users are able to configure:
- `CreateIncident *bool` - whether a Sentinel incident should be created or not (if false, only an alert will be generated).
- `GroupingConfiguration *GroupingConfiguration` - Allow Azure Sentinel to group similar incidents in order to avoid creating overlapping incidents.
<!--- Please leave a helpful description of the feature request here. --->
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* azurerm_sentinel_alert_rule_scheduled
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
data "azurerm_log_analytics_workspace" "example" {
name = "log-analytics1"
resource_group_name = "my-resource-group"
}
data "azurerm_logic_app_workflow" "example" {
name = "workflow1"
resource_group_name = "my-resource-group"
}
resource "azurerm_sentinel_alert_rule_scheduled" "example" {
name = "example-with-playbook"
log_analytics_workspace_id = data.azurerm_log_analytics_workspace.example.workspace_id
display_name = "example-with-incident-config"
severity = "High"
incident_configuration {
create_incident = true
grouping_configuration {
reopen_closed_incident = true
lookback_duration = "PT1H"
entities_match_method = "Custom"
group_by_entities = [ "Account", "Host", "Ip", "Url" ]
}
}
query = <<QUERY
AzureActivity |
where OperationName == "Create or Update Virtual Machine" or OperationName =="Create Deployment" |
where ActivityStatus == "Succeeded" |
make-series dcount(ResourceId) default=0 on EventSubmissionTimestamp in range(ago(7d), now(), 1d) by Caller
QUERY
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
--->
* #0000
|
1.0
|
Support for incident configuration for azurerm_sentinel_alert_rule_scheduled - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
With the [IncidentConfiguration](https://github.com/Azure/azure-sdk-for-go/blob/master/services/preview/securityinsight/mgmt/2019-01-01-preview/securityinsight/models.go#L10886) for scheduled alert rules Azure Sentinel users are able to configure:
- `CreateIncident *bool` - whether a Sentinel incident should be created or not (if false, only an alert will be generated).
- `GroupingConfiguration *GroupingConfiguration` - Allow Azure Sentinel to group similar incidents in order to avoid creating overlapping incidents.
<!--- Please leave a helpful description of the feature request here. --->
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* azurerm_sentinel_alert_rule_scheduled
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
data "azurerm_log_analytics_workspace" "example" {
name = "log-analytics1"
resource_group_name = "my-resource-group"
}
data "azurerm_logic_app_workflow" "example" {
name = "workflow1"
resource_group_name = "my-resource-group"
}
resource "azurerm_sentinel_alert_rule_scheduled" "example" {
name = "example-with-playbook"
log_analytics_workspace_id = data.azurerm_log_analytics_workspace.example.workspace_id
display_name = "example-with-incident-config"
severity = "High"
incident_configuration {
create_incident = true
grouping_configuration {
reopen_closed_incident = true
lookback_duration = "PT1H"
entities_match_method = "Custom"
group_by_entities = [ "Account", "Host", "Ip", "Url" ]
}
}
query = <<QUERY
AzureActivity |
where OperationName == "Create or Update Virtual Machine" or OperationName =="Create Deployment" |
where ActivityStatus == "Succeeded" |
make-series dcount(ResourceId) default=0 on EventSubmissionTimestamp in range(ago(7d), now(), 1d) by Caller
QUERY
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
--->
* #0000
|
non_process
|
support for incident configuration for azurerm sentinel alert rule scheduled community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description with the for scheduled alert rules azure sentinel users are able to configure createincident bool whether a sentinel incident should be created or not if false only an alert will be generated groupingconfiguration groupingconfiguration allow azure sentinel to group similar incidents in order to avoid creating overlapping incidents new or affected resource s azurerm sentinel alert rule scheduled potential terraform configuration hcl data azurerm log analytics workspace example name log resource group name my resource group data azurerm logic app workflow example name resource group name my resource group resource azurerm sentinel alert rule scheduled example name example with playbook log analytics workspace id data azurerm log analytics workspace example workspace id display name example with incident config severity high incident configuration create incident true grouping configuration reopen closed incident true lookback duration entities match method custom group by entities query query azureactivity where operationname create or update virtual machine or operationname create deployment where activitystatus succeeded make series dcount resourceid default on eventsubmissiontimestamp in range ago now by caller query references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example
| 0
|
716,082
| 24,620,470,190
|
IssuesEvent
|
2022-10-15 21:28:58
|
us4useu/arrus
|
https://api.github.com/repos/us4useu/arrus
|
closed
|
It should be possible to set txNPeriods = 0.5.
|
type: bug priority: high matlab stale
|
Now txNPeriods = 1 is minimal value that can be set.
|
1.0
|
It should be possible to set txNPeriods = 0.5. - Now txNPeriods = 1 is minimal value that can be set.
|
non_process
|
it should be possible to set txnperiods now txnperiods is minimal value that can be set
| 0
|
74,785
| 7,445,046,652
|
IssuesEvent
|
2018-03-28 02:06:41
|
Microsoft/vscode
|
https://api.github.com/repos/Microsoft/vscode
|
closed
|
Test large file tokenization/coloring optimization
|
testplan-item
|
- [x] anyOS @sbatten
- [x] Linux @Tyriar
Complexity: 2
Testing #45944
When opening a file, we will show the top of the document or the last viewed portion of the document. For former case, we always see colorful content as we do some *warmup* (tokenize the top of the document) in advance. However for the later one, we sometimes see white code (in dark theme) first and then they become colorful as the tokenization is not super fast.
Our optimization here is tokenizing the viewport when opening the file by guessing the initial state of tokenization. So it **can be wrong** at the very beginning and get corrected after a while.
Testers you may want to verify
* The tokenization of the viewport can be wrong but when the whole tokenization process is done, all contents are in good state.
* Make sure contents after the viewport are tokenized finally.
|
1.0
|
Test large file tokenization/coloring optimization - - [x] anyOS @sbatten
- [x] Linux @Tyriar
Complexity: 2
Testing #45944
When opening a file, we will show the top of the document or the last viewed portion of the document. For former case, we always see colorful content as we do some *warmup* (tokenize the top of the document) in advance. However for the later one, we sometimes see white code (in dark theme) first and then they become colorful as the tokenization is not super fast.
Our optimization here is tokenizing the viewport when opening the file by guessing the initial state of tokenization. So it **can be wrong** at the very beginning and get corrected after a while.
Testers you may want to verify
* The tokenization of the viewport can be wrong but when the whole tokenization process is done, all contents are in good state.
* Make sure contents after the viewport are tokenized finally.
|
non_process
|
test large file tokenization coloring optimization anyos sbatten linux tyriar complexity testing when opening a file we will show the top of the document or the last viewed portion of the document for former case we always see colorful content as we do some warmup tokenize the top of the document in advance however for the later one we sometimes see white code in dark theme first and then they become colorful as the tokenization is not super fast our optimization here is tokenizing the viewport when opening the file by guessing the initial state of tokenization so it can be wrong at the very beginning and get corrected after a while testers you may want to verify the tokenization of the viewport can be wrong but when the whole tokenization process is done all contents are in good state make sure contents after the viewport are tokenized finally
| 0
|
5,075
| 6,996,261,249
|
IssuesEvent
|
2017-12-15 23:19:21
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
closed
|
Service Worker should check for updates on every navigation
|
comp: service-worker freq3: high severity2: inconvenient type: bug
|
Currently the SW checks for updates on cold starts. Cold starts are unpredictable. This technique is generally fine in production but makes it hard for developers to verify the update process.
The SW should instead always fetch `ngsw.json` on navigation.
|
1.0
|
Service Worker should check for updates on every navigation - Currently the SW checks for updates on cold starts. Cold starts are unpredictable. This technique is generally fine in production but makes it hard for developers to verify the update process.
The SW should instead always fetch `ngsw.json` on navigation.
|
non_process
|
service worker should check for updates on every navigation currently the sw checks for updates on cold starts cold starts are unpredictable this technique is generally fine in production but makes it hard for developers to verify the update process the sw should instead always fetch ngsw json on navigation
| 0
|
177,530
| 13,728,158,877
|
IssuesEvent
|
2020-10-04 10:16:52
|
Cookie-AutoDelete/Cookie-AutoDelete
|
https://api.github.com/repos/Cookie-AutoDelete/Cookie-AutoDelete
|
closed
|
[BUG] Some data does not get deleted
|
incomplete untested bug/issue
|
**Describe the bug**
Some data does not get deleted, despite the fact that I want them to get deleted. Undesirable data does **not** include cookies, it is only cache and local storage. That type of data is stored for non-whitelisted websites.
**To Reproduce**
Steps to reproduce the behavior:
1. Have all "Other Browsing Data Cleanup Options" enabled, few sites in whitelist.
2. Browse as usual for a long time.
3. Go to brave://settings/siteData (or chrome, that probably will work but I haven't checked)
4. Clean data using this extension again to make sure
**Expected Behavior**
I should see data only for whitelisted sites.
**Screenshots**
Screenshots will not be provided because it should contain websites.
**Your System Info**
- OS: macOS 11.0 beta
- Browser Info: Brave Version 1.14.84 Chromium: 85.0.4183.121 (Official Build) (64-bit)
- CookieAutoDelete Version: 3.5.1
|
1.0
|
[BUG] Some data does not get deleted - **Describe the bug**
Some data does not get deleted, despite the fact that I want them to get deleted. Undesirable data does **not** include cookies, it is only cache and local storage. That type of data is stored for non-whitelisted websites.
**To Reproduce**
Steps to reproduce the behavior:
1. Have all "Other Browsing Data Cleanup Options" enabled, few sites in whitelist.
2. Browse as usual for a long time.
3. Go to brave://settings/siteData (or chrome, that probably will work but I haven't checked)
4. Clean data using this extension again to make sure
**Expected Behavior**
I should see data only for whitelisted sites.
**Screenshots**
Screenshots will not be provided because it should contain websites.
**Your System Info**
- OS: macOS 11.0 beta
- Browser Info: Brave Version 1.14.84 Chromium: 85.0.4183.121 (Official Build) (64-bit)
- CookieAutoDelete Version: 3.5.1
|
non_process
|
some data does not get deleted describe the bug some data does not get deleted despite the fact that i want them to get deleted undesirable data does not include cookies it is only cache and local storage that type of data is stored for non whitelisted websites to reproduce steps to reproduce the behavior have all other browsing data cleanup options enabled few sites in whitelist browse as usual for a long time go to brave settings sitedata or chrome that probably will work but i haven t checked clean data using this extension again to make sure expected behavior i should see data only for whitelisted sites screenshots screenshots will not be provided because it should contain websites your system info os macos beta browser info brave version chromium official build bit cookieautodelete version
| 0
|
402,600
| 27,375,669,180
|
IssuesEvent
|
2023-02-28 05:35:35
|
aws-samples/measuring-demand-forecast-benefits
|
https://api.github.com/repos/aws-samples/measuring-demand-forecast-benefits
|
opened
|
Update for Forecast expanded holiday calendar launch
|
documentation good first issue
|
Last week Amazon Forecast [expanded country support](https://aws.amazon.com/about-aws/whats-new/2023/02/amazon-forecast-built-in-holiday-data-251-countries/) for its built-in holiday featurization.
In the past we've used [workalendar](https://github.com/workalendar/workalendar) for holiday RTS for 2 reasons:
1. At the time of writing it covered some relevant countries that weren't available with Forecast's built-in calendars, particularly for Asia
- This should no longer be the case
2. At the time of writing, Forecast (and Canvas) didn't support trans-national models including multiple different holiday calendars
- I believe this still applies
Need somebody to go through the repo and check for + remove any mentions of (1) in the commentary, to avoid confusing people with inaccurate info. Probably no actual code changes required because (2) means our demonstrated approach still makes sense.
An extra observation: I'm not seeing the new Forecast countries available through SageMaker Canvas UI yet, but assume they'll get there at some point 🤷♂️ Something to be aware of when updating the comments.
|
1.0
|
Update for Forecast expanded holiday calendar launch - Last week Amazon Forecast [expanded country support](https://aws.amazon.com/about-aws/whats-new/2023/02/amazon-forecast-built-in-holiday-data-251-countries/) for its built-in holiday featurization.
In the past we've used [workalendar](https://github.com/workalendar/workalendar) for holiday RTS for 2 reasons:
1. At the time of writing it covered some relevant countries that weren't available with Forecast's built-in calendars, particularly for Asia
- This should no longer be the case
2. At the time of writing, Forecast (and Canvas) didn't support trans-national models including multiple different holiday calendars
- I believe this still applies
Need somebody to go through the repo and check for + remove any mentions of (1) in the commentary, to avoid confusing people with inaccurate info. Probably no actual code changes required because (2) means our demonstrated approach still makes sense.
An extra observation: I'm not seeing the new Forecast countries available through SageMaker Canvas UI yet, but assume they'll get there at some point 🤷♂️ Something to be aware of when updating the comments.
|
non_process
|
update for forecast expanded holiday calendar launch last week amazon forecast for its built in holiday featurization in the past we ve used for holiday rts for reasons at the time of writing it covered some relevant countries that weren t available with forecast s built in calendars particularly for asia this should no longer be the case at the time of writing forecast and canvas didn t support trans national models including multiple different holiday calendars i believe this still applies need somebody to go through the repo and check for remove any mentions of in the commentary to avoid confusing people with inaccurate info probably no actual code changes required because means our demonstrated approach still makes sense an extra observation i m not seeing the new forecast countries available through sagemaker canvas ui yet but assume they ll get there at some point 🤷♂️ something to be aware of when updating the comments
| 0
|
12,468
| 14,938,157,965
|
IssuesEvent
|
2021-01-25 15:27:27
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM]User is not forcefully logged out when superadmin changes any user related permissions for that user
|
Bug P1 Participant manager datastore Process: Fixed Process: Release 2 Process: Tested QA Process: Tested dev Unknown backend
|
**Actual**:- User is not forcefully logged out when superadmin changes any user related permissions for that user.
**Expected**:- User should forcefully logged out of the App, when superadmin changes any user related permissions for that user.
**Note**:- Issue also applies when Superadmin update any details for that user.
|
4.0
|
[PM]User is not forcefully logged out when superadmin changes any user related permissions for that user - **Actual**:- User is not forcefully logged out when superadmin changes any user related permissions for that user.
**Expected**:- User should forcefully logged out of the App, when superadmin changes any user related permissions for that user.
**Note**:- Issue also applies when Superadmin update any details for that user.
|
process
|
user is not forcefully logged out when superadmin changes any user related permissions for that user actual user is not forcefully logged out when superadmin changes any user related permissions for that user expected user should forcefully logged out of the app when superadmin changes any user related permissions for that user note issue also applies when superadmin update any details for that user
| 1
|
144,655
| 19,293,482,141
|
IssuesEvent
|
2021-12-12 07:11:36
|
ghc-dev/Justin-Mccall
|
https://api.github.com/repos/ghc-dev/Justin-Mccall
|
closed
|
CVE-2020-11612 (High) detected in netty-codec-4.1.39.Final.jar - autoclosed
|
security vulnerability
|
## CVE-2020-11612 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-4.1.39.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.39.Final/38b9d79e31f6b00bd680f88c0289a2522d30d05b/netty-codec-4.1.39.Final.jar</p>
<p>
Dependency Hierarchy:
- netty-codec-http-4.1.39.Final.jar (Root Library)
- netty-handler-4.1.39.Final.jar
- :x: **netty-codec-4.1.39.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Justin-Mccall/commit/b854d26024ec02742a917adb05c857348a3ecf26">b854d26024ec02742a917adb05c857348a3ecf26</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ZlibDecoders in Netty 4.1.x before 4.1.46 allow for unbounded memory allocation while decoding a ZlibEncoded byte stream. An attacker could send a large ZlibEncoded byte stream to the Netty server, forcing the server to allocate all of its free memory to a single decoder.
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11612>CVE-2020-11612</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://netty.io/news/2020/02/28/4-1-46-Final.html">https://netty.io/news/2020/02/28/4-1-46-Final.html</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: io.netty:netty-codec:4.1.46.Final;io.netty:netty-all:4.1.46.Final</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final;io.netty:netty-handler:4.1.39.Final;io.netty:netty-codec:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-codec:4.1.46.Final;io.netty:netty-all:4.1.46.Final","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-11612","vulnerabilityDetails":"The ZlibDecoders in Netty 4.1.x before 4.1.46 allow for unbounded memory allocation while decoding a ZlibEncoded byte stream. An attacker could send a large ZlibEncoded byte stream to the Netty server, forcing the server to allocate all of its free memory to a single decoder.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11612","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-11612 (High) detected in netty-codec-4.1.39.Final.jar - autoclosed - ## CVE-2020-11612 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-4.1.39.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.39.Final/38b9d79e31f6b00bd680f88c0289a2522d30d05b/netty-codec-4.1.39.Final.jar</p>
<p>
Dependency Hierarchy:
- netty-codec-http-4.1.39.Final.jar (Root Library)
- netty-handler-4.1.39.Final.jar
- :x: **netty-codec-4.1.39.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Justin-Mccall/commit/b854d26024ec02742a917adb05c857348a3ecf26">b854d26024ec02742a917adb05c857348a3ecf26</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ZlibDecoders in Netty 4.1.x before 4.1.46 allow for unbounded memory allocation while decoding a ZlibEncoded byte stream. An attacker could send a large ZlibEncoded byte stream to the Netty server, forcing the server to allocate all of its free memory to a single decoder.
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11612>CVE-2020-11612</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://netty.io/news/2020/02/28/4-1-46-Final.html">https://netty.io/news/2020/02/28/4-1-46-Final.html</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: io.netty:netty-codec:4.1.46.Final;io.netty:netty-all:4.1.46.Final</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final;io.netty:netty-handler:4.1.39.Final;io.netty:netty-codec:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-codec:4.1.46.Final;io.netty:netty-all:4.1.46.Final","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-11612","vulnerabilityDetails":"The ZlibDecoders in Netty 4.1.x before 4.1.46 allow for unbounded memory allocation while decoding a ZlibEncoded byte stream. An attacker could send a large ZlibEncoded byte stream to the Netty server, forcing the server to allocate all of its free memory to a single decoder.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11612","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in netty codec final jar autoclosed cve high severity vulnerability vulnerable library netty codec final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files io netty netty codec final netty codec final jar dependency hierarchy netty codec http final jar root library netty handler final jar x netty codec final jar vulnerable library found in head commit a href found in base branch master vulnerability details the zlibdecoders in netty x before allow for unbounded memory allocation while decoding a zlibencoded byte stream an attacker could send a large zlibencoded byte stream to the netty server forcing the server to allocate all of its free memory to a single decoder publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec final io netty netty all final isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree io netty netty codec http final io netty netty handler final io netty netty codec final isminimumfixversionavailable true minimumfixversion io netty netty codec final io netty netty all final isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the zlibdecoders in netty x before allow for unbounded memory allocation while decoding a zlibencoded byte stream an attacker could send a large zlibencoded byte stream to the netty server forcing the server to allocate all of its free memory to a single decoder vulnerabilityurl
| 0
|
767
| 3,253,131,727
|
IssuesEvent
|
2015-10-19 17:44:53
|
beesmart-it/trend-hrm
|
https://api.github.com/repos/beesmart-it/trend-hrm
|
closed
|
Quick view of selection process applicant details
|
requirement selection process
|
Quick view of selection process applicant details using right panel
|
1.0
|
Quick view of selection process applicant details - Quick view of selection process applicant details using right panel
|
process
|
quick view of selection process applicant details quick view of selection process applicant details using right panel
| 1
|
133,396
| 5,202,492,124
|
IssuesEvent
|
2017-01-24 09:44:16
|
openvstorage/volumedriver
|
https://api.github.com/repos/openvstorage/volumedriver
|
opened
|
Edge communication on RDMA
|
priority_urgent SRP type_enhancement
|
Get the Edge to work reliably over RDMA. Please create the necessary tickets on the relevant repos.
|
1.0
|
Edge communication on RDMA - Get the Edge to work reliably over RDMA. Please create the necessary tickets on the relevant repos.
|
non_process
|
edge communication on rdma get the edge to work reliably over rdma please create the necessary tickets on the relevant repos
| 0
|
183,689
| 14,950,081,627
|
IssuesEvent
|
2021-01-26 12:32:53
|
reliablyhq/cli
|
https://api.github.com/repos/reliablyhq/cli
|
closed
|
Imported code licensing
|
documentation enhancement
|
Hey,
I think we need to make sure we follow the licences of code we import into our code base. i'm guessing, for instance, the [NestedMapLookup](https://github.com/reliablyhq/cli/blob/main/utils/map.go) comes from this [gist](https://gist.github.com/ChristopherThorpe/fd3720efe2ba83c929bf4105719ee967) which is using a [CC 4](https://creativecommons.org/licenses/by/4.0/) that imposes attribution (a simple link back in a comment would do).
Generally speaking, let's make sure any code we bring over works with our Apache license and that we attribute back.
|
1.0
|
Imported code licensing - Hey,
I think we need to make sure we follow the licences of code we import into our code base. i'm guessing, for instance, the [NestedMapLookup](https://github.com/reliablyhq/cli/blob/main/utils/map.go) comes from this [gist](https://gist.github.com/ChristopherThorpe/fd3720efe2ba83c929bf4105719ee967) which is using a [CC 4](https://creativecommons.org/licenses/by/4.0/) that imposes attribution (a simple link back in a comment would do).
Generally speaking, let's make sure any code we bring over works with our Apache license and that we attribute back.
|
non_process
|
imported code licensing hey i think we need to make sure we follow the licences of code we import into our code base i m guessing for instance the comes from this which is using a that imposes attribution a simple link back in a comment would do generally speaking let s make sure any code we bring over works with our apache license and that we attribute back
| 0
|
1,516
| 4,107,712,725
|
IssuesEvent
|
2016-06-06 13:58:19
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Multi-organism terms: regulation of protein localisation and cytokine secretion
|
multiorganism processes New term request PARL-UCL
|
I need to create some multi-organism terms for the annotation of PMID:25063865. Putting here as a reminder and in case of comments.
**ONE:
Leishmania LPG2 (Q25266) excludes mouse Synaptotagmin XI (Syt11) from the phagosome:**
modulation by symbiont of host cellular process ; GO:0044068
——[isa]modulation by symbiont of host protein localisation to phagocytic vesicle ; GO:NEW1
———[isa]negative regulation by symbiont of host protein localisation to phagocytic vesicle ; GO:NEW2
modulation by symbiont of host protein localisation to phagocytic vesicle ; GO:NEW1
Any process in which an organism modulates the frequency, rate or extent of protein localisation to the host phagosome. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Exact synonym: modulation by symbiont of host protein localisation to phagosome [GOC:bf]
negative regulation by symbiont of host protein localisation to phagocytic vesicle ; GO:NEW2
Any process in which an organism stops, prevents, or reduces the frequency, rate or extent of protein localisation to the host phagosome. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Exact synonym: inhibition of host protein localisation to phagosome
Exact synonym: disruption of host protein localisation to phagosome
Exact synonym: suppression of host protein localisation to phagosome
**TWO:
Leishmania GP63 cleaves Syntaxin XI (a negative regulator of cytokine secretion) and thereby promotes cytokine secretion:**
modulation by symbiont of host innate immune response ; GO:0052167]
—[isa]positive regulation by symbiont of host immune response ; GO:0052166
——[isa]positive regulation by symbiont of host cytokine secretion ; GO:NEW
positive regulation by symbiont of host cytokine secretion ; GO:NEW
Any process in which an organism activates, maintains or increases the frequency, rate or extent of cytokine secretion in the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
exact synonym: induction of cytokine secretion [PMID: 25063865]
|
1.0
|
Multi-organism terms: regulation of protein localisation and cytokine secretion - I need to create some multi-organism terms for the annotation of PMID:25063865. Putting here as a reminder and in case of comments.
**ONE:
Leishmania LPG2 (Q25266) excludes mouse Synaptotagmin XI (Syt11) from the phagosome:**
modulation by symbiont of host cellular process ; GO:0044068
——[isa]modulation by symbiont of host protein localisation to phagocytic vesicle ; GO:NEW1
———[isa]negative regulation by symbiont of host protein localisation to phagocytic vesicle ; GO:NEW2
modulation by symbiont of host protein localisation to phagocytic vesicle ; GO:NEW1
Any process in which an organism modulates the frequency, rate or extent of protein localisation to the host phagosome. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Exact synonym: modulation by symbiont of host protein localisation to phagosome [GOC:bf]
negative regulation by symbiont of host protein localisation to phagocytic vesicle ; GO:NEW2
Any process in which an organism stops, prevents, or reduces the frequency, rate or extent of protein localisation to the host phagosome. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Exact synonym: inhibition of host protein localisation to phagosome
Exact synonym: disruption of host protein localisation to phagosome
Exact synonym: suppression of host protein localisation to phagosome
**TWO:
Leishmania GP63 cleaves Syntaxin XI (a negative regulator of cytokine secretion) and thereby promotes cytokine secretion:**
modulation by symbiont of host innate immune response ; GO:0052167]
—[isa]positive regulation by symbiont of host immune response ; GO:0052166
——[isa]positive regulation by symbiont of host cytokine secretion ; GO:NEW
positive regulation by symbiont of host cytokine secretion ; GO:NEW
Any process in which an organism activates, maintains or increases the frequency, rate or extent of cytokine secretion in the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
exact synonym: induction of cytokine secretion [PMID: 25063865]
|
process
|
multi organism terms regulation of protein localisation and cytokine secretion i need to create some multi organism terms for the annotation of pmid putting here as a reminder and in case of comments one leishmania excludes mouse synaptotagmin xi from the phagosome modulation by symbiont of host cellular process go —— modulation by symbiont of host protein localisation to phagocytic vesicle go ——— negative regulation by symbiont of host protein localisation to phagocytic vesicle go modulation by symbiont of host protein localisation to phagocytic vesicle go any process in which an organism modulates the frequency rate or extent of protein localisation to the host phagosome the host is defined as the larger of the organisms involved in a symbiotic interaction exact synonym modulation by symbiont of host protein localisation to phagosome negative regulation by symbiont of host protein localisation to phagocytic vesicle go any process in which an organism stops prevents or reduces the frequency rate or extent of protein localisation to the host phagosome the host is defined as the larger of the organisms involved in a symbiotic interaction exact synonym inhibition of host protein localisation to phagosome exact synonym disruption of host protein localisation to phagosome exact synonym suppression of host protein localisation to phagosome two leishmania cleaves syntaxin xi a negative regulator of cytokine secretion and thereby promotes cytokine secretion modulation by symbiont of host innate immune response go — positive regulation by symbiont of host immune response go —— positive regulation by symbiont of host cytokine secretion go new positive regulation by symbiont of host cytokine secretion go new any process in which an organism activates maintains or increases the frequency rate or extent of cytokine secretion in the host organism the host is defined as the larger of the organisms involved in a symbiotic interaction exact synonym induction of cytokine secretion
| 1
|
419,296
| 28,142,257,236
|
IssuesEvent
|
2023-04-02 03:51:12
|
AY2223S2-CS2103T-W13-2/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-W13-2/tp
|
closed
|
[PE-D][Tester C] `Group` is an unknown command
|
documentation Ready to fix
|
# Problem
The command `group m/add g/Team Dynamite` is specified under command summary and heavily mentioned in the developer guide.

# Solution
Probably something to do with the parser.
<!--session: 1680242322877-2a3d05ea-bb97-4bd2-ae27-1d1270550619--><!--Version: Web v3.4.7-->
-------------
Labels: `type.FunctionalityBug` `severity.High`
original: Zxun2/ped#11
|
1.0
|
[PE-D][Tester C] `Group` is an unknown command - # Problem
The command `group m/add g/Team Dynamite` is specified under command summary and heavily mentioned in the developer guide.

# Solution
Probably something to do with the parser.
<!--session: 1680242322877-2a3d05ea-bb97-4bd2-ae27-1d1270550619--><!--Version: Web v3.4.7-->
-------------
Labels: `type.FunctionalityBug` `severity.High`
original: Zxun2/ped#11
|
non_process
|
group is an unknown command problem the command group m add g team dynamite is specified under command summary and heavily mentioned in the developer guide solution probably something to do with the parser labels type functionalitybug severity high original ped
| 0
|
22,295
| 30,849,973,890
|
IssuesEvent
|
2023-08-02 16:01:38
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@appsignal/cli 1.2.6 has 1 guarddog issues
|
npm-silent-process-execution
|
```{"npm-silent-process-execution":[{"code":" (0, child_process_1.spawn)(\"node\", [__filename], {\n cwd: process.cwd(),\n detached: true,\n env: __assign(__assign(__assign({}, process.env), env), { _APPSIGNAL_NODE_MODULE_PACKAGE_ROOT: packageRoot }),\n stdio:...).unref();","location":"package/dist/commands/demo.js:39","message":"This package is silently executing another executable"}]}```
|
1.0
|
@appsignal/cli 1.2.6 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" (0, child_process_1.spawn)(\"node\", [__filename], {\n cwd: process.cwd(),\n detached: true,\n env: __assign(__assign(__assign({}, process.env), env), { _APPSIGNAL_NODE_MODULE_PACKAGE_ROOT: packageRoot }),\n stdio:...).unref();","location":"package/dist/commands/demo.js:39","message":"This package is silently executing another executable"}]}```
|
process
|
appsignal cli has guarddog issues npm silent process execution n cwd process cwd n detached true n env assign assign assign process env env appsignal node module package root packageroot n stdio unref location package dist commands demo js message this package is silently executing another executable
| 1
|
20,502
| 27,166,657,522
|
IssuesEvent
|
2023-02-17 15:51:10
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
closed
|
Preprocessor `multimodel` fails due to different scalar aux_coords
|
preprocessor
|
The creation of a preprocessor to compute global spatial averaged fields on native grids
correctly create 1D cubes but with different lat/lon (or i,j) values.
This trigger an iris exception error when the `multimodel` preprocessor attempt to merge the cubes
https://github.com/ESMValGroup/ESMValCore/blob/be6f654f059d5e07e0ab45ed4b90d92be6ad495d/esmvalcore/preprocessor/_multimodel.py#L293
As temporary workaround, using data regridding before the spatial average solves the problem,
but I think that the support to work with native grids data derivation is a relevant item for the system
**Attachments**
[test_time_mm.yml](https://github.com/ESMValGroup/ESMValCore/files/8796717/test_time_mm.yml.txt)
[main_log_debug.txt](https://github.com/ESMValGroup/ESMValCore/files/8796705/main_log_debug.txt)
|
1.0
|
Preprocessor `multimodel` fails due to different scalar aux_coords - The creation of a preprocessor to compute global spatial averaged fields on native grids
correctly create 1D cubes but with different lat/lon (or i,j) values.
This trigger an iris exception error when the `multimodel` preprocessor attempt to merge the cubes
https://github.com/ESMValGroup/ESMValCore/blob/be6f654f059d5e07e0ab45ed4b90d92be6ad495d/esmvalcore/preprocessor/_multimodel.py#L293
As temporary workaround, using data regridding before the spatial average solves the problem,
but I think that the support to work with native grids data derivation is a relevant item for the system
**Attachments**
[test_time_mm.yml](https://github.com/ESMValGroup/ESMValCore/files/8796717/test_time_mm.yml.txt)
[main_log_debug.txt](https://github.com/ESMValGroup/ESMValCore/files/8796705/main_log_debug.txt)
|
process
|
preprocessor multimodel fails due to different scalar aux coords the creation of a preprocessor to compute global spatial averaged fields on native grids correctly create cubes but with different lat lon or i j values this trigger an iris exception error when the multimodel preprocessor attempt to merge the cubes as temporary workaround using data regridding before the spatial average solves the problem but i think that the support to work with native grids data derivation is a relevant item for the system attachments
| 1
|
156,709
| 13,653,821,136
|
IssuesEvent
|
2020-09-27 14:34:24
|
fga-eps-mds/2020.1-eSaudeUnB-Wiki
|
https://api.github.com/repos/fga-eps-mds/2020.1-eSaudeUnB-Wiki
|
closed
|
Elaborar Plano de Qualidade do Projeto
|
EPS Product Owner documentation
|
<!--
nome: Solicitação de recurso
sobre: Sugira novas ideias para o projeto
titulo: "#number_issue: Name_for_issue"
rótulos: ''
designados: ''
-->
### **Descrição do Problema**
<!-- Apresentar um breve resumo do problema que a feature deseja resolver -->
Elaborar um plano com as estratégias para manter a qualidade dentro do projeto.
|
1.0
|
Elaborar Plano de Qualidade do Projeto - <!--
nome: Solicitação de recurso
sobre: Sugira novas ideias para o projeto
titulo: "#number_issue: Name_for_issue"
rótulos: ''
designados: ''
-->
### **Descrição do Problema**
<!-- Apresentar um breve resumo do problema que a feature deseja resolver -->
Elaborar um plano com as estratégias para manter a qualidade dentro do projeto.
|
non_process
|
elaborar plano de qualidade do projeto nome solicitação de recurso sobre sugira novas ideias para o projeto titulo number issue name for issue rótulos designados descrição do problema elaborar um plano com as estratégias para manter a qualidade dentro do projeto
| 0
|
796,156
| 28,100,219,203
|
IssuesEvent
|
2023-03-30 18:51:42
|
AY2223S2-CS2103T-T15-1/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-T15-1/tp
|
closed
|
Add cost value counting to item inventories
|
type.Story priority.Medium
|
As a TTRPG player / facilitator, I am able to view the total gold value of an entity's inventory so that it is convenient for me to make value judgements on their holdings.
|
1.0
|
Add cost value counting to item inventories - As a TTRPG player / facilitator, I am able to view the total gold value of an entity's inventory so that it is convenient for me to make value judgements on their holdings.
|
non_process
|
add cost value counting to item inventories as a ttrpg player facilitator i am able to view the total gold value of an entity s inventory so that it is convenient for me to make value judgements on their holdings
| 0
|
8,054
| 11,221,134,694
|
IssuesEvent
|
2020-01-07 17:09:48
|
nrnb/GoogleSummerOfCode
|
https://api.github.com/repos/nrnb/GoogleSummerOfCode
|
opened
|
Develop a system that generates various spatial SBML models using deep learning
|
Difficulty: 2 Image processing Java Machine learning Python SBML XML
|
### Background
[SBML(Systems Biology Markup Language)](http://sbml.org) has several extension packages to extend its capability. One of its extension is called [Spatial Processes (spatial)](http://sbml.org/Documents/Specifications/SBML_Level_3/Packages/spatial) which supports for describing processes that involve a spatial component, and describing the geometries involved. SBML spatial extension enables users to build a spatial model and run a [spatial simulation](https://github.com/funasoul/docker-spatialsim/raw/master/images/sam2d_s1_cyt.gif).
We have been working on the development of a software tool, [XitoSBML](https://github.com/spatialsimulator/XitoSBML): a spatial model builder that will generate a spatial SBML model from microscopic images.
Although XitoSBML provides a user-friendly UI to create a spatial SBML model, there only exists few spatial SBML models for spatial simulation. In order to use XitoSBML, cell regions in microscopic images of cells need to be segmented beforehand by image processing. This process (which is called segmentation) is a bottleneck of creating spatial SBML models.
On the other hand, the accuracy of image processing using deep learning has been remarkable in recent years, and methods for [highly accurate segmentation](https://arxiv.org/abs/1505.04597) have been proposed.
This summer, we would like to mentor a student who will implement a system that automatically generates various spatial SBML models by comprehensively segmenting microscopic images of cells using deep learning and XitoSBML.
### Goal
As described in the "Background", the project could be split down into following 4 tasks.
- Collect images from a database in which microscopic images of cells have been published.
- Prepare some datasets which will be used by deep learning algorithm for performing segmentation.
- Train the learning machine (deep learning algorithm) to comprehensively segment microscopic images of cells
- Implement a software tool to automatically convert segmented images to Spatial SBML Models using XitoSBML
The following API / frameworks will be used for this project.
- [PyTorch](https://pytorch.org/) ... for deep learning
- [ImageJ API](https://imagej.net/Developing_Plugins_for_ImageJ_1.x) ... for XitoSBML
- [JSBML](http://sbml.org/Software/JSBML) ... for XitoSBML
### Difficulty Level 2
Although this project seems to have many tasks to solve, each of the tasks will not require enormous lines/time to code, because there already exists convenient API, well-documented API docs and an existing implementation of the learning machine. The most important part is to understand the specification of the SBML spatial extension and preparing the training datasets.
### Skills
Java and Python programming skills and some basic knowledge of handling XML documents are required. Nice to have knowledge/experience on SBML, image processing and mathematical background on machine learning.
- (essential) Java, Python, XML
- (nice to have) SBML, Image processing, mathematical background on machine learning
### Public Repository
- [XitoSBML](https://github.com/spatialsimulator/XitoSBML)
### References
- [SBML Spatial specification rel 0.94](https://sourceforge.net/p/sbml/code/HEAD/tree/trunk/specifications/sbml-level-3/version-1/spatial/specification/spatial-v1-sbml-l3v1-rel0.94.pdf)
- [JSBML Documentation](http://sbml.org/Software/JSBML/docs)
- [ImageJ API](https://imagej.net/Developing_Plugins_for_ImageJ_1.x)
- [PyTorch](https://pytorch.org/)
### Potential Mentors
- [Akira Funahashi](https://github.com/funasoul) Keio University, Japan
- [Yuta Tokuoka](https://github.com/tokkuman) Keio University, Japan
- [Kaito Ii](https://github.com/kaitoii11) Hewlett-Packard Japan, Ltd. , Japan
### Contact
[Akira Funahashi](mailto:funasoul@gmail.com)
[Yuta Tokuoka](mailto:tokuoka@fun.bio.keio.ac.jp)
|
1.0
|
Develop a system that generates various spatial SBML models using deep learning - ### Background
[SBML(Systems Biology Markup Language)](http://sbml.org) has several extension packages to extend its capability. One of its extensions is called [Spatial Processes (spatial)](http://sbml.org/Documents/Specifications/SBML_Level_3/Packages/spatial), which supports describing processes that involve a spatial component, and describing the geometries involved. The SBML spatial extension enables users to build a spatial model and run a [spatial simulation](https://github.com/funasoul/docker-spatialsim/raw/master/images/sam2d_s1_cyt.gif).
We have been working on the development of a software tool, [XitoSBML](https://github.com/spatialsimulator/XitoSBML): a spatial model builder that will generate a spatial SBML model from microscopic images.
Although XitoSBML provides a user-friendly UI to create a spatial SBML model, there exist only a few spatial SBML models for spatial simulation. In order to use XitoSBML, cell regions in microscopic images of cells need to be segmented beforehand by image processing. This process (which is called segmentation) is a bottleneck in creating spatial SBML models.
On the other hand, the accuracy of image processing using deep learning has been remarkable in recent years, and methods for [highly accurate segmentation](https://arxiv.org/abs/1505.04597) have been proposed.
This summer, we would like to mentor a student who will implement a system that automatically generates various spatial SBML models by comprehensively segmenting microscopic images of cells using deep learning and XitoSBML.
### Goal
As described in the "Background", the project can be split into the following 4 tasks.
- Collect images from a database in which microscopic images of cells have been published.
- Prepare some datasets that will be used by the deep learning algorithm for performing segmentation.
- Train the learning machine (deep learning algorithm) to comprehensively segment microscopic images of cells
- Implement a software tool to automatically convert segmented images to Spatial SBML Models using XitoSBML
The following API / frameworks will be used for this project.
- [PyTorch](https://pytorch.org/) ... for deep learning
- [ImageJ API](https://imagej.net/Developing_Plugins_for_ImageJ_1.x) ... for XitoSBML
- [JSBML](http://sbml.org/Software/JSBML) ... for XitoSBML
### Difficulty Level 2
Although this project seems to have many tasks to solve, none of the tasks will require an enormous amount of code or time, because convenient APIs, well-documented API docs and an existing implementation of the learning machine already exist. The most important parts are understanding the specification of the SBML spatial extension and preparing the training datasets.
### Skills
Java and Python programming skills and some basic knowledge of handling XML documents are required. Knowledge of or experience with SBML, image processing and a mathematical background in machine learning are nice to have.
- (essential) Java, Python, XML
- (nice to have) SBML, Image processing, mathematical background on machine learning
### Public Repository
- [XitoSBML](https://github.com/spatialsimulator/XitoSBML)
### References
- [SBML Spatial specification rel 0.94](https://sourceforge.net/p/sbml/code/HEAD/tree/trunk/specifications/sbml-level-3/version-1/spatial/specification/spatial-v1-sbml-l3v1-rel0.94.pdf)
- [JSBML Documentation](http://sbml.org/Software/JSBML/docs)
- [ImageJ API](https://imagej.net/Developing_Plugins_for_ImageJ_1.x)
- [PyTorch](https://pytorch.org/)
### Potential Mentors
- [Akira Funahashi](https://github.com/funasoul) Keio University, Japan
- [Yuta Tokuoka](https://github.com/tokkuman) Keio University, Japan
- [Kaito Ii](https://github.com/kaitoii11) Hewlett-Packard Japan, Ltd. , Japan
### Contact
[Akira Funahashi](mailto:funasoul@gmail.com)
[Yuta Tokuoka](mailto:tokuoka@fun.bio.keio.ac.jp)
|
process
|
develop a system that generates various spatial sbml models using deep learning background has several extension packages to extend its capability one of its extension is called which supports for describing processes that involve a spatial component and describing the geometries involved sbml spatial extension enables users to build a spatial model and run a we have been working on the development of a software tool a spatial model builder that will generate a spatial sbml model from microscopic images although xitosbml provides a user friendly ui to create a spatial sbml model there only exists few spatial sbml models for spatial simulation in order to use xitosbml cell regions in microscopic images of cells need to be segmented beforehand by image processing this process which is called segmentation is a bottleneck of creating spatial sbml models on the other hand the accuracy of image processing using deep learning has been remarkable in recent years and methods for have been proposed this summer we would like to mentor a student who will implement a system that automatically generates various spatial sbml models by comprehensively segmenting microscopic images of cells using deep learning and xitosbml goal as described in the background the project could be split down into following tasks collect images from a database in which microscopic images of cells have been published prepare some datasets which will be used by deep learning algorithm for performing segmentation train the learning machine deep learning algorithm to comprehensively segment microscopic images of cells implement a software tool to automatically convert segmented images to spatial sbml models using xitosbml the following api frameworks will be used for this project for deep learning for xitosbml for xitosbml difficulty level although this project seems to have many tasks to solve each of the tasks will not require enormous lines time to code because there already exists convenient api well 
documented api docs and an existing implementation of the learning machine the most important part is to understand the specification of the sbml spatial extension and preparing the training datasets skills java and python programming skills and some basic knowledge of handling xml documents are required nice to have knowledge experience on sbml image processing and mathematical background on machine learning essential java python xml nice to have sbml image processing mathematical background on machine learning public repository references potential mentors keio university japan keio university japan hewlett packard japan ltd japan contact mailto funasoul gmail com mailto tokuoka fun bio keio ac jp
| 1
|
13,199
| 15,630,718,798
|
IssuesEvent
|
2021-03-22 02:54:41
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
Behavior of IN expression is different from MySQL
|
component/coprocessor severity/critical sig/execution type/bug
|
## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
```SQL
create table t (a int);
insert into t values(-1);
select * from t where a in(18446744073709551615, 1);
```
### 2. What did you expect to see? (Required)
In MySQL 5.7 and MySQL 8.0:
```SQL
select a from t where a in (18446744073709551615, 1);
Empty set (0.00 sec)
```
### 3. What did you see instead (Required)
In TiDB:
```SQL
select a from t where a in(18446744073709551615, 1);
+-----+
| a |
|-----|
| -1 |
+-----+
1 row in set
Time: 0.009s
```
### 4. What is your TiDB version? (Required)
```SQL
select tidb_version();
+-------------------------------------------------------------------+
| tidb_version() |
|-------------------------------------------------------------------|
| Release Version: v4.0.0-beta.2-2292-g4218f2836-dirty |
| Edition: Community |
| Git Commit Hash: 4218f2836bb38ec79fd080fa88d09d3fe3766c3a |
| Git Branch: master |
| UTC Build Time: 2021-03-07 11:44:44 |
| GoVersion: go1.13 |
| Race Enabled: false |
| TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 |
| Check Table Before Drop: false |
+-------------------------------------------------------------------+
1 row in set
Time: 0.009s
```
|
1.0
|
Behavior of IN expression is different from MySQL - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
```SQL
create table t (a int);
insert into t values(-1);
select * from t where a in(18446744073709551615, 1);
```
### 2. What did you expect to see? (Required)
In MySQL 5.7 and MySQL 8.0:
```SQL
select a from t where a in (18446744073709551615, 1);
Empty set (0.00 sec)
```
### 3. What did you see instead (Required)
In TiDB:
```SQL
select a from t where a in(18446744073709551615, 1);
+-----+
| a |
|-----|
| -1 |
+-----+
1 row in set
Time: 0.009s
```
### 4. What is your TiDB version? (Required)
```SQL
select tidb_version();
+-------------------------------------------------------------------+
| tidb_version() |
|-------------------------------------------------------------------|
| Release Version: v4.0.0-beta.2-2292-g4218f2836-dirty |
| Edition: Community |
| Git Commit Hash: 4218f2836bb38ec79fd080fa88d09d3fe3766c3a |
| Git Branch: master |
| UTC Build Time: 2021-03-07 11:44:44 |
| GoVersion: go1.13 |
| Race Enabled: false |
| TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 |
| Check Table Before Drop: false |
+-------------------------------------------------------------------+
1 row in set
Time: 0.009s
```
|
process
|
behavior of in expression is different from mysql bug report please answer these questions before submitting your issue thanks minimal reproduce step required sql create table t a int insert into t values select from t where a in what did you expect to see required in mysql and mysql sql select a from t where a in empty set sec what did you see instead required in tidb sql select a from t where a in a row in set time what is your tidb version required sql select tidb version tidb version release version beta dirty edition community git commit hash git branch master utc build time goversion race enabled false tikv min version check table before drop false row in set time
| 1
|
521,043
| 15,100,548,600
|
IssuesEvent
|
2021-02-08 05:44:57
|
staxrip/staxrip
|
https://api.github.com/repos/staxrip/staxrip
|
closed
|
Shortcuts F5, F9, F10 from inside the Code Editor should use definitions in Preview
|
added/fixed/done feature request priority medium
|
**Describe the bug**
When you decide, in the PREVIEW menu editor, to set F9 to play with mpv.net (default) and, let's say, remove F10, which is supposed to open MPC-HC, the change is not reflected in the Code Editor.
**Expected behavior**
I expect that using those shortcuts from inside the Code Editor would do the same. Currently F10 would still attempt to open MPC-HC, or look for it in the default path
**Request**
please make F5, F9, F10 work in the Code Editor window as they are set to work in the Preview window. Only those three can be seen as common between Code Editor and Preview Window.
|
1.0
|
Shortcuts F5, F9, F10 from inside the Code Editor should use definitions in Preview - **Describe the bug**
When you decide, in the PREVIEW menu editor, to set F9 to play with mpv.net (default) and, let's say, remove F10, which is supposed to open MPC-HC, the change is not reflected in the Code Editor.
**Expected behavior**
I expect that using those shortcuts from inside the Code Editor would do the same. Currently F10 would still attempt to open MPC-HC, or look for it in the default path
**Request**
please make F5, F9, F10 work in the Code Editor window as they are set to work in the Preview window. Only those three can be seen as common between Code Editor and Preview Window.
|
non_process
|
shortcuts from inside the code editor should use definitions in preview describe the bug when you decide in preview menu editor to set to play with mpv net default and let s say remove which is supposed to open mpc hc expected behavior i expect that using those shortcuts from inside the the code editor would do the same currently would still attempt to open mpc hc or look at it in the default path request please make work in the code editor window as they are set to work in the preview window only those three can be seen as common between code editor and preview window
| 0
|
9,757
| 30,494,953,595
|
IssuesEvent
|
2023-07-18 10:10:13
|
Accenture/sfmc-devtools
|
https://api.github.com/repos/Accenture/sfmc-devtools
|
closed
|
[TASK] commands execute, schedule and pause should return array of keys
|
c/automation c/query chore
|
previously, we went the easy way and decided to return true or false... but much more helpful would be the list of keys that was actually successfully dealt with
|
1.0
|
[TASK] commands execute, schedule and pause should return array of keys - previously, we went the easy way and decided to return true or false... but much more helpful would be the list of keys that was actually successfully dealt with
|
non_process
|
commands execute schedule and pause should return array of keys previously we went the easy way and decided to return true or false but much more helpful would be the list of keys that was actually successfully dealt with
| 0
|
21,036
| 6,130,302,844
|
IssuesEvent
|
2017-06-24 03:46:37
|
ganeti/ganeti
|
https://api.github.com/repos/ganeti/ganeti
|
closed
|
Add option to specify key size of SSL keys
|
imported_from_google_code Merged Priority-Medium Status:Duplicate Type-Enhancement
|
Originally reported of Google Code with ID 797.
```
It was requested to add an option to Ganeti to specify the key length of SSL keys.
```
Originally added on 2014-04-15 07:27:33 +0000 UTC.
|
1.0
|
Add option to specify key size of SSL keys - Originally reported of Google Code with ID 797.
```
It was requested to add an option to Ganeti to specify the key length of SSL keys.
```
Originally added on 2014-04-15 07:27:33 +0000 UTC.
|
non_process
|
add option to specify key size of ssl keys originally reported of google code with id it was requested to add an option to ganeti to specify the key length of ssl keys originally added on utc
| 0
|
16,470
| 21,392,156,428
|
IssuesEvent
|
2022-04-21 08:13:15
|
GoogleCloudPlatform/dotnet-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
|
closed
|
Make CI support .NET 6
|
type: process priority: p2 samples
|
Some products, like KMS, would benefit from .NET 5 dependent samples.
FYI @sethvargo
|
1.0
|
Make CI support .NET 6 - Some products, like KMS, would benefit from .NET 5 dependent samples.
FYI @sethvargo
|
process
|
make ci support net some products like kms would benefit from net dependant samples fyi sethvargo
| 1
|
49,322
| 12,322,344,638
|
IssuesEvent
|
2020-05-13 10:10:49
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
opened
|
Error after running the program
|
type:build/install
|
I ran this program in a Jupyter notebook to check whether TensorFlow is installed, and got the following error.
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\interactiveshell.py", line 3325, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-25b92e4d5dec>", line 2, in <module>
hello = tf.constant('Hello, TensorFlow!')
AttributeError: module 'tensorflow' has no attribute 'constant'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\interactiveshell.py", line 2039, in showtraceback
stb = value._render_traceback_()
AttributeError: 'AttributeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 2453, in <module>
from tensorflow.python.util import deprecation
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 25, in <module>
from tensorflow.python.platform import tf_logging as logging
ImportError: cannot import name 'tf_logging' from 'tensorflow.python.platform' (C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\python\platform\__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\ultratb.py", line 1101, in get_records
return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\ultratb.py", line 319, in wrapped
return f(*args, **kwargs)
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\ultratb.py", line 353, in _fixed_getinnerframes
records = fix_frame_records_filenames(inspect.getinnerframes(etb, context))
File "C:\Users\Vishal\Anaconda3\New\lib\inspect.py", line 1502, in getinnerframes
frameinfo = (tb.tb_frame,) + getframeinfo(tb, context)
File "C:\Users\Vishal\Anaconda3\New\lib\inspect.py", line 1460, in getframeinfo
filename = getsourcefile(frame) or getfile(frame)
File "C:\Users\Vishal\Anaconda3\New\lib\inspect.py", line 696, in getsourcefile
if getattr(getmodule(object, filename), '__loader__', None) is not None:
File "C:\Users\Vishal\Anaconda3\New\lib\inspect.py", line 733, in getmodule
if ismodule(module) and hasattr(module, '__file__'):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "C:\Users\Vishal\Anaconda3\New\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\__init__.py", line 42, in <module>
from . _api.v2 import audio
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\_api\v2\audio\__init__.py", line 10, in <module>
from tensorflow.python.ops.gen_audio_ops import decode_wav
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\ops\gen_audio_ops.py", line 9, in <module>
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "C:\Users\Vishal\Anaconda3\New\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\interactiveshell.py", line 3325, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-25b92e4d5dec>", line 2, in <module>
hello = tf.constant('Hello, TensorFlow!')
AttributeError: module 'tensorflow' has no attribute 'constant'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\interactiveshell.py", line 2039, in showtraceback
stb = value._render_traceback_()
AttributeError: 'AttributeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 2453, in <module>
from tensorflow.python.util import deprecation
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 25, in <module>
from tensorflow.python.platform import tf_logging as logging
ImportError: cannot import name 'tf_logging' from 'tensorflow.python.platform' (C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\python\platform\__init__.py)
Failed to load the native TensorFlow runtime.
|
1.0
|
Error after running the program - I ran this program in a Jupyter notebook to check whether TensorFlow is installed, and got the following error.
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\interactiveshell.py", line 3325, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-25b92e4d5dec>", line 2, in <module>
hello = tf.constant('Hello, TensorFlow!')
AttributeError: module 'tensorflow' has no attribute 'constant'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\interactiveshell.py", line 2039, in showtraceback
stb = value._render_traceback_()
AttributeError: 'AttributeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 2453, in <module>
from tensorflow.python.util import deprecation
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 25, in <module>
from tensorflow.python.platform import tf_logging as logging
ImportError: cannot import name 'tf_logging' from 'tensorflow.python.platform' (C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\python\platform\__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\ultratb.py", line 1101, in get_records
return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\ultratb.py", line 319, in wrapped
return f(*args, **kwargs)
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\ultratb.py", line 353, in _fixed_getinnerframes
records = fix_frame_records_filenames(inspect.getinnerframes(etb, context))
File "C:\Users\Vishal\Anaconda3\New\lib\inspect.py", line 1502, in getinnerframes
frameinfo = (tb.tb_frame,) + getframeinfo(tb, context)
File "C:\Users\Vishal\Anaconda3\New\lib\inspect.py", line 1460, in getframeinfo
filename = getsourcefile(frame) or getfile(frame)
File "C:\Users\Vishal\Anaconda3\New\lib\inspect.py", line 696, in getsourcefile
if getattr(getmodule(object, filename), '__loader__', None) is not None:
File "C:\Users\Vishal\Anaconda3\New\lib\inspect.py", line 733, in getmodule
if ismodule(module) and hasattr(module, '__file__'):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "C:\Users\Vishal\Anaconda3\New\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\__init__.py", line 42, in <module>
from . _api.v2 import audio
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\_api\v2\audio\__init__.py", line 10, in <module>
from tensorflow.python.ops.gen_audio_ops import decode_wav
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\ops\gen_audio_ops.py", line 9, in <module>
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "C:\Users\Vishal\Anaconda3\New\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\interactiveshell.py", line 3325, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-25b92e4d5dec>", line 2, in <module>
hello = tf.constant('Hello, TensorFlow!')
AttributeError: module 'tensorflow' has no attribute 'constant'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\IPython\core\interactiveshell.py", line 2039, in showtraceback
stb = value._render_traceback_()
AttributeError: 'AttributeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 2453, in <module>
from tensorflow.python.util import deprecation
File "C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 25, in <module>
from tensorflow.python.platform import tf_logging as logging
ImportError: cannot import name 'tf_logging' from 'tensorflow.python.platform' (C:\Users\Vishal\Anaconda3\New\lib\site-packages\tensorflow\python\platform\__init__.py)
Failed to load the native TensorFlow runtime.
|
non_process
|
error after running the program i ran this program to check if tensorflow is installed or not in jupyter notebook and got the following error import tensorflow as tf hello tf constant hello tensorflow sess tf session print sess run hello error root internal python error in the inspect module below is the traceback from this internal error traceback most recent call last file c users vishal new lib site packages ipython core interactiveshell py line in run code exec code obj self user global ns self user ns file line in hello tf constant hello tensorflow attributeerror module tensorflow has no attribute constant during handling of the above exception another exception occurred traceback most recent call last file c users vishal new lib site packages ipython core interactiveshell py line in showtraceback stb value render traceback attributeerror attributeerror object has no attribute render traceback during handling of the above exception another exception occurred traceback most recent call last file c users vishal new lib site packages tensorflow core python pywrap tensorflow py line in from tensorflow python pywrap tensorflow internal import file c users vishal new lib site packages tensorflow core python pywrap tensorflow internal py line in from tensorflow python util import deprecation file c users vishal new lib site packages tensorflow core python util deprecation py line in from tensorflow python platform import tf logging as logging importerror cannot import name tf logging from tensorflow python platform c users vishal new lib site packages tensorflow python platform init py during handling of the above exception another exception occurred traceback most recent call last file c users vishal new lib site packages ipython core ultratb py line in get records return fixed getinnerframes etb number of lines of context tb offset file c users vishal new lib site packages ipython core ultratb py line in wrapped return f args kwargs file c users vishal new lib site 
packages ipython core ultratb py line in fixed getinnerframes records fix frame records filenames inspect getinnerframes etb context file c users vishal new lib inspect py line in getinnerframes frameinfo tb tb frame getframeinfo tb context file c users vishal new lib inspect py line in getframeinfo filename getsourcefile frame or getfile frame file c users vishal new lib inspect py line in getsourcefile if getattr getmodule object filename loader none is not none file c users vishal new lib inspect py line in getmodule if ismodule module and hasattr module file file c users vishal new lib site packages tensorflow init py line in getattr module self load file c users vishal new lib site packages tensorflow init py line in load module importlib import module self name file c users vishal new lib importlib init py line in import module return bootstrap gcd import name package level file line in gcd import file line in find and load file line in find and load unlocked file line in call with frames removed file line in gcd import file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file c users vishal new lib site packages tensorflow core init py line in from api import audio file c users vishal new lib site packages tensorflow core api audio init py line in from tensorflow python ops gen audio ops import decode wav file c users vishal new lib site packages tensorflow core python ops gen audio ops py line in from tensorflow python import pywrap tensorflow as pywrap tensorflow file c users vishal new lib site packages tensorflow init py line in getattr module self load file c users vishal new lib site packages tensorflow init py line in load module importlib import module self name file c users vishal new lib importlib init py line in import module return bootstrap gcd import name package level file c users vishal new lib site packages tensorflow core python init py line 
in from tensorflow python import pywrap tensorflow file c users vishal new lib site packages tensorflow core python pywrap tensorflow py line in raise importerror msg importerror traceback most recent call last file c users vishal new lib site packages ipython core interactiveshell py line in run code exec code obj self user global ns self user ns file line in hello tf constant hello tensorflow attributeerror module tensorflow has no attribute constant during handling of the above exception another exception occurred traceback most recent call last file c users vishal new lib site packages ipython core interactiveshell py line in showtraceback stb value render traceback attributeerror attributeerror object has no attribute render traceback during handling of the above exception another exception occurred traceback most recent call last file c users vishal new lib site packages tensorflow core python pywrap tensorflow py line in from tensorflow python pywrap tensorflow internal import file c users vishal new lib site packages tensorflow core python pywrap tensorflow internal py line in from tensorflow python util import deprecation file c users vishal new lib site packages tensorflow core python util deprecation py line in from tensorflow python platform import tf logging as logging importerror cannot import name tf logging from tensorflow python platform c users vishal new lib site packages tensorflow python platform init py failed to load the native tensorflow runtime
| 0
|
21,266
| 28,439,868,324
|
IssuesEvent
|
2023-04-15 19:27:08
|
cse442-at-ub/project_s23-atomic
|
https://api.github.com/repos/cse442-at-ub/project_s23-atomic
|
closed
|
Create cookies to stay logged in
|
Processing Task Sprint 3
|
**Task Tests**
*Test 1*
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ Click on log in.
2. Enter the following credentials: Username [last] Password [12345678]
3. Verify you are taken to the home page: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/homepage
4. Go back to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ and click the log in button, you should be redirected to the homepage instead of signing in: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q//homepage
5. Close the tab and go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/login and do the same for https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/homepage. In both cases you should be redirected/directly taken to the homepage because cookie is present.
6. Inspect element, go to application tab and verify under the cookie section, there is a cookie with the account's username.
*Test 2*
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ Click on log in.
2. Since you just finished task test 1, you should still be logged in for the next hour and redirected to homepage every time.
3. Go to inspect element -> Application -> Cookie. Delete the cookie.
4. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ and click log in. You should be sent to the login page again since the cookie no longer exists.
|
1.0
|
Create cookies to stay logged in - **Task Tests**
*Test 1*
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ Click on log in.
2. Enter the following credentials: Username [last] Password [12345678]
3. Verify you are taken to the home page: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/homepage
4. Go back to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ and click the log in button, you should be redirected to the homepage instead of signing in: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q//homepage
5. Close the tab and go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/login and do the same for https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/homepage. In both cases you should be redirected/directly taken to the homepage because cookie is present.
6. Inspect element, go to application tab and verify under the cookie section, there is a cookie with the account's username.
*Test 2*
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ Click on log in.
2. Since you just finished task test 1, you should still be logged in for the next hour and redirected to homepage every time.
3. Go to inspect element -> Application -> Cookie. Delete the cookie.
4. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ and click log in. You should be sent to the login page again since the cookie no longer exists.
|
process
|
create cookies to stay logged in task tests test go to click on log in enter the following credentials username password verify you are taken to the home page go back to and click the log in button you should be redirected to the homepage instead of signing in close the tab and go to and do the same for in both cases you should be redirected directly taken to the homepage because cookie is present inspect element go to application tab and verify under the cookie section there is a cookie with the account s username test go to click on log in since you just finished task test you should still be logged in for the next hour and redirected to homepage every time go to inspect element application cookie delete the cookie go to and click log in you should be sent to the login page again since the cookie no longer exists
| 1
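The task tests in the record above describe a one-hour login cookie holding the account's username, with a redirect to the homepage whenever that cookie is present. As a rough sketch of that behavior (the cookie name `username` and the helper names are assumptions, not taken from the project's actual code), the two sides can be expressed with Python's standard `http.cookies` module:

```python
from http.cookies import SimpleCookie

def build_login_cookie(username: str, max_age_seconds: int = 3600) -> str:
    """Build a Set-Cookie header value that keeps a user logged in.

    The tests above imply a cookie storing the account's username that
    expires after one hour; the cookie name here is an assumption.
    """
    cookie = SimpleCookie()
    cookie["username"] = username
    cookie["username"]["max-age"] = max_age_seconds  # one hour by default
    cookie["username"]["path"] = "/"
    return cookie["username"].OutputString()

def is_logged_in(request_cookie_header: str) -> bool:
    """Redirect-to-homepage check: true iff the username cookie is
    present in the incoming Cookie header (test 2 deletes it to force
    the login page again)."""
    cookie = SimpleCookie()
    cookie.load(request_cookie_header)
    return "username" in cookie
```

Deleting the cookie in the browser's devtools (as in test 2) simply makes `is_logged_in` return false on the next request, which is why the login page reappears.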
|
38,489
| 15,707,528,145
|
IssuesEvent
|
2021-03-26 19:01:28
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Add Missing Schools to Schools Point/Polygon Layers
|
Service: Geo Type: Data Workgroup: AMD
|
Charles Z. called me today when he was working in the field adding new SZB. He keeps encountering locations where the School point/polygon is missing. He told me that Kenny M. in the MMC dispatches him to various locations to work on the SZB.
|
1.0
|
Add Missing Schools to Schools Point/Polygon Layers - Charles Z. called me today when he was working in the field adding new SZB. He keeps encountering locations where the School point/polygon is missing. He told me that Kenny M. in the MMC dispatches him to various locations to work on the SZB.
|
non_process
|
add missing schools to schools point polygon layers charles z called me today when he was working in the field adding new szb he keeps encountering locations where the school point polygon is missing he told me that kenny m in the mmc dispatches him to various locations to work on the szb
| 0
|
312,616
| 26,873,403,986
|
IssuesEvent
|
2023-02-04 18:55:30
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Teste de generalizacao para a tag Informações Institucionais - Leis Municipais - Rio Casca
|
generalization test development template - Betha (26) tag - Informações Institucionais subtag - Leis Municipais
|
DoD: Realizar o teste de Generalização do validador da tag Informações Institucionais - Leis Municipais para o Município de Rio Casca.
|
1.0
|
Teste de generalizacao para a tag Informações Institucionais - Leis Municipais - Rio Casca - DoD: Realizar o teste de Generalização do validador da tag Informações Institucionais - Leis Municipais para o Município de Rio Casca.
|
non_process
|
teste de generalizacao para a tag informações institucionais leis municipais rio casca dod realizar o teste de generalização do validador da tag informações institucionais leis municipais para o município de rio casca
| 0
|
710,547
| 24,422,299,316
|
IssuesEvent
|
2022-10-05 21:34:31
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
[Checkpoint] CheckpointSequential to support non-reentrant checkpoint
|
high priority oncall: distributed module: checkpoint
|
### 🚀 The feature, motivation and pitch
There are some use cases that use `checkpoint_sequential` but are limited due to lack of some features in reentrant checkpoint, we should enable non-reentrant here as well.
cc @albanD
### Alternatives
_No response_
### Additional context
_No response_
|
1.0
|
[Checkpoint] CheckpointSequential to support non-reentrant checkpoint - ### 🚀 The feature, motivation and pitch
There are some use cases that use `checkpoint_sequential` but are limited due to lack of some features in reentrant checkpoint, we should enable non-reentrant here as well.
cc @albanD
### Alternatives
_No response_
### Additional context
_No response_
|
non_process
|
checkpointsequential to support non reentrant checkpoint 🚀 the feature motivation and pitch there are some use cases that use checkpoint sequential but are limited due to lack of some features in reentrant checkpoint we should enable non reentrant here as well cc alband alternatives no response additional context no response
| 0
|
11,471
| 14,333,507,521
|
IssuesEvent
|
2020-11-27 06:01:20
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
Support more scan details
|
feature/accept feature/reviewing sig/coprocessor type/feature-request
|
## Feature Request
### Is your feature request related to a problem? Please describe:
Currently the scan detail returned to TiDB is not conforming the documentation, and it does not contain enough information as well.
Kvproto PR https://github.com/pingcap/kvproto/pull/600 adds more fields to the scan detail structure in the protocol, so that we can add more info into it.
### Describe the feature you'd like:
Introduce the following fields, in TiKV slow query, as well as when returning to TiDB;
```rust
uint64 processed_versions = 1;
uint64 total_versions = 2;
uint64 rocksdb_delete_skipped_count = 3;
uint64 rocksdb_key_skipped_count = 4;
uint64 rocksdb_block_cache_hit_count = 5;
uint64 rocksdb_block_read_count = 6;
uint64 rocksdb_block_read_byte = 7;
```
- Processed_versions means the number of visible versions. For example, deleted versions are not included.
- Total_versions means the number of all scanned versions Deleted versions are included.
- The rest fields: See [RocksDB PerfContext Wiki](https://github.com/facebook/rocksdb/wiki/Perf-Context-and-IO-Stats-Context)
In order to keep compatibility with old TiDB, we still need to preserve the original return data structure (and fill data for it).
|
1.0
|
Support more scan details - ## Feature Request
### Is your feature request related to a problem? Please describe:
Currently the scan detail returned to TiDB is not conforming the documentation, and it does not contain enough information as well.
Kvproto PR https://github.com/pingcap/kvproto/pull/600 adds more fields to the scan detail structure in the protocol, so that we can add more info into it.
### Describe the feature you'd like:
Introduce the following fields, in TiKV slow query, as well as when returning to TiDB;
```rust
uint64 processed_versions = 1;
uint64 total_versions = 2;
uint64 rocksdb_delete_skipped_count = 3;
uint64 rocksdb_key_skipped_count = 4;
uint64 rocksdb_block_cache_hit_count = 5;
uint64 rocksdb_block_read_count = 6;
uint64 rocksdb_block_read_byte = 7;
```
- Processed_versions means the number of visible versions. For example, deleted versions are not included.
- Total_versions means the number of all scanned versions Deleted versions are included.
- The rest fields: See [RocksDB PerfContext Wiki](https://github.com/facebook/rocksdb/wiki/Perf-Context-and-IO-Stats-Context)
In order to keep compatibility with old TiDB, we still need to preserve the original return data structure (and fill data for it).
|
process
|
support more scan details feature request is your feature request related to a problem please describe currently the scan detail returned to tidb is not conforming the documentation and it does not contain enough information as well kvproto pr adds more fields to the scan detail structure in the protocol so that we can add more info into it describe the feature you d like introduce the following fields in tikv slow query as well as when returning to tidb rust processed versions total versions rocksdb delete skipped count rocksdb key skipped count rocksdb block cache hit count rocksdb block read count rocksdb block read byte processed versions means the number of visible versions for example deleted versions are not included total versions means the number of all scanned versions deleted versions are included the rest fields see in order to keep compatibility with old tidb we still need to preserve the original return data structure and fill data for it
| 1
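The seven proto fields quoted in the record above map naturally onto a plain data structure on the consumer side. The sketch below mirrors those field names in a Python dataclass; the `block_cache_hit_ratio` helper is an illustrative assumption (it is not part of the kvproto protocol), showing one way the RocksDB counters could be combined:

```python
from dataclasses import dataclass

@dataclass
class ScanDetailV2:
    """Mirror of the kvproto fields quoted above (field numbers 1-7)."""
    processed_versions: int = 0        # visible versions only
    total_versions: int = 0            # all scanned versions, deleted included
    rocksdb_delete_skipped_count: int = 0
    rocksdb_key_skipped_count: int = 0
    rocksdb_block_cache_hit_count: int = 0
    rocksdb_block_read_count: int = 0
    rocksdb_block_read_byte: int = 0

    def block_cache_hit_ratio(self) -> float:
        """Illustrative helper (not in the protocol): fraction of block
        accesses served from the RocksDB block cache."""
        total = (self.rocksdb_block_cache_hit_count
                 + self.rocksdb_block_read_count)
        return self.rocksdb_block_cache_hit_count / total if total else 0.0
```

Since deleted versions count toward `total_versions` but not `processed_versions`, the former is always at least the latter for a given scan.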
|
796
| 3,275,506,440
|
IssuesEvent
|
2015-10-26 15:49:39
|
grafeo/grafeo
|
https://api.github.com/repos/grafeo/grafeo
|
opened
|
Drawing Functions: Circle
|
Component: Image Processing priority: high
|
- Input: The array which will have the circle
- Center: The center of the circle, in image coordinates
- Radius: The radius of the circle
- Color: The color of the circle
- Thickness: The thickness of the line of the circle. Put negative values for filled circle
- LineType: The line
- Shift: Fractional bits shift
|
1.0
|
Drawing Functions: Circle - - Input: The array which will have the circle
- Center: The center of the circle, in image coordinates
- Radius: The radius of the circle
- Color: The color of the circle
- Thickness: The thickness of the line of the circle. Put negative values for filled circle
- LineType: The line
- Shift: Fractional bits shift
|
process
|
drawing functions circle input the array which will have the circle center the center of the circle in image coordinates radius the radius of the circle color the color of the circle thickness the thickness of the line of the circle put negative values for filled circle linetype the line shift fractional bits shift
| 1
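The parameter list in the record above (input array, center in image coordinates, radius, color, thickness, line type, shift) matches the usual circle-drawing primitive. A minimal sketch of the outline case is the midpoint circle algorithm below; thickness, line type, and the fractional-bit shift are deliberately left out, so this is an assumption-laden illustration rather than the project's actual implementation:

```python
def draw_circle(image, center, radius, color):
    """Minimal sketch of the circle primitive described above, using the
    midpoint circle algorithm. Outline only: thickness, line type, and
    fractional-bit shift are omitted.

    image  -- 2D list of rows, modified in place
    center -- (cx, cy) in image coordinates
    radius -- circle radius in pixels
    color  -- value written into the array
    """
    cx, cy = center
    h, w = len(image), len(image[0])

    def put(x, y):
        # clip to the array bounds so partially visible circles are safe
        if 0 <= x < w and 0 <= y < h:
            image[y][x] = color

    x, y = radius, 0
    err = 1 - radius
    while x >= y:
        # plot the eight symmetric octant points for (x, y)
        for dx, dy in ((x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)):
            put(cx + dx, cy + dy)
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
```

A filled circle (the negative-thickness case in the spec) would instead draw horizontal spans between the symmetric x-extents at each y, which is a small variation on the same loop.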
|
90,054
| 25,957,983,859
|
IssuesEvent
|
2022-12-18 13:47:24
|
jimcarreer/dinao
|
https://api.github.com/repos/jimcarreer/dinao
|
closed
|
Make release process less painful
|
documentation build
|
Either document it or cobble some helper scripts to reduce release tedium.
|
1.0
|
Make release process less painful - Either document it or cobble some helper scripts to reduce release tedium.
|
non_process
|
make release process less painful either document it or cobble some helper scripts to reduce release tedium
| 0
|
3,272
| 6,348,115,547
|
IssuesEvent
|
2017-07-28 09:07:10
|
w3c/html
|
https://api.github.com/repos/w3c/html
|
closed
|
CFC: Move HTML5.1 2nd edition to PR
|
process
|
This is a Call For Consensus (CFC) to move [HTML5.1 2nd edition](https://rawgit.com/w3c/html/html5.1-2/single-page.html) to Proposed Recommendation (PR).
Two editorial changes were made to the spec after it became a Candidate Recommendation (recorded as issues #956 and #960 ), otherwise the spec remains stable.
Please respond to this CFC by the end of day on Thursday 27th July 2017. If you choose not to respond to this CFC it will be taken as implicit support for the proposal. Your opinion is important though, so please take time to actively respond.
|
1.0
|
CFC: Move HTML5.1 2nd edition to PR - This is a Call For Consensus (CFC) to move [HTML5.1 2nd edition](https://rawgit.com/w3c/html/html5.1-2/single-page.html) to Proposed Recommendation (PR).
Two editorial changes were made to the spec after it became a Candidate Recommendation (recorded as issues #956 and #960 ), otherwise the spec remains stable.
Please respond to this CFC by the end of day on Thursday 27th July 2017. If you choose not to respond to this CFC it will be taken as implicit support for the proposal. Your opinion is important though, so please take time to actively respond.
|
process
|
cfc move edition to pr this is a call for consensus cfc to move to proposed recommendation pr two editorial changes were made to the spec after it became a candidate recommendation recorded as issues and otherwise the spec remains stable please respond to this cfc by the end of day on thursday july if you choose not to respond to this cfc it will be taken as implicit support for the proposal your opinion is important though so please take time to actively respond
| 1
|
13,629
| 16,240,382,404
|
IssuesEvent
|
2021-05-07 08:49:13
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Error: [introspection-engine\connectors\sql-introspection-connector\src\introspection_helpers.rs:188:64] called `Option::unwrap()` on a `None` value
|
bug/1-repro-available kind/bug process/candidate team/migrations
|
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.21.2`
Binary Version: `e421996c87d5f3c8f7eeadd502d4ad402c89464d`
Report: https://prisma-errors.netlify.app/report/13277
OS: `x64 win32 10.0.19041`
JS Stacktrace:
```
Error: [introspection-engine\connectors\sql-introspection-connector\src\introspection_helpers.rs:188:64] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (<...>\backend\node_modules\prisma\build\index.js:39953:28)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.EventEmitter.emit (domain.js:467:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
```
Rust Stacktrace:
```
0: <unknown>
1: <unknown>
2: <unknown>
3: <unknown>
4: <unknown>
5: <unknown>
6: <unknown>
7: <unknown>
8: <unknown>
9: <unknown>
10: <unknown>
11: <unknown>
12: <unknown>
13: <unknown>
14: <unknown>
15: <unknown>
16: <unknown>
17: <unknown>
18: <unknown>
19: <unknown>
20: <unknown>
21: <unknown>
22: <unknown>
23: <unknown>
24: <unknown>
25: <unknown>
26: <unknown>
27: BaseThreadInitThunk
28: RtlUserThreadStart
```
|
1.0
|
Error: [introspection-engine\connectors\sql-introspection-connector\src\introspection_helpers.rs:188:64] called `Option::unwrap()` on a `None` value - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.21.2`
Binary Version: `e421996c87d5f3c8f7eeadd502d4ad402c89464d`
Report: https://prisma-errors.netlify.app/report/13277
OS: `x64 win32 10.0.19041`
JS Stacktrace:
```
Error: [introspection-engine\connectors\sql-introspection-connector\src\introspection_helpers.rs:188:64] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (<...>\backend\node_modules\prisma\build\index.js:39953:28)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.EventEmitter.emit (domain.js:467:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
```
Rust Stacktrace:
```
0: <unknown>
1: <unknown>
2: <unknown>
3: <unknown>
4: <unknown>
5: <unknown>
6: <unknown>
7: <unknown>
8: <unknown>
9: <unknown>
10: <unknown>
11: <unknown>
12: <unknown>
13: <unknown>
14: <unknown>
15: <unknown>
16: <unknown>
17: <unknown>
18: <unknown>
19: <unknown>
20: <unknown>
21: <unknown>
22: <unknown>
23: <unknown>
24: <unknown>
25: <unknown>
26: <unknown>
27: BaseThreadInitThunk
28: RtlUserThreadStart
```
|
process
|
error called option unwrap on a none value command prisma introspect version binary version report os js stacktrace error called option unwrap on a none value at childprocess backend node modules prisma build index js at childprocess emit events js at childprocess eventemitter emit domain js at process childprocess handle onexit internal child process js rust stacktrace basethreadinitthunk rtluserthreadstart
| 1
|
2,455
| 3,690,100,085
|
IssuesEvent
|
2016-02-25 18:46:05
|
citizencode/swarmbot
|
https://api.github.com/repos/citizencode/swarmbot
|
opened
|
refactor rewards, reward_types, and amounts
|
infrastructure
|
- [ ] reward links to reward_type
- [ ] reward#suggested_amount should change to #amount
- [ ] remove reward#amount
|
1.0
|
refactor rewards, reward_types, and amounts - - [ ] reward links to reward_type
- [ ] reward#suggested_amount should change to #amount
- [ ] remove reward#amount
|
non_process
|
refactor rewards reward types and amounts reward links to reward type reward suggested amount should change to amount remove reward amount
| 0
|
513,830
| 14,927,291,025
|
IssuesEvent
|
2021-01-24 14:56:24
|
the-hyjal-project/bugtracker
|
https://api.github.com/repos/the-hyjal-project/bugtracker
|
closed
|
NPC pets can be aggroed when they should be evading
|
Core Fixed High Priority! Player Player's Pet
|
**Describe Your Issue**
Pets of NPCs can be aggroed again after evading, while they're running back to their spawn or master. If the pet is close to you when beginning to run back, it will immediately re-aggro you, but you can also re-aggro it by attacking while it's running back. I have only tested this with [Razormane Wolf](https://classic.wowhead.com/npc=3939/razormane-wolf)/[Razormane Hunter](https://classic.wowhead.com/npc=3265/razormane-hunter) in Barrens, and [Voidwalker Minion](https://classic.wowhead.com/npc=8996/voidwalker-minion)/[Burning Blade Apprentice](https://classic.wowhead.com/npc=3198/burning-blade-apprentice) in Durotar.
**Steps To Reproduce**
1. Aggro either of the mobs I mentioned above, or their masters.
2. Run away until they de-aggro and begin running back to their spawn.
3. If the pet is close to you, it will immediately re-aggro you, but if not, hit it with an attack and it will aggro you. Quite easy to pull off as hunter.
Demonstrations:
[Voidwalker](https://streamable.com/a6ekij)
[Wolf](https://streamable.com/7r9o34)
[Wolf never stops chasing](https://streamable.com/d11egu)
[Earlier example with Feign Death](https://streamable.com/6iw6bh)
**Expected Behaviour**
The pet should evade until it runs back to its spawn point or master. I have no clear proof that it shouldn't be this way, however I think you'll agree it appears to be a bug. If the pet is close to you while chasing it'll simply chase you forever because it keeps re-aggroing you whenever it's supposed to return to its spawn. It could also be used to kite the mob.
I have tested with [Razormane Wolf](https://classic.wowhead.com/npc=3939/razormane-wolf), and [Voidwalker Minion](https://classic.wowhead.com/npc=8996/voidwalker-minion). I don't know if this is a general issue, affecting all NPC pets, or if it's an issue to some specific ones. As I level up on the PTR I will try and test this with more mobs, but you are capable of testing this with your magical GM powers, that'd be easier.
Here I will list some examples of NPCs, and their pets, that could be worth testing:
- [Savannah Matriarch](https://classic.wowhead.com/npc=3416/savannah-matriarch) -> [Savannah Cub](https://classic.wowhead.com/npc=5766/savannah-cub)
- [Kolkar Pack Runner](https://classic.wowhead.com/npc=3274/kolkar-pack-runner) -> [Kolkar Packhound](https://classic.wowhead.com/npc=4316/kolkar-packhound)
- [Bloodscalp Beastmaster](https://classic.wowhead.com/npc=699/bloodscalp-beastmaster) -> [Bloodscalp Tiger](https://classic.wowhead.com/npc=698/bloodscalp-tiger)
- [Magram Pack Runner](https://classic.wowhead.com/npc=4643/magram-pack-runner) -> [Magram Bonepaw](https://classic.wowhead.com/npc=4662/magram-bonepaw)
- [Skullsplitter Beastmaster](https://classic.wowhead.com/npc=784/skullsplitter-beastmaster) -> [Skullsplitter Panther](https://classic.wowhead.com/npc=756/skullsplitter-panther)
Warlock minions ([Imp](https://classic.wowhead.com/npc=12922/imp-minion), [VW](https://classic.wowhead.com/npc=8996/voidwalker-minion), [Demon girl](https://classic.wowhead.com/npc=10928/succubus-minion), [Demon dog](https://classic.wowhead.com/npc=9556/felhound-minion)):
- [Blackrock Summoner](https://classic.wowhead.com/npc=4463/blackrock-summoner)
- [Bloodsail Warlock](https://classic.wowhead.com/npc=1564/bloodsail-warlock)
- [Burning Blade Summoner](https://classic.wowhead.com/npc=4668/burning-blade-summoner)
- [Dalaran Conjuror](https://classic.wowhead.com/npc=1915/dalaran-conjuror)
- [Dark Strand Adept](https://classic.wowhead.com/npc=3728/dark-strand-adept)
- [Dark Strand Cultist](https://classic.wowhead.com/npc=3725/dark-strand-cultist)
- [Dark Strand Voidcaller](https://classic.wowhead.com/npc=2337/dark-strand-voidcaller)
- [Haldarr Felsworn](https://classic.wowhead.com/npc=6127/haldarr-felsworn)
- [Jaedenar Warlock](https://classic.wowhead.com/npc=7120/jaedenar-warlock)
- [Syndicate Conjuror](https://classic.wowhead.com/npc=2590/syndicate-conjuror)
- [Vilebranch Shadowcaster](https://classic.wowhead.com/npc=2642/vilebranch-shadowcaster)
- [Wastewander Shadow Mage](https://classic.wowhead.com/npc=5617/wastewander-shadow-mage)
- [Witherbark Shadowcaster](https://classic.wowhead.com/npc=2553/witherbark-shadowcaster)
There are also mobs with pets inside dungeons, but this is more of an issue outside dungeons, since mobs are supposed to not lose aggro in dungeons. Unless it's exploited somehow by people exiting the dungeon or whatever.
|
1.0
|
NPC pets can be aggroed when they should be evading - **Describe Your Issue**
Pets of NPCs can be aggroed again after evading, while they're running back to their spawn or master. If the pet is close to you when beginning to run back, it will immediately re-aggro you, but you can also re-aggro it by attacking while it's running back. I have only tested this with [Razormane Wolf](https://classic.wowhead.com/npc=3939/razormane-wolf)/[Razormane Hunter](https://classic.wowhead.com/npc=3265/razormane-hunter) in Barrens, and [Voidwalker Minion](https://classic.wowhead.com/npc=8996/voidwalker-minion)/[Burning Blade Apprentice](https://classic.wowhead.com/npc=3198/burning-blade-apprentice) in Durotar.
**Steps To Reproduce**
1. Aggro either of the mobs I mentioned above, or their masters.
2. Run away until they de-aggro and begin running back to their spawn.
3. If the pet is close to you, it will immediately re-aggro you, but if not, hit it with an attack and it will aggro you. Quite easy to pull off as hunter.
Demonstrations:
[Voidwalker](https://streamable.com/a6ekij)
[Wolf](https://streamable.com/7r9o34)
[Wolf never stops chasing](https://streamable.com/d11egu)
[Earlier example with Feign Death](https://streamable.com/6iw6bh)
**Expected Behaviour**
The pet should evade until it runs back to its spawn point or master. I have no clear proof that it shouldn't be this way, however I think you'll agree it appears to be a bug. If the pet is close to you while chasing it'll simply chase you forever because it keeps re-aggroing you whenever it's supposed to return to its spawn. It could also be used to kite the mob.
I have tested with [Razormane Wolf](https://classic.wowhead.com/npc=3939/razormane-wolf), and [Voidwalker Minion](https://classic.wowhead.com/npc=8996/voidwalker-minion). I don't know if this is a general issue, affecting all NPC pets, or if it's an issue to some specific ones. As I level up on the PTR I will try and test this with more mobs, but you are capable of testing this with your magical GM powers, that'd be easier.
Here I will list some examples of NPCs, and their pets, that could be worth testing:
- [Savannah Matriarch](https://classic.wowhead.com/npc=3416/savannah-matriarch) -> [Savannah Cub](https://classic.wowhead.com/npc=5766/savannah-cub)
- [Kolkar Pack Runner](https://classic.wowhead.com/npc=3274/kolkar-pack-runner) -> [Kolkar Packhound](https://classic.wowhead.com/npc=4316/kolkar-packhound)
- [Bloodscalp Beastmaster](https://classic.wowhead.com/npc=699/bloodscalp-beastmaster) -> [Bloodscalp Tiger](https://classic.wowhead.com/npc=698/bloodscalp-tiger)
- [Magram Pack Runner](https://classic.wowhead.com/npc=4643/magram-pack-runner) -> [Magram Bonepaw](https://classic.wowhead.com/npc=4662/magram-bonepaw)
- [Skullsplitter Beastmaster](https://classic.wowhead.com/npc=784/skullsplitter-beastmaster) -> [Skullsplitter Panther](https://classic.wowhead.com/npc=756/skullsplitter-panther)
Warlock minions ([Imp](https://classic.wowhead.com/npc=12922/imp-minion), [VW](https://classic.wowhead.com/npc=8996/voidwalker-minion), [Demon girl](https://classic.wowhead.com/npc=10928/succubus-minion), [Demon dog](https://classic.wowhead.com/npc=9556/felhound-minion)):
- [Blackrock Summoner](https://classic.wowhead.com/npc=4463/blackrock-summoner)
- [Bloodsail Warlock](https://classic.wowhead.com/npc=1564/bloodsail-warlock)
- [Burning Blade Summoner](https://classic.wowhead.com/npc=4668/burning-blade-summoner)
- [Dalaran Conjuror](https://classic.wowhead.com/npc=1915/dalaran-conjuror)
- [Dark Strand Adept](https://classic.wowhead.com/npc=3728/dark-strand-adept)
- [Dark Strand Cultist](https://classic.wowhead.com/npc=3725/dark-strand-cultist)
- [Dark Strand Voidcaller](https://classic.wowhead.com/npc=2337/dark-strand-voidcaller)
- [Haldarr Felsworn](https://classic.wowhead.com/npc=6127/haldarr-felsworn)
- [Jaedenar Warlock](https://classic.wowhead.com/npc=7120/jaedenar-warlock)
- [Syndicate Conjuror](https://classic.wowhead.com/npc=2590/syndicate-conjuror)
- [Vilebranch Shadowcaster](https://classic.wowhead.com/npc=2642/vilebranch-shadowcaster)
- [Wastewander Shadow Mage](https://classic.wowhead.com/npc=5617/wastewander-shadow-mage)
- [Witherbark Shadowcaster](https://classic.wowhead.com/npc=2553/witherbark-shadowcaster)
There are also mobs with pets inside dungeons, but this is more of an issue outside dungeons, since mobs are supposed to not lose aggro in dungeons. Unless it's exploited somehow by people exiting the dungeon or whatever.
|
non_process
|
npc pets can be aggroed when they should be evading describe your issue pets of npcs can be aggroed again after evading while they re running back to their spawn or master if the pet is close to you when beginning to run back it will immediately re aggro you but you can also re aggro it by attacking while it s running back i have only tested this with in barrens and in durotar steps to reproduce aggro either of the mobs i mentioned above or their masters run away until they de aggro and begin running back to their spawn if the pet is close to you it will immediately re aggro you but if not hit it with an attack and it will aggro you quite easy to pull off as hunter demonstrations expected behaviour the pet should evade until it runs back to its spawn point or master i have no clear proof that it shouldn t be this way however i think you ll agree it appears to be a bug if the pet is close to you while chasing it ll simply chase you forever because it keeps re aggroing you whenever it s supposed to return to its spawn it could also be used to kite the mob i have tested with and i don t know if this is a general issue affecting all npc pets or if it s an issue to some specific ones as i level up on the ptr i will try and test this with more mobs but you are capable of testing this with your magical gm powers that d be easier here i will list some examples of npcs and their pets that could be worth testing warlock minions there are also mobs with pets inside dungeons but this is more of an issue outside dungeons since mobs are supposed to not lose aggro in dungeons unless it s exploited somehow by people exiting the dungeon or whatever
| 0
|
15,705
| 9,022,263,166
|
IssuesEvent
|
2019-02-07 00:45:20
|
deeplearning4j/deeplearning4j
|
https://api.github.com/repos/deeplearning4j/deeplearning4j
|
closed
|
poor performance of yolo model
|
Performance
|
Hello,
I have some poor performance on the yolo model compared to native darknet yolo implementation.
My testing environment:
Graphic card:
Nividea GTX 1050 TI 4GB
OS: Windows7
Driver:
Nividea Cuda 9.0
Cudnn 7.0.5
Source code for reproducing
https://github.com/holger-prause/dl4j_yolo
output gist
https://gist.github.com/holger-prause/081fb0133a65986d79df8c4e26a167f0
The output is around 500 ms for the yolov2 model
--------------------------------------------------------------------
For comparison with darknet yolo (same machine - yolov2 model - but slightly newer driver -required for building)
Driver:
Nividea Cuda 9.2
Cudnn 7.1.4
darknet yolo
https://github.com/AlexeyAB
command used:
darknet.exe detector test data/coco.data cfg/yolov2.cfg yolov2.weights -i 0 -thresh 0.25 dog.jpg -ext_output
output from command line
Total BFLOPS 29.475
Loading weights from yolov2.weights...
seen 32
Done!
dog.jpg: Predicted in 0.032002 seconds.
bicycle: 82% (left: 119 top: 140 w: 451 h: 280)
dog: 82% (left: 125 top: 205 w: 195 h: 328)
bicycle: 25% (left: 132 top: 143 w: 121 h: 82)
truck: 74% (left: 447 top: 84 w: 258 h: 90)
|
True
|
poor performance of yolo model - Hello,
I have some poor performance on the yolo model compared to native darknet yolo implementation.
My testing environment:
Graphic card:
Nividea GTX 1050 TI 4GB
OS: Windows7
Driver:
Nividea Cuda 9.0
Cudnn 7.0.5
Source code for reproducing
https://github.com/holger-prause/dl4j_yolo
output gist
https://gist.github.com/holger-prause/081fb0133a65986d79df8c4e26a167f0
The output is around 500 ms for the yolov2 model
--------------------------------------------------------------------
For comparison with darknet yolo (same machine - yolov2 model - but slightly newer driver -required for building)
Driver:
Nividea Cuda 9.2
Cudnn 7.1.4
darknet yolo
https://github.com/AlexeyAB
command used:
darknet.exe detector test data/coco.data cfg/yolov2.cfg yolov2.weights -i 0 -thresh 0.25 dog.jpg -ext_output
output from command line
Total BFLOPS 29.475
Loading weights from yolov2.weights...
seen 32
Done!
dog.jpg: Predicted in 0.032002 seconds.
bicycle: 82% (left: 119 top: 140 w: 451 h: 280)
dog: 82% (left: 125 top: 205 w: 195 h: 328)
bicycle: 25% (left: 132 top: 143 w: 121 h: 82)
truck: 74% (left: 447 top: 84 w: 258 h: 90)
|
non_process
|
poor performance of yolo model hello i have some poor performance on the yolo model compared to native darknet yolo implementation my testing environment graphic card nividea gtx ti os driver nividea cuda cudnn source code for reproducing output gist the output is around ms for the model for comparison with darknet yolo same machine model but slightly newer driver required for building driver nividea cuda cudnn darknet yolo command used darknet exe detector test data coco data cfg cfg weights i thresh dog jpg ext output output from command line total bflops loading weights from weights seen done dog jpg predicted in seconds bicycle left top w h dog left top w h bicycle left top w h truck left top w h
| 0
|
22,393
| 31,142,287,391
|
IssuesEvent
|
2023-08-16 01:44:23
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Flaky test: Different value of snapshot "e2e visit / low response timeout / fails when network connection immediately fails"
|
OS: linux process: flaky test topic: flake ❄️ stage: flake stale
|
### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/41620/workflows/65ca9aaa-c9ec-43b0-b68c-22b5e5fe56d5/jobs/1724403/tests#failed-test-0
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/system-tests/test/visit_spec.js#L164
### Analysis
n/a
### Cypress Version
10.4.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
1.0
|
Flaky test: Different value of snapshot "e2e visit / low response timeout / fails when network connection immediately fails" - ### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/41620/workflows/65ca9aaa-c9ec-43b0-b68c-22b5e5fe56d5/jobs/1724403/tests#failed-test-0
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/system-tests/test/visit_spec.js#L164
### Analysis
n/a
### Cypress Version
10.4.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
process
|
flaky test different value of snapshot visit low response timeout fails when network connection immediately fails link to dashboard or circleci failure link to failing test in github analysis n a cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
| 1
|
97,215
| 16,218,753,579
|
IssuesEvent
|
2021-05-06 01:07:27
|
Reid-Turner/Practice
|
https://api.github.com/repos/Reid-Turner/Practice
|
opened
|
CVE-2015-8857 (High) detected in uglify-js-1.3.2.tgz
|
security vulnerability
|
## CVE-2015-8857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-1.3.2.tgz</b></p></summary>
<p>JavaScript parser and compressor/beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.2.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.2.tgz</a></p>
<p>Path to dependency file: Practice/package.json</p>
<p>Path to vulnerable library: Practice/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- :x: **uglify-js-1.3.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857>CVE-2015-8857</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2018-12-15</p>
<p>Fix Resolution: v2.4.24</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"1.3.2","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"uglify-js:1.3.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.4.24"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2015-8857","vulnerabilityDetails":"The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2015-8857 (High) detected in uglify-js-1.3.2.tgz - ## CVE-2015-8857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-1.3.2.tgz</b></p></summary>
<p>JavaScript parser and compressor/beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.2.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.2.tgz</a></p>
<p>Path to dependency file: Practice/package.json</p>
<p>Path to vulnerable library: Practice/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- :x: **uglify-js-1.3.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857>CVE-2015-8857</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2018-12-15</p>
<p>Fix Resolution: v2.4.24</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"1.3.2","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"uglify-js:1.3.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.4.24"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2015-8857","vulnerabilityDetails":"The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in uglify js tgz cve high severity vulnerability vulnerable library uglify js tgz javascript parser and compressor beautifier toolkit library home page a href path to dependency file practice package json path to vulnerable library practice node modules uglify js package json dependency hierarchy x uglify js tgz vulnerable library found in base branch main vulnerability details the uglify js package before for node js does not properly account for non boolean values when rewriting boolean expressions which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten javascript publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree uglify js isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the uglify js package before for node js does not properly account for non boolean values when rewriting boolean expressions which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten javascript vulnerabilityurl
| 0
|
161,028
| 6,107,724,433
|
IssuesEvent
|
2017-06-21 08:49:12
|
jupyter/notebook
|
https://api.github.com/repos/jupyter/notebook
|
closed
|
Sandboxed iframe in the ViewHandler not rendering PDFs
|
component:File Browser priority:High type:Bug
|
Hey,
The new ViewHandler in v5.0 doesn't seem to be rendering pdfs either with Chrome or Safari.
Chrome 57.0.2987.133
Safari 10.1 (12603.1.30.0.34)
Actually it seems to be a security issue that was recently corrected but now they are not allowing sandboxed iframes to execute scripts. Apparently it's only when the pdf is loaded in a different tab.
Removing the sandbox directive from the iframe makes it render properly (but the iframe is sandboxed for a reason, I guess).
|
1.0
|
Sandboxed iframe in the ViewHandler not rendering PDFs - Hey,
The new ViewHandler in v5.0 doesn't seem to be rendering pdfs either with Chrome or Safari.
Chrome 57.0.2987.133
Safari 10.1 (12603.1.30.0.34)
Actually it seems to be a security issue that was recently corrected but now they are not allowing sandboxed iframes to execute scripts. Apparently it's only when the pdf is loaded in a different tab.
Removing the sandbox directive from the iframe makes it render properly (but the iframe is sandboxed for a reason, I guess).
|
non_process
|
sandboxed iframe in the viewhandler not rendering pdfs hey the new viewhandler in doesn t seem to be rendering pdfs either with chrome or safari chrome safari actually it seems to be a security issue that was recently corrected but now they are not allowing sandboxed iframes to execute scripts apparently it s only when the pdf is loaded in a different tab removing the sandbox directive from the iframe makes it render properly but the iframe is sandboxed for a reason i guess
| 0
|
6,701
| 9,814,779,770
|
IssuesEvent
|
2019-06-13 11:01:00
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
progress bar never ends for some batch mode operations
|
Bug Processing
|
Author Name: **Giovanni Manghi** (@gioman)
Original Redmine Issue: [22034](https://issues.qgis.org/issues/22034)
Affected QGIS version: 3.6.2
Redmine category:processing/core
---
For example the OGR tool to export data in PostGIS.
Actually the operation ends correctly, but the progress bar never reaches 100%.
|
1.0
|
progress bar never ends for some batch mode operations - Author Name: **Giovanni Manghi** (@gioman)
Original Redmine Issue: [22034](https://issues.qgis.org/issues/22034)
Affected QGIS version: 3.6.2
Redmine category:processing/core
---
For example the OGR tool to export data in PostGIS.
Actually the operation ends correctly, but the progress bar never reaches 100%.
|
process
|
progress bar never ends for some batch mode operations author name giovanni manghi gioman original redmine issue affected qgis version redmine category processing core for example the ogr tool to export data in postgis actually the operation ends correctly but the progress bar never reaches
| 1
|
90,701
| 26,171,906,428
|
IssuesEvent
|
2023-01-02 01:42:45
|
apache/beam
|
https://api.github.com/repos/apache/beam
|
closed
|
beam_PreCommit_PythonLint_Commit takes 50+ minutes to execute
|
build infra P3 bug jenkins
|
The beam_PreCommit_PythonLint_Commit phase takes 50+ minutes to execute as seen here:
[https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4252/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4252/) (53 minutes)
[https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4253/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4253/) (52 minutes)
According to the build blame report the mean time is ~10 minutes ([https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/buildTimeBlameReport/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/buildTimeBlameReport/))
Imported from Jira [BEAM-10255](https://issues.apache.org/jira/browse/BEAM-10255). Original Jira may contain additional context.
Reported by: tysonjh.
|
1.0
|
beam_PreCommit_PythonLint_Commit takes 50+ minutes to execute - The beam_PreCommit_PythonLint_Commit phase takes 50+ minutes to execute as seen here:
[https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4252/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4252/) (53 minutes)
[https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4253/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/4253/) (52 minutes)
According to the build blame report the mean time is ~10 minutes ([https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/buildTimeBlameReport/](https://builds.apache.org/job/beam_PreCommit_PythonLint_Commit/buildTimeBlameReport/))
Imported from Jira [BEAM-10255](https://issues.apache.org/jira/browse/BEAM-10255). Original Jira may contain additional context.
Reported by: tysonjh.
|
non_process
|
beam precommit pythonlint commit takes minutes to execute the beam precommit pythonlint commit phase takes minutes to execute as seen here minutes minutes according to the build blame report the mean time is minutes imported from jira original jira may contain additional context reported by tysonjh
| 0