| column | dtype | stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 7 – 112 |
| repo_url | stringlengths | 36 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 744 |
| labels | stringlengths | 4 – 574 |
| body | stringlengths | 9 – 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 – 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 188k |
| binary_label | int64 | 0 – 1 |
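In the rows below, `binary_label` is 1 wherever `label` is `process` and 0 wherever it is `non_process`. A minimal pandas sketch of that derivation (the toy frame below is illustrative, not loaded from the dataset):

```python
import pandas as pd

# Toy frame mirroring the label columns seen in the sample rows.
df = pd.DataFrame({
    "title": ["Flowdock tutorial", "Permission denied on tarantoolctl enter"],
    "label": ["process", "non_process"],
})

# Derive binary_label as the sample rows suggest: process -> 1, non_process -> 0.
df["binary_label"] = (df["label"] == "process").astype(int)

print(df["binary_label"].tolist())  # [1, 0]
```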
Row 18,523 · id 24,551,909,772 · IssuesEvent · created 2022-10-12 13:14:11
repo: GoogleCloudPlatform/fda-mystudies (https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies)
action: closed
title: [iOS] Dashboard > Statistics > UI issue for Day and Week tabs
labels: Bug P2 iOS Process: Fixed Process: Tested dev
**Actual:** First 2 letters of the month text is showing in blue colour **Expected:** Colour of the text should be in light grey colour and blue colour should be removed Issue not observed for Month tab ![Day](https://user-images.githubusercontent.com/60386291/191757316-e286cc67-c4f8-46bc-bef4-57a104779921.png) ![Month](https://user-images.githubusercontent.com/60386291/191757346-2c47322b-d37b-49e1-98b9-552ce5579514.png)
index: 2.0
[iOS] Dashboard > Statistics > UI issue for Day and Week tabs - **Actual:** First 2 letters of the month text is showing in blue colour **Expected:** Colour of the text should be in light grey colour and blue colour should be removed Issue not observed for Month tab ![Day](https://user-images.githubusercontent.com/60386291/191757316-e286cc67-c4f8-46bc-bef4-57a104779921.png) ![Month](https://user-images.githubusercontent.com/60386291/191757346-2c47322b-d37b-49e1-98b9-552ce5579514.png)
label: process
dashboard statistics ui issue for day and week tabs actual first letters of the month text is showing in blue colour expected colour of the text should be in light grey colour and blue colour should be removed issue not observed for month tab
binary_label: 1
Row 20,240 · id 26,850,367,781 · IssuesEvent · created 2023-02-03 10:32:37
repo: AvaloniaUI/Avalonia (https://api.github.com/repos/AvaloniaUI/Avalonia)
action: closed
title: extra space below textblock's text with wrapping enabled
labels: bug area-textprocessing
**Describe the bug** i noticed that there is an extra space below the textblock when i have wrapping enabled. **To Reproduce** Steps to reproduce the behavior: 1. add a border with MaxWidth and background color for easier observation. 2. inside the border, add a textblock with text wrapping enabled. 3. add long text. 4. observe the behavior. **Expected behavior** no extra space below the textblock. **Screenshots** ![image](https://user-images.githubusercontent.com/12814796/213076753-8d7941ee-27b6-4fbc-b807-c38475538f06.png) **Desktop (please complete the following information):** - OS: Windows 10 - Version 11.0.999-cibuild0028625-beta **Additional context** [reproduce.zip](https://github.com/AvaloniaUI/Avalonia/files/10441873/AvaloniaApplication.zip)
index: 1.0
extra space below textblock's text with wrapping enabled - **Describe the bug** i noticed that there is an extra space below the textblock when i have wrapping enabled. **To Reproduce** Steps to reproduce the behavior: 1. add a border with MaxWidth and background color for easier observation. 2. inside the border, add a textblock with text wrapping enabled. 3. add long text. 4. observe the behavior. **Expected behavior** no extra space below the textblock. **Screenshots** ![image](https://user-images.githubusercontent.com/12814796/213076753-8d7941ee-27b6-4fbc-b807-c38475538f06.png) **Desktop (please complete the following information):** - OS: Windows 10 - Version 11.0.999-cibuild0028625-beta **Additional context** [reproduce.zip](https://github.com/AvaloniaUI/Avalonia/files/10441873/AvaloniaApplication.zip)
label: process
extra space below textblock s text with wrapping enabled describe the bug i noticed that there is an extra space below the textblock when i have wrapping enabled to reproduce steps to reproduce the behavior add a border with maxwidth and background color for easier observation inside the border add a textblock with text wrapping enabled add long text observe the behavior expected behavior no extra space below the textblock screenshots desktop please complete the following information os windows version beta additional context
binary_label: 1
Row 9,786 · id 12,801,173,779 · IssuesEvent · created 2020-07-02 18:35:37
repo: solid/process (https://api.github.com/repos/solid/process)
action: closed
title: Github pull request application limiting Solid reach to talent
labels: process proposal
To apply to be a Solid panellist, editor, administrator or creator one needs a GitHub account and know how to navigate GitHub enough to submit a pull request. Because GitHub is a tool with specific professional and geographically distributed users these steps act as filters to the applications. Submitting a pull request is very public which is good because when the Solid Director approves and application it is possible to refer back to the approval. However, the public nature may also mean applicants hesitate to apply at all. My proposal is to include a text "If you are interested in applying to become a panellist, editor, administrator or creator then email info@solidproject.org with your CV and motivation letter." This email can be received by administrators and put forward to the director for review. When the application is reviewed positively one of the administrators can submit a pull request with the directors review to have the benefit of reference. The thinking behind this design is that we may tap into a wider and more diverse pool of talent.
index: 1.0
Github pull request application limiting Solid reach to talent - To apply to be a Solid panellist, editor, administrator or creator one needs a GitHub account and know how to navigate GitHub enough to submit a pull request. Because GitHub is a tool with specific professional and geographically distributed users these steps act as filters to the applications. Submitting a pull request is very public which is good because when the Solid Director approves and application it is possible to refer back to the approval. However, the public nature may also mean applicants hesitate to apply at all. My proposal is to include a text "If you are interested in applying to become a panellist, editor, administrator or creator then email info@solidproject.org with your CV and motivation letter." This email can be received by administrators and put forward to the director for review. When the application is reviewed positively one of the administrators can submit a pull request with the directors review to have the benefit of reference. The thinking behind this design is that we may tap into a wider and more diverse pool of talent.
label: process
github pull request application limiting solid reach to talent to apply to be a solid panellist editor administrator or creator one needs a github account and know how to navigate github enough to submit a pull request because github is a tool with specific professional and geographically distributed users these steps act as filters to the applications submitting a pull request is very public which is good because when the solid director approves and application it is possible to refer back to the approval however the public nature may also mean applicants hesitate to apply at all my proposal is to include a text if you are interested in applying to become a panellist editor administrator or creator then email info solidproject org with your cv and motivation letter this email can be received by administrators and put forward to the director for review when the application is reviewed positively one of the administrators can submit a pull request with the directors review to have the benefit of reference the thinking behind this design is that we may tap into a wider and more diverse pool of talent
binary_label: 1
Row 5,308 · id 3,560,939,702 · IssuesEvent · created 2016-01-23 12:45:51
repo: tarantool/tarantool (https://api.github.com/repos/tarantool/tarantool)
action: opened
title: Permission denied on tarantoolctl enter
labels: build
``` [buildslave@localhost ~]$ tarantoolctl enter example /usr/bin/tarantoolctl: Found example.lua in /etc/tarantool/instances.available /usr/bin/tarantoolctl: Connecting to /var/run/tarantool/example.control /usr/bin/tarantoolctl: unix/:/var/run/tarantool/example.control: Permission denied ``` My user was added to `tarantool` group: ``` [buildslave@localhost ~]$ id uid=1000(buildslave) gid=1000(buildslave) groups=1000(buildslave),992(tarantool) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 ``` A control socket should be group-readable: ``` ls -l /var/run/tarantool/ total 4 srwxr-xr-x. 1 tarantool tarantool 0 Jan 23 15:35 example.control -rw-r--r--. 1 tarantool tarantool 5 Jan 23 15:35 example.pid ``` Both RPM and Debian packages are affected.
index: 1.0
Permission denied on tarantoolctl enter - ``` [buildslave@localhost ~]$ tarantoolctl enter example /usr/bin/tarantoolctl: Found example.lua in /etc/tarantool/instances.available /usr/bin/tarantoolctl: Connecting to /var/run/tarantool/example.control /usr/bin/tarantoolctl: unix/:/var/run/tarantool/example.control: Permission denied ``` My user was added to `tarantool` group: ``` [buildslave@localhost ~]$ id uid=1000(buildslave) gid=1000(buildslave) groups=1000(buildslave),992(tarantool) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 ``` A control socket should be group-readable: ``` ls -l /var/run/tarantool/ total 4 srwxr-xr-x. 1 tarantool tarantool 0 Jan 23 15:35 example.control -rw-r--r--. 1 tarantool tarantool 5 Jan 23 15:35 example.pid ``` Both RPM and Debian packages are affected.
label: non_process
permission denied on tarantoolctl enter tarantoolctl enter example usr bin tarantoolctl found example lua in etc tarantool instances available usr bin tarantoolctl connecting to var run tarantool example control usr bin tarantoolctl unix var run tarantool example control permission denied my user was added to tarantool group id uid buildslave gid buildslave groups buildslave tarantool context unconfined u unconfined r unconfined t a control socket should be group readable ls l var run tarantool total srwxr xr x tarantool tarantool jan example control rw r r tarantool tarantool jan example pid both rpm and debian packages are affected
binary_label: 0
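The mode string in the tarantool listing above (`srwxr-xr-x`, i.e. 0755) can be decoded programmatically. Connecting to a Unix-domain socket requires write permission on the socket file, so a 0755 socket stays unreachable for group members even though they can read it. A small standard-library sketch (the temporary file stands in for the real control socket) of inspecting the group bits:

```python
import os
import stat
import tempfile

# Create a throwaway file and give it the 0755 permissions from the
# listing above (socket type aside), then inspect the group bits.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
os.chmod(path, 0o755)

mode = os.stat(path).st_mode
group_can_read = bool(mode & stat.S_IRGRP)
group_can_write = bool(mode & stat.S_IWGRP)

# True False: the group may read but not write, which is why the
# buildslave user in the tarantool group still gets Permission denied.
print(group_can_read, group_can_write)
os.unlink(path)
```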
Row 271,539 · id 29,515,732,506 · IssuesEvent · created 2023-06-04 13:14:37
repo: amaybaum-local/vprofile-project6 (https://api.github.com/repos/amaybaum-local/vprofile-project6)
action: opened
title: CVE-2020-14062 (High, reachable) detected in jackson-databind-2.9.10.4.jar
labels: Mend: dependency security vulnerability
## CVE-2020-14062 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /Users/alexmaybaum/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.4/jackson-databind-2.9.10.4.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.10.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/amaybaum-local/vprofile-project6/commit/59df3151ec29836b16779bec457178ffba859535">59df3151ec29836b16779bec457178ffba859535</a></p> <p>Found in base branch: <b>vp-rem</b></p> </p> </details> <p></p> <details><summary> <img src='https://whitesource-resources.whitesourcesoftware.com/viaRed.png' width=19 height=20> Reachability Analysis</summary> <p> This vulnerability is potentially used ``` com.visualpathit.account.validator.UserValidator (Application) -> org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration$1 (Extension) -> org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration (Extension) -> org.springframework.messaging.converter.MappingJackson2MessageConverter (Extension) -> com.fasterxml.jackson.databind.ObjectMapper (Extension) -> com.fasterxml.jackson.databind.deser.BeanDeserializerFactory (Extension) -> ❌ com.fasterxml.jackson.databind.jsontype.impl.SubTypeValidator (Vulnerable Component) ``` </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' 
width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2). <p>Publish Date: 2020-06-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-14062>CVE-2020-14062</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
index: True
CVE-2020-14062 (High, reachable) detected in jackson-databind-2.9.10.4.jar - ## CVE-2020-14062 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /Users/alexmaybaum/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.4/jackson-databind-2.9.10.4.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.10.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/amaybaum-local/vprofile-project6/commit/59df3151ec29836b16779bec457178ffba859535">59df3151ec29836b16779bec457178ffba859535</a></p> <p>Found in base branch: <b>vp-rem</b></p> </p> </details> <p></p> <details><summary> <img src='https://whitesource-resources.whitesourcesoftware.com/viaRed.png' width=19 height=20> Reachability Analysis</summary> <p> This vulnerability is potentially used ``` com.visualpathit.account.validator.UserValidator (Application) -> org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration$1 (Extension) -> org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration (Extension) -> org.springframework.messaging.converter.MappingJackson2MessageConverter (Extension) -> com.fasterxml.jackson.databind.ObjectMapper (Extension) -> com.fasterxml.jackson.databind.deser.BeanDeserializerFactory (Extension) -> ❌ com.fasterxml.jackson.databind.jsontype.impl.SubTypeValidator (Vulnerable Component) ``` </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' 
width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2). <p>Publish Date: 2020-06-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-14062>CVE-2020-14062</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
label: non_process
cve high reachable detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library users alexmaybaum repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch vp rem reachability analysis this vulnerability is potentially used com visualpathit account validator uservalidator application org springframework messaging simp config abstractmessagebrokerconfiguration extension org springframework messaging simp config abstractmessagebrokerconfiguration extension org springframework messaging converter extension com fasterxml jackson databind objectmapper extension com fasterxml jackson databind deser beandeserializerfactory extension ❌ com fasterxml jackson databind jsontype impl subtypevalidator vulnerable component vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com sun org apache xalan internal lib sql jndiconnectionpool aka publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue
binary_label: 0
Row 9,866 · id 12,880,631,426 · IssuesEvent · created 2020-07-12 07:13:52
repo: dotnet/runtime (https://api.github.com/repos/dotnet/runtime)
action: closed
title: ServiceController.ExecuteCommand() causes Exception (Cannot control service on computer '.'.)
labels: area-System.ServiceProcess
@Anipik ### Description When running a program as administrator and trying to send my service a command I get an Exception (Cannot control service on computer '.'.) . I am able to start/stop the service without any issues, but I am not able to execute a command to it. ### Configuration .NET versions tried: 8.0/4.8/4.5 on Windows Form/Console Application and used in Windows 10 Professional x64, this seems to happen in any project I make. Full code: ``` try { System.ServiceProcess.ServiceController service = new System.ServiceProcess.ServiceController("TestService"); if (service.Status == ServiceControllerStatus.Stopped) { service.Start(); service.WaitForStatus(ServiceControllerStatus.Running); } service.ExecuteCommand(100); // Causes Exception every time } catch (Exception e) { MessageBox.Show(e.Message); throw e; } ```
index: 1.0
ServiceController.ExecuteCommand() causes Exception (Cannot control service on computer '.'.) - @Anipik ### Description When running a program as administrator and trying to send my service a command I get an Exception (Cannot control service on computer '.'.) . I am able to start/stop the service without any issues, but I am not able to execute a command to it. ### Configuration .NET versions tried: 8.0/4.8/4.5 on Windows Form/Console Application and used in Windows 10 Professional x64, this seems to happen in any project I make. Full code: ``` try { System.ServiceProcess.ServiceController service = new System.ServiceProcess.ServiceController("TestService"); if (service.Status == ServiceControllerStatus.Stopped) { service.Start(); service.WaitForStatus(ServiceControllerStatus.Running); } service.ExecuteCommand(100); // Causes Exception every time } catch (Exception e) { MessageBox.Show(e.Message); throw e; } ```
label: process
servicecontroller executecommand causes exception cannot control service on computer anipik description when running a program as administrator and trying to send my service a command i get an exception cannot control service on computer i am able to start stop the service without any issues but i am not able to execute a command to it configuration net versions tried on windows form console application and used in windows professional this seems to happen in any project i make full code try system serviceprocess servicecontroller service new system serviceprocess servicecontroller testservice if service status servicecontrollerstatus stopped service start service waitforstatus servicecontrollerstatus running service executecommand causes exception every time catch exception e messagebox show e message throw e
binary_label: 1
Row 818,964 · id 30,713,677,873 · IssuesEvent · created 2023-07-27 11:36:35
repo: nacht-falter/aikido-course-website-django (https://api.github.com/repos/nacht-falter/aikido-course-website-django)
action: closed
title: BUG: Exam application with grades higher than 1st kyu cause 404 error.
labels: EPIC: Exams THEME: Course Registration and Management PRIORITY: Must-Have BUG
If users with a grade higher than 1st kyu apply for exams with their course registration, it causes a 404 error. The logic for exam applications seems to be flawed. Another solution could be to hide the exam field entirely for users with higher grades.
index: 1.0
BUG: Exam application with grades higher than 1st kyu cause 404 error. - If users with a grade higher than 1st kyu apply for exams with their course registration, it causes a 404 error. The logic for exam applications seems to be flawed. Another solution could be to hide the exam field entirely for users with higher grades.
label: non_process
bug exam application with grades higher than kyu cause error if users with a grade higher than kyu apply for exams with their course registration it causes a error the logic for exam applications seems to be flawed another solution could be to hide the exam field entirely for users with higher grades
binary_label: 0
Row 6,426 · id 9,530,837,746 · IssuesEvent · created 2019-04-29 14:43:47
repo: codefordenver/org (https://api.github.com/repos/codefordenver/org)
action: closed
title: Flowdock tutorial
labels: Process
As a new user to flowdock, I would like some very short videos / visual - written tutorials that help me learn how to use flowdock, specifically threading.
index: 1.0
Flowdock tutorial - As a new user to flowdock, I would like some very short videos / visual - written tutorials that help me learn how to use flowdock, specifically threading.
label: process
flowdock tutorial as a new user to flowdock i would like some very short videos visual written tutorials that help me learn how to use flowdock specifically threading
binary_label: 1
Row 10,529 · id 13,309,299,796 · IssuesEvent · created 2020-08-26 03:35:07
repo: pytorch/pytorch (https://api.github.com/repos/pytorch/pytorch)
action: closed
title: Spawn, torch.multiprocessing
labels: module: multiprocessing triaged
Hello all We have developed a multilingual TTS service, and we have got several DL models to run at test time , and those models can be run in parallel because they don't have any dependencies to each other (trying to get lower runtime, better performance) We do that on a GPU but I ran into several problems A simpler version of it is declared by below codes : import torch.multiprocessing as mp def train1(): print("\nx") q5 = np.random.randint(2,size=(4,2)) q5_targ = torch.tensor(q5).to(torch.device("cuda")) def train2(): print("\ny") g5 = np.random.randint(2,size=(4,2)) g5_targ = torch.tensor(g5).to(torch.device("cuda")) if __name__ == '__main__': p1 = mp.Process(target=train1, args=()) p2 = mp.Process(target=train2, args=()) p1.start() p2.start() p1.join() p2.join() when I run above codes, I get output of > x > y > . > . > . > RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method and when I run > from torch.multiprocessing import set_start_method > set_start_method('spawn') before those , I would see nothing as my output ! (means it doesn't even enter target functions ) I am on colab now , and using Tesla T4
index: 1.0
Spawn, torch.multiprocessing - Hello all We have developed a multilingual TTS service, and we have got several DL models to run at test time , and those models can be run in parallel because they don't have any dependencies to each other (trying to get lower runtime, better performance) We do that on a GPU but I ran into several problems A simpler version of it is declared by below codes : import torch.multiprocessing as mp def train1(): print("\nx") q5 = np.random.randint(2,size=(4,2)) q5_targ = torch.tensor(q5).to(torch.device("cuda")) def train2(): print("\ny") g5 = np.random.randint(2,size=(4,2)) g5_targ = torch.tensor(g5).to(torch.device("cuda")) if __name__ == '__main__': p1 = mp.Process(target=train1, args=()) p2 = mp.Process(target=train2, args=()) p1.start() p2.start() p1.join() p2.join() when I run above codes, I get output of > x > y > . > . > . > RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method and when I run > from torch.multiprocessing import set_start_method > set_start_method('spawn') before those , I would see nothing as my output ! (means it doesn't even enter target functions ) I am on colab now , and using Tesla T4
label: process
spawn torch multiprocessing hello all we have developed a multilingual tts service and we have got several dl models to run at test time and those models can be run in parallel because they don t have any dependencies to each other trying to get lower runtime better performance we do that on a gpu but i ran into several problems a simpler version of it is declared by below codes import torch multiprocessing as mp def print nx np random randint size targ torch tensor to torch device cuda def print ny np random randint size targ torch tensor to torch device cuda if name main mp process target args mp process target args start start join join when i run above codes i get output of x y runtimeerror cannot re initialize cuda in forked subprocess to use cuda with multiprocessing you must use the spawn start method and when i run from torch multiprocessing import set start method set start method spawn before those i would see nothing as my output means it doesn t even enter target functions i am on colab now and using tesla
binary_label: 1
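The RuntimeError quoted in the pytorch row above is the documented limitation that CUDA cannot be re-initialized in a forked child; the fix the message points at is the `spawn` start method. A minimal sketch of the pattern using only the standard library (`torch.multiprocessing` mirrors this API; the worker below is a stand-in for the reporter's model code):

```python
import multiprocessing as mp

def worker(q):
    # In the real issue this is where each DL model would touch CUDA.
    q.put("ready")

if __name__ == "__main__":
    # Ask for a spawn context explicitly instead of mutating the global
    # start method; this avoids the "context has already been set" error
    # that set_start_method("spawn") raises if it runs more than once.
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,))
    p.start()
    print(q.get())
    p.join()
```

The reporter's "nothing as my output" under spawn is likely a notebook effect: spawned children re-import the main module, and functions defined interactively in a Colab cell are not importable, so the child fails before reaching the target function.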
Row 7,395 · id 10,521,727,247 · IssuesEvent · created 2019-09-30 06:58:18
repo: qgis/QGIS (https://api.github.com/repos/qgis/QGIS)
action: closed
title: Parameter label "Expression" is not meaningful
labels: Bug Processing
Author Name: **Harrissou Santanna** (@DelazJ) Original Redmine Issue: [19816](https://issues.qgis.org/issues/19816) Affected QGIS version: 3.3(master) Redmine category:processing/gui --- The "Random points inside polygons" algorithm provides a parameter named "Expression" which imho does not help to know what user is supposed to fill in. A more verbose label would be handy (i notice that in the user manual docs, this parameter is called "Number or density of points" and as I suppose we do not invent it, i'd bet for some changes that later hide/remove that label and maybe it's not the only one affected)
index: 1.0
Parameter label "Expression" is not meaningful - Author Name: **Harrissou Santanna** (@DelazJ) Original Redmine Issue: [19816](https://issues.qgis.org/issues/19816) Affected QGIS version: 3.3(master) Redmine category:processing/gui --- The "Random points inside polygons" algorithm provides a parameter named "Expression" which imho does not help to know what user is supposed to fill in. A more verbose label would be handy (i notice that in the user manual docs, this parameter is called "Number or density of points" and as I suppose we do not invent it, i'd bet for some changes that later hide/remove that label and maybe it's not the only one affected)
label: process
parameter label expression is not meaningful author name harrissou santanna delazj original redmine issue affected qgis version master redmine category processing gui the random points inside polygons algorithm provides a parameter named expression which imho does not help to know what user is supposed to fill in a more verbose label would be handy i notice that in the user manual docs this parameter is called number or density of points and as i suppose we do not invent it i d bet for some changes that later hide remove that label and maybe it s not the only one affected
binary_label: 1
Row 19,513 · id 25,828,508,992 · IssuesEvent · created 2022-12-12 14:35:00
repo: microsoft/vscode (https://api.github.com/repos/microsoft/vscode)
action: closed
title: Cannot open local terminal when remote container is opened as a workspace
labels: bug remote terminal-process
<!-- Please search existing issues to avoid creating duplicates, and review our troubleshooting tips: https://code.visualstudio.com/docs/remote/troubleshooting --> <!-- Please attach logs to help us diagnose your issue. Learn more here: https://code.visualstudio.com/docs/remote/troubleshooting#_reporting-issues --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> - VSCode Version: 1.63.2 - Local OS Version: arch - Remote OS Version: ubuntu 20.04 - Remote Extension/Connection Type: Docker - Logs: Steps to Reproduce: Create a minimal workspace and devcontainer file workspace.code-workspace ```json { "folders": [ { "path": "." } ], "settings": {} } ``` .devcontainer/devcontainer.json ```json { "image": "ubuntu:20.04" } ``` Select `Open Workspace in container` command in the command panel, then select `Terminal: Create New Integrated Terminal (local)`. The terminal fails to open and an error popup shows up: ``` The terminal process failed to launch: Starting directory (cwd) "~/ws/vscode-container-workspace-terminal-bug/workspace.code-workspace" is not a directory. ``` As a workaround, opening the workspace indirectly via `Open Folder in container`, then `Open Workspace From File`, results in working local terminal. <!-- Check to see if the problem is general, with a specific extension, or only happens when remote --> Does this issue occur when you try this locally?: No Does this issue occur when you try this locally and all extensions are disabled?: No <!-- If your issue only appears in Codespaces, please visit: https://github.com/github/feedback/discussions/categories/codespaces-feedback -->
index: 1.0
Cannot open local terminal when remote container is opened as a workspace - <!-- Please search existing issues to avoid creating duplicates, and review our troubleshooting tips: https://code.visualstudio.com/docs/remote/troubleshooting --> <!-- Please attach logs to help us diagnose your issue. Learn more here: https://code.visualstudio.com/docs/remote/troubleshooting#_reporting-issues --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> - VSCode Version: 1.63.2 - Local OS Version: arch - Remote OS Version: ubuntu 20.04 - Remote Extension/Connection Type: Docker - Logs: Steps to Reproduce: Create a minimal workspace and devcontainer file workspace.code-workspace ```json { "folders": [ { "path": "." } ], "settings": {} } ``` .devcontainer/devcontainer.json ```json { "image": "ubuntu:20.04" } ``` Select `Open Workspace in container` command in the command panel, then select `Terminal: Create New Integrated Terminal (local)`. The terminal fails to open and an error popup shows up: ``` The terminal process failed to launch: Starting directory (cwd) "~/ws/vscode-container-workspace-terminal-bug/workspace.code-workspace" is not a directory. ``` As a workaround, opening the workspace indirectly via `Open Folder in container`, then `Open Workspace From File`, results in working local terminal. <!-- Check to see if the problem is general, with a specific extension, or only happens when remote --> Does this issue occur when you try this locally?: No Does this issue occur when you try this locally and all extensions are disabled?: No <!-- If your issue only appears in Codespaces, please visit: https://github.com/github/feedback/discussions/categories/codespaces-feedback -->
process
cannot open local terminal when remote container is opened as a workspace vscode version local os version arch remote os version ubuntu remote extension connection type docker logs steps to reproduce create a minimal workspace and devcontainer file workspace code workspace json folders path settings devcontainer devcontainer json json image ubuntu select open workspace in container command in the command panel then select terminal create new integrated terminal local the terminal fails to open and an error popup shows up the terminal process failed to launch starting directory cwd ws vscode container workspace terminal bug workspace code workspace is not a directory as a workaround opening the workspace indirectly via open folder in container then open workspace from file results in working local terminal does this issue occur when you try this locally no does this issue occur when you try this locally and all extensions are disabled no
1
210,635
23,761,808,903
IssuesEvent
2022-09-01 09:29:11
remigiusz-donczyk/final-project
https://api.github.com/repos/remigiusz-donczyk/final-project
opened
script-security-1.78.jar: 1 vulnerabilities (highest severity is: 4.3)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>script-security-1.78.jar</b></p></summary> <p>Allows Jenkins administrators to control what in-process scripts can be run by less-privileged users.</p> <p>Library home page: <a href="https://github.com/jenkinsci/script-security-plugin">https://github.com/jenkinsci/script-security-plugin</a></p> <p> <p>Found in HEAD commit: <a href="https://github.com/remigiusz-donczyk/final-project/commit/c737bff522acd627979af76e7bd3a589477f0497">c737bff522acd627979af76e7bd3a589477f0497</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-30946](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30946) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.3 | script-security-1.78.jar | Direct | org.jenkins-ci.plugins:script-security:1172.v35f6a_0b_8207e | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-30946</summary> ### Vulnerable Library - <b>script-security-1.78.jar</b></p> <p>Allows Jenkins administrators to control what in-process scripts can be run by less-privileged users.</p> <p>Library home page: <a href="https://github.com/jenkinsci/script-security-plugin">https://github.com/jenkinsci/script-security-plugin</a></p> <p> Dependency Hierarchy: - :x: **script-security-1.78.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/remigiusz-donczyk/final-project/commit/c737bff522acd627979af76e7bd3a589477f0497">c737bff522acd627979af76e7bd3a589477f0497</a></p> <p>Found in base branch: <b>dev</b></p> 
</p> <p></p> ### Vulnerability Details <p> A cross-site request forgery (CSRF) vulnerability in Jenkins Script Security Plugin 1158.v7c1b_73a_69a_08 and earlier allows attackers to have Jenkins send an HTTP request to an attacker-specified webserver. <p>Publish Date: 2022-05-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30946>CVE-2022-30946</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>4.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.jenkins.io/security/advisory/2022-05-17/#SECURITY-2116">https://www.jenkins.io/security/advisory/2022-05-17/#SECURITY-2116</a></p> <p>Release Date: 2022-05-17</p> <p>Fix Resolution: org.jenkins-ci.plugins:script-security:1172.v35f6a_0b_8207e</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
script-security-1.78.jar: 1 vulnerabilities (highest severity is: 4.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>script-security-1.78.jar</b></p></summary> <p>Allows Jenkins administrators to control what in-process scripts can be run by less-privileged users.</p> <p>Library home page: <a href="https://github.com/jenkinsci/script-security-plugin">https://github.com/jenkinsci/script-security-plugin</a></p> <p> <p>Found in HEAD commit: <a href="https://github.com/remigiusz-donczyk/final-project/commit/c737bff522acd627979af76e7bd3a589477f0497">c737bff522acd627979af76e7bd3a589477f0497</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-30946](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30946) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.3 | script-security-1.78.jar | Direct | org.jenkins-ci.plugins:script-security:1172.v35f6a_0b_8207e | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-30946</summary> ### Vulnerable Library - <b>script-security-1.78.jar</b></p> <p>Allows Jenkins administrators to control what in-process scripts can be run by less-privileged users.</p> <p>Library home page: <a href="https://github.com/jenkinsci/script-security-plugin">https://github.com/jenkinsci/script-security-plugin</a></p> <p> Dependency Hierarchy: - :x: **script-security-1.78.jar** (Vulnerable Library) <p>Found in HEAD commit: <a 
href="https://github.com/remigiusz-donczyk/final-project/commit/c737bff522acd627979af76e7bd3a589477f0497">c737bff522acd627979af76e7bd3a589477f0497</a></p> <p>Found in base branch: <b>dev</b></p> </p> <p></p> ### Vulnerability Details <p> A cross-site request forgery (CSRF) vulnerability in Jenkins Script Security Plugin 1158.v7c1b_73a_69a_08 and earlier allows attackers to have Jenkins send an HTTP request to an attacker-specified webserver. <p>Publish Date: 2022-05-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30946>CVE-2022-30946</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>4.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.jenkins.io/security/advisory/2022-05-17/#SECURITY-2116">https://www.jenkins.io/security/advisory/2022-05-17/#SECURITY-2116</a></p> <p>Release Date: 2022-05-17</p> <p>Fix Resolution: org.jenkins-ci.plugins:script-security:1172.v35f6a_0b_8207e</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_process
script security jar vulnerabilities highest severity is vulnerable library script security jar allows jenkins administrators to control what in process scripts can be run by less privileged users library home page a href found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium script security jar direct org jenkins ci plugins script security details cve vulnerable library script security jar allows jenkins administrators to control what in process scripts can be run by less privileged users library home page a href dependency hierarchy x script security jar vulnerable library found in head commit a href found in base branch dev vulnerability details a cross site request forgery csrf vulnerability in jenkins script security plugin and earlier allows attackers to have jenkins send an http request to an attacker specified webserver publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org jenkins ci plugins script security step up your open source security game with mend
0
29,044
4,467,473,583
IssuesEvent
2016-08-25 04:58:21
ddurdle/GDrive-for-KODI
https://api.github.com/repos/ddurdle/GDrive-for-KODI
closed
Filenames with invalid characters
enhancement in-testing
Google Photos allows filenames with invalid characters (these are filesystem dependent), like "/" (forward slash). The addon can retrieve/view thumbnails for such files but can't display them. Google drive on Windows replaces those characters with "_" in order to show the files in the explorer.
1.0
Filenames with invalid characters - Google Photos allows filenames with invalid characters (these are filesystem dependent), like "/" (forward slash). The addon can retrieve/view thumbnails for such files but can't display them. Google drive on Windows replaces those characters with "_" in order to show the files in the explorer.
non_process
filenames with invalid characters google photos allows filenames with invalid characters these are filesystem dependent like forward slash the addon can retrieve view thumbnails for such files but can t display them google drive on windows replaces those characters with in order to show the files in the explorer
0
21,804
30,316,338,882
IssuesEvent
2023-07-10 15:49:15
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change term - relationshipOfResource
Term - change Class - ResourceRelationship non-normative Process - complete
## Term change The additional example represented here was missed in [an issue](https://github.com/tdwg/dwc/issues/194) at the end of the previous public review. * Submitter: Holly Little @hollyel * Efficacy Justification (why is this change necessary?): Helps clarify use for additional cases * Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): No semantic change * Stability Justification (what concerns are there that this might affect existing implementations?): None * Implications for dwciri: namespace (does this change affect a dwciri term version)?: None Current Term definition: https://dwc.tdwg.org/list/#dwc_relationshipOfResource Proposed attributes of the new term: * Term name (in lowerCamelCase for properties, UpperCamelCase for classes): relationshipOfResource * Organized in Class (e.g., Occurrence, Event, Location, Taxon): ResourceRelationship * Definition of the term (normative): The relationship of the subject (identified by resourceID) to the object (identified by relatedResourceID). * Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a controlled vocabulary. * Examples (not normative): `sameAs`, `duplicate of`, `mother of`, `offspring of`, `sibling of`, `parasite of`, `host of`, `valid synonym of`, `located within`, `pollinator of members of taxon`, `pollinated specific plant`, `pollinated by members of taxon`**, `on slab with`** * Refines (identifier of the broader term this term refines; normative): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): http://rs.tdwg.org/dwc/terms/version/relationshipOfResource-2021-07-15 * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/Associations/UnitAssociation/AssociationType
1.0
Change term - relationshipOfResource - ## Term change The additional example represented here was missed in [an issue](https://github.com/tdwg/dwc/issues/194) at the end of the previous public review. * Submitter: Holly Little @hollyel * Efficacy Justification (why is this change necessary?): Helps clarify use for additional cases * Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): No semantic change * Stability Justification (what concerns are there that this might affect existing implementations?): None * Implications for dwciri: namespace (does this change affect a dwciri term version)?: None Current Term definition: https://dwc.tdwg.org/list/#dwc_relationshipOfResource Proposed attributes of the new term: * Term name (in lowerCamelCase for properties, UpperCamelCase for classes): relationshipOfResource * Organized in Class (e.g., Occurrence, Event, Location, Taxon): ResourceRelationship * Definition of the term (normative): The relationship of the subject (identified by resourceID) to the object (identified by relatedResourceID). * Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a controlled vocabulary. * Examples (not normative): `sameAs`, `duplicate of`, `mother of`, `offspring of`, `sibling of`, `parasite of`, `host of`, `valid synonym of`, `located within`, `pollinator of members of taxon`, `pollinated specific plant`, `pollinated by members of taxon`**, `on slab with`** * Refines (identifier of the broader term this term refines; normative): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): http://rs.tdwg.org/dwc/terms/version/relationshipOfResource-2021-07-15 * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/Associations/UnitAssociation/AssociationType
process
change term relationshipofresource term change the additional example represented here was missed in at the end of the previous public review submitter holly little hollyel efficacy justification why is this change necessary helps clarify use for additional cases demand justification if the change is semantic in nature name at least two organizations that independently need this term no semantic change stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version none current term definition proposed attributes of the new term term name in lowercamelcase for properties uppercamelcase for classes relationshipofresource organized in class e g occurrence event location taxon resourcerelationship definition of the term normative the relationship of the subject identified by resourceid to the object identified by relatedresourceid usage comments recommendations regarding content etc not normative recommended best practice is to use a controlled vocabulary examples not normative sameas duplicate of mother of offspring of sibling of parasite of host of valid synonym of located within pollinator of members of taxon pollinated specific plant pollinated by members of taxon on slab with refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative abcd xpath of the equivalent term in abcd or efg not normative datasets dataset units unit associations unitassociation associationtype
1
778,834
27,331,069,302
IssuesEvent
2023-02-25 16:35:32
webaverse-studios/CharacterCreator
https://api.github.com/repos/webaverse-studios/CharacterCreator
closed
Add localization support
high priority feature season 0
We should support EFIGS (English, French, Italian, German, Spanish), Japanese, Chinese and Russian for the UX. - Language icon in the corner which opens a modal containing a select input with different language options. - There should be a note in the current language which explains that Chat is in alpha and only officially supports English at the moment. ```[tasklist] ### Tasks - [ ] https://github.com/webaverse-studios/CharacterCreator/issues/345 - [ ] https://github.com/webaverse-studios/CharacterCreator/issues/344 ```
1.0
Add localization support - We should support EFIGS (English, French, Italian, German, Spanish), Japanese, Chinese and Russian for the UX. - Language icon in the corner which opens a modal containing a select input with different language options. - There should be a note in the current language which explains that Chat is in alpha and only officially supports English at the moment. ```[tasklist] ### Tasks - [ ] https://github.com/webaverse-studios/CharacterCreator/issues/345 - [ ] https://github.com/webaverse-studios/CharacterCreator/issues/344 ```
non_process
add localization support we should support efigs english french italian german spanish japanese chinese and russian for the ux language icon in the corner which opens a modal containing a select input with different language options there should be a note in the current language which explains that chat is in alpha and only officially supports english at the moment tasks
0
14,839
18,236,286,251
IssuesEvent
2021-10-01 07:22:51
quark-engine/quark-engine
https://api.github.com/repos/quark-engine/quark-engine
closed
Add a process for handling invalid issues
work-in-progress issue-processing-state-04
We need to define a process to handle the issues we cannot reproduce, especially if the issue raiser has not responded for a long time.
1.0
Add a process for handling invalid issues - We need to define a process to handle the issues we cannot reproduce, especially if the issue raiser has not responded for a long time.
process
add a process for handling invalid issues we need to define a process to handle the issues we cannot reproduce especially if the issue raiser has not responded for a long time
1
15,089
18,798,435,058
IssuesEvent
2021-11-09 02:40:09
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
macOS: Symbol not found: __cg_jpeg_resync_to_restart when reprojecting raster layer with GDAL warp on macOS
Feedback stale Processing Bug MacOS
I work with Mac OX mojave with 3.17 and have this error when trying to reproject a raster map (but I've tried already to downgrade to 3.16, 3.12, 3.10 - same result): Symbol not found: __cg_jpeg_resync_to_restart ``` GIS-Version: 3.17.0-Master QGIS-Codeversion: 21908f09f5 Qt-Version: 5.14.2 GDAL-Version: 3.1.2 GEOS-Version: 3.8.1-CAPI-1.13.3 PROJ-Version: Rel. 6.3.2, May 1st, 2020 PDAL version: 2.2.0 (git-version: Release) Verarbeite Algorithmus… Algorithmus Transformieren (Reprojizieren) startet… Input parameters: { 'DATA_TYPE' : 0, 'EXTRA' : '', 'INPUT' : '/Users/carolineheitz/Downloads/srtm_38_03/srtm_38_03.tif', 'MULTITHREADING' : False, 'NODATA' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'RESAMPLING' : 0, 'SOURCE_CRS' : QgsCoordinateReferenceSystem('EPSG:4326'), 'TARGET_CRS' : QgsCoordinateReferenceSystem('EPSG:2056'), 'TARGET_EXTENT' : None, 'TARGET_EXTENT_CRS' : None, 'TARGET_RESOLUTION' : None } GDAL command: gdalwarp -s_srs EPSG:4326 -t_srs EPSG:2056 -r near -of GTiff /Users/carolineheitz/Downloads/srtm_38_03/srtm_38_03.tif /private/var/folders/0p/1r28hccn2hs1psvv25d_dxdr0000gn/T/processing_UAgcrX/8a4455ecec78469db7855bc5d0f8bf1c/OUTPUT.tif GDAL command output: dyld: Symbol not found: __cg_jpeg_resync_to_restart Referenced from: /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO Expected in: /Volumes/QGIS.app/QGIS.app/Contents/MacOS/lib/libjpeg.9.dylib in /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO Execution completed in 2.37 seconds Results: {'OUTPUT': '/private/var/folders/0p/1r28hccn2hs1psvv25d_dxdr0000gn/T/processing_UAgcrX/8a4455ecec78469db7855bc5d0f8bf1c/OUTPUT.tif'} Lade Ergebnis Layer Die folgenden Layer wurden nicht erzeugt. • /private/var/folders/0p/1r28hccn2hs1psvv25d_dxdr0000gn/T/processing_UAgcrX/8a4455ecec78469db7855bc5d0f8bf1c/OUTPUT.tif Im 'Protokoll-Fenster' im QGIS-Hauptfenster sind mehr Informationen zur Ausführung des Algorithmus zu finden. 
``` ____ I would be very happy to get some help on this, thank you!!
1.0
macOS: Symbol not found: __cg_jpeg_resync_to_restart when reprojecting raster layer with GDAL warp on macOS - I work with Mac OX mojave with 3.17 and have this error when trying to reproject a raster map (but I've tried already to downgrade to 3.16, 3.12, 3.10 - same result): Symbol not found: __cg_jpeg_resync_to_restart ``` GIS-Version: 3.17.0-Master QGIS-Codeversion: 21908f09f5 Qt-Version: 5.14.2 GDAL-Version: 3.1.2 GEOS-Version: 3.8.1-CAPI-1.13.3 PROJ-Version: Rel. 6.3.2, May 1st, 2020 PDAL version: 2.2.0 (git-version: Release) Verarbeite Algorithmus… Algorithmus Transformieren (Reprojizieren) startet… Input parameters: { 'DATA_TYPE' : 0, 'EXTRA' : '', 'INPUT' : '/Users/carolineheitz/Downloads/srtm_38_03/srtm_38_03.tif', 'MULTITHREADING' : False, 'NODATA' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'RESAMPLING' : 0, 'SOURCE_CRS' : QgsCoordinateReferenceSystem('EPSG:4326'), 'TARGET_CRS' : QgsCoordinateReferenceSystem('EPSG:2056'), 'TARGET_EXTENT' : None, 'TARGET_EXTENT_CRS' : None, 'TARGET_RESOLUTION' : None } GDAL command: gdalwarp -s_srs EPSG:4326 -t_srs EPSG:2056 -r near -of GTiff /Users/carolineheitz/Downloads/srtm_38_03/srtm_38_03.tif /private/var/folders/0p/1r28hccn2hs1psvv25d_dxdr0000gn/T/processing_UAgcrX/8a4455ecec78469db7855bc5d0f8bf1c/OUTPUT.tif GDAL command output: dyld: Symbol not found: __cg_jpeg_resync_to_restart Referenced from: /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO Expected in: /Volumes/QGIS.app/QGIS.app/Contents/MacOS/lib/libjpeg.9.dylib in /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO Execution completed in 2.37 seconds Results: {'OUTPUT': '/private/var/folders/0p/1r28hccn2hs1psvv25d_dxdr0000gn/T/processing_UAgcrX/8a4455ecec78469db7855bc5d0f8bf1c/OUTPUT.tif'} Lade Ergebnis Layer Die folgenden Layer wurden nicht erzeugt. 
• /private/var/folders/0p/1r28hccn2hs1psvv25d_dxdr0000gn/T/processing_UAgcrX/8a4455ecec78469db7855bc5d0f8bf1c/OUTPUT.tif Im 'Protokoll-Fenster' im QGIS-Hauptfenster sind mehr Informationen zur Ausführung des Algorithmus zu finden. ``` ____ I would be very happy to get some help on this, thank you!!
process
macos symbol not found cg jpeg resync to restart when reprojecting raster layer with gdal warp on macos i work with mac ox mojave with and have this error when trying to reproject a raster map but i ve tried already to downgrade to same result symbol not found cg jpeg resync to restart gis version master qgis codeversion qt version gdal version geos version capi proj version rel may pdal version git version release verarbeite algorithmus… algorithmus transformieren reprojizieren startet… input parameters data type extra input users carolineheitz downloads srtm srtm tif multithreading false nodata none options output temporary output resampling source crs qgscoordinatereferencesystem epsg target crs qgscoordinatereferencesystem epsg target extent none target extent crs none target resolution none gdal command gdalwarp s srs epsg t srs epsg r near of gtiff users carolineheitz downloads srtm srtm tif private var folders t processing uagcrx output tif gdal command output dyld symbol not found cg jpeg resync to restart referenced from system library frameworks imageio framework versions a imageio expected in volumes qgis app qgis app contents macos lib libjpeg dylib in system library frameworks imageio framework versions a imageio execution completed in seconds results output private var folders t processing uagcrx output tif lade ergebnis layer die folgenden layer wurden nicht erzeugt • private var folders t processing uagcrx output tif im protokoll fenster im qgis hauptfenster sind mehr informationen zur ausführung des algorithmus zu finden i would be very happy to get some help on this thank you
1
18,204
24,258,581,963
IssuesEvent
2022-09-27 20:09:02
Altinn/app-template-dotnet
https://api.github.com/repos/Altinn/app-template-dotnet
closed
Update application to support that user can go back from confirmation page
kind/user-story area/process
## Description It should be possible to go back from the confirmation task to the previous task or whatever defined task in the BPMN process. In the below example SequenceFlow_4 takes the process from Task_2 (confirmation) to Task_1 (data) ```xml <bpmn2:process id="Altinn_Data_Confirmation_Process" isExecutable="false"> <bpmn2:startEvent id="StartEvent_1"> <bpmn2:outgoing>SequenceFlow_1</bpmn2:outgoing> </bpmn2:startEvent> <bpmn2:task id="Task_1" name="Fyll ut skjema" altinn:tasktype="data"> <bpmn2:incoming>SequenceFlow_1</bpmn2:incoming> <bpmn2:outgoing>SequenceFlow_2</bpmn2:outgoing> </bpmn2:task> <bpmn2:task id="Task_2" name="Bekreft skjemadata" altinn:tasktype="confirmation"> <bpmn2:incoming>SequenceFlow_2</bpmn2:incoming> <bpmn2:outgoing>SequenceFlow_3</bpmn2:outgoing> </bpmn2:task> <bpmn2:endEvent id="EndEvent_1"> <bpmn2:incoming>SequenceFlow_3</bpmn2:incoming> </bpmn2:endEvent> <bpmn2:sequenceFlow id="SequenceFlow_1" sourceRef="StartEvent_1" targetRef="Task_1" /> <bpmn2:sequenceFlow id="SequenceFlow_2" sourceRef="Task_1" targetRef="Task_2" /> <bpmn2:sequenceFlow id="SequenceFlow_4" sourceRef="Task_2" targetRef="Task_1" /> <bpmn2:sequenceFlow id="SequenceFlow_3" sourceRef="Task_2" targetRef="EndEvent_1" /> </bpmn2:process> ``` ## Screenshots TODO: create a conformation view with navigation buttons ## Considerations - The BPMN process will need to define what kind of sequence flows that is possible out from the confirmation step - We would need to know what is forward and what is backward. (depending on how the button layout would be) - We need language support so the naming of a button can be defined - We need to define what type of authorization action that is required to navigate backward. ## Acceptance criteria - It is possible to navigate from confirmation task and backward to a given task in the process - BPMN process is the source for the possible flows ## Development tasks - [ ] Design process navigation on the confirmation page. 
(this might be reused for other task types in the future) - [x] Analyse and design how we can define on the sequence flow that a flow is backward/forward (or some other concept (alternative flow?)) - [ ] Update API/models so App Frontend can know about the navigation possibilities - [ ] Update App frontend to add navigation on confirmation view - [x] Update App backend to enable logic on process backward/navigation to a specific task
1.0
Update application to support that user can go back from confirmation page - ## Description It should be possible to go back from the confirmation task to the previous task or whatever defined task in the BPMN process. In the below example SequenceFlow_4 takes the process from Task_2 (confirmation) to Task_1 (data) ```xml <bpmn2:process id="Altinn_Data_Confirmation_Process" isExecutable="false"> <bpmn2:startEvent id="StartEvent_1"> <bpmn2:outgoing>SequenceFlow_1</bpmn2:outgoing> </bpmn2:startEvent> <bpmn2:task id="Task_1" name="Fyll ut skjema" altinn:tasktype="data"> <bpmn2:incoming>SequenceFlow_1</bpmn2:incoming> <bpmn2:outgoing>SequenceFlow_2</bpmn2:outgoing> </bpmn2:task> <bpmn2:task id="Task_2" name="Bekreft skjemadata" altinn:tasktype="confirmation"> <bpmn2:incoming>SequenceFlow_2</bpmn2:incoming> <bpmn2:outgoing>SequenceFlow_3</bpmn2:outgoing> </bpmn2:task> <bpmn2:endEvent id="EndEvent_1"> <bpmn2:incoming>SequenceFlow_3</bpmn2:incoming> </bpmn2:endEvent> <bpmn2:sequenceFlow id="SequenceFlow_1" sourceRef="StartEvent_1" targetRef="Task_1" /> <bpmn2:sequenceFlow id="SequenceFlow_2" sourceRef="Task_1" targetRef="Task_2" /> <bpmn2:sequenceFlow id="SequenceFlow_4" sourceRef="Task_2" targetRef="Task_1" /> <bpmn2:sequenceFlow id="SequenceFlow_3" sourceRef="Task_2" targetRef="EndEvent_1" /> </bpmn2:process> ``` ## Screenshots TODO: create a conformation view with navigation buttons ## Considerations - The BPMN process will need to define what kind of sequence flows that is possible out from the confirmation step - We would need to know what is forward and what is backward. (depending on how the button layout would be) - We need language support so the naming of a button can be defined - We need to define what type of authorization action that is required to navigate backward. 
## Acceptance criteria - It is possible to navigate from confirmation task and backward to a given task in the process - BPMN process is the source for the possible flows ## Development tasks - [ ] Design process navigation on the confirmation page. (this might be reused for other task types in the future) - [x] Analyse and design how we can define on the sequence flow that a flow is backward/forward (or some other concept (alternative flow?)) - [ ] Update API/models so App Frontend can know about the navigation possibilities - [ ] Update App frontend to add navigation on confirmation view - [x] Update App backend to enable logic on process backward/navigation to a specific task
process
update application to support that user can go back from confirmation page description it should be possible to go back from the confirmation task to the previous task or whatever defined task in the bpmn process in the below example sequenceflow takes the process from task confirmation to task data xml sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow screenshots todo create a conformation view with navigation buttons considerations the bpmn process will need to define what kind of sequence flows that is possible out from the confirmation step we would need to know what is forward and what is backward depending on how the button layout would be we need language support so the naming of a button can be defined we need to define what type of authorization action that is required to navigate backward acceptance criteria it is possible to navigate from confirmation task and backward to a given task in the process bpmn process is the source for the possible flows development tasks design process navigation on the confirmation page this might be reused for other task types in the future analyse and design how we can define on the sequence flow that a flow is backward forward or some other concept alternative flow update api models so app frontend can know about the navigation possibilities update app frontend to add navigation on confirmation view update app backend to enable logic on process backward navigation to a specific task
1
3,881
10,239,387,010
IssuesEvent
2019-08-19 18:07:14
MicrosoftDocs/architecture-center
https://api.github.com/repos/MicrosoftDocs/architecture-center
closed
Autoscaling documentation not complete
architecture-center/svc assigned-to-author doc-enhancement triaged
Autoscaling documentation and pointers to specific detailed sections are available for Service Fabric, VM Scale Sets, App Services, Azure Functions, Cloud Service etc However, additional information for Azure Kubernetes Service, Azure Container Service and Azure Container Instances should be added. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 98e6cd92-676a-6d6f-2ea8-ebe9301f59a9 * Version Independent ID: edfbb571-5bcf-2339-2cce-5e6a22fd591c * Content: [Autoscaling guidance - Best practices for cloud applications](https://docs.microsoft.com/en-us/azure/architecture/best-practices/auto-scaling#feedback) * Content Source: [docs/best-practices/auto-scaling.md](https://github.com/mspnp/architecture-center/blob/master/docs/best-practices/auto-scaling.md) * Service: **architecture-center** * GitHub Login: @dragon119 * Microsoft Alias: **pnp**
1.0
Autoscaling documentation not complete - Autoscaling documentation and pointers to specific detailed sections are available for Service Fabric, VM Scale Sets, App Services, Azure Functions, Cloud Service etc However, additional information for Azure Kubernetes Service, Azure Container Service and Azure Container Instances should be added. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 98e6cd92-676a-6d6f-2ea8-ebe9301f59a9 * Version Independent ID: edfbb571-5bcf-2339-2cce-5e6a22fd591c * Content: [Autoscaling guidance - Best practices for cloud applications](https://docs.microsoft.com/en-us/azure/architecture/best-practices/auto-scaling#feedback) * Content Source: [docs/best-practices/auto-scaling.md](https://github.com/mspnp/architecture-center/blob/master/docs/best-practices/auto-scaling.md) * Service: **architecture-center** * GitHub Login: @dragon119 * Microsoft Alias: **pnp**
non_process
autoscaling documentation not complete autoscaling documentation and pointers to specific detailed sections are available for service fabric vm scale sets app services azure functions cloud service etc however additional information for azure kubernetes service azure container service and azure container instances should be added document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center github login microsoft alias pnp
0
232,717
17,792,422,464
IssuesEvent
2021-08-31 17:47:33
vmware-samples/vcenter-event-broker-appliance
https://api.github.com/repos/vmware-samples/vcenter-event-broker-appliance
closed
Update VEBA Architecture Diagram
documentation enhancement
**Is your feature request related to a problem? Please describe.** https://github.com/vmware-samples/vcenter-event-broker-appliance/blob/development/docs/kb/img/veba-architecture.png needs to be updated to include additional Event Providers(s), Endpoints and Deprecated Event Processors
1.0
Update VEBA Architecture Diagram - **Is your feature request related to a problem? Please describe.** https://github.com/vmware-samples/vcenter-event-broker-appliance/blob/development/docs/kb/img/veba-architecture.png needs to be updated to include additional Event Providers(s), Endpoints and Deprecated Event Processors
non_process
update veba architecture diagram is your feature request related to a problem please describe needs to be updated to include additional event providers s endpoints and deprecated event processors
0
8,302
11,463,305,445
IssuesEvent
2020-02-07 15:45:55
material-components/material-components-ios
https://api.github.com/repos/material-components/material-components-ios
closed
[TextControls] Internal issue: b/147444232
[TextControls] type:Process
This was filed as an internal issue. If you are a Googler, please visit [b/147444232](http://b/147444232) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/147444232](http://b/147444232) - Blocked by: https://github.com/material-components/material-components-ios/issues/9579 - Blocked by: https://github.com/material-components/material-components-ios/issues/9564
1.0
[TextControls] Internal issue: b/147444232 - This was filed as an internal issue. If you are a Googler, please visit [b/147444232](http://b/147444232) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/147444232](http://b/147444232) - Blocked by: https://github.com/material-components/material-components-ios/issues/9579 - Blocked by: https://github.com/material-components/material-components-ios/issues/9564
process
internal issue b this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug blocked by blocked by
1
18,760
24,663,809,785
IssuesEvent
2022-10-18 08:47:39
googleapis/google-api-dotnet-client
https://api.github.com/repos/googleapis/google-api-dotnet-client
closed
Remove .NET 4.0 targets
type: process priority: p2
We should look at all the targets we claim to support, and remove the ones that really don't work and aren't tested or supported.
1.0
Remove .NET 4.0 targets - We should look at all the targets we claim to support, and remove the ones that really don't work and aren't tested or supported.
process
remove net targets we should look at all the targets we claim to support and remove the ones that really don t work and aren t tested or supported
1
106,206
11,472,953,134
IssuesEvent
2020-02-09 20:17:24
NeProgramist/Cytrus
https://api.github.com/repos/NeProgramist/Cytrus
closed
NLP implementations in other languages
documentation
find information on existing NLP implementations in other languages. (Grammarly, etc)
1.0
NLP implementations in other languages - find information on existing NLP implementations in other languages. (Grammarly, etc)
non_process
nlp implementations in other languages find information on existing nlp implementations in other languages grammarly etc
0
21,969
3,587,528,815
IssuesEvent
2016-01-30 11:04:04
PRIDE-Utilities/jmzml
https://api.github.com/repos/PRIDE-Utilities/jmzml
closed
Debug message going to info level instead of debug level
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Use jmzml to open any standard mzML file What is the expected output? What do you see instead? The debuging message "Creating index" in MzMLIndexerFactory is sent to the log4j logget with INFO level. I believe it is supposed to be DEBUG level, just like other debugging messages in the project. What version of the product are you using? On what operating system? 1.7.2 Please provide any additional information below. Patch attached. ``` Original issue reported on code.google.com by `plu...@gmail.com` on 5 Feb 2015 at 5:43 Attachments: * [jmzml.patch](https://storage.googleapis.com/google-code-attachments/jmzml/issue-10/comment-0/jmzml.patch)
1.0
Debug message going to info level instead of debug level - ``` What steps will reproduce the problem? 1. Use jmzml to open any standard mzML file What is the expected output? What do you see instead? The debuging message "Creating index" in MzMLIndexerFactory is sent to the log4j logget with INFO level. I believe it is supposed to be DEBUG level, just like other debugging messages in the project. What version of the product are you using? On what operating system? 1.7.2 Please provide any additional information below. Patch attached. ``` Original issue reported on code.google.com by `plu...@gmail.com` on 5 Feb 2015 at 5:43 Attachments: * [jmzml.patch](https://storage.googleapis.com/google-code-attachments/jmzml/issue-10/comment-0/jmzml.patch)
non_process
debug message going to info level instead of debug level what steps will reproduce the problem use jmzml to open any standard mzml file what is the expected output what do you see instead the debuging message creating index in mzmlindexerfactory is sent to the logget with info level i believe it is supposed to be debug level just like other debugging messages in the project what version of the product are you using on what operating system please provide any additional information below patch attached original issue reported on code google com by plu gmail com on feb at attachments
0
615,126
19,214,965,341
IssuesEvent
2021-12-07 08:29:22
canonical-web-and-design/vanilla-framework
https://api.github.com/repos/canonical-web-and-design/vanilla-framework
closed
A11Y: Mobile card variant of table misuses aria-label
Priority: High
In mobile card table pattern we use `aria-label` to show column headers in mobile view. This causes screen reader to only read column title and completely skip cell value. We should not use `aria-label` this way.
1.0
A11Y: Mobile card variant of table misuses aria-label - In mobile card table pattern we use `aria-label` to show column headers in mobile view. This causes screen reader to only read column title and completely skip cell value. We should not use `aria-label` this way.
non_process
mobile card variant of table misuses aria label in mobile card table pattern we use aria label to show column headers in mobile view this causes screen reader to only read column title and completely skip cell value we should not use aria label this way
0
5,910
8,728,644,399
IssuesEvent
2018-12-10 17:57:03
PowercoderJr/FlagsRecognizer
https://api.github.com/repos/PowercoderJr/FlagsRecognizer
closed
Предоставить пользователю возможность обрезать или повернуть изображение
enhancement input preprocessing
Как вариант - предлагать указать 4 точки по углам флага, после чего привести выделенную область к прямоугольной форме
1.0
Предоставить пользователю возможность обрезать или повернуть изображение - Как вариант - предлагать указать 4 точки по углам флага, после чего привести выделенную область к прямоугольной форме
process
предоставить пользователю возможность обрезать или повернуть изображение как вариант предлагать указать точки по углам флага после чего привести выделенную область к прямоугольной форме
1
99,095
20,870,895,354
IssuesEvent
2022-03-22 11:51:23
GoogleForCreators/web-stories-wp
https://api.github.com/repos/GoogleForCreators/web-stories-wp
closed
Discovery: Consider using Puppeteer Testing Library
P2 Type: Infrastructure Type: Code Quality Pod: WP Package: E2E Test Utils Package: E2E Tests
<!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Task Description <!-- A clear and concise description of what this task is about. --> With Puppeteer Testing Library ([`pptr-testing-library`](https://www.npmjs.com/package/pptr-testing-library)) we could bring our e2e test suite closer to our karma and unit test suites, making it easier to write write tests for all three targets. Let's do a brief evaluation and perhaps rewrite a few e2e tests to see how well it works. If successful, we can update all of the tests.
1.0
Discovery: Consider using Puppeteer Testing Library - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Task Description <!-- A clear and concise description of what this task is about. --> With Puppeteer Testing Library ([`pptr-testing-library`](https://www.npmjs.com/package/pptr-testing-library)) we could bring our e2e test suite closer to our karma and unit test suites, making it easier to write write tests for all three targets. Let's do a brief evaluation and perhaps rewrite a few e2e tests to see how well it works. If successful, we can update all of the tests.
non_process
discovery consider using puppeteer testing library task description with puppeteer testing library we could bring our test suite closer to our karma and unit test suites making it easier to write write tests for all three targets let s do a brief evaluation and perhaps rewrite a few tests to see how well it works if successful we can update all of the tests
0
17,472
9,786,318,331
IssuesEvent
2019-06-09 15:38:37
kyma-project/kyma
https://api.github.com/repos/kyma-project/kyma
closed
Events publishing latency and throughput tests
area/application-connector area/eventing quality/performance
## Description We need test cases to test the "external events publishing throughput and latency". We need to have the test cases executed and analyzed ones so that we know the possible maximum assuming the performance test cluster setup. ## Requirements Provide the following tests |Name | Test | What to measure? (SLIs) | Related components| |-----|------|---------------------|------------------| |(https://github.com/kyma-project/kyma/issues/3524) Events publishing latency| Increase no of concurrent published events sent from outside the cluster | Measure sending events latency from outside the cluster for 3 percentiles (75, 90, 99), not checking actual delivery| istio,application-event-service, event-publish| |(https://github.com/kyma-project/kyma/issues/3524) Events publishing throughput| Increase no of concurrent external requests sent from outside the cluster | Measure sending events throughput, not checking actual delivery| istio,application-event-service, event-publish| - Run the tests against a cluster setup as provided by https://github.com/kyma-project/kyma/issues/3521 - Use the test runner setup and framework as provided by https://github.com/kyma-project/kyma/issues/3522 - Locate the test case sources as described here: https://github.com/kyma-project/kyma/issues/3522 ## Reasons To find out Kyma SLOs, to have performance test suite available
True
Events publishing latency and throughput tests - ## Description We need test cases to test the "external events publishing throughput and latency". We need to have the test cases executed and analyzed ones so that we know the possible maximum assuming the performance test cluster setup. ## Requirements Provide the following tests |Name | Test | What to measure? (SLIs) | Related components| |-----|------|---------------------|------------------| |(https://github.com/kyma-project/kyma/issues/3524) Events publishing latency| Increase no of concurrent published events sent from outside the cluster | Measure sending events latency from outside the cluster for 3 percentiles (75, 90, 99), not checking actual delivery| istio,application-event-service, event-publish| |(https://github.com/kyma-project/kyma/issues/3524) Events publishing throughput| Increase no of concurrent external requests sent from outside the cluster | Measure sending events throughput, not checking actual delivery| istio,application-event-service, event-publish| - Run the tests against a cluster setup as provided by https://github.com/kyma-project/kyma/issues/3521 - Use the test runner setup and framework as provided by https://github.com/kyma-project/kyma/issues/3522 - Locate the test case sources as described here: https://github.com/kyma-project/kyma/issues/3522 ## Reasons To find out Kyma SLOs, to have performance test suite available
non_process
events publishing latency and throughput tests description we need test cases to test the external events publishing throughput and latency we need to have the test cases executed and analyzed ones so that we know the possible maximum assuming the performance test cluster setup requirements provide the following tests name test what to measure slis related components events publishing latency increase no of concurrent published events sent from outside the cluster measure sending events latency from outside the cluster for percentiles not checking actual delivery istio application event service event publish events publishing throughput increase no of concurrent external requests sent from outside the cluster measure sending events throughput not checking actual delivery istio application event service event publish run the tests against a cluster setup as provided by use the test runner setup and framework as provided by locate the test case sources as described here reasons to find out kyma slos to have performance test suite available
0
252,867
19,074,436,094
IssuesEvent
2021-11-27 14:06:36
WomenPlusPlus/deploy-impact-21-kona-c
https://api.github.com/repos/WomenPlusPlus/deploy-impact-21-kona-c
closed
Delivery - Update of the Readme file
documentation
The ReadMe documentation should contain: Works on the browser Screenshots for the app once design is done Link to the presentation - demo
1.0
Delivery - Update of the Readme file - The ReadMe documentation should contain: Works on the browser Screenshots for the app once design is done Link to the presentation - demo
non_process
delivery update of the readme file the readme documentation should contain works on the browser screenshots for the app once design is done link to the presentation demo
0
19,506
25,815,012,771
IssuesEvent
2022-12-12 03:44:48
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
[Mirror] Dependencies from rules_go 0.37.0
P2 type: process team-OSS mirror request
### Please list the URLs of the archives you'd like to mirror: [rules_go 0.37.0](https://github.com/bazelbuild/rules_go/releases/tag/v0.37.0), released a few days ago, depends on a few new mirror URLs that currently 404: ``` WARNING: Download from https://mirror.bazel.build/github.com/golang/sys/archive/refs/tags/v0.3.0.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found WARNING: Download from https://mirror.bazel.build/github.com/googleapis/googleapis/archive/83c3605afb5a39952bf0a0809875d41cf2a558ca.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found ``` The associated callsites are here: * https://github.com/bazelbuild/rules_go/blob/1a8fe64877c6e71dbbf51cdc8ceb2eb10c13e521/go/private/repositories.bzl#L91 * https://github.com/bazelbuild/rules_go/blob/1a8fe64877c6e71dbbf51cdc8ceb2eb10c13e521/go/private/repositories.bzl#L252 --- Please mirror the following URLs: * https://github.com/golang/sys/archive/refs/tags/v0.3.0.zip * https://github.com/googleapis/googleapis/archive/83c3605afb5a39952bf0a0809875d41cf2a558ca.zip
1.0
[Mirror] Dependencies from rules_go 0.37.0 - ### Please list the URLs of the archives you'd like to mirror: [rules_go 0.37.0](https://github.com/bazelbuild/rules_go/releases/tag/v0.37.0), released a few days ago, depends on a few new mirror URLs that currently 404: ``` WARNING: Download from https://mirror.bazel.build/github.com/golang/sys/archive/refs/tags/v0.3.0.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found WARNING: Download from https://mirror.bazel.build/github.com/googleapis/googleapis/archive/83c3605afb5a39952bf0a0809875d41cf2a558ca.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found ``` The associated callsites are here: * https://github.com/bazelbuild/rules_go/blob/1a8fe64877c6e71dbbf51cdc8ceb2eb10c13e521/go/private/repositories.bzl#L91 * https://github.com/bazelbuild/rules_go/blob/1a8fe64877c6e71dbbf51cdc8ceb2eb10c13e521/go/private/repositories.bzl#L252 --- Please mirror the following URLs: * https://github.com/golang/sys/archive/refs/tags/v0.3.0.zip * https://github.com/googleapis/googleapis/archive/83c3605afb5a39952bf0a0809875d41cf2a558ca.zip
process
dependencies from rules go please list the urls of the archives you d like to mirror released a few days ago depends on a few new mirror urls that currently warning download from failed class java io filenotfoundexception get returned not found warning download from failed class java io filenotfoundexception get returned not found the associated callsites are here please mirror the following urls
1
146,043
5,592,593,825
IssuesEvent
2017-03-30 05:20:44
CS2103JAN2017-W09-B3/main
https://api.github.com/repos/CS2103JAN2017-W09-B3/main
closed
As an advanced user I want to add task reminder to Google Calendar
priority.low type.enhancement type.story
so that due task will be shown as full day calendar event
1.0
As an advanced user I want to add task reminder to Google Calendar - so that due task will be shown as full day calendar event
non_process
as an advanced user i want to add task reminder to google calendar so that due task will be shown as full day calendar event
0
10,161
13,044,162,649
IssuesEvent
2020-07-29 03:47:34
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `Encode` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `Encode` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `Encode` from TiDB - ## Description Port the scalar function `Encode` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function encode from tidb description port the scalar function encode from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
1
74,950
25,451,513,758
IssuesEvent
2022-11-24 10:49:09
matrix-org/synapse
https://api.github.com/repos/matrix-org/synapse
closed
media admin list API breaks when some event contains invalid info in content
A-Admin-API A-Validation S-Minor T-Defect O-Uncommon
### Description You can send events containing non-spec conform contents like this: ``` { "content": { "url": "mxc://<server>/<media>", "info": null, "body": "<filename>", "msgtype": "m.image" }, "type": "m.room.message" } ``` The ["List all media in a room" API](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/media_admin_api.md#list-all-media-in-a-room) processes it like this: [synapse/room.py at develop · matrix-org/synapse](https://github.com/matrix-org/synapse/blob/f38d7d79c8ec5c389c51327737bd517a27826bd6/synapse/storage/databases/main/room.py#L912) ``` [...] event_json = db_to_json(content_json) content = event_json["content"] content_url = content.get("url") thumbnail_url = content.get("info", {}).get("thumbnail_url") [...] ``` That results in an error, seen below. ### Steps to reproduce - Send non-spec-conforming image event containing `"info": null` in content - Send GET /_synapse/admin/v1/room/<room_id>/media - Receive error response with status code 500 ### Homeserver another homeserver ### Synapse Version v1.71.0 ### Installation Method Other (please mention below) ### Database Single PostgreSQL Server, never restored or migrated ### Workers Single process ### Platform Running in an `FROM debian:bullseye-slim` Container on a debian bullseye server. Build from this repository from source. 
### Configuration _No response_ ### Relevant log output ```shell 2022-11-23 14:03:47,276 - synapse.http.server - 124 - ERROR - GET-417 - Failed handle request via 'ListMediaInRoom': <XForwardedForRequest at 0x7f2d4044f160 method='GET' uri='/_synapse/admin/v1/room/<roomId>/media' clientproto='HTTP/1.1' site='8008'> Traceback (most recent call last): File "/usr/local/lib/python3.9/dist-packages/synapse/http/server.py", line 307, in _async_render_wrapper callback_return = await self._async_render(request) File "/usr/local/lib/python3.9/dist-packages/synapse/http/server.py", line 513, in _async_render callback_return = await raw_callback_return File "/usr/local/lib/python3.9/dist-packages/synapse/rest/admin/media.py", line 209, in on_GET local_mxcs, remote_mxcs = await self.store.get_media_mxcs_in_room(room_id) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/databases/main/room.py", line 862, in get_media_mxcs_in_room return await self.db_pool.runInteraction( File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 881, in runInteraction return await delay_cancellation(_runInteraction()) File "/usr/local/lib/python3.9/dist-packages/twisted/internet/defer.py", line 1693, in _inlineCallbacks result = context.run( File "/usr/local/lib/python3.9/dist-packages/twisted/python/failure.py", line 518, in throwExceptionIntoGenerator return g.throw(self.type, self.value, self.tb) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 848, in _runInteraction result = await self.runWithConnection( File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 976, in runWithConnection return await make_deferred_yieldable( File "/usr/local/lib/python3.9/dist-packages/twisted/python/threadpool.py", line 244, in inContext result = inContext.theWork() # type: ignore[attr-defined] File "/usr/local/lib/python3.9/dist-packages/twisted/python/threadpool.py", line 260, in <lambda> inContext.theWork = lambda: 
context.call( # type: ignore[attr-defined] File "/usr/local/lib/python3.9/dist-packages/twisted/python/context.py", line 117, in callWithContext return self.currentContext().callWithContext(ctx, func, *args, **kw) File "/usr/local/lib/python3.9/dist-packages/twisted/python/context.py", line 82, in callWithContext return func(*args, **kw) File "/usr/local/lib/python3.9/dist-packages/twisted/enterprise/adbapi.py", line 282, in _runWithConnection result = func(conn, *args, **kw) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 969, in inner_func return func(db_conn, *args, **kwargs) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 710, in new_transaction r = func(cursor, *args, **kwargs) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/databases/main/room.py", line 850, in _get_media_mxcs_in_room_txn local_mxcs, remote_mxcs = self._get_media_mxcs_in_room_txn(txn, room_id) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/databases/main/room.py", line 916, in _get_media_mxcs_in_room_txn thumbnail_url = content.get("info", {}).get("thumbnail_url") AttributeError: 'NoneType' object has no attribute 'get' ``` ### Anything else that would be useful to know? _No response_
1.0
media admin list API breaks when some event contains invalid info in content - ### Description You can send events containing non-spec conform contents like this: ``` { "content": { "url": "mxc://<server>/<media>", "info": null, "body": "<filename>", "msgtype": "m.image" }, "type": "m.room.message" } ``` The ["List all media in a room" API](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/media_admin_api.md#list-all-media-in-a-room) processes it like this: [synapse/room.py at develop · matrix-org/synapse](https://github.com/matrix-org/synapse/blob/f38d7d79c8ec5c389c51327737bd517a27826bd6/synapse/storage/databases/main/room.py#L912) ``` [...] event_json = db_to_json(content_json) content = event_json["content"] content_url = content.get("url") thumbnail_url = content.get("info", {}).get("thumbnail_url") [...] ``` That results in an error, seen below. ### Steps to reproduce - Send non-spec-conforming image event containing `"info": null` in content - Send GET /_synapse/admin/v1/room/<room_id>/media - Receive error response with status code 500 ### Homeserver another homeserver ### Synapse Version v1.71.0 ### Installation Method Other (please mention below) ### Database Single PostgreSQL Server, never restored or migrated ### Workers Single process ### Platform Running in an `FROM debian:bullseye-slim` Container on a debian bullseye server. Build from this repository from source. 
### Configuration _No response_ ### Relevant log output ```shell 2022-11-23 14:03:47,276 - synapse.http.server - 124 - ERROR - GET-417 - Failed handle request via 'ListMediaInRoom': <XForwardedForRequest at 0x7f2d4044f160 method='GET' uri='/_synapse/admin/v1/room/<roomId>/media' clientproto='HTTP/1.1' site='8008'> Traceback (most recent call last): File "/usr/local/lib/python3.9/dist-packages/synapse/http/server.py", line 307, in _async_render_wrapper callback_return = await self._async_render(request) File "/usr/local/lib/python3.9/dist-packages/synapse/http/server.py", line 513, in _async_render callback_return = await raw_callback_return File "/usr/local/lib/python3.9/dist-packages/synapse/rest/admin/media.py", line 209, in on_GET local_mxcs, remote_mxcs = await self.store.get_media_mxcs_in_room(room_id) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/databases/main/room.py", line 862, in get_media_mxcs_in_room return await self.db_pool.runInteraction( File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 881, in runInteraction return await delay_cancellation(_runInteraction()) File "/usr/local/lib/python3.9/dist-packages/twisted/internet/defer.py", line 1693, in _inlineCallbacks result = context.run( File "/usr/local/lib/python3.9/dist-packages/twisted/python/failure.py", line 518, in throwExceptionIntoGenerator return g.throw(self.type, self.value, self.tb) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 848, in _runInteraction result = await self.runWithConnection( File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 976, in runWithConnection return await make_deferred_yieldable( File "/usr/local/lib/python3.9/dist-packages/twisted/python/threadpool.py", line 244, in inContext result = inContext.theWork() # type: ignore[attr-defined] File "/usr/local/lib/python3.9/dist-packages/twisted/python/threadpool.py", line 260, in <lambda> inContext.theWork = lambda: 
context.call( # type: ignore[attr-defined] File "/usr/local/lib/python3.9/dist-packages/twisted/python/context.py", line 117, in callWithContext return self.currentContext().callWithContext(ctx, func, *args, **kw) File "/usr/local/lib/python3.9/dist-packages/twisted/python/context.py", line 82, in callWithContext return func(*args, **kw) File "/usr/local/lib/python3.9/dist-packages/twisted/enterprise/adbapi.py", line 282, in _runWithConnection result = func(conn, *args, **kw) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 969, in inner_func return func(db_conn, *args, **kwargs) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/database.py", line 710, in new_transaction r = func(cursor, *args, **kwargs) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/databases/main/room.py", line 850, in _get_media_mxcs_in_room_txn local_mxcs, remote_mxcs = self._get_media_mxcs_in_room_txn(txn, room_id) File "/usr/local/lib/python3.9/dist-packages/synapse/storage/databases/main/room.py", line 916, in _get_media_mxcs_in_room_txn thumbnail_url = content.get("info", {}).get("thumbnail_url") AttributeError: 'NoneType' object has no attribute 'get' ``` ### Anything else that would be useful to know? _No response_
non_process
media admin list api breaks when some event contains invalid info in content description you can send events containing non spec conform contents like this content url mxc info null body msgtype m image type m room message the processes it like this event json db to json content json content event json content url content get url thumbnail url content get info get thumbnail url that results in an error seen below steps to reproduce send non spec conforming image event containing info null in content send get synapse admin room media receive error response with status code homeserver another homeserver synapse version installation method other please mention below database single postgresql server never restored or migrated workers single process platform running in an from debian bullseye slim container on a debian bullseye server build from this repository from source configuration no response relevant log output shell synapse http server error get failed handle request via listmediainroom media clientproto http site traceback most recent call last file usr local lib dist packages synapse http server py line in async render wrapper callback return await self async render request file usr local lib dist packages synapse http server py line in async render callback return await raw callback return file usr local lib dist packages synapse rest admin media py line in on get local mxcs remote mxcs await self store get media mxcs in room room id file usr local lib dist packages synapse storage databases main room py line in get media mxcs in room return await self db pool runinteraction file usr local lib dist packages synapse storage database py line in runinteraction return await delay cancellation runinteraction file usr local lib dist packages twisted internet defer py line in inlinecallbacks result context run file usr local lib dist packages twisted python failure py line in throwexceptionintogenerator return g throw self type self value self tb file usr local lib 
dist packages synapse storage database py line in runinteraction result await self runwithconnection file usr local lib dist packages synapse storage database py line in runwithconnection return await make deferred yieldable file usr local lib dist packages twisted python threadpool py line in incontext result incontext thework type ignore file usr local lib dist packages twisted python threadpool py line in incontext thework lambda context call type ignore file usr local lib dist packages twisted python context py line in callwithcontext return self currentcontext callwithcontext ctx func args kw file usr local lib dist packages twisted python context py line in callwithcontext return func args kw file usr local lib dist packages twisted enterprise adbapi py line in runwithconnection result func conn args kw file usr local lib dist packages synapse storage database py line in inner func return func db conn args kwargs file usr local lib dist packages synapse storage database py line in new transaction r func cursor args kwargs file usr local lib dist packages synapse storage databases main room py line in get media mxcs in room txn local mxcs remote mxcs self get media mxcs in room txn txn room id file usr local lib dist packages synapse storage databases main room py line in get media mxcs in room txn thumbnail url content get info get thumbnail url attributeerror nonetype object has no attribute get anything else that would be useful to know no response
0
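The traceback in the record above comes down to `content.get("info").get("thumbnail_url")` raising `AttributeError` when an event carries `"info": null`. A minimal sketch of the defensive-access pattern in Python (illustrative only, not Synapse's actual patch; the event shape follows the report):

```python
def get_thumbnail_url(content: dict):
    """Read content["info"]["thumbnail_url"], tolerating "info": null.

    Note that `content.get("info", {})` is not enough: when the key is
    present with a null value it still returns None, so coerce with `or {}`.
    """
    info = content.get("info") or {}
    return info.get("thumbnail_url")

# A non-spec-conforming m.image event like the one in the report
# (the mxc URL here is a made-up placeholder).
event_content = {"url": "mxc://example/abc", "info": None,
                 "msgtype": "m.image"}
print(get_thumbnail_url(event_content))  # None, instead of AttributeError
```

The same `or {}` coercion would also cover events where `info` is absent entirely, so one guard handles both malformed shapes.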
222,566
7,433,871,143
IssuesEvent
2018-03-26 09:07:35
CS2103JAN2018-W14-B2/main
https://api.github.com/repos/CS2103JAN2018-W14-B2/main
closed
Create commands for employees to apply for a shift
priority.high type.enhancement
`apply <index>` lets an employee apply for the shift at the specified index. `unapply <index>` lets an employee remove himself from the shift at the specified index.
1.0
Create commands for employees to apply for a shift - `apply <index>` lets an employee apply for the shift at the specified index. `unapply <index>` lets an employee remove himself from the shift at the specified index.
non_process
create commands for employees to apply for a shift apply lets an employee apply for the shift at the specified index unapply lets an employee remove himself from the shift at the specified index
0
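The two commands in the record above are symmetric add/remove operations on a shift's set of applicants. A rough Python sketch of the model-level behaviour (the actual project is Java; the class and function names here are hypothetical):

```python
class Shift:
    """A work shift that employees can apply for."""
    def __init__(self, name: str):
        self.name = name
        self.applicants = set()

def apply_for_shift(shifts: list, index: int, employee: str) -> None:
    """`apply <index>`: add the employee to the shift at the 1-based index."""
    shifts[index - 1].applicants.add(employee)

def unapply_from_shift(shifts: list, index: int, employee: str) -> None:
    """`unapply <index>`: remove the employee from the shift, if applied."""
    shifts[index - 1].applicants.discard(employee)

shifts = [Shift("Mon 09:00-13:00"), Shift("Mon 13:00-17:00")]
apply_for_shift(shifts, 2, "alice")
print(shifts[1].applicants)  # {'alice'}
unapply_from_shift(shifts, 2, "alice")
print(shifts[1].applicants)  # set()
```

Using a set with `discard` makes `unapply` idempotent; a real command would also validate the index and report whether the employee had actually applied.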
5,369
8,201,364,332
IssuesEvent
2018-09-01 16:47:22
rubberduck-vba/Rubberduck
https://api.github.com/repos/rubberduck-vba/Rubberduck
closed
Scale keyword in UDT as a member name is a parser error
bug critical parse-tree-processing regression
Stumbled upon this while I was playing with a `tagDEC` struct: ``` Option Explicit ' https://docs.microsoft.com/en-us/windows/desktop/api/wtypes/ns-wtypes-tagdec Private Type tagDec wReserved As Integer scale As Byte sign As Byte Hi32 As Long Lo64Lo As Long Lo64Hi As Long End Type ``` The parser is choking on the `scale` member because it's a keyword. Relevant log bit: ``` 2018-08-30 11:16:07.1279;ERROR-2.2.0.3624;Rubberduck.Parsing.VBA.Parsing.ModuleParser;Syntax error; offending token 'scale' at line 6, column 5 in the CodePaneCode version of module Module1.; 2018-08-30 11:16:07.1279;DEBUG-2.2.0.3624;Rubberduck.Parsing.VBA.Parsing.ModuleParser;SyntaxErrorException thrown in thread 31, ParseTaskID a1d3ada6-85ce-4f61-98a9-30829356b3bf.;Rubberduck.Parsing.Symbols.ParsingExceptions.MainParseSyntaxErrorException: mismatched input 'scale' expecting END_TYPE ---> Antlr4.Runtime.InputMismatchException: Exception of type 'Antlr4.Runtime.InputMismatchException' was thrown. at Antlr4.Runtime.DefaultErrorStrategy.RecoverInline(Parser recognizer) at Antlr4.Runtime.Parser.Match(Int32 ttype) at Rubberduck.Parsing.Grammar.VBAParser.udtDeclaration() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 13232 --- End of inner exception stack trace --- at Rubberduck.Parsing.Symbols.ParsingExceptions.MainParseExceptionErrorListener.SyntaxError(IRecognizer recognizer, IToken offendingSymbol, Int32 line, Int32 charPositionInLine, String msg, RecognitionException e) in C:\projects\rubberduck\Rubberduck.Parsing\Symbols\ParsingExceptions\MainParseExceptionErrorListener.cs:line 17 at Antlr4.Runtime.ProxyErrorListener`1.SyntaxError(IRecognizer recognizer, Symbol offendingSymbol, Int32 line, Int32 charPositionInLine, String msg, RecognitionException e) at Antlr4.Runtime.Parser.NotifyErrorListeners(IToken offendingToken, String msg, RecognitionException e) at Antlr4.Runtime.DefaultErrorStrategy.NotifyErrorListeners(Parser recognizer, String message, RecognitionException 
e) at Antlr4.Runtime.DefaultErrorStrategy.ReportInputMismatch(Parser recognizer, InputMismatchException e) at Antlr4.Runtime.DefaultErrorStrategy.ReportError(Parser recognizer, RecognitionException e) at Rubberduck.Parsing.Grammar.VBAParser.udtDeclaration() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 13237 at Rubberduck.Parsing.Grammar.VBAParser.moduleDeclarationsElement() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 2257 at Rubberduck.Parsing.Grammar.VBAParser.moduleDeclarations() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 1945 at Rubberduck.Parsing.Grammar.VBAParser.module() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 442 at Rubberduck.Parsing.Grammar.VBAParser.startRule() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 332 at Rubberduck.Parsing.VBA.Parsing.VBATokenStreamParser.Parse(ITokenStream tokenStream, PredictionMode predictionMode, IParserErrorListener errorListener) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\VBATokenStreamParser.cs:line 21 at Rubberduck.Parsing.VBA.Parsing.TokenStreamParserBase.ParseSll(String moduleName, ITokenStream tokenStream, CodeKind codeKind) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\TokenStreamParserBase.cs:line 84 at Rubberduck.Parsing.VBA.Parsing.TokenStreamParserBase.ParseWithFallBack(String moduleName, CommonTokenStream tokenStream, CodeKind codeKind) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\TokenStreamParserBase.cs:line 52 at Rubberduck.Parsing.VBA.Parsing.TokenStreamParserBase.Parse(String moduleName, CommonTokenStream tokenStream, CodeKind codeKind, ParserMode parserMode) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\TokenStreamParserBase.cs:line 32 at Rubberduck.Parsing.VBA.Parsing.TokenStreamParserStringParserAdapterWithPreprocessing.Parse(String moduleName, String projectId, String code, CancellationToken token, 
CodeKind codeKind, ParserMode parserMode) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\TokenStreamParserStringParserAdapterWithPreprocessing.cs:line 29 at Rubberduck.Parsing.VBA.Parsing.ModuleParser.CodePanePassResults(QualifiedModuleName module, CancellationToken token, TokenStreamRewriter rewriter) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\ModuleParser.cs:line 167 at Rubberduck.Parsing.VBA.Parsing.ModuleParser.ParseInternal(QualifiedModuleName module, CancellationToken cancellationToken, TokenStreamRewriter rewriter, Guid taskId) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\ModuleParser.cs:line 87 at Rubberduck.Parsing.VBA.Parsing.ModuleParser.Parse(QualifiedModuleName module, CancellationToken cancellationToken, TokenStreamRewriter rewriter) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\ModuleParser.cs:line 43 Token: scale at L6C5 Kind of parsed code: CodePaneCode Component: Module1 (code pane version) ParseType: Main parse ```
1.0
Scale keyword in UDT as a member name is a parser error - Stumbled upon this while I was playing with a `tagDEC` struct: ``` Option Explicit ' https://docs.microsoft.com/en-us/windows/desktop/api/wtypes/ns-wtypes-tagdec Private Type tagDec wReserved As Integer scale As Byte sign As Byte Hi32 As Long Lo64Lo As Long Lo64Hi As Long End Type ``` The parser is choking on the `scale` member because it's a keyword. Relevant log bit: ``` 2018-08-30 11:16:07.1279;ERROR-2.2.0.3624;Rubberduck.Parsing.VBA.Parsing.ModuleParser;Syntax error; offending token 'scale' at line 6, column 5 in the CodePaneCode version of module Module1.; 2018-08-30 11:16:07.1279;DEBUG-2.2.0.3624;Rubberduck.Parsing.VBA.Parsing.ModuleParser;SyntaxErrorException thrown in thread 31, ParseTaskID a1d3ada6-85ce-4f61-98a9-30829356b3bf.;Rubberduck.Parsing.Symbols.ParsingExceptions.MainParseSyntaxErrorException: mismatched input 'scale' expecting END_TYPE ---> Antlr4.Runtime.InputMismatchException: Exception of type 'Antlr4.Runtime.InputMismatchException' was thrown. 
at Antlr4.Runtime.DefaultErrorStrategy.RecoverInline(Parser recognizer) at Antlr4.Runtime.Parser.Match(Int32 ttype) at Rubberduck.Parsing.Grammar.VBAParser.udtDeclaration() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 13232 --- End of inner exception stack trace --- at Rubberduck.Parsing.Symbols.ParsingExceptions.MainParseExceptionErrorListener.SyntaxError(IRecognizer recognizer, IToken offendingSymbol, Int32 line, Int32 charPositionInLine, String msg, RecognitionException e) in C:\projects\rubberduck\Rubberduck.Parsing\Symbols\ParsingExceptions\MainParseExceptionErrorListener.cs:line 17 at Antlr4.Runtime.ProxyErrorListener`1.SyntaxError(IRecognizer recognizer, Symbol offendingSymbol, Int32 line, Int32 charPositionInLine, String msg, RecognitionException e) at Antlr4.Runtime.Parser.NotifyErrorListeners(IToken offendingToken, String msg, RecognitionException e) at Antlr4.Runtime.DefaultErrorStrategy.NotifyErrorListeners(Parser recognizer, String message, RecognitionException e) at Antlr4.Runtime.DefaultErrorStrategy.ReportInputMismatch(Parser recognizer, InputMismatchException e) at Antlr4.Runtime.DefaultErrorStrategy.ReportError(Parser recognizer, RecognitionException e) at Rubberduck.Parsing.Grammar.VBAParser.udtDeclaration() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 13237 at Rubberduck.Parsing.Grammar.VBAParser.moduleDeclarationsElement() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 2257 at Rubberduck.Parsing.Grammar.VBAParser.moduleDeclarations() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 1945 at Rubberduck.Parsing.Grammar.VBAParser.module() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 442 at Rubberduck.Parsing.Grammar.VBAParser.startRule() in C:\projects\rubberduck\Rubberduck.Parsing\obj\Release\VBAParser.cs:line 332 at Rubberduck.Parsing.VBA.Parsing.VBATokenStreamParser.Parse(ITokenStream tokenStream, 
PredictionMode predictionMode, IParserErrorListener errorListener) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\VBATokenStreamParser.cs:line 21 at Rubberduck.Parsing.VBA.Parsing.TokenStreamParserBase.ParseSll(String moduleName, ITokenStream tokenStream, CodeKind codeKind) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\TokenStreamParserBase.cs:line 84 at Rubberduck.Parsing.VBA.Parsing.TokenStreamParserBase.ParseWithFallBack(String moduleName, CommonTokenStream tokenStream, CodeKind codeKind) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\TokenStreamParserBase.cs:line 52 at Rubberduck.Parsing.VBA.Parsing.TokenStreamParserBase.Parse(String moduleName, CommonTokenStream tokenStream, CodeKind codeKind, ParserMode parserMode) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\TokenStreamParserBase.cs:line 32 at Rubberduck.Parsing.VBA.Parsing.TokenStreamParserStringParserAdapterWithPreprocessing.Parse(String moduleName, String projectId, String code, CancellationToken token, CodeKind codeKind, ParserMode parserMode) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\TokenStreamParserStringParserAdapterWithPreprocessing.cs:line 29 at Rubberduck.Parsing.VBA.Parsing.ModuleParser.CodePanePassResults(QualifiedModuleName module, CancellationToken token, TokenStreamRewriter rewriter) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\ModuleParser.cs:line 167 at Rubberduck.Parsing.VBA.Parsing.ModuleParser.ParseInternal(QualifiedModuleName module, CancellationToken cancellationToken, TokenStreamRewriter rewriter, Guid taskId) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\ModuleParser.cs:line 87 at Rubberduck.Parsing.VBA.Parsing.ModuleParser.Parse(QualifiedModuleName module, CancellationToken cancellationToken, TokenStreamRewriter rewriter) in C:\projects\rubberduck\Rubberduck.Parsing\VBA\Parsing\ModuleParser.cs:line 43 Token: scale at L6C5 Kind of parsed code: CodePaneCode Component: Module1 (code pane version) 
ParseType: Main parse ```
process
scale keyword in udt as a member name is a parser error stumbled upon this while i was playing with a tagdec struct option explicit private type tagdec wreserved as integer scale as byte sign as byte as long as long as long end type the parser is choking on the scale member because it s a keyword relevant log bit error rubberduck parsing vba parsing moduleparser syntax error offending token scale at line column in the codepanecode version of module debug rubberduck parsing vba parsing moduleparser syntaxerrorexception thrown in thread parsetaskid rubberduck parsing symbols parsingexceptions mainparsesyntaxerrorexception mismatched input scale expecting end type runtime inputmismatchexception exception of type runtime inputmismatchexception was thrown at runtime defaulterrorstrategy recoverinline parser recognizer at runtime parser match ttype at rubberduck parsing grammar vbaparser udtdeclaration in c projects rubberduck rubberduck parsing obj release vbaparser cs line end of inner exception stack trace at rubberduck parsing symbols parsingexceptions mainparseexceptionerrorlistener syntaxerror irecognizer recognizer itoken offendingsymbol line charpositioninline string msg recognitionexception e in c projects rubberduck rubberduck parsing symbols parsingexceptions mainparseexceptionerrorlistener cs line at runtime proxyerrorlistener syntaxerror irecognizer recognizer symbol offendingsymbol line charpositioninline string msg recognitionexception e at runtime parser notifyerrorlisteners itoken offendingtoken string msg recognitionexception e at runtime defaulterrorstrategy notifyerrorlisteners parser recognizer string message recognitionexception e at runtime defaulterrorstrategy reportinputmismatch parser recognizer inputmismatchexception e at runtime defaulterrorstrategy reporterror parser recognizer recognitionexception e at rubberduck parsing grammar vbaparser udtdeclaration in c projects rubberduck rubberduck parsing obj release vbaparser cs line at rubberduck 
parsing grammar vbaparser moduledeclarationselement in c projects rubberduck rubberduck parsing obj release vbaparser cs line at rubberduck parsing grammar vbaparser moduledeclarations in c projects rubberduck rubberduck parsing obj release vbaparser cs line at rubberduck parsing grammar vbaparser module in c projects rubberduck rubberduck parsing obj release vbaparser cs line at rubberduck parsing grammar vbaparser startrule in c projects rubberduck rubberduck parsing obj release vbaparser cs line at rubberduck parsing vba parsing vbatokenstreamparser parse itokenstream tokenstream predictionmode predictionmode iparsererrorlistener errorlistener in c projects rubberduck rubberduck parsing vba parsing vbatokenstreamparser cs line at rubberduck parsing vba parsing tokenstreamparserbase parsesll string modulename itokenstream tokenstream codekind codekind in c projects rubberduck rubberduck parsing vba parsing tokenstreamparserbase cs line at rubberduck parsing vba parsing tokenstreamparserbase parsewithfallback string modulename commontokenstream tokenstream codekind codekind in c projects rubberduck rubberduck parsing vba parsing tokenstreamparserbase cs line at rubberduck parsing vba parsing tokenstreamparserbase parse string modulename commontokenstream tokenstream codekind codekind parsermode parsermode in c projects rubberduck rubberduck parsing vba parsing tokenstreamparserbase cs line at rubberduck parsing vba parsing tokenstreamparserstringparseradapterwithpreprocessing parse string modulename string projectid string code cancellationtoken token codekind codekind parsermode parsermode in c projects rubberduck rubberduck parsing vba parsing tokenstreamparserstringparseradapterwithpreprocessing cs line at rubberduck parsing vba parsing moduleparser codepanepassresults qualifiedmodulename module cancellationtoken token tokenstreamrewriter rewriter in c projects rubberduck rubberduck parsing vba parsing moduleparser cs line at rubberduck parsing vba parsing 
moduleparser parseinternal qualifiedmodulename module cancellationtoken cancellationtoken tokenstreamrewriter rewriter guid taskid in c projects rubberduck rubberduck parsing vba parsing moduleparser cs line at rubberduck parsing vba parsing moduleparser parse qualifiedmodulename module cancellationtoken cancellationtoken tokenstreamrewriter rewriter in c projects rubberduck rubberduck parsing vba parsing moduleparser cs line token scale at kind of parsed code codepanecode component code pane version parsetype main parse
1
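The failure in the record above is the classic "keyword used as identifier" grammar gap: the UDT member rule only accepted plain identifiers, so the reserved word `scale` aborted the parse with `mismatched input 'scale' expecting END_TYPE`. A toy illustration of the usual fix, allowing keyword-shaped tokens in the member-name position; this is a sketch of the idea, not Rubberduck's actual ANTLR grammar change:

```python
# Tiny illustrative subset of VBA keywords; the real list is much longer.
VBA_KEYWORDS = {"scale", "type", "end", "as", "option"}

def is_valid_udt_member_name(token: str, allow_keywords: bool = True) -> bool:
    """Decide whether a token may name a UDT member.

    A strict grammar (allow_keywords=False) rejects keywords outright,
    which is what produced the parse error; the fix is to also accept
    keyword-shaped tokens in this identifier position.
    """
    if not token.isidentifier():
        return False
    return allow_keywords or token.lower() not in VBA_KEYWORDS

print(is_valid_udt_member_name("scale", allow_keywords=False))  # False (the bug)
print(is_valid_udt_member_name("scale"))                        # True  (the fix)
```

In an ANTLR grammar the equivalent change is widening the member-name rule from a bare identifier token to an alternative that also lists the permissible keywords.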
244,296
26,375,083,878
IssuesEvent
2023-01-12 01:16:56
noahbjohnson/electron-foundation
https://api.github.com/repos/noahbjohnson/electron-foundation
opened
electron-devtools-installer-3.1.1.tgz: 2 vulnerabilities (highest severity is: 5.5)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>electron-devtools-installer-3.1.1.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jszip</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (electron-devtools-installer version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [WS-2023-0004](https://github.com/Stuk/jszip/commit/2edab366119c9ee948357c02f1206c28566cdf15) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | jszip-3.5.0.tgz | Transitive | 3.2.0 | &#10060; | | [CVE-2021-23413](https://www.mend.io/vulnerability-database/CVE-2021-23413) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | jszip-3.5.0.tgz | Transitive | 3.2.0 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2023-0004</summary> ### Vulnerable Library - <b>jszip-3.5.0.tgz</b></p> <p>Create, read and edit .zip files with JavaScript http://stuartk.com/jszip</p> <p>Library home page: <a href="https://registry.npmjs.org/jszip/-/jszip-3.5.0.tgz">https://registry.npmjs.org/jszip/-/jszip-3.5.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jszip</p> <p> Dependency Hierarchy: - electron-devtools-installer-3.1.1.tgz (Root Library) - unzip-crx-3-0.2.0.tgz - :x: **jszip-3.5.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> jszip before 3.8.0 does not sanitize filenames when files are loaded with 
`loadAsync`, which makes the library vulnerable to zip-slip attack. <p>Publish Date: 2023-01-04 <p>URL: <a href=https://github.com/Stuk/jszip/commit/2edab366119c9ee948357c02f1206c28566cdf15>WS-2023-0004</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2023-01-04</p> <p>Fix Resolution (jszip): 3.6.0</p> <p>Direct dependency fix Resolution (electron-devtools-installer): 3.2.0</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-23413</summary> ### Vulnerable Library - <b>jszip-3.5.0.tgz</b></p> <p>Create, read and edit .zip files with JavaScript http://stuartk.com/jszip</p> <p>Library home page: <a href="https://registry.npmjs.org/jszip/-/jszip-3.5.0.tgz">https://registry.npmjs.org/jszip/-/jszip-3.5.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jszip</p> <p> Dependency Hierarchy: - electron-devtools-installer-3.1.1.tgz (Root Library) - unzip-crx-3-0.2.0.tgz - :x: **jszip-3.5.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> This affects the package jszip before 3.7.0. Crafting a new zip file with filenames set to Object prototype values (e.g __proto__, toString, etc) results in a returned object with a modified prototype instance. 
<p>Publish Date: 2021-07-25 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23413>CVE-2021-23413</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23413">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23413</a></p> <p>Release Date: 2021-07-25</p> <p>Fix Resolution (jszip): 3.7.0</p> <p>Direct dependency fix Resolution (electron-devtools-installer): 3.2.0</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
electron-devtools-installer-3.1.1.tgz: 2 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>electron-devtools-installer-3.1.1.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jszip</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (electron-devtools-installer version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [WS-2023-0004](https://github.com/Stuk/jszip/commit/2edab366119c9ee948357c02f1206c28566cdf15) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | jszip-3.5.0.tgz | Transitive | 3.2.0 | &#10060; | | [CVE-2021-23413](https://www.mend.io/vulnerability-database/CVE-2021-23413) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | jszip-3.5.0.tgz | Transitive | 3.2.0 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2023-0004</summary> ### Vulnerable Library - <b>jszip-3.5.0.tgz</b></p> <p>Create, read and edit .zip files with JavaScript http://stuartk.com/jszip</p> <p>Library home page: <a href="https://registry.npmjs.org/jszip/-/jszip-3.5.0.tgz">https://registry.npmjs.org/jszip/-/jszip-3.5.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jszip</p> <p> Dependency Hierarchy: - electron-devtools-installer-3.1.1.tgz (Root Library) - unzip-crx-3-0.2.0.tgz - :x: **jszip-3.5.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability 
Details <p> jszip before 3.8.0 does not sanitize filenames when files are loaded with `loadAsync`, which makes the library vulnerable to zip-slip attack. <p>Publish Date: 2023-01-04 <p>URL: <a href=https://github.com/Stuk/jszip/commit/2edab366119c9ee948357c02f1206c28566cdf15>WS-2023-0004</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2023-01-04</p> <p>Fix Resolution (jszip): 3.6.0</p> <p>Direct dependency fix Resolution (electron-devtools-installer): 3.2.0</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-23413</summary> ### Vulnerable Library - <b>jszip-3.5.0.tgz</b></p> <p>Create, read and edit .zip files with JavaScript http://stuartk.com/jszip</p> <p>Library home page: <a href="https://registry.npmjs.org/jszip/-/jszip-3.5.0.tgz">https://registry.npmjs.org/jszip/-/jszip-3.5.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jszip</p> <p> Dependency Hierarchy: - electron-devtools-installer-3.1.1.tgz (Root Library) - unzip-crx-3-0.2.0.tgz - :x: **jszip-3.5.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> This affects the package jszip before 3.7.0. 
Crafting a new zip file with filenames set to Object prototype values (e.g __proto__, toString, etc) results in a returned object with a modified prototype instance. <p>Publish Date: 2021-07-25 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23413>CVE-2021-23413</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23413">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23413</a></p> <p>Release Date: 2021-07-25</p> <p>Fix Resolution (jszip): 3.7.0</p> <p>Direct dependency fix Resolution (electron-devtools-installer): 3.2.0</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_process
electron devtools installer tgz vulnerabilities highest severity is vulnerable library electron devtools installer tgz path to dependency file package json path to vulnerable library node modules jszip vulnerabilities cve severity cvss dependency type fixed in electron devtools installer version remediation available medium jszip tgz transitive medium jszip tgz transitive details ws vulnerable library jszip tgz create read and edit zip files with javascript library home page a href path to dependency file package json path to vulnerable library node modules jszip dependency hierarchy electron devtools installer tgz root library unzip crx tgz x jszip tgz vulnerable library found in base branch master vulnerability details jszip before does not sanitize filenames when files are loaded with loadasync which makes the library vunerable to zip slip attack publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution jszip direct dependency fix resolution electron devtools installer step up your open source security game with mend cve vulnerable library jszip tgz create read and edit zip files with javascript library home page a href path to dependency file package json path to vulnerable library node modules jszip dependency hierarchy electron devtools installer tgz root library unzip crx tgz x jszip tgz vulnerable library found in base branch master vulnerability details this affects the package jszip before crafting a new zip file with filenames set to object prototype values e g proto tostring etc results in a returned object with a modified prototype instance publish date url a href cvss score details base score metrics exploitability metrics 
attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jszip direct dependency fix resolution electron devtools installer step up your open source security game with mend
0
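Both CVEs in the record above stem from trusting attacker-controlled entry names inside an archive. The zip-slip half (WS-2023-0004) is conventionally fixed by rejecting entries that would resolve outside the extraction root; a Python sketch of that check (jszip itself is JavaScript, and the prototype-pollution issue in CVE-2021-23413 is JS-specific, so this only illustrates the path-traversal pattern):

```python
import os.path
import posixpath

def is_safe_entry(dest_root: str, entry_name: str) -> bool:
    """Reject archive entry names that would escape dest_root.

    Catches the zip-slip pattern: absolute names and ../ traversal.
    """
    normalized = posixpath.normpath(entry_name)
    if normalized.startswith("/") or normalized == ".." \
            or normalized.startswith("../"):
        return False
    # Belt-and-braces: join to the root and confirm we are still under it.
    target = os.path.normpath(os.path.join(dest_root, normalized))
    return target.startswith(os.path.normpath(dest_root) + os.sep)

print(is_safe_entry("/tmp/out", "img/logo.png"))      # True
print(is_safe_entry("/tmp/out", "../../etc/passwd"))  # False
print(is_safe_entry("/tmp/out", "/etc/passwd"))       # False
```

An extractor would call this for every entry before writing, skipping or aborting on unsafe names rather than silently writing outside the target directory.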
137,362
12,751,283,690
IssuesEvent
2020-06-27 10:03:25
Imagine-Programming/hindsight
https://api.github.com/repos/Imagine-Programming/hindsight
opened
[documentation] generate a Wiki from the XML comments in code.
documentation
Currently, the code is thoroughly documented through XML comment blocks. These XML comment blocks are not extremely well supported for C++ in Visual Studio, but they do describe all classes, methods and relevant members with references to other classes, methods and relevant members. The next step is to generate some form of wiki from those XML blocks, making the code easier to navigate and understand for people that are new to Windows API programming, the project and/or C++.
1.0
[documentation] generate a Wiki from the XML comments in code. - Currently, the code is thoroughly documented through XML comment blocks. These XML comment blocks are not extremely well supported for C++ in Visual Studio, but they do describe all classes, methods and relevant members with references to other classes, methods and relevant members. The next step is to generate some form of wiki from those XML blocks, making the code easier to navigate and understand for people that are new to Windows API programming, the project and/or C++.
non_process
generate a wiki from the xml comments in code currently the code is thoroughly documented through xml comment blocks these xml comment blocks are not extremely well supported for c in visual studio but they do describe all classes methods and relevant members with references to other classes methods and relevant members the next step is to generate some form of wiki from those xml blocks making the code easier to navigate and understand for people that are new to windows api programming the project and or c
0
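The record above asks for a wiki generated from XML doc comments. One lightweight way to bootstrap that is a script that pulls `<summary>` blocks out of `///` comments and emits markdown; the sketch below assumes the standard XML-doc tag names and a simple `/// <summary>...</summary>` line layout, which may not match the project's exact commenting style:

```python
import re

# A /// <summary>...</summary> comment, then the declaration on the next line.
DOC_RE = re.compile(
    r"///\s*<summary>\s*(?P<summary>.*?)\s*</summary>\s*\n\s*(?P<decl>[^/\s][^\n]*)",
    re.DOTALL,
)

def extract_docs(source: str):
    """Yield (declaration, summary) pairs from XML doc comments."""
    for m in DOC_RE.finditer(source):
        # Collapse "///" continuation markers inside multi-line summaries.
        summary = re.sub(r"\s*///\s*", " ", m.group("summary")).strip()
        yield m.group("decl").strip(), summary

def to_markdown(source: str) -> str:
    """Render one markdown section per documented declaration."""
    return "\n".join(f"### `{decl}`\n\n{summary}\n"
                     for decl, summary in extract_docs(source))

cpp = ("/// <summary>Opens the target process.</summary>\n"
       "HANDLE OpenTarget(DWORD pid);\n")
print(to_markdown(cpp))
```

A fuller version would also pick up `<param>`, `<returns>` and `<seealso>` tags and write one wiki page per class, but the same regex-and-template approach scales to that.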
20,272
26,902,779,032
IssuesEvent
2023-02-06 16:45:07
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
reopened
Bugfix: Missing NDBC buoy location information when running ASCII2NC
type: bug requestor: NOAA/EMC alert: NEED ACCOUNT KEY MET: PreProcessing Tools (Point) priority: high
## Describe the Problem ## I tested ASCII2NC with MET 11.0.0/METplus 5.0.0 to convert NDBC buoy data (*.txt) into a METplus compatible point obs netCDF file and got the following error messages in the log file: ERROR : No location information found for station 44084 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/44084.txt ERROR : No location information found for station 46275 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/46275.txt ERROR : No location information found for station CSXA2 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/CSXA2.txt ... How were NDBC buoy location information obtained in MET/METplus? Please update the buoy location information to include all NDBC buoys. ### Expected Behavior ### All NDBC buoys have an assigned location. ### Environment ### Describe your runtime environment: *1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)* WCOSS2 and Hera *2. OS: (e.g. RedHat Linux, MacOS)* *3. 
Software version number(s)* MET 11.0.0/METplus 5.0.0 ### To Reproduce ### NDBC buoy data directory on WCOSS2: /lfs/h1/ops/dev/dcom/$YYYYMMDD/validation_data/marine/buoy METplus use case: https://metplus.readthedocs.io/en/latest/generated/model_applications/marine_and_cryosphere/PointStat_fcstGFS_obsNDBC_WaveHeight.html#sphx-glr-generated-model-applications-marine-and-cryosphere-pointstat-fcstgfs-obsndbc-waveheight-py ### Relevant Deadlines ### MET 11.0.1/METplus 5.0.1 ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [X] Select **engineer(s)** or **no engineer** required @JohnHalleyGotway - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Organization** level **Project** for support of the current coordinated release - [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [X] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [X] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [x] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [x] Fix the bug and test your changes. - [x] Add/update log messages for easier debugging. - [x] Add/update unit tests. - [x] Add/update documentation. 
- [x] Add any new Python packages to the [METplus Components Python Requirements](https://metplus.readthedocs.io/en/develop/Users_Guide/overview.html#metplus-components-python-requirements) table. - [x] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issues Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Development** issues Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
1.0
Bugfix: Missing NDBC buoy location information when running ASCII2NC - ## Describe the Problem ## I tested ASCII2NC with MET 11.0.0/METplus 5.0.0 to convert NDBC buoy data (*.txt) into a METplus compatible point obs netCDF file and got the following error messages in the log file: ERROR : No location information found for station 44084 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/44084.txt ERROR : No location information found for station 46275 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/46275.txt ERROR : No location information found for station CSXA2 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/CSXA2.txt ... How were NDBC buoy location information obtained in MET/METplus? Please update the buoy location information to include all NDBC buoys. ### Expected Behavior ### All NDBC buoys have an assigned location. ### Environment ### Describe your runtime environment: *1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)* WCOSS2 and Hera *2. OS: (e.g. RedHat Linux, MacOS)* *3. 
Software version number(s)* MET 11.0.0/METplus 5.0.0 ### To Reproduce ### NDBC buoy data directory on WCOSS2: /lfs/h1/ops/dev/dcom/$YYYYMMDD/validation_data/marine/buoy METplus use case: https://metplus.readthedocs.io/en/latest/generated/model_applications/marine_and_cryosphere/PointStat_fcstGFS_obsNDBC_WaveHeight.html#sphx-glr-generated-model-applications-marine-and-cryosphere-pointstat-fcstgfs-obsndbc-waveheight-py ### Relevant Deadlines ### MET 11.0.1/METplus 5.0.1 ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [X] Select **engineer(s)** or **no engineer** required @JohnHalleyGotway - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Organization** level **Project** for support of the current coordinated release - [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [X] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [X] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [x] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [x] Fix the bug and test your changes. - [x] Add/update log messages for easier debugging. - [x] Add/update unit tests. - [x] Add/update documentation. 
- [x] Add any new Python packages to the [METplus Components Python Requirements](https://metplus.readthedocs.io/en/develop/Users_Guide/overview.html#metplus-components-python-requirements) table. - [x] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issues Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Development** issues Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
process
bugfix missing ndbc buoy location information when running describe the problem i tested with met metplus to convert ndbc buoy data txt into a metplus compatible point obs netcdf file and got the following error messages in the log file error no location information found for station do not process file lfs ops dev dcom validation data marine buoy txt error no location information found for station do not process file lfs ops dev dcom validation data marine buoy txt error no location information found for station do not process file lfs ops dev dcom validation data marine buoy txt how were ndbc buoy location information obtained in met metplus please update the buoy location information to include all ndbc buoys expected behavior all ndbc buoys have an assigned location environment describe your runtime environment machine e g hpc name linux workstation mac laptop and hera os e g redhat linux macos software version number s met metplus to reproduce ndbc buoy data directory on lfs ops dev dcom yyyymmdd validation data marine buoy metplus use case relevant deadlines met metplus funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required johnhalleygotway select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix version define related issue s consider the impact to the other metplus components bugfix checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit 
tests add update documentation add any new python packages to the table push local changes to github submit a pull request to merge into main pull request bugfix main define the pull request metadata as permissions allow select reviewer s and development issues select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and development issues select repository level development cycle project for the next official release select milestone as the next official version close this issue
1
22,539
31,710,988,143
IssuesEvent
2023-09-09 09:08:23
h4sh5/npm-auto-scanner
https://api.github.com/repos/h4sh5/npm-auto-scanner
opened
@alephium/cli 0.21.2 has 1 guarddog issues
npm-silent-process-execution
```{"npm-silent-process-execution":[{"code":" const p = spawn('java', ['-jar', jarFile], {\n detached: true,\n stdio: 'ignore',\n env: { ...process.env, ALEPHIUM_HOME: devDir, ALEPHIUM_FILE_LOG_LEVEL: 'DEBUG', ALEPHIUM_WALLET_HOME: devDir }\n })","location":"package/scripts/start-devnet.js:69","message":"This package is silently executing another executable"}]}```
1.0
@alephium/cli 0.21.2 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" const p = spawn('java', ['-jar', jarFile], {\n detached: true,\n stdio: 'ignore',\n env: { ...process.env, ALEPHIUM_HOME: devDir, ALEPHIUM_FILE_LOG_LEVEL: 'DEBUG', ALEPHIUM_WALLET_HOME: devDir }\n })","location":"package/scripts/start-devnet.js:69","message":"This package is silently executing another executable"}]}```
process
alephium cli has guarddog issues npm silent process execution n detached true n stdio ignore n env process env alephium home devdir alephium file log level debug alephium wallet home devdir n location package scripts start devnet js message this package is silently executing another executable
1
149,048
11,880,703,174
IssuesEvent
2020-03-27 11:11:43
microsoft/azure-pipelines-tasks
https://api.github.com/repos/microsoft/azure-pipelines-tasks
closed
Publish Code Coverage Results. Error: Incorrect Format
Area: Test bug
## Required Information **Question, Bug, or Feature?** *Type*: Bug **Enter Task Name**: Publish Code Coverage Results ## Environment - Server - Azure Pipelines or TFS on-premises? Azure pipelines. Please email me for exact details (see below). - Agent - Hosted or Private: Hosted ## Issue Description Task is not able to publish code coverage. "Error during reading report 'E:\A\_work\27\s\server\combinedcoverageoutput\cobertura-coverage.xml' (Size: 1.4MB): Input string was not in a correct format." I'm unsure about why it's showing this error--the format looks correct to me. I need help determining exactly what part of the file has incorrect format ### Task logs Unable to post logs in public repo due to sensitive information. Please email me (see below). ### Error logs Unable to post logs in public repo due to sensitive information. Please email me (see below). Please email me at ragulat@microsoft for more details.
1.0
Publish Code Coverage Results. Error: Incorrect Format - ## Required Information **Question, Bug, or Feature?** *Type*: Bug **Enter Task Name**: Publish Code Coverage Results ## Environment - Server - Azure Pipelines or TFS on-premises? Azure pipelines. Please email me for exact details (see below). - Agent - Hosted or Private: Hosted ## Issue Description Task is not able to publish code coverage. "Error during reading report 'E:\A\_work\27\s\server\combinedcoverageoutput\cobertura-coverage.xml' (Size: 1.4MB): Input string was not in a correct format." I'm unsure about why it's showing this error--the format looks correct to me. I need help determining exactly what part of the file has incorrect format ### Task logs Unable to post logs in public repo due to sensitive information. Please email me (see below). ### Error logs Unable to post logs in public repo due to sensitive information. Please email me (see below). Please email me at ragulat@microsoft for more details.
non_process
publish code coverage results error incorrect format required information question bug or feature type bug enter task name publish code coverage results environment server azure pipelines or tfs on premises azure pipelines please email me for exact details see below agent hosted or private hosted issue description task is not able to publish code coverage error during reading report e a work s server combinedcoverageoutput cobertura coverage xml size input string was not in a correct format i m unsure about why it s showing this error the format looks correct to me i need help determining exactly what part of the file has incorrect format task logs unable to post logs in public repo due to sensitive information please email me see below error logs unable to post logs in public repo due to sensitive information please email me see below please email me at ragulat microsoft for more details
0
240,956
26,256,562,937
IssuesEvent
2023-01-06 01:37:14
tlkh/serverless-transformers
https://api.github.com/repos/tlkh/serverless-transformers
opened
CVE-2021-33503 (High) detected in urllib3-1.26.2-py2.py3-none-any.whl
security vulnerability
## CVE-2021-33503 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.26.2-py2.py3-none-any.whl</b></p></summary> <p>HTTP library with thread-safe connection pooling, file post, and more.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/f5/71/45d36a8df68f3ebb098d6861b2c017f3d094538c0fb98fa61d4dc43e69b9/urllib3-1.26.2-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/f5/71/45d36a8df68f3ebb098d6861b2c017f3d094538c0fb98fa61d4dc43e69b9/urllib3-1.26.2-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt,/backend/requirements.txt</p> <p> Dependency Hierarchy: - streamlit-0.55.2-py2.py3-none-any.whl (Root Library) - boto3-1.16.20-py2.py3-none-any.whl - botocore-1.19.20-py2.py3-none-any.whl - :x: **urllib3-1.26.2-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tlkh/serverless-transformers/commit/e726ca4e59ed46043e300fe16d6cf883c6ebd22e">e726ca4e59ed46043e300fe16d6cf883c6ebd22e</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. 
<p>Publish Date: 2021-06-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33503>CVE-2021-33503</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p> <p>Release Date: 2021-06-29</p> <p>Fix Resolution: urllib3 - 1.26.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-33503 (High) detected in urllib3-1.26.2-py2.py3-none-any.whl - ## CVE-2021-33503 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.26.2-py2.py3-none-any.whl</b></p></summary> <p>HTTP library with thread-safe connection pooling, file post, and more.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/f5/71/45d36a8df68f3ebb098d6861b2c017f3d094538c0fb98fa61d4dc43e69b9/urllib3-1.26.2-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/f5/71/45d36a8df68f3ebb098d6861b2c017f3d094538c0fb98fa61d4dc43e69b9/urllib3-1.26.2-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt,/backend/requirements.txt</p> <p> Dependency Hierarchy: - streamlit-0.55.2-py2.py3-none-any.whl (Root Library) - boto3-1.16.20-py2.py3-none-any.whl - botocore-1.19.20-py2.py3-none-any.whl - :x: **urllib3-1.26.2-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tlkh/serverless-transformers/commit/e726ca4e59ed46043e300fe16d6cf883c6ebd22e">e726ca4e59ed46043e300fe16d6cf883c6ebd22e</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. 
<p>Publish Date: 2021-06-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33503>CVE-2021-33503</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p> <p>Release Date: 2021-06-29</p> <p>Fix Resolution: urllib3 - 1.26.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in none any whl cve high severity vulnerability vulnerable library none any whl http library with thread safe connection pooling file post and more library home page a href path to dependency file requirements txt path to vulnerable library requirements txt backend requirements txt dependency hierarchy streamlit none any whl root library none any whl botocore none any whl x none any whl vulnerable library found in head commit a href found in base branch main vulnerability details an issue was discovered in before when provided with a url containing many characters in the authority component the authority regular expression exhibits catastrophic backtracking causing a denial of service if a url were passed as a parameter or redirected to via an http redirect publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
4,632
7,477,934,088
IssuesEvent
2018-04-04 09:53:18
Open-EO/openeo-api
https://api.github.com/repos/Open-EO/openeo-api
closed
Make the "Image Collection" a process
enhancement process graphs processes suggestion
According to https://open-eo.github.io/openeo-api-poc/processgraphs/index.html we currently we have a separate object type definition called "image collection", which basically loads a product from /data. Why do we need an individual type for this? Can't this be simply a process, like everything else? This would mean we can replace the Image collection object: ``` { "product_id": <string> } ``` with a process like this: ``` { "process_id": "data" "args": { "product_id": "Sentinel2-L1C" } } ``` The process name could also be load_imagery, load_data, product, ... At first this looks more complicated, but overall it would *simplify* the overall process_graph definition, as we would use only one object type (Process). In the ende a value in the argument set would be only: `<Value> := <string|number|array|boolean|null|Process>` and not: `<Value> := <string|number|array|boolean|null|Process|ImageCollection>` This makes it more general and would allow easier extension, e.g. to load other data types than imagery (time series?)
2.0
Make the "Image Collection" a process - According to https://open-eo.github.io/openeo-api-poc/processgraphs/index.html we currently we have a separate object type definition called "image collection", which basically loads a product from /data. Why do we need an individual type for this? Can't this be simply a process, like everything else? This would mean we can replace the Image collection object: ``` { "product_id": <string> } ``` with a process like this: ``` { "process_id": "data" "args": { "product_id": "Sentinel2-L1C" } } ``` The process name could also be load_imagery, load_data, product, ... At first this looks more complicated, but overall it would *simplify* the overall process_graph definition, as we would use only one object type (Process). In the ende a value in the argument set would be only: `<Value> := <string|number|array|boolean|null|Process>` and not: `<Value> := <string|number|array|boolean|null|Process|ImageCollection>` This makes it more general and would allow easier extension, e.g. to load other data types than imagery (time series?)
process
make the image collection a process according to we currently we have a separate object type definition called image collection which basically loads a product from data why do we need an individual type for this can t this be simply a process like everything else this would mean we can replace the image collection object product id with a process like this process id data args product id the process name could also be load imagery load data product at first this looks more complicated but overall it would simplify the overall process graph definition as we would use only one object type process in the ende a value in the argument set would be only and not this makes it more general and would allow easier extension e g to load other data types than imagery time series
1
316,124
9,636,975,475
IssuesEvent
2019-05-16 07:40:02
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.xvideos.com - site is not usable
browser-firefox-mobile engine-gecko nsfw priority-critical
<!-- @browser: Firefox Mobile 67.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:67.0) Gecko/67.0 Firefox/67.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://www.xvideos.com/video10196259/young_slut_hardcore_teen_sex **Browser / Version**: Firefox Mobile 67.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: video won't play **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190506235559</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.xvideos.com - site is not usable - <!-- @browser: Firefox Mobile 67.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:67.0) Gecko/67.0 Firefox/67.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://www.xvideos.com/video10196259/young_slut_hardcore_teen_sex **Browser / Version**: Firefox Mobile 67.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: video won't play **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190506235559</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description video won t play steps to reproduce browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta from with ❤️
0
21,246
28,370,484,921
IssuesEvent
2023-04-12 16:35:40
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
reopened
TSC Structure and composition and the move to a community based TSC.
Process
Problem statement: we want to avoid a pay-to-play situation where companies purchase influence on the project using higher membership levels
1.0
TSC Structure and composition and the move to a community based TSC. - Problem statement: we want to avoid a pay-to-play situation where companies purchase influence on the project using higher membership levels
process
tsc structure and composition and the move to a community based tsc problem statement we want to avoid a pay to play situation where companies purchase influence on the project using higher membership levels
1
21,745
30,259,058,668
IssuesEvent
2023-07-07 06:40:11
Graylog2/graylog2-server
https://api.github.com/repos/Graylog2/graylog2-server
closed
Pipeline Rule Function substring() utilizes Keyword "end"
processing bug triaged
When creating a pipeline rule, using the substring() function, there is no way to explicitely state the parameter names (i.e. value:, start:, end:) when using an value for end. The "end:" is interpreted in the same way as the "end" finishing the rule (including syntax highlighting), which leads to the rule no longer be saveable (without any error indicator, which led to some confusion on my side). ## Expected Behavior Pipeline rules utilizing substring() while explicitely stating parameter names should be working. ## Current Behavior When using substring() with explicit stating of parameter names including "end:", the rule can no longer be saved, without any error indication. There also is no syntax error shown when using the function this way. Only the syntax highlighting gives a hint to the problem. ## Possible Solution Rename the "end:" parameter to something else which is not a reserved keyword. Alternatively: change the behavior of the pipeline rule editor to not fail due to usage of explicit parameter names. ## Steps to Reproduce (for bugs) 1. create a new pipeline rule 2. use the substring() function with all parameters explicitely stated `let result = substring(value:"some text here", start:3, end:5);` 3. try to save the rule ## Context This problem occured while creating a lengthy text manipulation rule with multiple usages of the substring() function, while still keeping up the readability (by explicitely stating parameter names). Although not using parameter names fixes the problem, this does not help keeping the rule readable and understandable. ## Your Environment * Graylog Version: 4.2.4+b643d2b
1.0
Pipeline Rule Function substring() utilizes Keyword "end" - When creating a pipeline rule, using the substring() function, there is no way to explicitely state the parameter names (i.e. value:, start:, end:) when using an value for end. The "end:" is interpreted in the same way as the "end" finishing the rule (including syntax highlighting), which leads to the rule no longer be saveable (without any error indicator, which led to some confusion on my side). ## Expected Behavior Pipeline rules utilizing substring() while explicitely stating parameter names should be working. ## Current Behavior When using substring() with explicit stating of parameter names including "end:", the rule can no longer be saved, without any error indication. There also is no syntax error shown when using the function this way. Only the syntax highlighting gives a hint to the problem. ## Possible Solution Rename the "end:" parameter to something else which is not a reserved keyword. Alternatively: change the behavior of the pipeline rule editor to not fail due to usage of explicit parameter names. ## Steps to Reproduce (for bugs) 1. create a new pipeline rule 2. use the substring() function with all parameters explicitely stated `let result = substring(value:"some text here", start:3, end:5);` 3. try to save the rule ## Context This problem occured while creating a lengthy text manipulation rule with multiple usages of the substring() function, while still keeping up the readability (by explicitely stating parameter names). Although not using parameter names fixes the problem, this does not help keeping the rule readable and understandable. ## Your Environment * Graylog Version: 4.2.4+b643d2b
process
pipeline rule function substring utilizes keyword end when creating a pipeline rule using the substring function there is no way to explicitely state the parameter names i e value start end when using an value for end the end is interpreted in the same way as the end finishing the rule including syntax highlighting which leads to the rule no longer be saveable without any error indicator which led to some confusion on my side expected behavior pipeline rules utilizing substring while explicitely stating parameter names should be working current behavior when using substring with explicit stating of parameter names including end the rule can no longer be saved without any error indication there also is no syntax error shown when using the function this way only the syntax highlighting gives a hint to the problem possible solution rename the end parameter to something else which is not a reserved keyword alternatively change the behavior of the pipeline rule editor to not fail due to usage of explicit parameter names steps to reproduce for bugs create a new pipeline rule use the substring function with all parameters explicitely stated let result substring value some text here start end try to save the rule context this problem occured while creating a lengthy text manipulation rule with multiple usages of the substring function while still keeping up the readability by explicitely stating parameter names although not using parameter names fixes the problem this does not help keeping the rule readable and understandable your environment graylog version
1
618,378
19,433,755,583
IssuesEvent
2021-12-21 14:51:26
BIDMCDigitalPsychiatry/LAMP-platform
https://api.github.com/repos/BIDMCDigitalPsychiatry/LAMP-platform
closed
Optional Question Default
bug frontend priority 1
Can it be fixed amongst the platform that studies are by default unchecked for the optional questions? Both studies w researcher id yhe0wtfn6n6sbvsan0js and 4aq1kry81ktrb5v1smvs include many, if not all surveys to entirely composed of optional questions when they are all required. Please see to this asap. Move this to production asap once its done.
1.0
Optional Question Default - Can it be fixed amongst the platform that studies are by default unchecked for the optional questions? Both studies w researcher id yhe0wtfn6n6sbvsan0js and 4aq1kry81ktrb5v1smvs include many, if not all surveys to entirely composed of optional questions when they are all required. Please see to this asap. Move this to production asap once its done.
non_process
optional question default can it be fixed amongst the platform that studies are by default unchecked for the optional questions both studies w researcher id and include many if not all surveys to entirely composed of optional questions when they are all required please see to this asap move this to production asap once its done
0
11,729
14,567,967,016
IssuesEvent
2020-12-17 10:58:13
e4exp/paper_manager_abstract
https://api.github.com/repos/e4exp/paper_manager_abstract
opened
F^2-Softmax: Diversifying Neural Text Generation via Frequency Factorized Softmax
2020 Natural Language Processing _read_later
* https://arxiv.org/abs/2009.09417 * 2020 近年のニューラルテキスト生成技術の進歩にもかかわらず、人間の言語の豊かな多様性を符号化することは未だに困難なままである。 我々は、最適でないテキスト生成は、主にトークン分布の不均衡に起因するものであり、最尤目的語を用いて学習した場合には、特に学習モデルの方向性がずれてしまうことを指摘する。 単純でありながら効果的な解決策として、我々は、周波数分布が歪んでいてもバランスのとれた学習を行うために、F^2-SoftmaxとMefMaxの2つの新しい手法を提案する。 MefMaxは、類似した周波数を持つトークンをグループ化し、クラス間の周波数質量を均等化しようとして、トークンを周波数クラスに一意に割り当てる。 次に、F^2-Softmaxは、ターゲットトークンの確率分布を、(i)周波数クラスと(ii)ターゲット周波数クラスのトークンの2つの条件付き確率の積に分解します。 モデルは、ボキャブラリーのサブセットに限定されているため、より均一な確率分布を学習します。関連する7つの指標について有意な性能向上が見られたことから、多様性だけでなく生成されるテキストの品質を向上させる上で、我々のアプローチが優れていることが示唆された。
1.0
F^2-Softmax: Diversifying Neural Text Generation via Frequency Factorized Softmax - * https://arxiv.org/abs/2009.09417 * 2020 近年のニューラルテキスト生成技術の進歩にもかかわらず、人間の言語の豊かな多様性を符号化することは未だに困難なままである。 我々は、最適でないテキスト生成は、主にトークン分布の不均衡に起因するものであり、最尤目的語を用いて学習した場合には、特に学習モデルの方向性がずれてしまうことを指摘する。 単純でありながら効果的な解決策として、我々は、周波数分布が歪んでいてもバランスのとれた学習を行うために、F^2-SoftmaxとMefMaxの2つの新しい手法を提案する。 MefMaxは、類似した周波数を持つトークンをグループ化し、クラス間の周波数質量を均等化しようとして、トークンを周波数クラスに一意に割り当てる。 次に、F^2-Softmaxは、ターゲットトークンの確率分布を、(i)周波数クラスと(ii)ターゲット周波数クラスのトークンの2つの条件付き確率の積に分解します。 モデルは、ボキャブラリーのサブセットに限定されているため、より均一な確率分布を学習します。関連する7つの指標について有意な性能向上が見られたことから、多様性だけでなく生成されるテキストの品質を向上させる上で、我々のアプローチが優れていることが示唆された。
process
f softmax diversifying neural text generation via frequency factorized softmax 近年のニューラルテキスト生成技術の進歩にもかかわらず、人間の言語の豊かな多様性を符号化することは未だに困難なままである。 我々は、最適でないテキスト生成は、主にトークン分布の不均衡に起因するものであり、最尤目的語を用いて学習した場合には、特に学習モデルの方向性がずれてしまうことを指摘する。 単純でありながら効果的な解決策として、我々は、周波数分布が歪んでいてもバランスのとれた学習を行うために、f 。 mefmaxは、類似した周波数を持つトークンをグループ化し、クラス間の周波数質量を均等化しようとして、トークンを周波数クラスに一意に割り当てる。 次に、f softmaxは、ターゲットトークンの確率分布を、 i 周波数クラスと ii 。 モデルは、ボキャブラリーのサブセットに限定されているため、より均一な確率分布を学習します。 、多様性だけでなく生成されるテキストの品質を向上させる上で、我々のアプローチが優れていることが示唆された。
1
306,128
23,144,503,994
IssuesEvent
2022-07-28 22:18:42
SPOpenSource/edgeconnect-python
https://api.github.com/repos/SPOpenSource/edgeconnect-python
closed
Preconfig code example missing parameter for serial_number to match against appliance serial
bug documentation
The preconfig generator example looks for a column in CSV file for `serial_number`, however, it doesn't use the data when saving the preconfig on Orchestrator to match against an appliance ```python # Set value for serial number if provided appliance_serial = row.get("serial_number") if appliance_serial is None: appliance_serial = "" else: pass ``` When orch.create_preconfig() is called, it doesn't assign this to the parameter serial_number: ```python orch.create_preconfig( preconfig_name=row["hostname"], yaml_preconfig=yaml_preconfig, auto_apply=auto_apply, tag=row["hostname"], ### MISSING -- serial_number = appliance_serial, comment=f"Created/Uploaded @ {comment_timestamp}", ) ```
1.0
Preconfig code example missing parameter for serial_number to match against appliance serial - The preconfig generator example looks for a column in CSV file for `serial_number`, however, it doesn't use the data when saving the preconfig on Orchestrator to match against an appliance ```python # Set value for serial number if provided appliance_serial = row.get("serial_number") if appliance_serial is None: appliance_serial = "" else: pass ``` When orch.create_preconfig() is called, it doesn't assign this to the parameter serial_number: ```python orch.create_preconfig( preconfig_name=row["hostname"], yaml_preconfig=yaml_preconfig, auto_apply=auto_apply, tag=row["hostname"], ### MISSING -- serial_number = appliance_serial, comment=f"Created/Uploaded @ {comment_timestamp}", ) ```
non_process
preconfig code example missing parameter for serial number to match against appliance serial the preconfig generator example looks for a column in csv file for serial number however it doesn t use the data when saving the preconfig on orchestrator to match against an appliance python set value for serial number if provided appliance serial row get serial number if appliance serial is none appliance serial else pass when orch create preconfig is called it doesn t assign this to the parameter serial number python orch create preconfig preconfig name row yaml preconfig yaml preconfig auto apply auto apply tag row missing serial number appliance serial comment f created uploaded comment timestamp
0
55,004
13,496,099,152
IssuesEvent
2020-09-12 02:08:40
spack/spack
https://api.github.com/repos/spack/spack
closed
Installation issue: flang
build-error
<!-- Thanks for taking the time to report this build failure. To proceed with the report please: 1. Title the issue "Installation issue: <name-of-the-package>". 2. Provide the information required below. We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! --> ### Steps to reproduce the issue <!-- Fill in the exact spec you are trying to build and the relevant part of the error message --> ```console $ spack install flang ... ==> Installing flang ==> No binary for flang found: installing from source ==> Using cached archive: /home/benwibking/spack/var/spack/cache/_source-cache/archive/b8/b8c621da53829f8c53bad73125556fb1839c9056d713433b05741f7e445199f2.tar.gz ==> flang: Executing phase: 'cmake' ==> flang: Executing phase: 'build' ==> Error: ProcessError: Command exited with status 2: 'make' 3 errors found in build log: 519 In file included from /usr/lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/ios_base.h :39: 520 In file included from /usr/lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/ext/atomicity.h :35: 521 In file included from /usr/lib/gcc/x86_64-linux-gnu/9/../../../../include/x86_64-linux-gnu/c++/ 9/bits/gthr.h:148: 522 In file included from /usr/lib/gcc/x86_64-linux-gnu/9/../../../../include/x86_64-linux-gnu/c++/ 9/bits/gthr-default.h:35: 523 In file included from /usr/include/pthread.h:22: 524 In file included from /usr/include/sched.h:43: >> 525 /usr/include/x86_64-linux-gnu/bits/sched.h:92:12: error: exception specification in declaration does not match previous declaration 526 extern int getcpu (unsigned int *, unsigned int *) __THROW; 527 ^ 528 /tmp/benwibking/spack-stage/spack-stage-flang-20190329-begpn7xyomjbpz64jolyivsvt5ruia2d/spack-s rc/include/legacy-util-api.h:91:15: note: previous declaration is here 529 unsigned long getcpu(void); 530 ^ 531 1 error generated. 
>> 532 make[2]: *** [tools/flang2/utils/ilitp/CMakeFiles/ilitp.dir/build.make:85: tools/flang2/utils/i litp/CMakeFiles/ilitp.dir/ilitp.cpp.o] Error 1 533 make[2]: Leaving directory '/tmp/benwibking/spack-stage/spack-stage-flang-20190329-begpn7xyomjb pz64jolyivsvt5ruia2d/spack-build' >> 534 make[1]: *** [CMakeFiles/Makefile2:1564: tools/flang2/utils/ilitp/CMakeFiles/ilitp.dir/all] Err or 2 535 make[1]: Leaving directory '/tmp/benwibking/spack-stage/spack-stage-flang-20190329-begpn7xyomjb pz64jolyivsvt5ruia2d/spack-build' 536 make: *** [Makefile:152: all] Error 2 See build log for details: /tmp/benwibking/spack-stage/spack-stage-flang-20190329-begpn7xyomjbpz64jolyivsvt5ruia2d/spack-build-out.txt ``` ### Information on your system <!-- Please include the output of `spack debug report` --> * **Spack:** 0.15.4-879-28c6ce971 * **Python:** 3.8.2 * **Platform:** linux-ubuntu20.04-zen2 <!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. --> ### Additional information <!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. --> * [spack-build-out.txt](https://github.com/spack/spack/files/5185001/spack-build-out.txt) * [spack-build-env.txt](https://github.com/spack/spack/files/5185006/spack-build-env.txt) <!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. --> @naromero77 ### General information <!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. 
--> - [x] I have run `spack debug report` and reported the version of Spack/Python/Platform - [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers - [x] I have uploaded the build log and environment files - [x] I have searched the issues of this repo and believe this is not a duplicate
1.0
Installation issue: flang - <!-- Thanks for taking the time to report this build failure. To proceed with the report please: 1. Title the issue "Installation issue: <name-of-the-package>". 2. Provide the information required below. We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! --> ### Steps to reproduce the issue <!-- Fill in the exact spec you are trying to build and the relevant part of the error message --> ```console $ spack install flang ... ==> Installing flang ==> No binary for flang found: installing from source ==> Using cached archive: /home/benwibking/spack/var/spack/cache/_source-cache/archive/b8/b8c621da53829f8c53bad73125556fb1839c9056d713433b05741f7e445199f2.tar.gz ==> flang: Executing phase: 'cmake' ==> flang: Executing phase: 'build' ==> Error: ProcessError: Command exited with status 2: 'make' 3 errors found in build log: 519 In file included from /usr/lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/ios_base.h :39: 520 In file included from /usr/lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/ext/atomicity.h :35: 521 In file included from /usr/lib/gcc/x86_64-linux-gnu/9/../../../../include/x86_64-linux-gnu/c++/ 9/bits/gthr.h:148: 522 In file included from /usr/lib/gcc/x86_64-linux-gnu/9/../../../../include/x86_64-linux-gnu/c++/ 9/bits/gthr-default.h:35: 523 In file included from /usr/include/pthread.h:22: 524 In file included from /usr/include/sched.h:43: >> 525 /usr/include/x86_64-linux-gnu/bits/sched.h:92:12: error: exception specification in declaration does not match previous declaration 526 extern int getcpu (unsigned int *, unsigned int *) __THROW; 527 ^ 528 /tmp/benwibking/spack-stage/spack-stage-flang-20190329-begpn7xyomjbpz64jolyivsvt5ruia2d/spack-s rc/include/legacy-util-api.h:91:15: note: previous declaration is here 529 unsigned long getcpu(void); 530 ^ 531 1 error generated. 
>> 532 make[2]: *** [tools/flang2/utils/ilitp/CMakeFiles/ilitp.dir/build.make:85: tools/flang2/utils/i litp/CMakeFiles/ilitp.dir/ilitp.cpp.o] Error 1 533 make[2]: Leaving directory '/tmp/benwibking/spack-stage/spack-stage-flang-20190329-begpn7xyomjb pz64jolyivsvt5ruia2d/spack-build' >> 534 make[1]: *** [CMakeFiles/Makefile2:1564: tools/flang2/utils/ilitp/CMakeFiles/ilitp.dir/all] Err or 2 535 make[1]: Leaving directory '/tmp/benwibking/spack-stage/spack-stage-flang-20190329-begpn7xyomjb pz64jolyivsvt5ruia2d/spack-build' 536 make: *** [Makefile:152: all] Error 2 See build log for details: /tmp/benwibking/spack-stage/spack-stage-flang-20190329-begpn7xyomjbpz64jolyivsvt5ruia2d/spack-build-out.txt ``` ### Information on your system <!-- Please include the output of `spack debug report` --> * **Spack:** 0.15.4-879-28c6ce971 * **Python:** 3.8.2 * **Platform:** linux-ubuntu20.04-zen2 <!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. --> ### Additional information <!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. --> * [spack-build-out.txt](https://github.com/spack/spack/files/5185001/spack-build-out.txt) * [spack-build-env.txt](https://github.com/spack/spack/files/5185006/spack-build-env.txt) <!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. --> @naromero77 ### General information <!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. 
--> - [x] I have run `spack debug report` and reported the version of Spack/Python/Platform - [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers - [x] I have uploaded the build log and environment files - [x] I have searched the issues of this repo and believe this is not a duplicate
non_process
installation issue flang thanks for taking the time to report this build failure to proceed with the report please title the issue installation issue provide the information required below we encourage you to try as much as possible to reduce your problem to the minimal example that still reproduces the issue that would help us a lot in fixing it quickly and effectively steps to reproduce the issue console spack install flang installing flang no binary for flang found installing from source using cached archive home benwibking spack var spack cache source cache archive tar gz flang executing phase cmake flang executing phase build error processerror command exited with status make errors found in build log in file included from usr lib gcc linux gnu include c bits ios base h in file included from usr lib gcc linux gnu include c ext atomicity h in file included from usr lib gcc linux gnu include linux gnu c bits gthr h in file included from usr lib gcc linux gnu include linux gnu c bits gthr default h in file included from usr include pthread h in file included from usr include sched h usr include linux gnu bits sched h error exception specification in declaration does not match previous declaration extern int getcpu unsigned int unsigned int throw tmp benwibking spack stage spack stage flang spack s rc include legacy util api h note previous declaration is here unsigned long getcpu void error generated make tools utils ilitp cmakefiles ilitp dir build make tools utils i litp cmakefiles ilitp dir ilitp cpp o error make leaving directory tmp benwibking spack stage spack stage flang spack build make err or make leaving directory tmp benwibking spack stage spack stage flang spack build make error see build log for details tmp benwibking spack stage spack stage flang spack build out txt information on your system spack python platform linux additional information and mention them here if they exist general information i have run spack debug report and reported the 
version of spack python platform i have run spack maintainers and mentioned any maintainers i have uploaded the build log and environment files i have searched the issues of this repo and believe this is not a duplicate
0
92,844
8,379,266,107
IssuesEvent
2018-10-06 23:14:46
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
pull-kubernetes-integration always fails
kind/bug sig/testing
<!-- This form is for bug reports and feature requests ONLY! If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/). If the matter is security related, please disclose it privately via https://kubernetes.io/security/. --> **Is this a BUG REPORT or FEATURE REQUEST?**: /kind bug /sig testing > Uncomment only one, leave it on its own line: > > /kind bug > /kind feature **What happened**: pull-kubernetes-integration always fails https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/68403/pull-kubernetes-integration/30058/ https://k8s-testgrid.appspot.com/presubmits-kubernetes-blocking#pull-kubernetes-integration **What you expected to happen**: pull-kubernetes-integration always happy. **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): - Cloud provider or hardware configuration: - OS (e.g. from /etc/os-release): - Kernel (e.g. `uname -a`): - Install tools: - Others:
1.0
pull-kubernetes-integration always fails - <!-- This form is for bug reports and feature requests ONLY! If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/). If the matter is security related, please disclose it privately via https://kubernetes.io/security/. --> **Is this a BUG REPORT or FEATURE REQUEST?**: /kind bug /sig testing > Uncomment only one, leave it on its own line: > > /kind bug > /kind feature **What happened**: pull-kubernetes-integration always fails https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/68403/pull-kubernetes-integration/30058/ https://k8s-testgrid.appspot.com/presubmits-kubernetes-blocking#pull-kubernetes-integration **What you expected to happen**: pull-kubernetes-integration always happy. **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): - Cloud provider or hardware configuration: - OS (e.g. from /etc/os-release): - Kernel (e.g. `uname -a`): - Install tools: - Others:
non_process
pull kubernetes integration always fails this form is for bug reports and feature requests only if you re looking for help check and the if the matter is security related please disclose it privately via is this a bug report or feature request kind bug sig testing uncomment only one leave it on its own line kind bug kind feature what happened pull kubernetes integration always fails what you expected to happen pull kubernetes integration always happy how to reproduce it as minimally and precisely as possible anything else we need to know environment kubernetes version use kubectl version cloud provider or hardware configuration os e g from etc os release kernel e g uname a install tools others
0
26,045
4,194,861,969
IssuesEvent
2016-06-25 10:24:58
coherence-community/oracle-bedrock
https://api.github.com/repos/coherence-community/oracle-bedrock
closed
Introduce EntrySetMatcher
Module: Testing Support Priority: Minor Type: New Feature
As a developer I often would like to assert against two Set<Entry.Map>s, especially when those Entry Sets are across Coherence Clusters.
1.0
Introduce EntrySetMatcher - As a developer I often would like to assert against two Set<Entry.Map>s, especially when those Entry Sets are across Coherence Clusters.
non_process
introduce entrysetmatcher as a developer i often would like to assert against two set s especially when those entry sets are across coherence clusters
0
22,005
30,509,665,167
IssuesEvent
2023-07-18 19:44:47
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] Make `joinable_columns` return JS array
.metabase-lib .Team/QueryProcessor :hammer_and_wrench:
Currently, it seems like `joinable_columns` returns a Clojure vector in JS, and the FE has to use `to_array` from `cljs/cljs.core` to make it work. We should update the JS wrapper to do that
1.0
[MLv2] Make `joinable_columns` return JS array - Currently, it seems like `joinable_columns` returns a Clojure vector in JS, and the FE has to use `to_array` from `cljs/cljs.core` to make it work. We should update the JS wrapper to do that
process
make joinable columns return js array currently it seems like joinable columns returns a clojure vector in js and the fe has to use to array from cljs cljs core to make it work we should update the js wrapper to do that
1
18,104
24,128,636,769
IssuesEvent
2022-09-21 04:39:37
pubestpubest/CPE65BasicComProjGroup3
https://api.github.com/repos/pubestpubest/CPE65BasicComProjGroup3
closed
ทำอาหาร!!
inProcess
- [x] เอาวัตถุดิบไปเตรียม _(สับ ทอด นวด บลาๆๆๆ)_ - [x] ทอด - [ ] สับ - [x] เอาวัตถุดิบมารวมกัน _(ex.ขนมปัง + เนื้อ = เบอร์เกอร์, เบอร์เกอร์+จาน)_ ![image](https://user-images.githubusercontent.com/110964000/189985476-1a424617-963d-4d45-81b5-2e32868f18d5.png) ![image](https://user-images.githubusercontent.com/110964000/189985573-0cf60fa8-fd73-4b71-92b5-50284c22e239.png)
1.0
ทำอาหาร!! - - [x] เอาวัตถุดิบไปเตรียม _(สับ ทอด นวด บลาๆๆๆ)_ - [x] ทอด - [ ] สับ - [x] เอาวัตถุดิบมารวมกัน _(ex.ขนมปัง + เนื้อ = เบอร์เกอร์, เบอร์เกอร์+จาน)_ ![image](https://user-images.githubusercontent.com/110964000/189985476-1a424617-963d-4d45-81b5-2e32868f18d5.png) ![image](https://user-images.githubusercontent.com/110964000/189985573-0cf60fa8-fd73-4b71-92b5-50284c22e239.png)
process
ทำอาหาร เอาวัตถุดิบไปเตรียม สับ ทอด นวด บลาๆๆๆ ทอด สับ เอาวัตถุดิบมารวมกัน ex ขนมปัง เนื้อ เบอร์เกอร์ เบอร์เกอร์ จาน
1
8,834
11,944,918,872
IssuesEvent
2020-04-03 04:04:05
googleapis/nodejs-bigtable
https://api.github.com/repos/googleapis/nodejs-bigtable
closed
enable "noImplicitAny" check in tsconfig.json
api: bigtable type: process
Clean up synth.py. we don't wanna exclude `tsconfig.json` in generated client library. reference: #https://github.com/googleapis/nodejs-bigtable/pull/631
1.0
enable "noImplicitAny" check in tsconfig.json - Clean up synth.py. we don't wanna exclude `tsconfig.json` in generated client library. reference: #https://github.com/googleapis/nodejs-bigtable/pull/631
process
enable noimplicitany check in tsconfig json clean up synth py we don t wanna exclude tsconfig json in generated client library reference
1
42,088
2,869,096,986
IssuesEvent
2015-06-05 23:18:35
dart-lang/test
https://api.github.com/repos/dart-lang/test
closed
pkg/unittest: guardAsync remove 'testNum' and finallyBody
bug Fixed Priority-Medium
<a href="https://github.com/kevmoo"><img src="https://avatars.githubusercontent.com/u/17034?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [kevmoo](https://github.com/kevmoo)** _Originally opened as dart-lang/sdk#8878_ ---- testNum is an implementation detail. The \_currentTest field is private. No way for a user of unittest to know what it is. If finallyBody throws, there is no mechanism to handle exceptions correctly. If a caller to guardAsync wants to have finally behavior, they can do so in the function that pass in Also it appears no one is user either of these arguments.
1.0
pkg/unittest: guardAsync remove 'testNum' and finallyBody - <a href="https://github.com/kevmoo"><img src="https://avatars.githubusercontent.com/u/17034?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [kevmoo](https://github.com/kevmoo)** _Originally opened as dart-lang/sdk#8878_ ---- testNum is an implementation detail. The \_currentTest field is private. No way for a user of unittest to know what it is. If finallyBody throws, there is no mechanism to handle exceptions correctly. If a caller to guardAsync wants to have finally behavior, they can do so in the function that pass in Also it appears no one is user either of these arguments.
non_process
pkg unittest guardasync remove testnum and finallybody issue by originally opened as dart lang sdk testnum is an implementation detail the currenttest field is private no way for a user of unittest to know what it is if finallybody throws there is no mechanism to handle exceptions correctly if a caller to guardasync wants to have finally behavior they can do so in the function that pass in also it appears no one is user either of these arguments
0
9,073
12,141,091,625
IssuesEvent
2020-04-23 21:44:52
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Array vs basic type
devops-cicd-process/tech devops/prod
Is there a way to determine whether a template parameter is enumerable or not? I like to have template parameters that can contain either a single value (e.g. a string) or multiple values (e.g. a list of strings). But then, if I try to enumerate my parameter with a `${{ each value in parameters.values }}` construct, it works only if an actual list is passed. I would like to know if there is some function or hidden property (e.g. `parameter.values.count`) I could use in this situation. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6 * Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18 * Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#qa) * Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Array vs basic type - Is there a way to determine whether a template parameter is enumerable or not? I like to have template parameters that can contain either a single value (e.g. a string) or multiple values (e.g. a list of strings). But then, if I try to enumerate my parameter with a `${{ each value in parameters.values }}` construct, it works only if an actual list is passed. I would like to know if there is some function or hidden property (e.g. `parameter.values.count`) I could use in this situation. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6 * Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18 * Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#qa) * Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
array vs basic type is there a way to determine whether a template parameter is enumerable or not i like to have template parameters that can contain either a single value e g a string or multiple values e g a list of strings but then if i try to enumerate my parameter with a each value in parameters values construct it works only if an actual list is passed i would like to know if there is some function or hidden property e g parameter values count i could use in this situation document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
8,883
11,982,914,462
IssuesEvent
2020-04-07 13:41:43
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
MP and other parentage issues effector-mediated terms
multi-species process
<img width="339" alt="Screenshot 2020-03-02 at 16 58 04" src="https://user-images.githubusercontent.com/7359272/75698730-23e93000-5ca7-11ea-9428-c39c9f3c30fb.png"> I noticed that GO:0140423 effector-mediated suppression of pattern-triggered immunity signaling was not a child of GO:0140418 effector-mediated modulation of host process by symbiont and PTI is a host process. I think the issue is that GO:0140418 effector-mediated modulation of host process by symbiont should be above GO:0140415 effector-mediated modulation of host defenses by symbiont I think effectors can modulate host processes that are not immune system (at least that was why we requested the term. However, the e.g., we were looking at today the symbiont appeared to alter host photosynthesis, but it was really only doing this to reduce ROS production to damlp PTI... @CuzickA do you have any other examples of modulation of none immune processes by pathogen effectors?
1.0
MP and other parentage issues effector-mediated terms - <img width="339" alt="Screenshot 2020-03-02 at 16 58 04" src="https://user-images.githubusercontent.com/7359272/75698730-23e93000-5ca7-11ea-9428-c39c9f3c30fb.png"> I noticed that GO:0140423 effector-mediated suppression of pattern-triggered immunity signaling was not a child of GO:0140418 effector-mediated modulation of host process by symbiont and PTI is a host process. I think the issue is that GO:0140418 effector-mediated modulation of host process by symbiont should be above GO:0140415 effector-mediated modulation of host defenses by symbiont I think effectors can modulate host processes that are not immune system (at least that was why we requested the term. However, the e.g., we were looking at today the symbiont appeared to alter host photosynthesis, but it was really only doing this to reduce ROS production to damlp PTI... @CuzickA do you have any other examples of modulation of none immune processes by pathogen effectors?
process
mp and other parentage issues effector mediated terms img width alt screenshot at src i noticed that go effector mediated suppression of pattern triggered immunity signaling was not a child of go effector mediated modulation of host process by symbiont and pti is a host process i think the issue is that go effector mediated modulation of host process by symbiont should be above go effector mediated modulation of host defenses by symbiont i think effectors can modulate host processes that are not immune system at least that was why we requested the term however the e g we were looking at today the symbiont appeared to alter host photosynthesis but it was really only doing this to reduce ros production to damlp pti cuzicka do you have any other examples of modulation of none immune processes by pathogen effectors
1
9,960
3,079,778,290
IssuesEvent
2015-08-21 18:13:11
discorick/space_invaders
https://api.github.com/repos/discorick/space_invaders
closed
Ember 1.13.7 LinkIssueTest!
3 - Done Test1 Test2
blahblahblah <!--- @huboard:{"order":13.375,"milestone_order":0.0103759765625,"custom_state":""} -->
2.0
Ember 1.13.7 LinkIssueTest! - blahblahblah <!--- @huboard:{"order":13.375,"milestone_order":0.0103759765625,"custom_state":""} -->
non_process
ember linkissuetest blahblahblah huboard order milestone order custom state
0
17,798
23,724,409,500
IssuesEvent
2022-08-30 18:10:12
darktable-org/darktable
https://api.github.com/repos/darktable-org/darktable
closed
IOP bug with corrupted datas
priority: high understood: clear reproduce: confirmed scope: image processing bug: pending
**Describe the bug/issue** An old IOP bug is back! @TurboGit: I know you will not like that one. **To Reproduce** 1. Be sure to have 2 instances of diffuse & sharpen module on destination image (EDIT : I also see with 2 instances of denoise (profiled) module ; this issue is unfortunately not always reproducible). 2. Copy development on another image (I test it with image that have only one instance of diffuse & sharpen) 3. Paste it to the destination image with 2 instances of diffuse & sharpen 4. See the error: ![image](https://user-images.githubusercontent.com/45535283/185458054-900a64f3-4132-4a0b-bffe-53c7da5516a3.png) **Expected behavior** No issue and image correctly displayed. **Platform** _Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _ * darktable version : latest master * Linux - Distro : Debian Sid
1.0
IOP bug with corrupted datas - **Describe the bug/issue** An old IOP bug is back! @TurboGit: I know you will not like that one. **To Reproduce** 1. Be sure to have 2 instances of diffuse & sharpen module on destination image (EDIT : I also see with 2 instances of denoise (profiled) module ; this issue is unfortunately not always reproducible). 2. Copy development on another image (I test it with image that have only one instance of diffuse & sharpen) 3. Paste it to the destination image with 2 instances of diffuse & sharpen 4. See the error: ![image](https://user-images.githubusercontent.com/45535283/185458054-900a64f3-4132-4a0b-bffe-53c7da5516a3.png) **Expected behavior** No issue and image correctly displayed. **Platform** _Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _ * darktable version : latest master * Linux - Distro : Debian Sid
process
iop bug with corrupted datas describe the bug issue an old iop bug is back turbogit i know you will not like that one to reproduce be sure to have instances of diffuse sharpen module on destination image edit i also see with instances of denoise profiled module this issue is unfortunately not always reproducible copy development on another image i test it with image that have only one instance of diffuse sharpen paste it to the destination image with instances of diffuse sharpen see the error expected behavior no issue and image correctly displayed platform please fill as much information as possible in the list given below please state unknown where you do not know the answer and remove any sections that are not applicable darktable version latest master linux distro debian sid
1
16,136
21,606,963,717
IssuesEvent
2022-05-04 05:15:52
gridcoin-community/Gridcoin-Research
https://api.github.com/repos/gridcoin-community/Gridcoin-Research
closed
libssl1.1 dropped from Ubuntu 22.04, needs build against libssl3
compatibility temporary
# Bug Report **Current behavior** Install fails or when previous install present after upgrade system errors with libssl1.1 present sudo apt install gridcoinresearch-qt Reading package lists... Done Building dependency tree... Done Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gridcoinresearch-qt : Depends: libssl1.1 (>= 1.1.0) but it is not installable **Expected behavior** Installation of gridcoinresearch-qt **Steps to reproduce:** 1. Add Ubuntu PPA on 22.04 or upgrade from 21.10 to 22.04 with gridcoinresearch-qt installed and users may experience issues with their system. 2. sudo apt install gridcoinresearch-qt -> install fails due to no libssl1.1 available in 22.04 **Gridcoin version** <!-- List the version number/commit ID, and if it is an official binary, self compiled or a distribution package such as PPA. If you are not on the latest Leisure release please try updating to that and see if your issue still persists. --> **Machine specs** - OS: Pop!_OS 22.04 (Ubuntu 22.04 derivative) **Related dependency** https://github.com/gridcoin-community/Gridcoin-Research/blob/development/depends/packages/openssl.mk
True
libssl1.1 dropped from Ubuntu 22.04, needs build against libssl3 - # Bug Report **Current behavior** Install fails or when previous install present after upgrade system errors with libssl1.1 present sudo apt install gridcoinresearch-qt Reading package lists... Done Building dependency tree... Done Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gridcoinresearch-qt : Depends: libssl1.1 (>= 1.1.0) but it is not installable **Expected behavior** Installation of gridcoinresearch-qt **Steps to reproduce:** 1. Add Ubuntu PPA on 22.04 or upgrade from 21.10 to 22.04 with gridcoinresearch-qt installed and users may experience issues with their system. 2. sudo apt install gridcoinresearch-qt -> install fails due to no libssl1.1 available in 22.04 **Gridcoin version** <!-- List the version number/commit ID, and if it is an official binary, self compiled or a distribution package such as PPA. If you are not on the latest Leisure release please try updating to that and see if your issue still persists. --> **Machine specs** - OS: Pop!_OS 22.04 (Ubuntu 22.04 derivative) **Related dependency** https://github.com/gridcoin-community/Gridcoin-Research/blob/development/depends/packages/openssl.mk
non_process
dropped from ubuntu needs build against bug report current behavior install fails or when previous install present after upgrade system errors with present sudo apt install gridcoinresearch qt reading package lists done building dependency tree done reading state information done some packages could not be installed this may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of incoming the following information may help to resolve the situation the following packages have unmet dependencies gridcoinresearch qt depends but it is not installable expected behavior installation of gridcoinresearch qt steps to reproduce add ubuntu ppa on or upgrade from to with gridcoinresearch qt installed and users may experience issues with their system sudo apt install gridcoinresearch qt install fails due to no available in gridcoin version list the version number commit id and if it is an official binary self compiled or a distribution package such as ppa if you are not on the latest leisure release please try updating to that and see if your issue still persists machine specs os pop os ubuntu derivative related dependency
0
15,793
19,985,615,885
IssuesEvent
2022-01-30 16:09:50
LinasVidziunas/Unsupervised-lesion-detection-with-multi-view-MRI-and-autoencoders
https://api.github.com/repos/LinasVidziunas/Unsupervised-lesion-detection-with-multi-view-MRI-and-autoencoders
opened
Resizing images
bug enhancement Data preprocessing
Our current ad hoc approach of skipping images that are not of the size (320, 320) is not the best as it re-appends the previous image that fit the size, which is probably not optimal for training and reduces the already limited number of slices we have. Suggestion rescale the images down to 300x300. Since all of the images are squares (at least I think), we won't encounter any problems with aspects (ex. 512x1024 (1:2) to 300x300 (1:1)). As an added benefit if this is easily implemented, we can use the same method and model for all the different views.
1.0
Resizing images - Our current ad hoc approach of skipping images that are not of the size (320, 320) is not the best as it re-appends the previous image that fit the size, which is probably not optimal for training and reduces the already limited number of slices we have. Suggestion rescale the images down to 300x300. Since all of the images are squares (at least I think), we won't encounter any problems with aspects (ex. 512x1024 (1:2) to 300x300 (1:1)). As an added benefit if this is easily implemented, we can use the same method and model for all the different views.
process
resizing images our current ad hoc approach of skipping images that are not of the size is not the best as it re appends the previous image that fit the size which is probably not optimal for training and reduces the already limited number of slices we have suggestion rescale the images down to since all of the images are squares at least i think we won t encounter any problems with aspects ex to as an added benefit if this is easily implemented we can use the same method and model for all the different views
1
50,709
13,187,690,227
IssuesEvent
2020-08-13 04:14:52
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
[genie-icetray] code review needs to be added to sphinx docs (Trac #1214)
Migrated from Trac combo reconstruction defect
The code review at `resources/genieCodeReview.txt` needs to be moved to `resources/docs/` and put into rst style. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1214">https://code.icecube.wisc.edu/ticket/1214</a>, reported by david.schultz and owned by kclark</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:13:35", "description": "The code review at `resources/genieCodeReview.txt` needs to be moved to `resources/docs/` and put into rst style.", "reporter": "david.schultz", "cc": "melanie.day", "resolution": "fixed", "_ts": "1550067215093672", "component": "combo reconstruction", "summary": "[genie-icetray] code review needs to be added to sphinx docs", "priority": "critical", "keywords": "", "time": "2015-08-19T19:41:36", "milestone": "", "owner": "kclark", "type": "defect" } ``` </p> </details>
1.0
[genie-icetray] code review needs to be added to sphinx docs (Trac #1214) - The code review at `resources/genieCodeReview.txt` needs to be moved to `resources/docs/` and put into rst style. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1214">https://code.icecube.wisc.edu/ticket/1214</a>, reported by david.schultz and owned by kclark</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:13:35", "description": "The code review at `resources/genieCodeReview.txt` needs to be moved to `resources/docs/` and put into rst style.", "reporter": "david.schultz", "cc": "melanie.day", "resolution": "fixed", "_ts": "1550067215093672", "component": "combo reconstruction", "summary": "[genie-icetray] code review needs to be added to sphinx docs", "priority": "critical", "keywords": "", "time": "2015-08-19T19:41:36", "milestone": "", "owner": "kclark", "type": "defect" } ``` </p> </details>
non_process
code review needs to be added to sphinx docs trac the code review at resources geniecodereview txt needs to be moved to resources docs and put into rst style migrated from json status closed changetime description the code review at resources geniecodereview txt needs to be moved to resources docs and put into rst style reporter david schultz cc melanie day resolution fixed ts component combo reconstruction summary code review needs to be added to sphinx docs priority critical keywords time milestone owner kclark type defect
0
1,806
4,541,739,608
IssuesEvent
2016-09-09 18:48:07
ParsePlatform/parse-server
https://api.github.com/repos/ParsePlatform/parse-server
closed
{"code":209,"error":"invalid session token"} on v2.2.15
in-process pr-submitted
I upgraded parse-server from v2.2.7 to v2.2.15. I started getting {"code":209,"error":"invalid session token"} for all the users who had logged in 2-3 days earlier. when i reverted back to v2.2.7. things started working again. is there an issue with v2.2.15
1.0
{"code":209,"error":"invalid session token"} on v2.2.15 - I upgraded parse-server from v2.2.7 to v2.2.15. I started getting {"code":209,"error":"invalid session token"} for all the users who had logged in 2-3 days earlier. when i reverted back to v2.2.7. things started working again. is there an issue with v2.2.15
process
code error invalid session token on i upgraded parse server from to i started getting code error invalid session token for all the users who had logged in days earlier when i reverted back to things started working again is there an issue with
1
17,271
23,051,126,916
IssuesEvent
2022-07-24 16:43:07
nodejs/node
https://api.github.com/repos/nodejs/node
closed
`subprocess.stdout` can be undefined instead of null
child_process doc
### Affected URL(s) https://nodejs.org/docs/latest-v16.x/api/child_process.html#subprocessstdout ### Description of the problem > The `subprocess.stdout` property can be null if the child process could not be successfully spawned. However, it can be `undefined` in some cases: ```console $ docker run --rm -it node:16 /bin/bash root@c12d92a299db:/# ulimit -n 17 root@c12d92a299db:/# node -e 'p=require("child_process").spawn("uname");p.stdout.on("data",console.log)' [eval]:1 p=require("child_process").spawn("uname");p.stdout.on("data",console.log) ^ TypeError: Cannot read properties of undefined (reading 'on') at [eval]:1:52 at Script.runInThisContext (node:vm:129:12) at Object.runInThisContext (node:vm:305:38) at node:internal/process/execution:76:19 at [eval]-wrapper:6:22 at evalScript (node:internal/process/execution:75:60) at node:internal/main/eval_string:27:3 root@c12d92a299db:/# node -e 'console.log(require("child_process").spawn("uname"))' <ref *1> ChildProcess { _events: [Object: null prototype] {}, _eventsCount: 0, _maxListeners: undefined, _closesNeeded: 1, _closesGot: 0, connected: false, signalCode: null, exitCode: null, killed: false, spawnfile: 'uname', _handle: Process { onexit: [Function (anonymous)], [Symbol(owner_symbol)]: [Circular *1] }, spawnargs: [ 'uname' ], [Symbol(kCapture)]: false } node:events:505 throw er; // Unhandled 'error' event ^ Error: spawn uname EMFILE at Process.ChildProcess._handle.onexit (node:internal/child_process:283:19) at onErrorNT (node:internal/child_process:478:16) at processTicksAndRejections (node:internal/process/task_queues:83:21) Emitted 'error' event on ChildProcess instance at: at Process.ChildProcess._handle.onexit (node:internal/child_process:289:12) at onErrorNT (node:internal/child_process:478:16) at processTicksAndRejections (node:internal/process/task_queues:83:21) { errno: -24, code: 'EMFILE', syscall: 'spawn uname', path: 'uname', spawnargs: [] } ``` I think this should be documented.
1.0
`subprocess.stdout` can be undefined instead of null - ### Affected URL(s) https://nodejs.org/docs/latest-v16.x/api/child_process.html#subprocessstdout ### Description of the problem > The `subprocess.stdout` property can be null if the child process could not be successfully spawned. However, it can be `undefined` in some cases: ```console $ docker run --rm -it node:16 /bin/bash root@c12d92a299db:/# ulimit -n 17 root@c12d92a299db:/# node -e 'p=require("child_process").spawn("uname");p.stdout.on("data",console.log)' [eval]:1 p=require("child_process").spawn("uname");p.stdout.on("data",console.log) ^ TypeError: Cannot read properties of undefined (reading 'on') at [eval]:1:52 at Script.runInThisContext (node:vm:129:12) at Object.runInThisContext (node:vm:305:38) at node:internal/process/execution:76:19 at [eval]-wrapper:6:22 at evalScript (node:internal/process/execution:75:60) at node:internal/main/eval_string:27:3 root@c12d92a299db:/# node -e 'console.log(require("child_process").spawn("uname"))' <ref *1> ChildProcess { _events: [Object: null prototype] {}, _eventsCount: 0, _maxListeners: undefined, _closesNeeded: 1, _closesGot: 0, connected: false, signalCode: null, exitCode: null, killed: false, spawnfile: 'uname', _handle: Process { onexit: [Function (anonymous)], [Symbol(owner_symbol)]: [Circular *1] }, spawnargs: [ 'uname' ], [Symbol(kCapture)]: false } node:events:505 throw er; // Unhandled 'error' event ^ Error: spawn uname EMFILE at Process.ChildProcess._handle.onexit (node:internal/child_process:283:19) at onErrorNT (node:internal/child_process:478:16) at processTicksAndRejections (node:internal/process/task_queues:83:21) Emitted 'error' event on ChildProcess instance at: at Process.ChildProcess._handle.onexit (node:internal/child_process:289:12) at onErrorNT (node:internal/child_process:478:16) at processTicksAndRejections (node:internal/process/task_queues:83:21) { errno: -24, code: 'EMFILE', syscall: 'spawn uname', path: 'uname', spawnargs: [] } ``` I think this should be documented.
process
subprocess stdout can be undefined instead of null affected url s description of the problem the subprocess stdout property can be null if the child process could not be successfully spawned however it can be undefined in some cases console docker run rm it node bin bash root ulimit n root node e p require child process spawn uname p stdout on data console log p require child process spawn uname p stdout on data console log typeerror cannot read properties of undefined reading on at at script runinthiscontext node vm at object runinthiscontext node vm at node internal process execution at wrapper at evalscript node internal process execution at node internal main eval string root node e console log require child process spawn uname childprocess events eventscount maxlisteners undefined closesneeded closesgot connected false signalcode null exitcode null killed false spawnfile uname handle process onexit spawnargs false node events throw er unhandled error event error spawn uname emfile at process childprocess handle onexit node internal child process at onerrornt node internal child process at processticksandrejections node internal process task queues emitted error event on childprocess instance at at process childprocess handle onexit node internal child process at onerrornt node internal child process at processticksandrejections node internal process task queues errno code emfile syscall spawn uname path uname spawnargs i think this should be documented
1
6,430
9,531,487,536
IssuesEvent
2019-04-29 16:06:56
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
opened
Department of State: Change content on "Next Steps" page
Apply Process State Dept.
Who: Student What: Clarify that references are not required Why: As a student I want to know what is required in my application A/C - Step #2 should read - Review your experience (this will be bold) - Under this will be - Tell us about your work, military, or other experience.
1.0
Department of State: Change content on "Next Steps" page - Who: Student What: Clarify that references are not required Why: As a student I want to know what is required in my application A/C - Step #2 should read - Review your experience (this will be bold) - Under this will be - Tell us about your work, military, or other experience.
process
department of state change content on next steps page who student what clarify that references are not required why as a student i want to know what is required in my application a c step should read review your experience this will be bold under this will be tell us about your work military or other experience
1
10,104
13,044,162,130
IssuesEvent
2020-07-29 03:47:29
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `TimeLiteral` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `TimeLiteral` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `TimeLiteral` from TiDB - ## Description Port the scalar function `TimeLiteral` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function timeliteral from tidb description port the scalar function timeliteral from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
1
21,669
30,112,196,329
IssuesEvent
2023-06-30 08:39:55
0xPolygonMiden/miden-vm
https://api.github.com/repos/0xPolygonMiden/miden-vm
closed
Memory subsystem improvements
processor air
I want to summarize the improvements to memory subsystem (memory operations + memory chiplet) which I'm hoping to get into v0.3 release (though, maybe not all of them will make it in). These are listed in no particular order. 1. Memory access trace refactoring. Currently, for each memory access we record both the old state of the memory and the new state. While this works fine (and doesn't add any extra columns), it probably makes the constraint system more complicated than it needs to be. An alternative way is to track only a single set of values (e.g., values after the operation) and have a separate column which would track read/write flag - e.g., `0` means it was a read operation, `1` means it was a write operation. 2. Move range checks for `delta` to the stack as described in #299. This should let us get rid of one auxiliary column, but would also require changing opcodes for memory operations. As a part of this work, we'll also need to update how memory lookup rows are computed, and it might be a good opportunity to address #335 as well. 3. Currently, when generating memory trace we do two things sub-optimally (as described in #307): (1) we compute `delta`'s twice - first in `append_range_checks()` and then in `fill_trace()` methods. (2) we compute inverses of `delta` one by one. For programs which perform a lot of memory accesses this could be very costly as a single inversion is equivalent to something like 60 multiplications. A better way to do it would be to compute `delta`'s only once (e.g., in `append_range_checks()`), save them into a vector, and then pass this vector to `fill_trace()`. There, we'd be able to use batch inversion to speed things up considerably. 4. It would be really cool to add something like a `memcopy` operation. This operation would copy memory from one region to another in a single VM cycle. In the memory chiplet, the trace would probably require 3n rows to copy n words, but compared to the alternatives, this would be much more efficient.
1.0
Memory subsystem improvements - I want to summarize the improvements to memory subsystem (memory operations + memory chiplet) which I'm hoping to get into v0.3 release (though, maybe not all of them will make it in). These are listed in no particular order. 1. Memory access trace refactoring. Currently, for each memory access we record both the old state of the memory and the new state. While this works fine (and doesn't add any extra columns), it probably makes the constraint system more complicated than it needs to be. An alternative way is to track only a single set of values (e.g., values after the operation) and have a separate column which would track read/write flag - e.g., `0` means it was a read operation, `1` means it was a write operation. 2. Move range checks for `delta` to the stack as described in #299. This should let us get rid of one auxiliary column, but would also require changing opcodes for memory operations. As a part of this work, we'll also need to update how memory lookup rows are computed, and it might be a good opportunity to address #335 as well. 3. Currently, when generating memory trace we do two things sub-optimally (as described in #307): (1) we compute `delta`'s twice - first in `append_range_checks()` and then in `fill_trace()` methods. (2) we compute inverses of `delta` one by one. For programs which perform a lot of memory accesses this could be very costly as a single inversion is equivalent to something like 60 multiplications. A better way to do it would be to compute `delta`'s only once (e.g., in `append_range_checks()`), save them into a vector, and then pass this vector to `fill_trace()`. There, we'd be able to use batch inversion to speed things up considerably. 4. It would be really cool to add something like a `memcopy` operation. This operation would copy memory from one region to another in a single VM cycle. In the memory chiplet, the trace would probably require 3n rows to copy n words, but compared to the alternatives, this would be much more efficient.
process
memory subsystem improvements i want to summarize the improvements to memory subsystem memory operations memory chiplet which i m hoping to get into release though maybe not all of them will make it in these are listed in no particular order memory access trace refactoring currently for each memory access we record both the old state of the memory and the new state while this works fine and doesn t add any extra columns it probably makes the constraint system more complicated than it needs to be an alternative way is to track only a single set of values e g values after the operation and have a separate column which would track read write flag e g means it was a read operation means it was a write operation move range checks for delta to the stack as described in this should let us get rid of one auxiliary column but would also require changing opcodes for memory operations as a part of this work we ll also need to update how memory lookup rows are computed and it might be a good opportunity to address as well currently when generating memory trace we do two things sub optimally as described in we compute delta s twice first in append range checks and then in fill trace methods we compute inverses of delta one by one for programs which perform a lot of memory accesses this could be very costly as a single inversion is equivalent to something like multiplications a better way to do it would be to compute delta s only once e g in append range checks save them into a vector and then pass this vector to fill trace there we d be able to use batch inversion to speed things up considerably it would be really cool to add something like a memcopy operation this operation would copy memory from one region to another in a single vm cycle in the memory chiplet the trace would probably require rows to copy n words but compared to the alternatives this would be much more efficient
1
821,295
30,815,643,529
IssuesEvent
2023-08-01 13:19:32
AdguardTeam/AdguardForWindows
https://api.github.com/repos/AdguardTeam/AdguardForWindows
closed
Tracking parameters
bug Priority: P4
### AdGuard version AdGuard 7.14 ### Browser version Brave 1.47.186 ### OS version Windows 8.1 Enterpise х64 ### What filters do you have enabled? _No response_ ### What Stealth Mode options do you have enabled? _No response_ ### Support ticket ID _No response_ ### Issue Details You yourselves write that this item has been replaced by a filter. Maybe it should be removed as unnecessary? ### Expected Behavior _No response_ ### Screenshots ![2023-07-29_13-50-00](https://github.com/AdguardTeam/AdguardForWindows/assets/110057379/623c89da-2855-455e-86b3-68309f9b7f53) ### Additional Information _No response_
1.0
Tracking parameters - ### AdGuard version AdGuard 7.14 ### Browser version Brave 1.47.186 ### OS version Windows 8.1 Enterpise х64 ### What filters do you have enabled? _No response_ ### What Stealth Mode options do you have enabled? _No response_ ### Support ticket ID _No response_ ### Issue Details You yourselves write that this item has been replaced by a filter. Maybe it should be removed as unnecessary? ### Expected Behavior _No response_ ### Screenshots ![2023-07-29_13-50-00](https://github.com/AdguardTeam/AdguardForWindows/assets/110057379/623c89da-2855-455e-86b3-68309f9b7f53) ### Additional Information _No response_
non_process
tracking parameters adguard version adguard browser version brave os version windows enterpise what filters do you have enabled no response what stealth mode options do you have enabled no response support ticket id no response issue details you yourselves write that this item has been replaced by a filter maybe it should be removed as unnecessary expected behavior no response screenshots additional information no response
0
12,002
14,738,139,128
IssuesEvent
2021-01-07 03:52:21
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
PayneWest Insurance, Account #: 071-2331 - Billings, MT
anc-ops anc-process anp-0.5 ant-bug ant-support
In GitLab by @kdjstudios on May 9, 2018, 09:33 **Submitted by:** "Martin Villegas" <martin.villegas@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-09-66829/conversation **Server:** Internal **Client/Site:** Billings **Account:** 2331 **Issue:** Client is trying to set up her online portal, but doesn’t receive the portal invite when sent. I tried sending them an invoice through the system and they do receive that, but somehow they don’t get the portal invite. Should I do something different than editing customer adding the email and clicking on the “send online portal invitation”?
1.0
PayneWest Insurance, Account #: 071-2331 - Billings, MT - In GitLab by @kdjstudios on May 9, 2018, 09:33 **Submitted by:** "Martin Villegas" <martin.villegas@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-09-66829/conversation **Server:** Internal **Client/Site:** Billings **Account:** 2331 **Issue:** Client is trying to set up her online portal, but doesn’t receive the portal invite when sent. I tried sending them an invoice through the system and they do receive that, but somehow they don’t get the portal invite. Should I do something different than editing customer adding the email and clicking on the “send online portal invitation”?
process
paynewest insurance account billings mt in gitlab by kdjstudios on may submitted by martin villegas helpdesk server internal client site billings account issue client is trying to set up her online portal but doesn’t receive the portal invite when sent i tried sending them an invoice through the system and they do receive that but somehow they don’t get the portal invite should i do something different than editing customer adding the email and clicking on the “send online portal invitation”
1
15,299
19,322,292,551
IssuesEvent
2021-12-14 07:33:54
googleapis/dotnet-spanner-nhibernate
https://api.github.com/repos/googleapis/dotnet-spanner-nhibernate
closed
Test creating/updating/deleting data model from entity model
type: process priority: p3
NHibernate allows users to automatically generate/update/delete the data model from the entity model. This feature should only generate valid Spanner DDL statements. This feature is currently not tested by the driver implementation.
1.0
Test creating/updating/deleting data model from entity model - NHibernate allows users to automatically generate/update/delete the data model from the entity model. This feature should only generate valid Spanner DDL statements. This feature is currently not tested by the driver implementation.
process
test creating updating deleting data model from entity model nhibernate allows users to automatically generate update delete the data model from the entity model this feature should only generate valid spanner ddl statements this feature is currently not tested by the driver implementation
1
16,670
21,774,829,000
IssuesEvent
2022-05-13 12:51:22
camunda/feel-scala
https://api.github.com/repos/camunda/feel-scala
opened
range function before/after are failing with date and time objects
type: bug team/process-automation
**Describe the bug** The FEEL expression `after(date and time(now()), date and time(today(),time("14:00:00")))` fails with `14:44:02.988 [main] WARN org.camunda.feel.FeelEngine - Suppressed failure: illegal arguments: List(ValDateTime(2022-05-13T14:44:02.987611+02:00[Europe/Berlin])) 14:44:02.990 [main] WARN org.camunda.feel.FeelEngine - Suppressed failure: illegal arguments: List(ValNull, ValLocalDateTime(2022-05-13T14:00))` **Expected behavior** range functions work as documented **Environment** * FEEL engine version: Ammonite Repl 2.5.3 (Scala 2.13.8 Java 11.0.13) | also: zeebe_8.0.1
1.0
range function before/after are failing with date and time objects - **Describe the bug** The FEEL expression `after(date and time(now()), date and time(today(),time("14:00:00")))` fails with `14:44:02.988 [main] WARN org.camunda.feel.FeelEngine - Suppressed failure: illegal arguments: List(ValDateTime(2022-05-13T14:44:02.987611+02:00[Europe/Berlin])) 14:44:02.990 [main] WARN org.camunda.feel.FeelEngine - Suppressed failure: illegal arguments: List(ValNull, ValLocalDateTime(2022-05-13T14:00))` **Expected behavior** range functions work as documented **Environment** * FEEL engine version: Ammonite Repl 2.5.3 (Scala 2.13.8 Java 11.0.13) | also: zeebe_8.0.1
process
range function before after are failing with date and time objects describe the bug the feel expression after date and time now date and time today time fails with warn org camunda feel feelengine suppressed failure illegal arguments list valdatetime warn org camunda feel feelengine suppressed failure illegal arguments list valnull vallocaldatetime expected behavior range functions work as documented environment feel engine version ammonite repl scala java also zeebe
1
13,821
16,584,119,148
IssuesEvent
2021-05-31 15:51:29
laugharn/link
https://api.github.com/repos/laugharn/link
closed
Processing Indicator
kind/improvement process/selected size/sm team/front
We need a way to visually indicate that something is happening when we submit our forms. - [x] Add a processing value to the app container - [x] Add a Processing component that overlays the screen with some sort of animation - [x] Lock the scroll position of the screen when the processing value is true - [x] Implement the processing setter on our forms
1.0
Processing Indicator - We need a way to visually indicate that something is happening when we submit our forms. - [x] Add a processing value to the app container - [x] Add a Processing component that overlays the screen with some sort of animation - [x] Lock the scroll position of the screen when the processing value is true - [x] Implement the processing setter on our forms
process
processing indicator we need a way to visually indicate that something is happening when we submit our forms add a processing value to the app container add a processing component that overlays the screen with some sort of animation lock the scroll position of the screen when the processing value is true implement the processing setter on our forms
1
233,691
7,703,408,733
IssuesEvent
2018-05-21 08:19:04
cybercongress/cyber-markets
https://api.github.com/repos/cybercongress/cyber-markets
closed
Recalculate tickers using old trades
Epic Priority: High Type: Enhancement
Update tickers using non real-time trades. Will be used to process historical data
1.0
Recalculate tickers using old trades - Update tickers using non real-time trades. Will be used to process historical data
non_process
recalculate tickers using old trades update tickers using non real time trades will be used to process historical data
0
659,597
21,934,408,070
IssuesEvent
2022-05-23 12:39:49
cloudmesh/cloudmesh-pi-burn
https://api.github.com/repos/cloudmesh/cloudmesh-pi-burn
opened
nfs in cloudmesh-pi-cluster
bug PRIORITY 1
- [ ] (high priority) allow --username== in commandline so you can set username [default: pi] - [ ] (medium priority) find out how to pass to pytest --username= and --hostname= - [ ] document how to - [ ] pytest on specified hostnames - [ ] we can assume if --username not specified username is pi - [ ] we can assume that if --hostname is not specified its red
1.0
nfs in cloudmesh-pi-cluster - - [ ] (high priority) allow --username== in commandline so you can set username [default: pi] - [ ] (medium priority) find out how to pass to pytest --username= and --hostname= - [ ] document how to - [ ] pytest on specified hostnames - [ ] we can assume if --username not specified username is pi - [ ] we can assume that if --hostname is not specified its red
non_process
nfs in cloudmesh pi cluster high priority allow username in commandline so you can set username medium priority find out how to pass to pytest username and hostname document how to pytest on specified hostnames we can assume if username not specified username is pi we can assume that if hostname is not specified its red
0
18,539
24,554,606,537
IssuesEvent
2022-10-12 14:58:00
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Android] [Angular Upgrade] Getting 'Invalid data sharing status' error message in the consent flow
Bug Blocker P0 Android Process: Fixed Process: Tested QA Process: Tested dev
**Steps:** 1. Sign in and complete the passcode process 2. Enroll to the study 3. Now, go back to SB 4. Edit the study for which participant has enrolled 5. Update 'Enforce e-consent flow again for enrolled participants' and Publish the study 6. Open the mobile app 7. Try to complete the consent flow and Verify **AR:** Getting 'Invalid data sharing status' error message in the consent flow **ER:** Participant should be able to complete the consent flow without any error's ![Android](https://user-images.githubusercontent.com/86007179/178019280-d11fda88-3cb0-4055-8812-d49ca631f36e.png)
3.0
[Android] [Angular Upgrade] Getting 'Invalid data sharing status' error message in the consent flow - **Steps:** 1. Sign in and complete the passcode process 2. Enroll to the study 3. Now, go back to SB 4. Edit the study for which participant has enrolled 5. Update 'Enforce e-consent flow again for enrolled participants' and Publish the study 6. Open the mobile app 7. Try to complete the consent flow and Verify **AR:** Getting 'Invalid data sharing status' error message in the consent flow **ER:** Participant should be able to complete the consent flow without any error's ![Android](https://user-images.githubusercontent.com/86007179/178019280-d11fda88-3cb0-4055-8812-d49ca631f36e.png)
process
getting invalid data sharing status error message in the consent flow steps sign in and complete the passcode process enroll to the study now go back to sb edit the study for which participant has enrolled update enforce e consent flow again for enrolled participants and publish the study open the mobile app try to complete the consent flow and verify ar getting invalid data sharing status error message in the consent flow er participant should be able to complete the consent flow without any error s
1
137,900
12,799,088,128
IssuesEvent
2020-07-02 14:52:43
2KPS/testset
https://api.github.com/repos/2KPS/testset
opened
2020.01.01 Meeting Name 111
documentation
## 2020.01.01 > Enter meeting purpose ## Participants - Participant 1 - Participant 2 - Participant 3 - Participant 4 ## Meeting contents #### Subheading - Item 1 - Item 2 - Item 3 #### Subheading ```` code block ```` #### Subheading [link](http://www.naver.com)
1.0
2020.01.01 Meeting Name 111 - ## 2020.01.01 > Enter meeting purpose ## Participants - Participant 1 - Participant 2 - Participant 3 - Participant 4 ## Meeting contents #### Subheading - Item 1 - Item 2 - Item 3 #### Subheading ```` code block ```` #### Subheading [link](http://www.naver.com)
non_process
enter meeting purpose participants participant participant participant participant meeting contents subheading item item item subheading code block subheading
0
19,092
25,147,337,420
IssuesEvent
2022-11-10 07:02:21
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
support string replace for resources/attributes names
Stale priority:p3 processor/transform
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] resources and/or attributes key currently follows semconv, or k8s labels/annotations enriched by `k8sattrs` processor, would contains `.`, `/`, `-`, in cases the backend storage doesn't not allow such characters it will need a processor that converts them to a different character, e.g. `_`. Currently, resource / attributes processor allows doing this only by explicitly define the key name, in cases regexp is used in `k8sattrs` processor that, say captures all labels or annotations, there is no way to define all possible keys. **Describe the solution you'd like** A clear and concise description of what you want to happen. Support replace string action for resources/attributes name in resource/attributes processor, which seems to be a general enough solution that would support metrics/traces/logs. **Describe alternatives you've considered** Support such action in a dedicated processor that updates resources/attributes keys, or in current `transform` and `metricstransformprocessor`. **Additional context**
1.0
support string replace for resources/attributes names - **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] resources and/or attributes key currently follows semconv, or k8s labels/annotations enriched by `k8sattrs` processor, would contains `.`, `/`, `-`, in cases the backend storage doesn't not allow such characters it will need a processor that converts them to a different character, e.g. `_`. Currently, resource / attributes processor allows doing this only by explicitly define the key name, in cases regexp is used in `k8sattrs` processor that, say captures all labels or annotations, there is no way to define all possible keys. **Describe the solution you'd like** A clear and concise description of what you want to happen. Support replace string action for resources/attributes name in resource/attributes processor, which seems to be a general enough solution that would support metrics/traces/logs. **Describe alternatives you've considered** Support such action in a dedicated processor that updates resources/attributes keys, or in current `transform` and `metricstransformprocessor`. **Additional context**
process
support string replace for resources attributes names is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when resources and or attributes key currently follows semconv or labels annotations enriched by processor would contains in cases the backend storage doesn t not allow such characters it will need a processor that converts them to a different character e g currently resource attributes processor allows doing this only by explicitly define the key name in cases regexp is used in processor that say captures all labels or annotations there is no way to define all possible keys describe the solution you d like a clear and concise description of what you want to happen support replace string action for resources attributes name in resource attributes processor which seems to be a general enough solution that would support metrics traces logs describe alternatives you ve considered support such action in a dedicated processor that updates resources attributes keys or in current transform and metricstransformprocessor additional context
1
13,984
16,760,599,514
IssuesEvent
2021-06-13 17:56:24
Leviatan-Analytics/LA-data-processing
https://api.github.com/repos/Leviatan-Analytics/LA-data-processing
closed
Test feature matching algorithms with different filters on images [2]
Data Processing Sprint 2 Week 3
Estimated time: 2 hs per assignee Research and test feature matching algorithms with different image modifications (filters) and compare them. Output: Research document with the results of the different tests and the research repo with the python code of the tests.
1.0
Test feature matching algorithms with different filters on images [2] - Estimated time: 2 hs per assignee Research and test feature matching algorithms with different image modifications (filters) and compare them. Output: Research document with the results of the different tests and the research repo with the python code of the tests.
process
test feature matching algorithms with different filters on images estimated time hs per assignee research and test feature matching algorithms with different image modifications filters and compare them output research document with the results of the different tests and the research repo with the python code of the tests
1
30,795
7,260,193,686
IssuesEvent
2018-02-18 06:09:37
SolidZORO/blog
https://api.github.com/repos/SolidZORO/blog
opened
Installing git and zsh on Linux [#](install-git-zsh-on-linux)
code
A quick note on installing two essential tools. ## Install git `aptitude -y install git-core` ## Install curl `aptitude install curl` ## Install zsh `cat /etc/shells` to check which shells are available on the current system ``` bash aptitude -y install zsh chsh -s /bin/zsh chsh -s /bin/zsh [USERNAME] ``` ## Upgrade to oh-my-zsh ``` bash sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)" vi ~/.zshrc source ~/.zshrc ``` [](created_at:2017-09-04T23:03:11Z)
1.0
Installing git and zsh on Linux [#](install-git-zsh-on-linux) - A quick note on installing two essential tools. ## Install git `aptitude -y install git-core` ## Install curl `aptitude install curl` ## Install zsh `cat /etc/shells` to check which shells are available on the current system ``` bash aptitude -y install zsh chsh -s /bin/zsh chsh -s /bin/zsh [USERNAME] ``` ## Upgrade to oh-my-zsh ``` bash sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)" vi ~/.zshrc source ~/.zshrc ``` [](created_at:2017-09-04T23:03:11Z)
non_process
installing git and zsh on linux install git zsh on linux a quick note on installing two essential tools install git aptitude y install git core install curl aptitude install curl install zsh cat etc shells check which shells are available on the current system bash aptitude y install zsh chsh s bin zsh chsh s bin zsh upgrade to oh my zsh bash sh c curl fssl vi zshrc source zshrc created at
0
20,707
27,397,636,104
IssuesEvent
2023-02-28 21:04:57
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
closed
Bugfix: Fix ASCII2NC to handle missing NDBC buoy location information
type: bug requestor: NOAA/EMC MET: PreProcessing Tools (Point) priority: high
## Describe the Problem ## I tested ASCII2NC with MET 11.0.0/METplus 5.0.0 to convert NDBC buoy data (*.txt) into a METplus compatible point obs netCDF file and got the following error messages in the log file: ERROR : No location information found for station 44084 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/44084.txt ERROR : No location information found for station 46275 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/46275.txt ERROR : No location information found for station CSXA2 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/CSXA2.txt ... How were NDBC buoy location information obtained in MET/METplus? Please update the buoy location information to include all NDBC buoys. ### Expected Behavior ### All NDBC buoys have an assigned location. ### Environment ### Describe your runtime environment: *1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)* WCOSS2 and Hera *2. OS: (e.g. RedHat Linux, MacOS)* *3. 
Software version number(s)* MET 11.0.0/METplus 5.0.0 ### To Reproduce ### NDBC buoy data directory on WCOSS2: /lfs/h1/ops/dev/dcom/$YYYYMMDD/validation_data/marine/buoy METplus use case: https://metplus.readthedocs.io/en/latest/generated/model_applications/marine_and_cryosphere/PointStat_fcstGFS_obsNDBC_WaveHeight.html#sphx-glr-generated-model-applications-marine-and-cryosphere-pointstat-fcstgfs-obsndbc-waveheight-py ### Relevant Deadlines ### MET 11.0.1/METplus 5.0.1 ### Funding Source ### 2773542 ## Define the Metadata ## ### Assignee ### - [X] Select **engineer(s)** or **no engineer** required @JohnHalleyGotway - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Organization** level **Project** for support of the current coordinated release - [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [X] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [X] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [x] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [x] Fix the bug and test your changes. - [x] Add/update log messages for easier debugging. - [x] Add/update unit tests. - [x] Add/update documentation. 
- [x] Add any new Python packages to the [METplus Components Python Requirements](https://metplus.readthedocs.io/en/develop/Users_Guide/overview.html#metplus-components-python-requirements) table. - [x] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issues Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Development** issues Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
1.0
Bugfix: Fix ASCII2NC to handle missing NDBC buoy location information - ## Describe the Problem ## I tested ASCII2NC with MET 11.0.0/METplus 5.0.0 to convert NDBC buoy data (*.txt) into a METplus compatible point obs netCDF file and got the following error messages in the log file: ERROR : No location information found for station 44084 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/44084.txt ERROR : No location information found for station 46275 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/46275.txt ERROR : No location information found for station CSXA2 do not process file /lfs/h1/ops/dev/dcom/20230124/validation_data/marine/buoy/CSXA2.txt ... How were NDBC buoy location information obtained in MET/METplus? Please update the buoy location information to include all NDBC buoys. ### Expected Behavior ### All NDBC buoys have an assigned location. ### Environment ### Describe your runtime environment: *1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)* WCOSS2 and Hera *2. OS: (e.g. RedHat Linux, MacOS)* *3. 
Software version number(s)* MET 11.0.0/METplus 5.0.0 ### To Reproduce ### NDBC buoy data directory on WCOSS2: /lfs/h1/ops/dev/dcom/$YYYYMMDD/validation_data/marine/buoy METplus use case: https://metplus.readthedocs.io/en/latest/generated/model_applications/marine_and_cryosphere/PointStat_fcstGFS_obsNDBC_WaveHeight.html#sphx-glr-generated-model-applications-marine-and-cryosphere-pointstat-fcstgfs-obsndbc-waveheight-py ### Relevant Deadlines ### MET 11.0.1/METplus 5.0.1 ### Funding Source ### 2773542 ## Define the Metadata ## ### Assignee ### - [X] Select **engineer(s)** or **no engineer** required @JohnHalleyGotway - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Organization** level **Project** for support of the current coordinated release - [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [X] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [X] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [x] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [x] Fix the bug and test your changes. - [x] Add/update log messages for easier debugging. - [x] Add/update unit tests. - [x] Add/update documentation. 
- [x] Add any new Python packages to the [METplus Components Python Requirements](https://metplus.readthedocs.io/en/develop/Users_Guide/overview.html#metplus-components-python-requirements) table. - [x] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issues Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Development** issues Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
process
bugfix fix to handle missing ndbc buoy location information describe the problem i tested with met metplus to convert ndbc buoy data txt into a metplus compatible point obs netcdf file and got the following error messages in the log file error no location information found for station do not process file lfs ops dev dcom validation data marine buoy txt error no location information found for station do not process file lfs ops dev dcom validation data marine buoy txt error no location information found for station do not process file lfs ops dev dcom validation data marine buoy txt how were ndbc buoy location information obtained in met metplus please update the buoy location information to include all ndbc buoys expected behavior all ndbc buoys have an assigned location environment describe your runtime environment machine e g hpc name linux workstation mac laptop and hera os e g redhat linux macos software version number s met metplus to reproduce ndbc buoy data directory on lfs ops dev dcom yyyymmdd validation data marine buoy metplus use case relevant deadlines met metplus funding source define the metadata assignee select engineer s or no engineer required johnhalleygotway select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix version define related issue s consider the impact to the other metplus components bugfix checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit tests add update documentation add any new python packages to 
the table push local changes to github submit a pull request to merge into main pull request bugfix main define the pull request metadata as permissions allow select reviewer s and development issues select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and development issues select repository level development cycle project for the next official release select milestone as the next official version close this issue
1
7,720
10,824,554,919
IssuesEvent
2019-11-09 10:07:57
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Wrong layer name after processing procedures
Bug Processing
In older QGIS versions (2.x), if I saved a layer to a file (Save to file) and I gave a name to that e.g. "airports", by running the algorithm (Add saved layer to project - option ticked) the created and loaded layer's name was the same like the given filename. In newer versions this behavior disappeared, the resulted layer's name will be "Clipped", "Added geom info", "Intersected", instead of the given one. I suppose the old solution was better. Please up vote this issue, if you feel the same.
1.0
Wrong layer name after processing procedures - In older QGIS versions (2.x), if I saved a layer to a file (Save to file) and I gave a name to that e.g. "airports", by running the algorithm (Add saved layer to project - option ticked) the created and loaded layer's name was the same like the given filename. In newer versions this behavior disappeared, the resulted layer's name will be "Clipped", "Added geom info", "Intersected", instead of the given one. I suppose the old solution was better. Please up vote this issue, if you feel the same.
process
wrong layer name after processing procedures in older qgis versions x if i saved a layer to a file save to file and i gave a name to that e g airports by running the algorithm add saved layer to project option ticked the created and loaded layer s name was the same like the given filename in newer versions this behavior disappeared the resulted layer s name will be clipped added geom info intersected instead of the given one i suppose the old solution was better please up vote this issue if you feel the same
1
19,326
25,472,176,915
IssuesEvent
2022-11-25 11:08:30
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[IDP] [PM] Not able to update admin in the my account screen
Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
**Pre-condition:** mfa should be disabled in the PM **Steps:** 1. Login to PM 2. Click on 'My account' tab 3. Remove the phone number 4. Try to click on 'Update' button and Verify **AR:** Not able to update admin , Update button is disabled **ER:** Update button should be enabled and account should be updated
3.0
[IDP] [PM] Not able to update admin in the my account screen - **Pre-condition:** mfa should be disabled in the PM **Steps:** 1. Login to PM 2. Click on 'My account' tab 3. Remove the phone number 4. Try to click on 'Update' button and Verify **AR:** Not able to update admin , Update button is disabled **ER:** Update button should be enabled and account should be updated
process
not able to update admin in the my account screen pre condition mfa should be disabled in the pm steps login to pm click on my account tab remove the phone number try to click on update button and verify ar not able to update admin update button is disabled er update button should be enabled and account should be updated
1
18,286
4,243,343,168
IssuesEvent
2016-07-06 22:38:33
KyleJHarper/stupidbashtard
https://api.github.com/repos/KyleJHarper/stupidbashtard
opened
Clean up phrasing
documentation
Review the documentation to ensure no misleading phrasing. Specifically, ensure code-oriented semantics are being used. "Thread safe" needs to be better worded. Obviously the intention was never to have a MT-safe library but rather to explain that a bash's dynamic scoping will bop you if you use subshells and/or try to set variables via functions... "Functions" do not follow normal rules for functions in other languages. They cannot be nested to grant scoping, despite being able to declare a function in a function. Anonymous functions (lambdas) don't exist. ... probably more.
1.0
Clean up phrasing - Review the documentation to ensure no misleading phrasing. Specifically, ensure code-oriented semantics are being used. "Thread safe" needs to be better worded. Obviously the intention was never to have a MT-safe library but rather to explain that a bash's dynamic scoping will bop you if you use subshells and/or try to set variables via functions... "Functions" do not follow normal rules for functions in other languages. They cannot be nested to grant scoping, despite being able to declare a function in a function. Anonymous functions (lambdas) don't exist. ... probably more.
non_process
clean up phrasing review the documentation to ensure no misleading phrasing specifically ensure code oriented semantics are being used thread safe needs to be better worded obviously the intention was never to have a mt safe library but rather to explain that a bash s dynamic scoping will bop you if you use subshells and or try to set variables via functions functions do not follow normal rules for functions in other languages they cannot be nested to grant scoping despite being able to declare a function in a function anonymous functions lambdas don t exist probably more
0
14,547
17,668,751,253
IssuesEvent
2021-08-23 00:32:35
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add Hockey High
suggested title in process
Please add as much of the following info as you can: Title: Hockey High Type (film/tv show): Film Film or show in which it appears: Brigsby Bear Is the parent film/show streaming anywhere? https://www.justwatch.com/us/movie/brigsby-bear About when in the parent film/show does it appear? 25:00 Actual footage of the film/show can be seen (yes/no)? Yes
1.0
Add Hockey High - Please add as much of the following info as you can: Title: Hockey High Type (film/tv show): Film Film or show in which it appears: Brigsby Bear Is the parent film/show streaming anywhere? https://www.justwatch.com/us/movie/brigsby-bear About when in the parent film/show does it appear? 25:00 Actual footage of the film/show can be seen (yes/no)? Yes
process
add hockey high please add as much of the following info as you can title hockey high type film tv show film film or show in which it appears brigsby bear is the parent film show streaming anywhere about when in the parent film show does it appear actual footage of the film show can be seen yes no yes
1
9,160
12,218,280,725
IssuesEvent
2020-05-01 18:59:40
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Kubernetes: Helm chart CI
P3 enhancement process
**Problem** Follow up to #653 to run those tests on every commit inside CI **Solution** Run a test job in CI that: - Installs and starts k3d - Installs helm - Installs [helm test tools](https://github.com/helm/chart-testing) - ct lint - ct install **Alternatives** - Deploy to a static environment for branches **Additional Context** See [similar setup](https://github.com/grafana/loki/blob/8e5fa03de3b3cf7efa15e4959bd5b7decae2d944/.circleci/config.yml#L109) I wrote for Loki.
1.0
Kubernetes: Helm chart CI - **Problem** Follow up to #653 to run those tests on every commit inside CI **Solution** Run a test job in CI that: - Installs and starts k3d - Installs helm - Installs [helm test tools](https://github.com/helm/chart-testing) - ct lint - ct install **Alternatives** - Deploy to a static environment for branches **Additional Context** See [similar setup](https://github.com/grafana/loki/blob/8e5fa03de3b3cf7efa15e4959bd5b7decae2d944/.circleci/config.yml#L109) I wrote for Loki.
process
kubernetes helm chart ci problem follow up to to run those tests on every commit inside ci solution run a test job in ci that installs and starts installs helm installs ct lint ct install alternatives deploy to a static environment for branches additional context see i wrote for loki
1
11,971
14,736,998,981
IssuesEvent
2021-01-07 00:34:53
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Grant Call 4-6-18 - Invoice update request
anc-process anc-ui anp-3 ant-enhancement ant-support
In GitLab by @kdjstudios on Apr 6, 2018, 12:58 **Submitted by:** Grant Allan'" <granta@selectcomm.ab.ca> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-06-47824/conversation **Server:** DEV **Client/Site:** SCC **Account:** All **Issue:** Grant mentioned today during our call this morning they would like to have a change made to the invoice layout. They would like to have the Resource ID (handle) on the invoice charge line items; rather than the account number. Their customers are more aware of the Resource ID (handle) rather than the new account numbers is why. They will want to keep the account number on the headers at the top.
1.0
Grant Call 4-6-18 - Invoice update request - In GitLab by @kdjstudios on Apr 6, 2018, 12:58 **Submitted by:** Grant Allan'" <granta@selectcomm.ab.ca> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-06-47824/conversation **Server:** DEV **Client/Site:** SCC **Account:** All **Issue:** Grant mentioned today during our call this morning they would like to have a change made to the invoice layout. They would like to have the Resource ID (handle) on the invoice charge line items; rather than the account number. Their customers are more aware of the Resource ID (handle) rather than the new account numbers is why. They will want to keep the account number on the headers at the top.
process
grant call invoice update request in gitlab by kdjstudios on apr submitted by grant allan helpdesk server dev client site scc account all issue grant mentioned today during our call this morning they would like to have a change made to the invoice layout they would like to have the resource id handle on the invoice charge line items rather than the account number their customers are more aware of the resource id handle rather than the new account numbers is why they will want to keep the account number on the headers at the top
1
7,476
10,569,321,037
IssuesEvent
2019-10-06 18:48:34
nodejs/security-wg
https://api.github.com/repos/nodejs/security-wg
opened
Node+Interactive Security WG Agenda
process
Let's use this issue to track all the suggestions and recommended agenda for Security WG in upcoming Collab Summit in Node+Interactive Montreal.
1.0
Node+Interactive Security WG Agenda - Let's use this issue to track all the suggestions and recommended agenda for Security WG in upcoming Collab Summit in Node+Interactive Montreal.
process
node interactive security wg agenda let s use this issue to track all the suggestions and recommended agenda for security wg in upcoming collab summit in node interactive montreal
1
181,073
21,641,751,885
IssuesEvent
2022-05-05 19:34:18
LibrIT/passhport
https://api.github.com/repos/LibrIT/passhport
closed
CVE-2020-7676 (Medium) detected in angular-1.4.2.min.js - autoclosed
security vulnerability
## CVE-2020-7676 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.4.2.min.js</b></p></summary> <p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js</a></p> <p>Path to dependency file: passhport/passhweb/app/static/node_modules/autocomplete.js/test/playground_angular.html</p> <p>Path to vulnerable library: passhport/passhweb/app/static/node_modules/autocomplete.js/test/playground_angular.html,passhport/passhweb/app/static/node_modules/autocomplete.js/examples/basic_angular.html</p> <p> Dependency Hierarchy: - :x: **angular-1.4.2.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LibrIT/passhport/commit/6edbae4d4edb7cb3d1de32ee36f47ffd39f82d6e">6edbae4d4edb7cb3d1de32ee36f47ffd39f82d6e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> angular.js prior to 1.8.0 allows cross site scripting. The regex-based input HTML replacement may turn sanitized code into unsanitized one. Wrapping "<option>" elements in "<select>" ones changes parsing behavior, leading to possibly unsanitizing code. 
<p>Publish Date: 2020-06-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7676>CVE-2020-7676</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676</a></p> <p>Release Date: 2020-06-08</p> <p>Fix Resolution: 1.8.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7676 (Medium) detected in angular-1.4.2.min.js - autoclosed - ## CVE-2020-7676 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.4.2.min.js</b></p></summary> <p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js</a></p> <p>Path to dependency file: passhport/passhweb/app/static/node_modules/autocomplete.js/test/playground_angular.html</p> <p>Path to vulnerable library: passhport/passhweb/app/static/node_modules/autocomplete.js/test/playground_angular.html,passhport/passhweb/app/static/node_modules/autocomplete.js/examples/basic_angular.html</p> <p> Dependency Hierarchy: - :x: **angular-1.4.2.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LibrIT/passhport/commit/6edbae4d4edb7cb3d1de32ee36f47ffd39f82d6e">6edbae4d4edb7cb3d1de32ee36f47ffd39f82d6e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> angular.js prior to 1.8.0 allows cross site scripting. The regex-based input HTML replacement may turn sanitized code into unsanitized one. Wrapping "<option>" elements in "<select>" ones changes parsing behavior, leading to possibly unsanitizing code. 
<p>Publish Date: 2020-06-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7676>CVE-2020-7676</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676</a></p> <p>Release Date: 2020-06-08</p> <p>Fix Resolution: 1.8.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in angular min js autoclosed cve medium severity vulnerability vulnerable library angular min js angularjs is an mvc framework for building web applications the core features include html enhanced with custom component and data binding capabilities dependency injection and strong focus on simplicity testability maintainability and boiler plate reduction library home page a href path to dependency file passhport passhweb app static node modules autocomplete js test playground angular html path to vulnerable library passhport passhweb app static node modules autocomplete js test playground angular html passhport passhweb app static node modules autocomplete js examples basic angular html dependency hierarchy x angular min js vulnerable library found in head commit a href found in base branch master vulnerability details angular js prior to allows cross site scripting the regex based input html replacement may turn sanitized code into unsanitized one wrapping elements in ones changes parsing behavior leading to possibly unsanitizing code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
2,972
5,963,362,007
IssuesEvent
2017-05-30 04:32:34
nodejs/node
https://api.github.com/repos/nodejs/node
closed
tests: deprecate test/parallel/test-child-process-double-pipe.js
child_process test
I am reviewing windows building, testing and other related windows stuff and noticed that this test is quite complex. That might be fair, but it's the only one that requires both `sed` and `grep`, which are sort of hard to install dependencies on windows (imo, I am now windows developer though). In order to ease windows node core hacking, we could deprecate it for some simpler piping test. Background: I am following https://github.com/nodejs/node/wiki/installation#building-on-windows and am trying to build those dependencies in JS, in order to just `npm install -g [...]` things. But `grep` and `sed` would be too hard I guess.
1.0
tests: deprecate test/parallel/test-child-process-double-pipe.js - I am reviewing windows building, testing and other related windows stuff and noticed that this test is quite complex. That might be fair, but it's the only one that requires both `sed` and `grep`, which are sort of hard to install dependencies on windows (imo, I am now windows developer though). In order to ease windows node core hacking, we could deprecate it for some simpler piping test. Background: I am following https://github.com/nodejs/node/wiki/installation#building-on-windows and am trying to build those dependencies in JS, in order to just `npm install -g [...]` things. But `grep` and `sed` would be too hard I guess.
process
tests deprecate test parallel test child process double pipe js i am reviewing windows building testing and other related windows stuff and noticed that this test is quite complex that might be fair but it s the only one that requires both sed and grep which are sort of hard to install dependencies on windows imo i am now windows developer though in order to ease windows node core hacking we could deprecate it for some simpler piping test background i am following and am trying to build those dependencies in js in order to just npm install g things but grep and sed would be too hard i guess
1
15,634
19,785,477,376
IssuesEvent
2022-01-18 05:57:58
linuxdeepin/developer-center
https://api.github.com/repos/linuxdeepin/developer-center
closed
QQ 启动十分钟后崩溃,请求删除 deepin-wine 中的一行代码
bug | functional behavior other | delay processing
<!--请将下方的缺陷汇报模板中的文字替换为您实际需要汇报的缺陷所对应的描述文字。--> <!--请保持一个 Issue 只专注一个缺陷,如果您有多个不同的缺陷需要汇报,请发起多个 Issue 。--> ## 缺陷描述 <!--简明清晰的描述你所需要汇报的缺陷(BUG)--> QQ 启动后运行一切正常,可以登录以及收发消息,但每次启动后约几分钟至十几分钟内崩溃。 ## 复现步骤 <!-- 描述可以重现缺陷(BUG)的操作步骤,以便我们复现缺陷并进行修复 复现步骤为: 1. 打开 '...' 2. 点击 '....' 3. 滚动至 '....' 4. 缺陷 '...' 发生 --> 使用官方安装包或 deepin 提供的安装包安装 QQ 并登录,并等待约十几分钟,QQ 崩溃并弹出错误报告。 ## 期望行为 QQ 应该能持续运行。 <!--简明清晰的描述你所期望的正确行为--> ## 截图 <!--如果适用于你所汇报的缺陷,可以附带截图来帮助描述你所遇到的缺陷(BUG)--> ## 复现环境: <!--部分缺陷可能需要在特定环境下才能复现,所以请尽可能详细的提供可能导致该缺陷的环境信息--> QQ 是通过 https://github.com/wszqkzqk/deepin-wine-ubuntu 安装。 ### 发行版以及版本 <!--如. Linux Deepin 15.7--> KUbuntu 18.04 ### 相关的软件包版本 <!--如. dde-file-manager v1.7 (4.5.6.2-2)--> <!--如果你不确定是哪些包出了问题,你也可以考虑提供一个列表来描述那些你认为可能相关联的包以及它们的版本号--> ### 其他内容 <!--描述其他任何和你所要汇报的缺陷相关的内容,以便我们定位问题并进行处理。如果没有其它信息,你也可以移除这个段落,如果必要时我们会根据实际情况询问其它细节--> 每次崩溃时所在的 EIP 对应的代码都一样,且终端出现以下内容: ``` ent\QQ\Bin\QQ.exe: netconnection.c:300:create_netconn_socket: 假设 ‘server->addr_len’ 失败。 ``` 怀疑可能是 QQ 错误地使用了 API 导致的 assertion 出错。希望在 deepin-wine 代码中 netconnection.c 中删除对应的 assertion 提高兼容性。
1.0
QQ 启动十分钟后崩溃,请求删除 deepin-wine 中的一行代码 - <!--请将下方的缺陷汇报模板中的文字替换为您实际需要汇报的缺陷所对应的描述文字。--> <!--请保持一个 Issue 只专注一个缺陷,如果您有多个不同的缺陷需要汇报,请发起多个 Issue 。--> ## 缺陷描述 <!--简明清晰的描述你所需要汇报的缺陷(BUG)--> QQ 启动后运行一切正常,可以登录以及收发消息,但每次启动后约几分钟至十几分钟内崩溃。 ## 复现步骤 <!-- 描述可以重现缺陷(BUG)的操作步骤,以便我们复现缺陷并进行修复 复现步骤为: 1. 打开 '...' 2. 点击 '....' 3. 滚动至 '....' 4. 缺陷 '...' 发生 --> 使用官方安装包或 deepin 提供的安装包安装 QQ 并登录,并等待约十几分钟,QQ 崩溃并弹出错误报告。 ## 期望行为 QQ 应该能持续运行。 <!--简明清晰的描述你所期望的正确行为--> ## 截图 <!--如果适用于你所汇报的缺陷,可以附带截图来帮助描述你所遇到的缺陷(BUG)--> ## 复现环境: <!--部分缺陷可能需要在特定环境下才能复现,所以请尽可能详细的提供可能导致该缺陷的环境信息--> QQ 是通过 https://github.com/wszqkzqk/deepin-wine-ubuntu 安装。 ### 发行版以及版本 <!--如. Linux Deepin 15.7--> KUbuntu 18.04 ### 相关的软件包版本 <!--如. dde-file-manager v1.7 (4.5.6.2-2)--> <!--如果你不确定是哪些包出了问题,你也可以考虑提供一个列表来描述那些你认为可能相关联的包以及它们的版本号--> ### 其他内容 <!--描述其他任何和你所要汇报的缺陷相关的内容,以便我们定位问题并进行处理。如果没有其它信息,你也可以移除这个段落,如果必要时我们会根据实际情况询问其它细节--> 每次崩溃时所在的 EIP 对应的代码都一样,且终端出现以下内容: ``` ent\QQ\Bin\QQ.exe: netconnection.c:300:create_netconn_socket: 假设 ‘server->addr_len’ 失败。 ``` 怀疑可能是 QQ 错误地使用了 API 导致的 assertion 出错。希望在 deepin-wine 代码中 netconnection.c 中删除对应的 assertion 提高兼容性。
process
qq 启动十分钟后崩溃,请求删除 deepin wine 中的一行代码 缺陷描述 qq 启动后运行一切正常,可以登录以及收发消息,但每次启动后约几分钟至十几分钟内崩溃。 复现步骤 描述可以重现缺陷(bug)的操作步骤,以便我们复现缺陷并进行修复 复现步骤为 打开 点击 滚动至 缺陷 发生 使用官方安装包或 deepin 提供的安装包安装 qq 并登录,并等待约十几分钟,qq 崩溃并弹出错误报告。 期望行为 qq 应该能持续运行。 截图 复现环境 qq 是通过 安装。 发行版以及版本 kubuntu 相关的软件包版本 其他内容 每次崩溃时所在的 eip 对应的代码都一样,且终端出现以下内容: ent qq bin qq exe netconnection c :create netconn socket 假设 ‘server addr len’ 失败。 怀疑可能是 qq 错误地使用了 api 导致的 assertion 出错。希望在 deepin wine 代码中 netconnection c 中删除对应的 assertion 提高兼容性。
1
11,705
14,545,447,542
IssuesEvent
2020-12-15 19:39:55
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Mapped name for System.AccessToken
Pri2 devops-cicd-process/tech devops/prod support-request
For use of System.AccessToken in YAML, does the token need to be mapped to SYSTEM_ACCESSTOKEN, or can the name be customized? For example, is the following form required? ``` steps: - powershell: Write-Host "This is a script that could use $env:SYSTEM_ACCESSTOKEN" env: SYSTEM_ACCESSTOKEN: $(System.AccessToken) ``` Or can we use a different environment variable name as follows? ``` steps: - powershell: Write-Host "This is a script that could use $env:MY_SPECIAL_TOKEN" env: MY_SPECIAL_TOKEN: $(System.AccessToken) ``` --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 67504b34-d64b-02a4-2e10-ab99f3b8cfe4 * Version Independent ID: 2cf63b2e-184b-7726-3b8a-d8baffd6fcce * Content: [Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#server-jobs) * Content Source: [docs/pipelines/process/phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/phases.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Mapped name for System.AccessToken - For use of System.AccessToken in YAML, does the token need to be mapped to SYSTEM_ACCESSTOKEN, or can the name be customized? For example, is the following form required? ``` steps: - powershell: Write-Host "This is a script that could use $env:SYSTEM_ACCESSTOKEN" env: SYSTEM_ACCESSTOKEN: $(System.AccessToken) ``` Or can we use a different environment variable name as follows? ``` steps: - powershell: Write-Host "This is a script that could use $env:MY_SPECIAL_TOKEN" env: MY_SPECIAL_TOKEN: $(System.AccessToken) ``` --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 67504b34-d64b-02a4-2e10-ab99f3b8cfe4 * Version Independent ID: 2cf63b2e-184b-7726-3b8a-d8baffd6fcce * Content: [Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#server-jobs) * Content Source: [docs/pipelines/process/phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/phases.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
mapped name for system accesstoken for use of system accesstoken in yaml does the token need to be mapped to system accesstoken or can the name be customized for example is the following form required steps powershell write host this is a script that could use env system accesstoken env system accesstoken system accesstoken or can we use a different environment variable name as follows steps powershell write host this is a script that could use env my special token env my special token system accesstoken document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
477,654
13,765,849,155
IssuesEvent
2020-10-07 13:54:09
buddyboss/buddyboss-platform
https://api.github.com/repos/buddyboss/buddyboss-platform
closed
Group Admin setting - can't remove the group parent
bug component: groups hacktoberfest hacktoberfest-accepted priority: medium
**Describe the bug** If you have assigned the Group Parent and then if you have to remove the group parent from the backend then you can't remove the group parent it's not removing the group parent. **To Reproduce** Steps to reproduce the behavior: 1. Go to backend groups and edit single group. 2. Assign the group parent and click on Save. 3. Now remove the group parent click on Save. 4. Parent group not removing from the group. **Expected behavior** You can remove the group parent from the group. **Screenshots** https://www.loom.com/share/6fe310252e8d4f29a61fa06eabc656b6 **Support ticket links** If applicable, add HelpScout link or ticket number where the issue was originally reported.
1.0
Group Admin setting - can't remove the group parent - **Describe the bug** If you have assigned the Group Parent and then if you have to remove the group parent from the backend then you can't remove the group parent it's not removing the group parent. **To Reproduce** Steps to reproduce the behavior: 1. Go to backend groups and edit single group. 2. Assign the group parent and click on Save. 3. Now remove the group parent click on Save. 4. Parent group not removing from the group. **Expected behavior** You can remove the group parent from the group. **Screenshots** https://www.loom.com/share/6fe310252e8d4f29a61fa06eabc656b6 **Support ticket links** If applicable, add HelpScout link or ticket number where the issue was originally reported.
non_process
group admin setting can t remove the group parent describe the bug if you have assigned the group parent and then if you have to remove the group parent from the backend then you can t remove the group parent it s not removing the group parent to reproduce steps to reproduce the behavior go to backend groups and edit single group assign the group parent and click on save now remove the group parent click on save parent group not removing from the group expected behavior you can remove the group parent from the group screenshots support ticket links if applicable add helpscout link or ticket number where the issue was originally reported
0
20,469
27,130,518,967
IssuesEvent
2023-02-16 09:27:19
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Polymorphic libraries and rpaths
P4 type: support / not a bug (process) team-Rules-CPP stale
### Description of the problem / feature request: We're making a library which has two different implementations, acting as a polymorphic library. If I denote the two versions of the library as `libpoly.so(a)` and `libpoly.so(b)`. When testing the different library versions, we've hit a couple of pain points where a binary built with a dep via `cc_import` on `libpoly.so(a)`. It's very difficult to test the same binary against `libpoly.so(b)` as Bazel has hardcoded the path to the binary into the `rpath` of the binary and therefore the dynamic loader can inadvertently pull in the wrong library. I'm wondering if there's a way to instead of forcing the rpath of a binary, to instead be able to pass the `LD_LIBRARY_PATH` as an env variable to the actions so that this can be changed to be dependent on what version of the library we want to run against.
1.0
Polymorphic libraries and rpaths - ### Description of the problem / feature request: We're making a library which has two different implementations, acting as a polymorphic library. If I denote the two versions of the library as `libpoly.so(a)` and `libpoly.so(b)`. When testing the different library versions, we've hit a couple of pain points where a binary built with a dep via `cc_import` on `libpoly.so(a)`. It's very difficult to test the same binary against `libpoly.so(b)` as Bazel has hardcoded the path to the binary into the `rpath` of the binary and therefore the dynamic loader can inadvertently pull in the wrong library. I'm wondering if there's a way to instead of forcing the rpath of a binary, to instead be able to pass the `LD_LIBRARY_PATH` as an env variable to the actions so that this can be changed to be dependent on what version of the library we want to run against.
process
polymorphic libraries and rpaths description of the problem feature request we re making a library which has two different implementations acting as a polymorphic library if i denote the two versions of the library as libpoly so a and libpoly so b when testing the different library versions we ve hit a couple of pain points where a binary built with a dep via cc import on libpoly so a it s very difficult to test the same binary against libpoly so b as bazel has hardcoded the path to the binary into the rpath of the binary and therefore the dynamic loader can inadvertently pull in the wrong library i m wondering if there s a way to instead of forcing the rpath of a binary to instead be able to pass the ld library path as an env variable to the actions so that this can be changed to be dependent on what version of the library we want to run against
1
20,595
30,602,244,772
IssuesEvent
2023-07-22 14:36:43
khanhduytran0/LiveContainer
https://api.github.com/repos/khanhduytran0/LiveContainer
opened
Delta Crashes
compatibility
### Describe the issue Delta (v 1.4) crashes after being opened in LiveContainer ### Instructions to reproduce See description ### What version of LiveContainer are you using? Build #22 ### Other Error message: -[NSUserDefaults setIsAltJITEnabled:]: unrecognized selector sent to instance 0x2801f56b0 Call stack: ( 0 CoreFoundation 0x00000001a1e22254 42C5C917-0447-3995-B50F-DE4D132C2435 + 41556 1 libobjc.A.dylib 0x000000019b1e7a68 objc_exception_throw + 60 2 CoreFoundation 0x00000001a1f963f0 42C5C917-0447-3995-B50F-DE4D132C2435 + 1565680 3 CoreFoundation 0x00000001a1e38360 42C5C917-0447-3995-B50F-DE4D132C2435 + 131936 4 CoreFoundation 0x00000001a1ea0660 _CF_forwarding_prep_0 + 96 5 Delta 0x000000010a1f3888 Delta + 129160 6 Delta 0x000000010a241ad0 Delta + 449232 7 Delta 0x000000010a240268 Delta + 442984 8 UIKitCore 0x00000001a4300f40 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3518272 9 UIKitCore 0x00000001a4300664 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3516004 10 UIKitCore 0x00000001a42ff640 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3511872 11 UIKitCore 0x00000001a42ff2b8 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3510968 12 UIKitCore 0x00000001a4349d0c 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3816716 13 UIKitCore 0x00000001a4348d64 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3812708 14 UIKitCore 0x00000001a43489ec UIApplicationMain + 340 15 Delta 0x000000010a1dbef8 Delta + 32504 16 LiveContainer 0x00000001048ea534 LiveContainerMain + 2140 17 dyld 0x00000001c01bd948 341BBF64-6034-357E-8AA6-E1E4B988E03C + 88392 )
True
Delta Crashes - ### Describe the issue Delta (v 1.4) crashes after being opened in LiveContainer ### Instructions to reproduce See description ### What version of LiveContainer are you using? Build #22 ### Other Error message: -[NSUserDefaults setIsAltJITEnabled:]: unrecognized selector sent to instance 0x2801f56b0 Call stack: ( 0 CoreFoundation 0x00000001a1e22254 42C5C917-0447-3995-B50F-DE4D132C2435 + 41556 1 libobjc.A.dylib 0x000000019b1e7a68 objc_exception_throw + 60 2 CoreFoundation 0x00000001a1f963f0 42C5C917-0447-3995-B50F-DE4D132C2435 + 1565680 3 CoreFoundation 0x00000001a1e38360 42C5C917-0447-3995-B50F-DE4D132C2435 + 131936 4 CoreFoundation 0x00000001a1ea0660 _CF_forwarding_prep_0 + 96 5 Delta 0x000000010a1f3888 Delta + 129160 6 Delta 0x000000010a241ad0 Delta + 449232 7 Delta 0x000000010a240268 Delta + 442984 8 UIKitCore 0x00000001a4300f40 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3518272 9 UIKitCore 0x00000001a4300664 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3516004 10 UIKitCore 0x00000001a42ff640 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3511872 11 UIKitCore 0x00000001a42ff2b8 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3510968 12 UIKitCore 0x00000001a4349d0c 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3816716 13 UIKitCore 0x00000001a4348d64 7B942FA4-CB76-3375-9972-F58C14492FB4 + 3812708 14 UIKitCore 0x00000001a43489ec UIApplicationMain + 340 15 Delta 0x000000010a1dbef8 Delta + 32504 16 LiveContainer 0x00000001048ea534 LiveContainerMain + 2140 17 dyld 0x00000001c01bd948 341BBF64-6034-357E-8AA6-E1E4B988E03C + 88392 )
non_process
delta crashes describe the issue delta v crashes after being opened in livecontainer instructions to reproduce see description what version of livecontainer are you using build other error message unrecognized selector sent to instance call stack corefoundation libobjc a dylib objc exception throw corefoundation corefoundation corefoundation cf forwarding prep delta delta delta delta delta delta uikitcore uikitcore uikitcore uikitcore uikitcore uikitcore uikitcore uiapplicationmain delta delta livecontainer livecontainermain dyld
0
310,336
23,332,984,729
IssuesEvent
2022-08-09 07:29:08
KinsonDigital/GitHubData
https://api.github.com/repos/KinsonDigital/GitHubData
closed
🚧Fix v1.0.0-preview.2 release notes
good first issue low priority preview 📝documentation/product
### Complete The Item Below - [X] I have updated the title without removing the 🚧 emoji. ### Description Fix the release notes for version **v1.0.0-preview.2** by moving all of issue 7 items from the **Other 🪧** section to the **New Features ✨** section. Also remove any sections that contain nothing ### Acceptance Criteria - [ ] Release note items moved - [x] Empty release note sections removed ### ToDo Items - [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below. - [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below. - [X] Issue linked to the correct project _(if required)_. - [X] Issue linked to the correct milestone _(if required)_. - [x] Draft pull request created and linked to this issue _(only required with code changes)_. ### Issue Dependencies _No response_ ### Related Work _No response_ ### Additional Information: **_<details closed><summary>Change Type Labels</summary>_** | Change Type | Label | |---------------------|--------------------------------------------------------------------------------------| | Bug Fixes | https://github.com/KinsonDigital/Velaptor/labels/%F0%9F%90%9Bbug | | Breaking Changes | https://github.com/KinsonDigital/Velaptor/labels/%F0%9F%A7%A8breaking%20changes | | New Feature | https://github.com/KinsonDigital/Velaptor/labels/%E2%9C%A8new%20feature | | Workflow Changes | https://github.com/KinsonDigital/Velaptor/labels/workflow | | Code Doc Changes | https://github.com/KinsonDigital/Velaptor/labels/%F0%9F%93%91documentation%2Fcode | | Product Doc Changes | https://github.com/KinsonDigital/Velaptor/labels/%F0%9F%93%9Ddocumentation%2Fproduct | </details> **_<details closed><summary>Priority Type Labels</summary>_** | Priority Type | Label | |---------------------|--------------------------------------------------------------------| | Low Priority | https://github.com/KinsonDigital/Velaptor/labels/low%20priority | | Medium Priority | https://github.com/KinsonDigital/Velaptor/labels/medium%20priority | | High Priority | https://github.com/KinsonDigital/Velaptor/labels/high%20priority | </details> ### Code of Conduct - [X] I agree to follow this project's Code of Conduct.
1.0
🚧Fix v1.0.0-preview.2 release notes - ### Complete The Item Below - [X] I have updated the title without removing the 🚧 emoji. ### Description Fix the release notes for version **v1.0.0-preview.2** by moving all of issue 7 items from the **Other 🪧** section to the **New Features ✨** section. Also remove any sections that contain nothing ### Acceptance Criteria - [ ] Release note items moved - [x] Empty release note sections removed ### ToDo Items - [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below. - [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below. - [X] Issue linked to the correct project _(if required)_. - [X] Issue linked to the correct milestone _(if required)_. - [x] Draft pull request created and linked to this issue _(only required with code changes)_. ### Issue Dependencies _No response_ ### Related Work _No response_ ### Additional Information: **_<details closed><summary>Change Type Labels</summary>_** | Change Type | Label | |---------------------|--------------------------------------------------------------------------------------| | Bug Fixes | https://github.com/KinsonDigital/Velaptor/labels/%F0%9F%90%9Bbug | | Breaking Changes | https://github.com/KinsonDigital/Velaptor/labels/%F0%9F%A7%A8breaking%20changes | | New Feature | https://github.com/KinsonDigital/Velaptor/labels/%E2%9C%A8new%20feature | | Workflow Changes | https://github.com/KinsonDigital/Velaptor/labels/workflow | | Code Doc Changes | https://github.com/KinsonDigital/Velaptor/labels/%F0%9F%93%91documentation%2Fcode | | Product Doc Changes | https://github.com/KinsonDigital/Velaptor/labels/%F0%9F%93%9Ddocumentation%2Fproduct | </details> **_<details closed><summary>Priority Type Labels</summary>_** | Priority Type | Label | |---------------------|--------------------------------------------------------------------| | Low Priority | https://github.com/KinsonDigital/Velaptor/labels/low%20priority | | Medium Priority | https://github.com/KinsonDigital/Velaptor/labels/medium%20priority | | High Priority | https://github.com/KinsonDigital/Velaptor/labels/high%20priority | </details> ### Code of Conduct - [X] I agree to follow this project's Code of Conduct.
non_process
🚧fix preview release notes complete the item below i have updated the title without removing the 🚧 emoji description fix the release notes for version preview by moving all of issue items from the other 🪧 section to the new features ✨ section also remove any sections that contain nothing acceptance criteria release note items moved empty release note sections removed todo items change type labels added to this issue refer to the change type labels section below priority label added to this issue refer to the priority type labels section below issue linked to the correct project if required issue linked to the correct milestone if required draft pull request created and linked to this issue only required with code changes issue dependencies no response related work no response additional information change type labels change type label bug fixes breaking changes new feature workflow changes code doc changes product doc changes priority type labels priority type label low priority medium priority high priority code of conduct i agree to follow this project s code of conduct
0
5,412
8,247,724,507
IssuesEvent
2018-09-11 16:18:20
ArctosDB/new-collections
https://api.github.com/repos/ArctosDB/new-collections
closed
Assign collection mentor
Application in process
Default = AWG Chair or Vice-Chair. An AWG member can volunteer to act as primary contact, especially if they have similar collections or specific knowledge about a collection; they can serve as ‘in-kind support’ for collections to help offset costs
1.0
Assign collection mentor - Default = AWG Chair or Vice-Chair. An AWG member can volunteer to act as primary contact, especially if they have similar collections or specific knowledge about a collection; they can serve as ‘in-kind support’ for collections to help offset costs
process
assign collection mentor default awg chair or vice chair awg member can volunteer to act as primary contact especially if they have similar collections or specific knowledge about a collection can serve as ‘in kind support’ for collections to help offset costs
1
11,322
14,140,085,677
IssuesEvent
2020-11-10 10:42:07
MineCake147E/MonoAudio
https://api.github.com/repos/MineCake147E/MonoAudio
opened
.NET 5
CPU: AMD x64 🖥️ CPU: Apple A-Series📱🍎 CPU: Fujitsu A64FX 🖥️🖥️🖥️🖥️🖥️🖥️(📱) CPU: Intel x64 🖥️ CPU: Other ARMv8 📱 CPU: Qualcomm Snapdragon 📱 Feature: Signal Processing 🎛️ Kind: High Latency 🐌 Priority: High 🚅 Status: Waiting ⏳
In order to improve performance of MonoAudio, we have to move MonoAudio from .Net Standard 2.0 to .Net 5. - [ ] .NET 5 Release # Major changes - [ ] Adopt new APIs **almost EVERYWHERE** - [ ] x86/64 - [ ] SSE - [ ] SSE2 - [ ] SSE3 - [ ] SSE4.x - [ ] AVX - [ ] AVX2 - [ ] AVX512 - [ ] Bmi1 - [ ] Bmi2 - [ ] Fma - [ ] Lzcnt - [ ] Popcnt - [ ] ARM - [ ] AdvSimd - [ ] ArmBase - [ ] Crc32 - [ ] Dp - [ ] Rdm - [ ] Cross Platform - [ ] System.Numerics.BitOperations - [ ] System.MathF - [ ] System.Half - [ ] System.HashCode - [ ] System.Math.Tau - [ ] System.Math.FusedMultiplyAdd - [ ] System.MathF.FusedMultiplyAdd
1.0
.NET 5 - In order to improve performance of MonoAudio, we have to move MonoAudio from .Net Standard 2.0 to .Net 5. - [ ] .NET 5 Release # Major changes - [ ] Adopt new APIs **almost EVERYWHERE** - [ ] x86/64 - [ ] SSE - [ ] SSE2 - [ ] SSE3 - [ ] SSE4.x - [ ] AVX - [ ] AVX2 - [ ] AVX512 - [ ] Bmi1 - [ ] Bmi2 - [ ] Fma - [ ] Lzcnt - [ ] Popcnt - [ ] ARM - [ ] AdvSimd - [ ] ArmBase - [ ] Crc32 - [ ] Dp - [ ] Rdm - [ ] Cross Platform - [ ] System.Numerics.BitOperations - [ ] System.MathF - [ ] System.Half - [ ] System.HashCode - [ ] System.Math.Tau - [ ] System.Math.FusedMultiplyAdd - [ ] System.MathF.FusedMultiplyAdd
process
net in order to improve performance of monoaudio we have to move monoaudio from net standard to net net release major changes adopt new apis almost everywhere sse x avx fma lzcnt popcnt arm advsimd armbase dp rdm cross platform system numerics bitoperations system mathf system half system hashcode system math tau system math fusedmultiplyadd system mathf fusedmultiplyadd
1
96,850
20,117,415,081
IssuesEvent
2022-02-07 21:07:48
Aden-Q/gitalk-comments
https://api.github.com/repos/Aden-Q/gitalk-comments
opened
VS Code Issues/Fixes | Lyrics
Gitalk /2022/02/07/VS%20Code%20Issues:Fixes/
https://aden-q.github.io/2022/02/07/VS%20Code%20Issues:Fixes/#more Common issues and fixes for extensions in Visual Studio Code Environment: macOS Monterey
1.0
VS Code Issues/Fixes | Lyrics - https://aden-q.github.io/2022/02/07/VS%20Code%20Issues:Fixes/#more Common issues and fixes for extensions in Visual Studio Code Environment: macOS Monterey
non_process
vs code issues fixes lyrics common issues and fixes for extensions in visual studio code environment macos monterey
0
18,454
24,548,544,661
IssuesEvent
2022-10-12 10:42:00
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Participant details screen > Consent history > Enrolled date is displayed as consented date when user provides the updated consent
Bug P1 Participant manager Process: Fixed Process: Tested dev
Steps: 1. In mobile, enroll to the study (enrolled date is 4/14/2022) 2. In SB, update the consent for a particular enrolled study (with the 'Enforce consent for enrolled participants' option) 3. In Mobile, click on the particular enrolled study 4. Provide the updated consent (e.g. latest consent provided date 20/04/2022) 5. In PM, click on the particular study 6. Click on the particular participant record 7. Observe the consented date in the latest consent history AR: Enrolled date is displayed as the consented date when the user provides the updated consent ER: Latest consent provided date should be displayed (as per the above scenario, 20/04/2022 should be displayed as the consented date) ![image](https://user-images.githubusercontent.com/71445210/164450844-1b69afa6-98ad-48ec-88ab-98dfb490e493.png)
2.0
[PM] Participant details screen > Consent history > Enrolled date is displayed as consented date when user provides the updated consent - Steps: 1. In mobile, enroll to the study (enrolled date is 4/14/2022) 2. In SB, update the consent for a particular enrolled study (with the 'Enforce consent for enrolled participants' option) 3. In Mobile, click on the particular enrolled study 4. Provide the updated consent (e.g. latest consent provided date 20/04/2022) 5. In PM, click on the particular study 6. Click on the particular participant record 7. Observe the consented date in the latest consent history AR: Enrolled date is displayed as the consented date when the user provides the updated consent ER: Latest consent provided date should be displayed (as per the above scenario, 20/04/2022 should be displayed as the consented date) ![image](https://user-images.githubusercontent.com/71445210/164450844-1b69afa6-98ad-48ec-88ab-98dfb490e493.png)
process
participant details screen consent history enrolled date is displayed as consented date when user provides the updated consent steps in mobile enroll to the study enrolled date is in sb update the consent for a particular enrolled to study with e enforce consent for enrolled participants option in mobile click on the particular enrolled study provide the updated consent eg latest consent provided date in pm click on the particular study click on the particular participant record observe the consented date in the latest consent history ar enrolled date is displayed as the consented date when the user provides the updated consent er latest consent provided date should be displayed as per the above scenario should be displayed as consented date
1
18,363
24,492,331,404
IssuesEvent
2022-10-10 04:21:08
phamtanduongtk29/html-css-training
https://api.github.com/repos/phamtanduongtk29/html-css-training
opened
Create and Responsive plans section and need help section
not yet processing
- Estimates: 7 hours - Create detail plan - Create information
1.0
Create and Responsive plans section and need help section - - Estimates: 7 hours - Create detail plan - Create information
process
create and responsive plans section and need help section estimates hours create detail plan create information
1