Column summary:

| column | kind | stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | min 0, max 832k |
| id | float64 | min 2.49B, max 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | min 19, max 19 |
| repo | stringlengths | min 7, max 112 |
| repo_url | stringlengths | min 36, max 141 |
| action | stringclasses | 3 values |
| title | stringlengths | min 1, max 744 |
| labels | stringlengths | min 4, max 574 |
| body | stringlengths | min 9, max 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | min 96, max 211k |
| label | stringclasses | 2 values |
| text | stringlengths | min 96, max 188k |
| binary_label | int64 | min 0, max 1 |
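The column summary above can be reproduced with a short loop over a DataFrame. This is a minimal sketch under stated assumptions: the toy frame below mirrors only a few of the columns and two of the rows shown, and the exact tool that produced the original header is unknown.

```python
import pandas as pd

# Toy frame mirroring part of the schema above (two of the sample rows, abbreviated).
df = pd.DataFrame({
    "id": [3471088742.0, 8790937355.0],
    "type": ["IssuesEvent", "IssuesEvent"],
    "action": ["closed", "closed"],
    "index": ["non_process", "process"],
    "binary_label": [0, 1],
})

# Numeric columns report dtype/min/max; string columns report distinct class
# counts, matching the "stringclasses ... values" lines in the header.
for col in df.columns:
    s = df[col]
    if s.dtype == object:
        print(col, "stringclasses", s.nunique(), "values")
    else:
        print(col, s.dtype, s.min(), s.max())
```

On the full dataset the same loop would recover the header's figures (e.g. `type` collapsing to a single class, `binary_label` spanning 0 to 1).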
Unnamed: 0: 75,696
id: 3,471,088,742
type: IssuesEvent
created_at: 2015-12-23 13:13:00
repo: mantidproject/mantid
repo_url: https://api.github.com/repos/mantidproject/mantid
action: closed
title: REFM instrument definition
labels: Component: Reflectometry Misc: SSC Priority: High
body: This issue was originally [TRAC 11308](http://trac.mantidproject.org/mantid/ticket/11308) Allow better sharing of reflectomtery tools - - - - Keywords: SSC, 2015, SNS reflOther
label: 1.0
text_combine: REFM instrument definition - This issue was originally [TRAC 11308](http://trac.mantidproject.org/mantid/ticket/11308) Allow better sharing of reflectomtery tools - - - - Keywords: SSC, 2015, SNS reflOther
index: non_process
text: refm instrument definition this issue was originally allow better sharing of reflectomtery tools keywords ssc sns reflother
binary_label: 0
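The relationship between `title`/`body`, `text_combine`, and the cleaned `text` field in the row above suggests a normalization pass roughly like the following. This is a reconstruction inferred from the samples, not the dataset's actual code: markdown links and bare URLs appear to be dropped wholesale, digits and punctuation stripped, and whitespace collapsed.

```python
import re

def normalize(title: str, body: str) -> str:
    # Reconstructed cleaning step (an assumption, inferred from the rows shown):
    # lowercase, drop markdown links and bare URLs entirely, keep letters only
    # (Latin and Cyrillic survive; the real pipeline also keeps symbols such as
    # the "®" seen in another row), then collapse runs of whitespace.
    text = f"{title} {body}".lower()
    text = re.sub(r"\[[^\]]*\]\([^)]*\)", " ", text)   # [anchor](url) -> dropped
    text = re.sub(r"http\S+", " ", text)               # bare URLs -> dropped
    text = re.sub(r"[^a-zа-яё®\s]", " ", text)         # digits/punctuation -> spaces
    return re.sub(r"\s+", " ", text).strip()
```

Applied to the mantidproject row above, this reproduces its `text` field exactly, including the source's own "reflectomtery" typo, which the pipeline preserves.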
Unnamed: 0: 5,969
id: 8,790,937,355
type: IssuesEvent
created_at: 2018-12-21 10:46:09
repo: tinyMediaManager/tinyMediaManager
repo_url: https://api.github.com/repos/tinyMediaManager/tinyMediaManager
action: closed
title: Removing or moving extras
labels: bug processing
__What TMM version are you using?__ 2.9.11 __release, pre-release, nightly, or directly from GitHub/branch?__ release __What is the actual behaviour?__ Rename/cleanup move extras to ".deletedByTMM" or move from ./MovieFolder/extras to ./MovieFolder/ I have empty FileName pattern in settings because i want left original file titles. My folder name pattern: $T { - $U }($Y). Replace non ASCII checked. __What is the expected behaviour?__ Just left as is extras folder. Just ignore it and treat all files as extras (.jpg, video, audio etc.). Or just add option to exclude pattern in names, extensions etc. __Steps to reproduce:__ create folder structure: /MovieFolder/extras add video to main directory and few files to extras. Scrap info from servers and try rename only folder with empty filename pattern. `2018-05-05 12:15:24,604 DEBUG [tmmpool-rename-T1] o.t.core.movie.MovieRenamer:582 - Deleting C:\Movies\MovieTitle\extras\Making of.mkv 2018-05-05 12:15:24,651 DEBUG [tmmpool-rename-T1] org.tinymediamanager.core.Utils:642 - try to move file C:\Movies\MovieTitle\extras\Making of.mkv to C:\Movies\.deletedByTMM\MovieTitle\extras\Making of.mkv 2018-05-05 12:15:24,839 INFO [tmmpool-rename-T1] org.tinymediamanager.core.Utils:697 - Successfully moved file from C:\Movies\MovieTitle\extras\Making of.mkv to C:\Movies\.deletedByTMM\MovieTitle\extras\Making of.mkv 2018-05-05 12:15:24,839 DEBUG [tmmpool-rename-T1] o.t.core.movie.MovieRenamer:589 - Deleting empty Directory C:\Movies\MovieTitle\extras` __Additional__ windows
label: 1.0
Removing or moving extras - __What TMM version are you using?__ 2.9.11 __release, pre-release, nightly, or directly from GitHub/branch?__ release __What is the actual behaviour?__ Rename/cleanup move extras to ".deletedByTMM" or move from ./MovieFolder/extras to ./MovieFolder/ I have empty FileName pattern in settings because i want left original file titles. My folder name pattern: $T { - $U }($Y). Replace non ASCII checked. __What is the expected behaviour?__ Just left as is extras folder. Just ignore it and treat all files as extras (.jpg, video, audio etc.). Or just add option to exclude pattern in names, extensions etc. __Steps to reproduce:__ create folder structure: /MovieFolder/extras add video to main directory and few files to extras. Scrap info from servers and try rename only folder with empty filename pattern. `2018-05-05 12:15:24,604 DEBUG [tmmpool-rename-T1] o.t.core.movie.MovieRenamer:582 - Deleting C:\Movies\MovieTitle\extras\Making of.mkv 2018-05-05 12:15:24,651 DEBUG [tmmpool-rename-T1] org.tinymediamanager.core.Utils:642 - try to move file C:\Movies\MovieTitle\extras\Making of.mkv to C:\Movies\.deletedByTMM\MovieTitle\extras\Making of.mkv 2018-05-05 12:15:24,839 INFO [tmmpool-rename-T1] org.tinymediamanager.core.Utils:697 - Successfully moved file from C:\Movies\MovieTitle\extras\Making of.mkv to C:\Movies\.deletedByTMM\MovieTitle\extras\Making of.mkv 2018-05-05 12:15:24,839 DEBUG [tmmpool-rename-T1] o.t.core.movie.MovieRenamer:589 - Deleting empty Directory C:\Movies\MovieTitle\extras` __Additional__ windows
index: process
removing or moving extras what tmm version are you using release pre release nightly or directly from github branch release what is the actual behaviour rename cleanup move extras to deletedbytmm or move from moviefolder extras to moviefolder i have empty filename pattern in settings because i want left original file titles my folder name pattern t u y replace non ascii checked what is the expected behaviour just left as is extras folder just ignore it and treat all files as extras jpg video audio etc or just add option to exclude pattern in names extensions etc steps to reproduce create folder structure moviefolder extras add video to main directory and few files to extras scrap info from servers and try rename only folder with empty filename pattern debug o t core movie movierenamer deleting c movies movietitle extras making of mkv debug org tinymediamanager core utils try to move file c movies movietitle extras making of mkv to c movies deletedbytmm movietitle extras making of mkv info org tinymediamanager core utils successfully moved file from c movies movietitle extras making of mkv to c movies deletedbytmm movietitle extras making of mkv debug o t core movie movierenamer deleting empty directory c movies movietitle extras additional windows
binary_label: 1
Unnamed: 0: 18,288
id: 24,387,412,330
type: IssuesEvent
created_at: 2022-10-04 12:53:40
repo: metabase/metabase
repo_url: https://api.github.com/repos/metabase/metabase
action: closed
title: Cannot drill-through "View these ..." when aggregated results are filtered
labels: Type:Bug Priority:P2 Querying/Processor .Reproduced
**Describe the bug** This setup ![image](https://user-images.githubusercontent.com/5941039/96227361-4e68ce80-0f94-11eb-97b8-ec857d8dd055.png) Throws this error ![image](https://user-images.githubusercontent.com/5941039/96227438-680a1600-0f94-11eb-8b9b-9f767af62e7e.png) When Viewing (drilling down to) Details: ![image](https://user-images.githubusercontent.com/5941039/96227613-a69fd080-0f94-11eb-971e-a0ce3da01423.png) **Logs** Please include javascript console and server logs around the time this bug occurred. For information about how to get these, consult our [bug troubleshooting guide](https://metabase.com/docs/latest/troubleshooting-guide/bugs.html) Value does not match schema: {:query {:filter (named [nil nil nil (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? "type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses") (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? "type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses") (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? "type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses") (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? 
"type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses") (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? "type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses")] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment")}} **To Reproduce** Steps to reproduce the behavior: 1. Create question as outlined above 2. Drill down with "View these" as shown above **Expected behavior** List of detail records If I take away filter in "Filter" section (see above) then it work.s **Screenshots** See above **Information about your Metabase Installation:** You can get this information by going to Admin -> Troubleshooting. - Your browser and the version: Chrome 86.0.4240.75 - Your operating system: Docker on Debian 10 - Your databases: SQL Server (for source) - Metabase version: v0.36.6 - Metabase hosting environment: Docker on Debian 10 - Metabase internal database: MySQL **Severity** How severe an issue is this bug to you? Is this annoying, blocking some users, blocking an upgrade or blocking your usage of Metabase entirely? Is a show stopper for this application scenario **Additional context** Add any other context about the problem here.
label: 1.0
Cannot drill-through "View these ..." when aggregated results are filtered - **Describe the bug** This setup ![image](https://user-images.githubusercontent.com/5941039/96227361-4e68ce80-0f94-11eb-97b8-ec857d8dd055.png) Throws this error ![image](https://user-images.githubusercontent.com/5941039/96227438-680a1600-0f94-11eb-8b9b-9f767af62e7e.png) When Viewing (drilling down to) Details: ![image](https://user-images.githubusercontent.com/5941039/96227613-a69fd080-0f94-11eb-971e-a0ce3da01423.png) **Logs** Please include javascript console and server logs around the time this bug occurred. For information about how to get these, consult our [bug troubleshooting guide](https://metabase.com/docs/latest/troubleshooting-guide/bugs.html) Value does not match schema: {:query {:filter (named [nil nil nil (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? "type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses") (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? "type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses") (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? 
"type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses") (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? "type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses") (named (named [nil (named [(named (not (= :value :aggregation)) :value) nil (not (present? "type-info"))] "field") nil] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment") "other-clauses")] "Must be a valid instance of one of these clauses: :and, :or, :not, :=, :!=, :<, :>, :<=, :>=, :between, :starts-with, :ends-with, :contains, :does-not-contain, :inside, :is-empty, :not-empty, :is-null, :not-null, :time-interval, :segment")}} **To Reproduce** Steps to reproduce the behavior: 1. Create question as outlined above 2. Drill down with "View these" as shown above **Expected behavior** List of detail records If I take away filter in "Filter" section (see above) then it work.s **Screenshots** See above **Information about your Metabase Installation:** You can get this information by going to Admin -> Troubleshooting. - Your browser and the version: Chrome 86.0.4240.75 - Your operating system: Docker on Debian 10 - Your databases: SQL Server (for source) - Metabase version: v0.36.6 - Metabase hosting environment: Docker on Debian 10 - Metabase internal database: MySQL **Severity** How severe an issue is this bug to you? 
Is this annoying, blocking some users, blocking an upgrade or blocking your usage of Metabase entirely? Is a show stopper for this application scenario **Additional context** Add any other context about the problem here.
index: process
cannot drill through view these when aggregated results are filtered describe the bug this setup throws this error when viewing drilling down to details logs please include javascript console and server logs around the time this bug occurred for information about how to get these consult our value does not match schema query filter named field nil must be a valid instance of one of these clauses and or not between starts with ends with contains does not contain inside is empty not empty is null not null time interval segment other clauses named named field nil must be a valid instance of one of these clauses and or not between starts with ends with contains does not contain inside is empty not empty is null not null time interval segment other clauses named named field nil must be a valid instance of one of these clauses and or not between starts with ends with contains does not contain inside is empty not empty is null not null time interval segment other clauses named named field nil must be a valid instance of one of these clauses and or not between starts with ends with contains does not contain inside is empty not empty is null not null time interval segment other clauses named named field nil must be a valid instance of one of these clauses and or not between starts with ends with contains does not contain inside is empty not empty is null not null time interval segment other clauses must be a valid instance of one of these clauses and or not between starts with ends with contains does not contain inside is empty not empty is null not null time interval segment to reproduce steps to reproduce the behavior create question as outlined above drill down with view these as shown above expected behavior list of detail records if i take away filter in filter section see above then it work s screenshots see above information about your metabase installation you can get this information by going to admin troubleshooting your browser and the version chrome your 
operating system docker on debian your databases sql server for source metabase version metabase hosting environment docker on debian metabase internal database mysql severity how severe an issue is this bug to you is this annoying blocking some users blocking an upgrade or blocking your usage of metabase entirely is a show stopper for this application scenario additional context add any other context about the problem here
binary_label: 1
Unnamed: 0: 54,668
id: 3,070,948,693
type: IssuesEvent
created_at: 2015-08-19 08:55:27
repo: pavel-pimenov/flylinkdc-r5xx
repo_url: https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
action: closed
title: The program hangs when there are many search threads
labels: bug imported Priority-High
body: _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on June 29, 2009 12:37:35_ Several times it has been observed that when starting a new search, specifically when contacting the hub, the program hangs completely, i.e. CPU load drops to 0 and all activity stops. Once the socket timeouts expire, new connections start to "drop off", but no other activity is observed; even the thread counter does not decrease. This misbehavior shows up only under heavy load, so it is extremely hard to catch; the number of threads in the program exceeds 200. Quite possibly the same bug is the cause of the hang with large download queues. ps: tested with an empty download queue on ~300 hubs, of which roughly half were working _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=14_
label: 1.0
text_combine: The program hangs when there are many search threads - _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on June 29, 2009 12:37:35_ Several times it has been observed that when starting a new search, specifically when contacting the hub, the program hangs completely, i.e. CPU load drops to 0 and all activity stops. Once the socket timeouts expire, new connections start to "drop off", but no other activity is observed; even the thread counter does not decrease. This misbehavior shows up only under heavy load, so it is extremely hard to catch; the number of threads in the program exceeds 200. Quite possibly the same bug is the cause of the hang with large download queues. ps: tested with an empty download queue on ~300 hubs, of which roughly half were working _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=14_
index: non_process
text: the program hangs when there are many search threads from on june several times it has been observed that when starting a new search specifically when contacting the hub the program hangs completely i e cpu load drops to and all activity stops once the socket timeouts expire new connections start to drop off but no other activity is observed even the thread counter does not decrease this misbehavior shows up only under heavy load so it is extremely hard to catch the number of threads in the program exceeds quite possibly the same bug is the cause of the hang with large download queues ps tested with an empty download queue on hubs of which roughly half were working original issue
binary_label: 0
Unnamed: 0: 140,235
id: 31,861,979,903
type: IssuesEvent
created_at: 2023-09-15 11:37:23
repo: kamilsk/dotfiles
repo_url: https://api.github.com/repos/kamilsk/dotfiles
action: closed
title: command: add books as the same as obsidian
labels: type: feature scope: code impact: medium effort: easy
body: **Motivation:** review disk usage, now I have 27G audiobooks. ```bash $ books # cd ~/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks # du -h * | sort -h ```
label: 1.0
text_combine: command: add books as the same as obsidian - **Motivation:** review disk usage, now I have 27G audiobooks. ```bash $ books # cd ~/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks # du -h * | sort -h ```
index: non_process
text: command add books as the same as obsidian motivation review disk usage now i have audiobooks bash books cd library containers com apple bkagentservice data documents ibooks du h sort h
binary_label: 0
Unnamed: 0: 17,149
id: 22,699,177,960
type: IssuesEvent
created_at: 2022-07-05 09:06:24
repo: anitsh/til
repo_url: https://api.github.com/repos/anitsh/til
action: opened
title: Information Technology Infrastructure Library (ITIL)
labels: wip process
The Information Technology Infrastructure Library (ITIL) is a set of detailed practices for IT activities such as [IT service management](https://en.wikipedia.org/wiki/IT_service_management) (ITSM) and [IT asset management](https://en.wikipedia.org/wiki/IT_asset_management) (ITAM) that focus on aligning IT services with the needs of business. ITIL describes processes, procedures, tasks, and checklists which are neither organization-specific nor technology-specific, but can be applied by an organization toward strategy, delivering value, and maintaining a minimum level of competency. It allows the organization to establish a baseline from which it can plan, implement, and measure. It is used to demonstrate compliance and to measure improvement. There is no formal independent third party compliance assessment available for ITIL compliance in an organization. Certification in ITIL is only available to individuals. ITIL® is a globally recognised best practice methodology for IT service management that is used all over the world by leading organisations. ITIL® ensures that their IT services are aligned to the needs of their business. ITIL® provides trusted guidance on how businesses can use their IT services to support their goals and facilitate business growth. ITIL® is backed by an official range of certifications that provide knowledge of service management best practice and guidance on how to apply this knowledge across the IT service lifecycle. The range of certifications starts with ITIL® Foundation, which provides entry-level knowledge of the principles of ITIL®, and progress to ITIL® Expert, further demonstrating knowledge of the whole IT service lifecycle. ITIL® is Structured into five core publications, that revolve around the service lifecycle. These provide best practice guidance for an integrated approach to IT service management. 
![image](https://user-images.githubusercontent.com/414141/177292243-b70141bd-1a83-4a02-bfbb-66ffc8c8a0cf.png) Service Strategy Understand how to make strategies of IT Service Lifecycle. It determines what services the IT organisation should offer and what capabilities are needed to be developed. The aim of Service Strategy is to make the organisations think and act in a strategic manner. Service Design Effectively design new IT services. It includes designing new services and changing and improving the existing ones. Service Transition Build and implement IT services. It ensures that changes to services and service management processes are carried out in an organised way. Service Operation Effective and efficient delivery of IT services. It includes fulfilling customer requests, resolving service disappointments, fixing problems, and carrying out routine operational tasks. Continual Service Improvement Continually improve the quality of IT services in line with the concept of continual service improvement, adopted in ISO 20000. Service Support was the practice of those disciplines that enabled IT Services to be provided effectively. Service Delivery covered the management of the IT services themselves. It involved a number of management practices to ensure that IT services were actually provided as agreed between the Service Provider and the Customer.
label: 1.0
Information Technology Infrastructure Library (ITIL) - The Information Technology Infrastructure Library (ITIL) is a set of detailed practices for IT activities such as [IT service management](https://en.wikipedia.org/wiki/IT_service_management) (ITSM) and [IT asset management](https://en.wikipedia.org/wiki/IT_asset_management) (ITAM) that focus on aligning IT services with the needs of business. ITIL describes processes, procedures, tasks, and checklists which are neither organization-specific nor technology-specific, but can be applied by an organization toward strategy, delivering value, and maintaining a minimum level of competency. It allows the organization to establish a baseline from which it can plan, implement, and measure. It is used to demonstrate compliance and to measure improvement. There is no formal independent third party compliance assessment available for ITIL compliance in an organization. Certification in ITIL is only available to individuals. ITIL® is a globally recognised best practice methodology for IT service management that is used all over the world by leading organisations. ITIL® ensures that their IT services are aligned to the needs of their business. ITIL® provides trusted guidance on how businesses can use their IT services to support their goals and facilitate business growth. ITIL® is backed by an official range of certifications that provide knowledge of service management best practice and guidance on how to apply this knowledge across the IT service lifecycle. The range of certifications starts with ITIL® Foundation, which provides entry-level knowledge of the principles of ITIL®, and progress to ITIL® Expert, further demonstrating knowledge of the whole IT service lifecycle. ITIL® is Structured into five core publications, that revolve around the service lifecycle. These provide best practice guidance for an integrated approach to IT service management. 
![image](https://user-images.githubusercontent.com/414141/177292243-b70141bd-1a83-4a02-bfbb-66ffc8c8a0cf.png) Service Strategy Understand how to make strategies of IT Service Lifecycle. It determines what services the IT organisation should offer and what capabilities are needed to be developed. The aim of Service Strategy is to make the organisations think and act in a strategic manner. Service Design Effectively design new IT services. It includes designing new services and changing and improving the existing ones. Service Transition Build and implement IT services. It ensures that changes to services and service management processes are carried out in an organised way. Service Operation Effective and efficient delivery of IT services. It includes fulfilling customer requests, resolving service disappointments, fixing problems, and carrying out routine operational tasks. Continual Service Improvement Continually improve the quality of IT services in line with the concept of continual service improvement, adopted in ISO 20000. Service Support was the practice of those disciplines that enabled IT Services to be provided effectively. Service Delivery covered the management of the IT services themselves. It involved a number of management practices to ensure that IT services were actually provided as agreed between the Service Provider and the Customer.
index: process
information technology infrastructure library itil the information technology infrastructure library itil is a set of detailed practices for it activities such as itsm and itam that focus on aligning it services with the needs of business itil describes processes procedures tasks and checklists which are neither organization specific nor technology specific but can be applied by an organization toward strategy delivering value and maintaining a minimum level of competency it allows the organization to establish a baseline from which it can plan implement and measure it is used to demonstrate compliance and to measure improvement there is no formal independent third party compliance assessment available for itil compliance in an organization certification in itil is only available to individuals itil® is a globally recognised best practice methodology for it service management that is used all over the world by leading organisations itil® ensures that their it services are aligned to the needs of their business itil® provides trusted guidance on how businesses can use their it services to support their goals and facilitate business growth itil® is backed by an official range of certifications that provide knowledge of service management best practice and guidance on how to apply this knowledge across the it service lifecycle the range of certifications starts with itil® foundation which provides entry level knowledge of the principles of itil® and progress to itil® expert further demonstrating knowledge of the whole it service lifecycle itil® is structured into five core publications that revolve around the service lifecycle these provide best practice guidance for an integrated approach to it service management service strategy understand how to make strategies of it service lifecycle it determines what services the it organisation should offer and what capabilities are needed to be developed the aim of service strategy is to make the organisations think and act in 
a strategic manner service design effectively design new it services it includes designing new services and changing and improving the existing ones service transition build and implement it services it ensures that changes to services and service management processes are carried out in an organised way service operation effective and efficient delivery of it services it includes fulfilling customer requests resolving service disappointments fixing problems and carrying out routine operational tasks continual service improvement continually improve the quality of it services in line with the concept of continual service improvement adopted in iso service support was the practice of those disciplines that enabled it services to be provided effectively service delivery covered the management of the it services themselves it involved a number of management practices to ensure that it services were actually provided as agreed between the service provider and the customer
binary_label: 1
Unnamed: 0: 789,142
id: 27,780,500,557
type: IssuesEvent
created_at: 2023-03-16 20:38:37
repo: SunnyDaye/github-issues-template
repo_url: https://api.github.com/repos/SunnyDaye/github-issues-template
action: opened
title: Site Terms link is not working
labels: bug Severity 1 Priority 1
body: **Describe the bug** The Site Terms link leads to a 404 page not found error. **To Reproduce** Steps to reproduce the behavior: 1. Hover over 'resources' 2. Click on 'Site Terms' **Expected behavior** The Site Terms link should lead to a page with the terms of each tour site. **Desktop (please complete the following information):** - OS: [Windows 11] - Browser [Chrome]
label: 1.0
text_combine: Site Terms link is not working - **Describe the bug** The Site Terms link leads to a 404 page not found error. **To Reproduce** Steps to reproduce the behavior: 1. Hover over 'resources' 2. Click on 'Site Terms' **Expected behavior** The Site Terms link should lead to a page with the terms of each tour site. **Desktop (please complete the following information):** - OS: [Windows 11] - Browser [Chrome]
index: non_process
text: site terms link is not working describe the bug the site terms link leads to a page not found error to reproduce steps to reproduce the behavior hover over resources click on site terms expected behavior the site terms link should lead to a page with the terms of each tour site desktop please complete the following information os browser
binary_label: 0
Unnamed: 0: 8,191
id: 11,391,981,909
type: IssuesEvent
created_at: 2020-01-30 00:50:29
repo: knative/serving
repo_url: https://api.github.com/repos/knative/serving
action: closed
title: Revisit TestScaleTo50 SLO after Istio 1.1
labels: area/networking area/test-and-release kind/feature kind/process
body: ## In what area(s)? <!-- Remove the '> ' to select --> /area networking /area test-and-release <!-- Other classifications: /kind process --> ## Describe the feature In https://github.com/knative/serving/issues/2850 we saw that our SLO can't be very high yet. This appears to be an issue with the service mesh in Istio 1.0.x. Since 1.1 will bring improvements, we should wait a little bit to switch to that, and increase the SLO. ## Related https://github.com/knative/serving/issues/2850
label: 1.0
text_combine: Revisit TestScaleTo50 SLO after Istio 1.1 - ## In what area(s)? <!-- Remove the '> ' to select --> /area networking /area test-and-release <!-- Other classifications: /kind process --> ## Describe the feature In https://github.com/knative/serving/issues/2850 we saw that our SLO can't be very high yet. This appears to be an issue with the service mesh in Istio 1.0.x. Since 1.1 will bring improvements, we should wait a little bit to switch to that, and increase the SLO. ## Related https://github.com/knative/serving/issues/2850
index: process
text: revisit slo after istio in what area s to select area networking area test and release other classifications kind process describe the feature in we saw that our slo can t be very high yet this appears to be an issue with the service mesh in istio x since will bring improvements we should wait a little bit to switch to that and increase the slo related
binary_label: 1
2,744
5,651,694,594
IssuesEvent
2017-04-08 07:14:03
facebook/osquery
https://api.github.com/repos/facebook/osquery
opened
Audit events can cause instability if a backlog_wait_time is not set to 1
Linux process auditing wishlist
As part of the osquery audit configuration, the audit setting for `backlog_wait_time` is set to 1. This is the clock delay applied when the audit backlog is filled. This queue may fill if lots of events are queue and the owner of the userland Netlink socket cannot dequeue fast enough. Example of a default configuration: ``` $ sudo auditctl -s enabled 0 failure 1 pid 0 rate_limit 0 backlog_limit 0 lost 37863 backlog 0 backlog_wait_time 60000 loginuid_immutable 0 unlocked ``` This configuration can lead to extreme latency and locking of the system. When `osqueryd` starts a safer configuration is applied: https://github.com/facebook/osquery/blob/master/osquery/events/linux/audit.cpp#L181 It's possible for this configuration to change during the execution of osquery. It would be great if osquery could proactively detect these configuration changes and respond accordingly (by changing them back to safe values).
1.0
Audit events can cause instability if a backlog_wait_time is not set to 1 - As part of the osquery audit configuration, the audit setting for `backlog_wait_time` is set to 1. This is the clock delay applied when the audit backlog is filled. This queue may fill if lots of events are queue and the owner of the userland Netlink socket cannot dequeue fast enough. Example of a default configuration: ``` $ sudo auditctl -s enabled 0 failure 1 pid 0 rate_limit 0 backlog_limit 0 lost 37863 backlog 0 backlog_wait_time 60000 loginuid_immutable 0 unlocked ``` This configuration can lead to extreme latency and locking of the system. When `osqueryd` starts a safer configuration is applied: https://github.com/facebook/osquery/blob/master/osquery/events/linux/audit.cpp#L181 It's possible for this configuration to change during the execution of osquery. It would be great if osquery could proactively detect these configuration changes and respond accordingly (by changing them back to safe values).
process
audit events can cause instability if a backlog wait time is not set to as part of the osquery audit configuration the audit setting for backlog wait time is set to this is the clock delay applied when the audit backlog is filled this queue may fill if lots of events are queue and the owner of the userland netlink socket cannot dequeue fast enough example of a default configuration sudo auditctl s enabled failure pid rate limit backlog limit lost backlog backlog wait time loginuid immutable unlocked this configuration can lead to extreme latency and locking of the system when osqueryd starts a safer configuration is applied it s possible for this configuration to change during the execution of osquery it would be great if osquery could proactively detect these configuration changes and respond accordingly by changing them back to safe values
1
12,463
14,937,186,442
IssuesEvent
2021-01-25 14:20:32
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Cutomer logo > Alignment issue
Bug P2 Participant manager Process: Fixed Process: Tested dev
1. Cutomer logo > Alignment issue 2. My Account circle icon text is not centralized [Note : Issue should be fixed in all screens] ![logo3](https://user-images.githubusercontent.com/71445210/103030083-188e2a80-4581-11eb-8989-d4eed1c96e16.png)
2.0
Cutomer logo > Alignment issue - 1. Cutomer logo > Alignment issue 2. My Account circle icon text is not centralized [Note : Issue should be fixed in all screens] ![logo3](https://user-images.githubusercontent.com/71445210/103030083-188e2a80-4581-11eb-8989-d4eed1c96e16.png)
process
cutomer logo alignment issue cutomer logo alignment issue my account circle icon text is not centralized
1
4,970
7,806,853,410
IssuesEvent
2018-06-11 15:09:32
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
Error while importing a dump
CONNECTION POOL PROTOCOL QUERY PROCESSOR bug
``` root@galera-n3:~# cat talkyoo_preview_dump.sql | mysql -uroot -pXXX -h 10.0.0.219 -P6033 talkyoo_preview ERROR 1231 (#4200) at line 101: Variable 'character_set_client' can't be set to the value of 'NULL' ```
1.0
Error while importing a dump - ``` root@galera-n3:~# cat talkyoo_preview_dump.sql | mysql -uroot -pXXX -h 10.0.0.219 -P6033 talkyoo_preview ERROR 1231 (#4200) at line 101: Variable 'character_set_client' can't be set to the value of 'NULL' ```
process
error while importing a dump root galera cat talkyoo preview dump sql mysql uroot pxxx h talkyoo preview error at line variable character set client can t be set to the value of null
1
323,092
23,933,305,637
IssuesEvent
2022-09-10 21:50:24
cloudflare/cloudflare-docs
https://api.github.com/repos/cloudflare/cloudflare-docs
opened
Clarify how to create an API token for running Wrangler in CI/CD
documentation content:edit
### Which Cloudflare product does this pertain to? Workers ### Existing documentation URL(s) https://developers.cloudflare.com/workers/wrangler/ci-cd/ ### Section that requires update Cloudflare API token ### What needs to change? It's not really clear to me (a) how and where to get an API token, and (b) what token permissions are sufficient for https://github.com/cloudflare/wrangler-action ### How should it change? 1. Provide a link to this page https://developers.cloudflare.com/api/tokens/create/ with some context 2. Either create an API Token template specifically for [cloudflare/wrangler-action](https://github.com/cloudflare/wrangler-action), similarly to the "Edit Cloudflare Workers" template. Or, walk through all the required token permissions when creating a custom token (perhaps, show a screenshot of the token permissions section). Or, just tell me to choose the "Edit Cloudflare Workers" template and I'm good to go. ### Additional information _No response_
1.0
Clarify how to create an API token for running Wrangler in CI/CD - ### Which Cloudflare product does this pertain to? Workers ### Existing documentation URL(s) https://developers.cloudflare.com/workers/wrangler/ci-cd/ ### Section that requires update Cloudflare API token ### What needs to change? It's not really clear to me (a) how and where to get an API token, and (b) what token permissions are sufficient for https://github.com/cloudflare/wrangler-action ### How should it change? 1. Provide a link to this page https://developers.cloudflare.com/api/tokens/create/ with some context 2. Either create an API Token template specifically for [cloudflare/wrangler-action](https://github.com/cloudflare/wrangler-action), similarly to the "Edit Cloudflare Workers" template. Or, walk through all the required token permissions when creating a custom token (perhaps, show a screenshot of the token permissions section). Or, just tell me to choose the "Edit Cloudflare Workers" template and I'm good to go. ### Additional information _No response_
non_process
clarify how to create an api token for running wrangler in ci cd which cloudflare product does this pertain to workers existing documentation url s section that requires update cloudflare api token what needs to change it s not really clear to me a how and where to get an api token and b what token permissions are sufficient for how should it change provide a link to this page with some context either create an api token template specifically for similarly to the edit cloudflare workers template or walk through all the required token permissions when creating a custom token perhaps show a screenshot of the token permissions section or just tell me to choose the edit cloudflare workers template and i m good to go additional information no response
0
6,463
9,546,597,581
IssuesEvent
2019-05-01 20:24:58
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Remove Go to Saved from Create Internship Opportunity
Apply Process Approved Opportunity Create Requirements Ready State Dept.
Who: Interns What: Viewing internship - should not see Saved internships section Why: Old design Remove the Saved internships section from the Create internship opportunity view. This was part of an old mock up and was removed. ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/f0aacb1e-32af-4600-a317-1af28939cf31) Mock: https://opm.invisionapp.com/share/ZEPNZR09Q54#/320303454_State_-_Opportunity_-Desktop-
1.0
Remove Go to Saved from Create Internship Opportunity - Who: Interns What: Viewing internship - should not see Saved internships section Why: Old design Remove the Saved internships section from the Create internship opportunity view. This was part of an old mock up and was removed. ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/f0aacb1e-32af-4600-a317-1af28939cf31) Mock: https://opm.invisionapp.com/share/ZEPNZR09Q54#/320303454_State_-_Opportunity_-Desktop-
process
remove go to saved from create internship opportunity who interns what viewing internship should not see saved internships section why old design remove the saved internships section from the create internship opportunity view this was part of an old mock up and was removed mock
1
3,699
6,726,708,854
IssuesEvent
2017-10-17 10:53:28
our-city-app/oca-backend
https://api.github.com/repos/our-city-app/oca-backend
reopened
Not possible to change from simple to advanced order in services in the city dashboard
process_duplicate
![image 2017-10-13 14 25 52](https://user-images.githubusercontent.com/26439611/31546149-6bc9691e-b022-11e7-8b44-7a99e18f02a6.jpg) ![image 2017-10-13 14 25 56](https://user-images.githubusercontent.com/26439611/31546152-6e0c7220-b022-11e7-8060-c449bf89d7f6.jpg)
1.0
Not possible to change from simple to advanced order in services in the city dashboard - ![image 2017-10-13 14 25 52](https://user-images.githubusercontent.com/26439611/31546149-6bc9691e-b022-11e7-8b44-7a99e18f02a6.jpg) ![image 2017-10-13 14 25 56](https://user-images.githubusercontent.com/26439611/31546152-6e0c7220-b022-11e7-8060-c449bf89d7f6.jpg)
process
not possible to change from simple to advanced order in services in the city dashboard
1
4,810
3,896,644,849
IssuesEvent
2016-04-16 00:02:25
lionheart/openradar-mirror
https://api.github.com/repos/lionheart/openradar-mirror
opened
16770719: Interface Builder sizes UIViews poorly when adding as subviews
classification:ui/usability reproducible:always status:open
#### Description This is a duplicate of rdar://8387508 Make a new .xib. Drag on a UIView as subview. Interface Builder makes the thing the full size, even if you drag it partially on the screen. I *never* want a view hanging off the side. I can’t manipulate it or work with it in any reasonable way there. If I did, I would manually adjust it back off the superview, probably using the ruler pane (or more likely code). If I drop a new view onto a superview, the initial bounds of the subview should be within the superview! - Product Version: iOS 7.1.1 Created: 2014-04-30T17:18:39.909244 Originated: 2014-04-30T13:18:00 Open Radar Link: http://www.openradar.me/16770719
True
16770719: Interface Builder sizes UIViews poorly when adding as subviews - #### Description This is a duplicate of rdar://8387508 Make a new .xib. Drag on a UIView as subview. Interface Builder makes the thing the full size, even if you drag it partially on the screen. I *never* want a view hanging off the side. I can’t manipulate it or work with it in any reasonable way there. If I did, I would manually adjust it back off the superview, probably using the ruler pane (or more likely code). If I drop a new view onto a superview, the initial bounds of the subview should be within the superview! - Product Version: iOS 7.1.1 Created: 2014-04-30T17:18:39.909244 Originated: 2014-04-30T13:18:00 Open Radar Link: http://www.openradar.me/16770719
non_process
interface builder sizes uiviews poorly when adding as subviews description this is a duplicate of rdar make a new xib drag on a uiview as subview interface builder makes the thing the full size even if you drag it partially on the screen i never want a view hanging off the side i can’t manipulate it or work with it in any reasonable way there if i did i would manually adjust it back off the superview probably using the ruler pane or more likely code if i drop a new view onto a superview the initial bounds of the subview should be within the superview product version ios created originated open radar link
0
616
3,083,115,928
IssuesEvent
2015-08-24 06:21:22
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Problem with chunking (DITA OT 2.x).
bug P2 preprocess
I uploaded a zip here: http://www.oxygenxml.com/forum/files/ug.zip If the usermanual DITA Map gets published to plain XHTML the "eppo-installation-linux" HTML file should contain inside it the content from the two subtopics but it no longer does that. The used chunk method is: <topicref href="topics/eppo-installation-linux.dita" chunk="select-branch to-content" navtitle="Linux Installation" locktitle="yes"> <topicref href="concepts/installation-installer-linux.dita" toc="no"/> <topicref href="concepts/installation-requirements-linux.dita" toc="no"/> </topicref>
1.0
Problem with chunking (DITA OT 2.x). - I uploaded a zip here: http://www.oxygenxml.com/forum/files/ug.zip If the usermanual DITA Map gets published to plain XHTML the "eppo-installation-linux" HTML file should contain inside it the content from the two subtopics but it no longer does that. The used chunk method is: <topicref href="topics/eppo-installation-linux.dita" chunk="select-branch to-content" navtitle="Linux Installation" locktitle="yes"> <topicref href="concepts/installation-installer-linux.dita" toc="no"/> <topicref href="concepts/installation-requirements-linux.dita" toc="no"/> </topicref>
process
problem with chunking dita ot x i uploaded a zip here if the usermanual dita map gets published to plain xhtml the eppo installation linux html file should contain inside it the content from the two subtopics but it no longer does that the used chunk method is
1
14,235
17,154,705,232
IssuesEvent
2021-07-14 04:29:07
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
opened
Provision to configure app-specific email addresses
Android P1 Participant datastore Process: Enhancement iOS
Add the ability to configure app-specific email addresses (based on App ID) for the following types of emails 1. Feedback, Contact Us 'To' addresses 2. The 'from' address for all outgoing emails to app users 3. The 'support email' displayed in outgoing mails. Outgoing mails refer to the Welcome/Registration, Forgot Password, Account Lock related emails sent to app users from the system.
1.0
Provision to configure app-specific email addresses - Add the ability to configure app-specific email addresses (based on App ID) for the following types of emails 1. Feedback, Contact Us 'To' addresses 2. The 'from' address for all outgoing emails to app users 3. The 'support email' displayed in outgoing mails. Outgoing mails refer to the Welcome/Registration, Forgot Password, Account Lock related emails sent to app users from the system.
process
provision to configure app specific email addresses add the ability to configure app specific email addresses based on app id for the following types of emails feedback contact us to addresses the from address for all outgoing emails to app users the support email displayed in outgoing mails outgoing mails refer to the welcome registration forgot password account lock related emails sent to app users from the system
1
1,196
3,697,266,488
IssuesEvent
2016-02-27 15:17:29
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
enforce_autocommit_on_reads
ADMIN CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR
## WHY Since 3449ab0f598db7bd537f1065983a0594927122d8 ( #438 ) ProxySQL tracks the value of autocommit and enforces it in database connection. This implementation had the drawback that if a client sends `set autocommit=0` , and read/write split is implemented too, transactions are opened on slaves as pointed in #469 , and therefore the feature was reverted. Now ProxySQL doesn't track anymore the value of autocommit, and this means that statements could be executed in autocommit mode when the client assumes that autocommit is OFF . This is problematic for application and libraries that do not explicitly starts transactions. See: - http://docs.sqlalchemy.org/en/latest/core/connections.html#understanding-autocommit - https://www.python.org/dev/peps/pep-0249/#commit ## WHAT * [ ] add a new variable `mysql-enforce_autocommit_on_reads` * [ ] if the variable is set to `false` (default) and a `SELECT` (not `FOR UPDATE`) is executed, the client value of `autocommit` is not enforced into the database connection * [ ] in all the other cases, the client value of `autocommit`enforced into the database connection
1.0
enforce_autocommit_on_reads - ## WHY Since 3449ab0f598db7bd537f1065983a0594927122d8 ( #438 ) ProxySQL tracks the value of autocommit and enforces it in database connection. This implementation had the drawback that if a client sends `set autocommit=0` , and read/write split is implemented too, transactions are opened on slaves as pointed in #469 , and therefore the feature was reverted. Now ProxySQL doesn't track anymore the value of autocommit, and this means that statements could be executed in autocommit mode when the client assumes that autocommit is OFF . This is problematic for application and libraries that do not explicitly starts transactions. See: - http://docs.sqlalchemy.org/en/latest/core/connections.html#understanding-autocommit - https://www.python.org/dev/peps/pep-0249/#commit ## WHAT * [ ] add a new variable `mysql-enforce_autocommit_on_reads` * [ ] if the variable is set to `false` (default) and a `SELECT` (not `FOR UPDATE`) is executed, the client value of `autocommit` is not enforced into the database connection * [ ] in all the other cases, the client value of `autocommit`enforced into the database connection
process
enforce autocommit on reads why since proxysql tracks the value of autocommit and enforces it in database connection this implementation had the drawback that if a client sends set autocommit and read write split is implemented too transactions are opened on slaves as pointed in and therefore the feature was reverted now proxysql doesn t track anymore the value of autocommit and this means that statements could be executed in autocommit mode when the client assumes that autocommit is off this is problematic for application and libraries that do not explicitly starts transactions see what add a new variable mysql enforce autocommit on reads if the variable is set to false default and a select not for update is executed the client value of autocommit is not enforced into the database connection in all the other cases the client value of autocommit enforced into the database connection
1
107,464
23,417,717,812
IssuesEvent
2022-08-13 07:37:54
creativecommons/commoners
https://api.github.com/repos/creativecommons/commoners
opened
[Bug] Site navigation broken after CCID login
🟧 priority: high 🚦 status: awaiting triage 🛠 goal: fix 💻 aspect: code
## Description After logging in to the CCGN site, one is redirected back to the CCGN site, and menu items don't lead to expected links. ## Reproduction 1. Visit: https://network.creativecommons.org/ [must not yet be logged in] 2. Click on "Members" in top navigation menu 3. Get redirected to https://login.creativecommons.org to log in (because Members is a protected area?) 4. Log in with a CCID (eg: I did with my nate@creativecommons.org CCID account) 5. Get redirected back to the CCGN home page, but now with a URL like https://network.creativecommons.org/?ticket=[long string of letters and numbers] (Note that one is not in the Members area that one first clicked on) 6. Click on "Members" in top navigation menu again 7. CCGN home page just reloads with same URL with ticket without going to https://network.creativecommons.org/members/ ## Expectation One should be able to travel from an unauthenticated state to log in and back to a URL that requires authorization seamlessly. ## Screenshots <img width="1409" alt="image" src="https://user-images.githubusercontent.com/997548/184473953-e9e06813-97d9-4649-a932-024b4584d572.png"> ## Environment - Device: MacBook Pro laptop - OS: MacOS Monterey 12.5 - Browser: both Brave Version 1.41.100 Chromium: 103.0.5060.134 (Official Build) (arm64) and Firefox 103.0.2 (64-bit) ## Additional context Could possibly be related to my somewhat unique creativecommons.org email address/account?
1.0
[Bug] Site navigation broken after CCID login - ## Description After logging in to the CCGN site, one is redirected back to the CCGN site, and menu items don't lead to expected links. ## Reproduction 1. Visit: https://network.creativecommons.org/ [must not yet be logged in] 2. Click on "Members" in top navigation menu 3. Get redirected to https://login.creativecommons.org to log in (because Members is a protected area?) 4. Log in with a CCID (eg: I did with my nate@creativecommons.org CCID account) 5. Get redirected back to the CCGN home page, but now with a URL like https://network.creativecommons.org/?ticket=[long string of letters and numbers] (Note that one is not in the Members area that one first clicked on) 6. Click on "Members" in top navigation menu again 7. CCGN home page just reloads with same URL with ticket without going to https://network.creativecommons.org/members/ ## Expectation One should be able to travel from an unauthenticated state to log in and back to a URL that requires authorization seamlessly. ## Screenshots <img width="1409" alt="image" src="https://user-images.githubusercontent.com/997548/184473953-e9e06813-97d9-4649-a932-024b4584d572.png"> ## Environment - Device: MacBook Pro laptop - OS: MacOS Monterey 12.5 - Browser: both Brave Version 1.41.100 Chromium: 103.0.5060.134 (Official Build) (arm64) and Firefox 103.0.2 (64-bit) ## Additional context Could possibly be related to my somewhat unique creativecommons.org email address/account?
non_process
site navigation broken after ccid login description after logging in to the ccgn site one is redirected back to the ccgn site and menu items don t lead to expected links reproduction visit click on members in top navigation menu get redirected to to log in because members is a protected area log in with a ccid eg i did with my nate creativecommons org ccid account get redirected back to the ccgn home page but now with a url like note that one is not in the members area that one first clicked on click on members in top navigation menu again ccgn home page just reloads with same url with ticket without going to expectation one should be able to travel from an unauthenticated state to log in and back to a url that requires authorization seamlessly screenshots img width alt image src environment device macbook pro laptop os macos monterey browser both brave version chromium official build and firefox bit additional context could possibly be related to my somewhat unique creativecommons org email address account
0
16,540
21,566,062,086
IssuesEvent
2022-05-01 21:48:11
fmnas/fmnas-site
https://api.github.com/repos/fmnas/fmnas-site
closed
Change application response text
form processor x-small (<1h)
remove "applications are reviewed Sunday through Thursday" and add: If you have not already done so, please submit a few photos of home (inside and outside, wherever your pets are allowed to go)/yard/fence/current pets, which can be emailed as .jpg attachments. Since we are unable to do pre-adoption home visits for our distance adopters, we rely on your photos to give us the best picture of the life your Forget Me Not pet will be living when they join your family.
1.0
Change application response text - remove "applications are reviewed Sunday through Thursday" and add: If you have not already done so, please submit a few photos of home (inside and outside, wherever your pets are allowed to go)/yard/fence/current pets, which can be emailed as .jpg attachments. Since we are unable to do pre-adoption home visits for our distance adopters, we rely on your photos to give us the best picture of the life your Forget Me Not pet will be living when they join your family.
process
change application response text remove applications are reviewed sunday through thursday and add if you have not already done so please submit a few photos of home inside and outside wherever your pets are allowed to go yard fence current pets which can be emailed as jpg attachments since we are unable to do pre adoption home visits for our distance adopters we rely on your photos to give us the best picture of the life your forget me not pet will be living when they join your family
1
16,381
21,104,130,243
IssuesEvent
2022-04-04 17:01:21
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Release 5.1 - March 2022
P1 type: process team-OSS
# Status of Bazel 5.1 - Expected release date: 2022-03-24 - [Release blockers milestone](https://github.com/bazelbuild/bazel/milestone/35) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 5.1, simply send a PR against the `release-5.1.0` branch. Task list: - [x] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit) - [x] Send for review the release announcement PR: https://github.com/bazelbuild/bazel-blog/pull/271 - [x] Push the release, notify package maintainers: - [x] Update the documentation - [x] Push the blog post - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Release 5.1 - March 2022 - # Status of Bazel 5.1 - Expected release date: 2022-03-24 - [Release blockers milestone](https://github.com/bazelbuild/bazel/milestone/35) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 5.1, simply send a PR against the `release-5.1.0` branch. Task list: - [x] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit) - [x] Send for review the release announcement PR: https://github.com/bazelbuild/bazel-blog/pull/271 - [x] Push the release, notify package maintainers: - [x] Update the documentation - [x] Push the blog post - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
release march status of bazel expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the
1
240,072
7,800,375,519
IssuesEvent
2018-06-09 08:39:13
tine20/Tine-2.0-Open-Source-Groupware-and-CRM
https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
closed
0008960: more than one list (group membership) filter breaks contact search
Addressbook Bug Mantis high priority
**Reported by pschuele on 26 Sep 2013 12:30** **Version:** Collin (2013.10.1~beta1) more than one list (group membership) filter breaks contact search **Steps to reproduce:** &quot;filters&quot;: [ { &quot;field&quot;: &quot;list&quot;, &quot;operator&quot;: &quot;equals&quot;, &quot;value&quot;: &quot;7112fba3e1d614025067281db5d6385ac03f4736&quot;, &quot;id&quot;: &quot;ext-record-271&quot; }, { &quot;field&quot;: &quot;list&quot;, &quot;operator&quot;: &quot;equals&quot;, &quot;value&quot;: &quot;a3f1a6612aa099d1be82cc11a0805a471784da59&quot;, &quot;id&quot;: &quot;ext-record-319&quot; } ], **Additional information:** You cannot define a correlation name &#039;members&#039; more than once .../library/Zend/Db/Select.php(345): Zend_Db_Select-&gt;_join() [internal function]: Zend_Db_Select-&gt;joinLeft() .../Tinebase/Backend/Sql/Filter/GroupSelect.php(58): call_user_func_array() [internal function]: Tinebase_Backend_Sql_Filter_GroupSelect-&gt;__call() [internal function]: Tinebase_Backend_Sql_Filter_GroupSelect-&gt;joinLeft() .../Tinebase/Backend/Sql/Filter/GroupSelect.php(58): call_user_func_array() [internal function]: Tinebase_Backend_Sql_Filter_GroupSelect-&gt;__call() [internal function]: Tinebase_Backend_Sql_Filter_GroupSelect-&gt;joinLeft() .../Tinebase/Backend/Sql/Filter/GroupSelect.php(58): call_user_func_array() .../Addressbook/Model/ListMemberFilter.php(41): Tinebase_Backend_Sql_Filter_GroupSelect-&gt;__call() .../Addressbook/Model/ListMemberFilter.php(41): Tinebase_Backend_Sql_Filter_GroupSelect-&gt;joinLeft() .../Tinebase/Backend/Sql/Filter/FilterGroup.php(46): Addressbook_Model_ListMemberFilter-&gt;appendFilterSql() .../Tinebase/Backend/Sql/Filter/FilterGroup.php(48): Tinebase_Backend_Sql_Filter_FilterGroup::appendFilters() .../Tinebase/Backend/Sql/Filter/FilterGroup.php(48): Tinebase_Backend_Sql_Filter_FilterGroup::appendFilters() .../Tinebase/Backend/Sql/Abstract.php(532): Tinebase_Backend_Sql_Filter_FilterGroup::appendFilters() .../Tinebase/Backend/Sql/Abstract.php(492): Tinebase_Backend_Sql_Abstract-&gt;_addFilter() .../Tinebase/Controller/Record/Abstract.php(192): Tinebase_Backend_Sql_Abstract-&gt;search() .../Tinebase/Frontend/Json/Abstract.php(244): Tinebase_Controller_Record_Abstract-&gt;search() .../Addressbook/Frontend/Json.php(62): Tinebase_Frontend_Json_Abstract-&gt;_search() [internal function]: Addressbook_Frontend_Json-&gt;searchContacts()
1.0
0008960: more than one list (group membership) filter breaks contact search - **Reported by pschuele on 26 Sep 2013 12:30** **Version:** Collin (2013.10.1~beta1) more than one list (group membership) filter breaks contact search **Steps to reproduce:** &quot;filters&quot;: [ { &quot;field&quot;: &quot;list&quot;, &quot;operator&quot;: &quot;equals&quot;, &quot;value&quot;: &quot;7112fba3e1d614025067281db5d6385ac03f4736&quot;, &quot;id&quot;: &quot;ext-record-271&quot; }, { &quot;field&quot;: &quot;list&quot;, &quot;operator&quot;: &quot;equals&quot;, &quot;value&quot;: &quot;a3f1a6612aa099d1be82cc11a0805a471784da59&quot;, &quot;id&quot;: &quot;ext-record-319&quot; } ], **Additional information:** You cannot define a correlation name &#039;members&#039; more than once .../library/Zend/Db/Select.php(345): Zend_Db_Select-&gt;_join() [internal function]: Zend_Db_Select-&gt;joinLeft() .../Tinebase/Backend/Sql/Filter/GroupSelect.php(58): call_user_func_array() [internal function]: Tinebase_Backend_Sql_Filter_GroupSelect-&gt;__call() [internal function]: Tinebase_Backend_Sql_Filter_GroupSelect-&gt;joinLeft() .../Tinebase/Backend/Sql/Filter/GroupSelect.php(58): call_user_func_array() [internal function]: Tinebase_Backend_Sql_Filter_GroupSelect-&gt;__call() [internal function]: Tinebase_Backend_Sql_Filter_GroupSelect-&gt;joinLeft() .../Tinebase/Backend/Sql/Filter/GroupSelect.php(58): call_user_func_array() .../Addressbook/Model/ListMemberFilter.php(41): Tinebase_Backend_Sql_Filter_GroupSelect-&gt;__call() .../Addressbook/Model/ListMemberFilter.php(41): Tinebase_Backend_Sql_Filter_GroupSelect-&gt;joinLeft() .../Tinebase/Backend/Sql/Filter/FilterGroup.php(46): Addressbook_Model_ListMemberFilter-&gt;appendFilterSql() .../Tinebase/Backend/Sql/Filter/FilterGroup.php(48): Tinebase_Backend_Sql_Filter_FilterGroup::appendFilters() .../Tinebase/Backend/Sql/Filter/FilterGroup.php(48): Tinebase_Backend_Sql_Filter_FilterGroup::appendFilters() .../Tinebase/Backend/Sql/Abstract.php(532): Tinebase_Backend_Sql_Filter_FilterGroup::appendFilters() .../Tinebase/Backend/Sql/Abstract.php(492): Tinebase_Backend_Sql_Abstract-&gt;_addFilter() .../Tinebase/Controller/Record/Abstract.php(192): Tinebase_Backend_Sql_Abstract-&gt;search() .../Tinebase/Frontend/Json/Abstract.php(244): Tinebase_Controller_Record_Abstract-&gt;search() .../Addressbook/Frontend/Json.php(62): Tinebase_Frontend_Json_Abstract-&gt;_search() [internal function]: Addressbook_Frontend_Json-&gt;searchContacts()
non_process
more than one list group membership filter breaks contact search reported by pschuele on sep version collin more than one list group membership filter breaks contact search steps to reproduce quot filters quot quot field quot quot list quot quot operator quot quot equals quot quot value quot quot quot quot id quot quot ext record quot quot field quot quot list quot quot operator quot quot equals quot quot value quot quot quot quot id quot quot ext record quot additional information you cannot define a correlation name members more than once library zend db select php zend db select gt join zend db select gt joinleft tinebase backend sql filter groupselect php call user func array tinebase backend sql filter groupselect gt call tinebase backend sql filter groupselect gt joinleft tinebase backend sql filter groupselect php call user func array tinebase backend sql filter groupselect gt call tinebase backend sql filter groupselect gt joinleft tinebase backend sql filter groupselect php call user func array addressbook model listmemberfilter php tinebase backend sql filter groupselect gt call addressbook model listmemberfilter php tinebase backend sql filter groupselect gt joinleft tinebase backend sql filter filtergroup php addressbook model listmemberfilter gt appendfiltersql tinebase backend sql filter filtergroup php tinebase backend sql filter filtergroup appendfilters tinebase backend sql filter filtergroup php tinebase backend sql filter filtergroup appendfilters tinebase backend sql abstract php tinebase backend sql filter filtergroup appendfilters tinebase backend sql abstract php tinebase backend sql abstract gt addfilter tinebase controller record abstract php tinebase backend sql abstract gt search tinebase frontend json abstract php tinebase controller record abstract gt search addressbook frontend json php tinebase frontend json abstract gt search addressbook frontend json gt searchcontacts
0
5,605
8,468,001,841
IssuesEvent
2018-10-23 18:30:05
icra/ecam
https://api.github.com/repos/icra/ecam
closed
Add missing descriptions
in process
Please see the table below. I am still waiting for the French translation. Variable | English | Spanish | French | Thai -- | -- | -- | -- | -- fsc_cont_emp | Fraction of produced faecal sludge that is emptied from containments during the assessment period. If only partial emptying is done it should be reflected in the fraction. | Fracción del lodo fecal producido que es vaciado de los contenedores durante el periodo de evaluación |   | สัดส่วนจำนวนครั้งของการสูบสิ่งปฏิกูลจากวัสดุกักเก็บตลอดระยะเวลาเก็บข้อมูล fsc_fslu_emp | Volume of faecal sludge emptied from the containment | Volumen de lodo fecal vaciado de los contenedores |   | ปริมาณสิ่งปฏิกูลที่สูบออกจากวัสดุกักเก็บ fsc_vol_trck   fst_vol_trck   sr_vol_trck | Volume of fuel consumed (Trucks) | Volumen de consumo de combustible (camiones) | Volume de carburant consommé (Camions) | ปริมาณเชื้อเพลิงที่ใช้กับการขนส่ง fsc_trck_typ   fst_trck_typ   fsr_trck_typ | Fuel type (Trucks) | Tipo de combustible (Camiones) | Type de carburant (Camions) | ชนิดเชื้อเพลิงที่ใช้กับการขนส่ง fst_biog_pro | Biogas produced during the assessment period by each faecal sludge treatment plant managed by the undertaking | Biogás producido durante el periodo de evaluación para cado uno de los tratamiento del lodo fecal manejados por la empresa |   | ปริมาณก๊าซชีวภาพที่ผลิตได้จากระบบบำบัดสิ่งปฏิกูล ในช่วงระยะเวลาเก็บข้อมูล fst_biog_val | Biogas valorized in the treatment plant, for example to heat the digesters or the building and/or to run a Co-generator to generate heat and electricity | Biogás valorizado in la planta de tratamiento, por ejemplo, para el calentamiento de los digestores o la construcción y/ó funcionamiento de  un cogenerador para la generación de calor y energía |   | ปริมาณก๊าซชีวภาพที่นำไปใช้ประโยชน์ภายในระบบบำบัด เช่น นำไปให้ความร้อน กับถังย่อยสลาย หรือภายในอาคาร และ/หรือ นำไปผลิตกระแสไฟฟ้า หรือพลังงานความร้อนในเครื่องผลิตกระแสไฟฟ้าร่วม fst_biog_fla | Biogas flared refers to the biogas that is combusted by flare gas 
systems without electricity or heat valorisation | Biogás quemado se refiere al biogás que es quemado en una antorcha quemadora, sin la valorización de electricidad o calor. |   | ปริมาณก๊าซชีวภาพที่นำไปเผาทิ้ง หรือ ก๊าซชีวภาพที่นำไปเผาทิ้งในระบบเตาเผา โดยไม่มีการนำไปใช้ประโยชน์ เพื่อการผลิตความร้อน หรือกระแสไฟฟ้า fst_KPI_GHG | Total GHG Faecal sludge Treatment | GEI totales de la gestión de lodos fecales |   | ผลรวมปริมาณก๊าซเรือนกระจกจากระบบบำบัดสิ่งปฏิกูล fst_SL_GHG_avoided | GHG emissions avoided due to biogas valorization | Emisiones de GEI evitados debido a la valorización del biogás |   | ปริมาณก๊าซชีวภาพที่สามารถหลีกเลี่ยงการปลดปล่อยได้จากการนำมาใช้ประโยชน์ Landfilling |   |   |   fsr_mass_landfil | Dry weight sent to landfill | Lodo fecal seco enviado al relleno sanitario |   | น้ำหนักแห้งที่ส่งไปยังหลุมฝังกลบ fsr_fslu_typ_lf | Type of (faecal) sludge disposed |   |   | ชนิดของกากตะกอนที่นำไปกำจัด fsr_disp_typ | Type of the landfilling | Tipo de lodo fecal desechado |   | ชนิดของการฝังกลบ fsr_KPI_GHG_landfil_ch4 | Amount of CO2,eq emissions due to CH4  emission from (faecal) sludge applied to landfill | Cantidad de CO2 eq debido a las emisiones de CH4 de los lodos (fecales) depositados en rellenos sanitarios |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ CH4 ที่มาจากกระบวนการฝังกลบ fsr_KPI_GHG_landfil_n2o | Amount of CO2,eq emissions due to N2O  emission from (faecal) sludge applied to landfill | Cantidad de CO2 eq debido a las emisiones de N2O de los lodos (fecales) depositados en rellenos sanitarios |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ N2O ที่มาจากกระบวนการฝังกลบ fsr_KPI_GHG_landfil | Total GHG from (faecal) sludge sent to landfilling | GEI totales del envió de los  lodos fecales a rellenos sanitarios |   | ผลรวมปริมาณก๊าซเรือนกระจกจากการนำสิ่งปฏิกูลไปกำจัดด้วยวิธีฝังกลบ fsr_ghg_avoided_landfil | Amount of CO2,eq emissions avoided from carbon sequestration of landfilling | Emisiones de GEI evitadas debido al secuestro de carbono en el relleno sanitario |   | 
ปริมาณก๊าซ CO2 เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยได้จากการสะสมคาร์บอนในการฝังกลบ Land application |   |   |   fsr_mass_landapp | Amount of (faecal) sludge that is sent to land application (dry weight) | Lodo fecal seco enviado para la aplicación en el suelo |   | ปริมาณกากตะกอนที่นำไปใช้ปรับปรุงดิน (น้ำหนักแห้ง) fsr_fslu_typ_la | Type of (faecal) sludge sent to land application | Tipo de lodo fecal aplicado en el suelo |   | ชนิดของกากตะกอนที่นำไปใช้ปรับปรุงดิน fsr_soil_typ | Soil typology the sludge is applied on. Note: if you don't know the soil typology, leave it as 'Fine-textured' | Tipología del suelo donde el lodo es aplicado. Nota: si no se conoce la tipología del suelo, deje el tipo 'Fine-textured' |   | ลักษณะของดินในพื้นที่ ที่นำกากตะกอนมาปรับปรุงดิน  กรณีไม่ทราบให้เลือกเป็นลักษณะดินละเอียด fsr_KPI_GHG_landapp | Amount of CO2,eq emissions due to N2O  emission from faecal sludge sent to land application | Cantidad de CO2 eq debido a las emisiones de N2O de los lodos (fecales) aplicados en el suelo |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ N2O ที่เกิดจากการปรับปรุงดิน Dumping |   |   |   fsr_vol_dumping | The volume of faecal sludge dumped | Volumen de lodo fecal vertido |   | ปริมาณกากตะกอนที่ทิ้งไม่ถูกสุขลักษณะ fsr_KPI_GHG_dumping_ch4 | Amount of CO2,eq emissions due to CH4  emission from (faecal) sludge dumped | Cantidad de CO2 eq debido a las emisiones de CH4 de los lodos (fecales) vertidos |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ CH4 ที่เกิดจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ fsr_KPI_GHG_dumping_n2o | Amount of CO2,eq emissions due to N2O  emission from (faecal) sludge dumped | Cantidad de CO2 eq debido a las emisiones de N2O de los lodos (fecales) vertidos |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ N2O ที่เกิดจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ fsr_KPI_GHG_dumping | Total GHG missions due to (faecal) sludge dumping | Emisiones totales de GEI debido al vertimiento de lodos fecales |   | ผลรวมปริมาณก๊าซเรือนกระจกจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ Urine application 
|   |   |   fsr_N_urine | Total Nitrogen in urine applied to land | Nitrógeno total en la orina aplicada en suelos |   | ปริมาณไนโตรเจนทั้งหมด ในปัสสาวะที่นำไปใช้เพื่อการปรับปรุงดิน fsr_KPI_GHG_urine | Amount of CO2,eq emissions due to N2O  emission from land application of urine | Cantidad de CO2 eq debido a las emisiones de N2O de los lodos (fecales) vertidos de la orina aplicada en suelos |   | ปริมาณก๊าซ CO2  เทียบเท่า จากก๊าซ N2O ที่เกิดจากการนำปัสสาวะไปปรับปรุงดิน Reusing nutrients |   |   |   fsr_reused_N | Amount of total Nitrogen reused that displaces synthetic fertilizer | Nitrógeno total reusado desplazando fertilizantes sintéticos |   | ปริมาณไนโตรเจนทั้งหมด ที่ถูกนำกลับมาใช้ทดแทนปุ๋ยเคมี fsr_reused_P | Amount of total Phosphorus reused that displaces synthetic fertilizer | Fósforo total reusado desplazando fertilizantes sintéticos |   | ปริมาณฟอสฟอรัสทั้งหมด ที่ถูกนำกลับมาใช้ทดแทนปุ๋ยเคมี fsr_ghg_avoided_reuse_N | Amount of CO2,eq emissions avoided due to Nitrogen reuse | Cantidad de CO2 eq evitadas debido al reúso de nitrógeno |   | ปริมาณก๊าซ CO2 เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำไนโตรเจนกลับมาใช้ fsr_ghg_avoided_reuse_P | Amount of CO2,eq emissions avoided due to Phosphorus reuse | Cantidad de CO2 eq evitadas debido al reúso de fósforo |   | ปริมาณก๊าซ CO2 เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำฟอสฟอรัสกลับมาใช้ fsr_ghg_avoided_reuse | Amount of CO2,eq emissions avoided due to nutrients reused displacing synthetic fertilizer | Cantidad de CO2 eq evitadas debido al reúso de nutrientes desplazando fertilizantes sintéticos |   | ผลรวมปริมาณก๊าซ CO2เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำสารอาหารกลับมาใช้ทดแทนปุ๋ยเคมี
1.0
Add missing descriptions - Please see the table below. I am still waiting for the French translation. Variable | English | Spanish | French | Thai -- | -- | -- | -- | -- fsc_cont_emp | Fraction of produced faecal sludge that is emptied from containments during the assessment period. If only partial emptying is done it should be reflected in the fraction. | Fracción del lodo fecal producido que es vaciado de los contenedores durante el periodo de evaluación |   | สัดส่วนจำนวนครั้งของการสูบสิ่งปฏิกูลจากวัสดุกักเก็บตลอดระยะเวลาเก็บข้อมูล fsc_fslu_emp | Volume of faecal sludge emptied from the containment | Volumen de lodo fecal vaciado de los contenedores |   | ปริมาณสิ่งปฏิกูลที่สูบออกจากวัสดุกักเก็บ fsc_vol_trck   fst_vol_trck   sr_vol_trck | Volume of fuel consumed (Trucks) | Volumen de consumo de combustible (camiones) | Volume de carburant consommé (Camions) | ปริมาณเชื้อเพลิงที่ใช้กับการขนส่ง fsc_trck_typ   fst_trck_typ   fsr_trck_typ | Fuel type (Trucks) | Tipo de combustible (Camiones) | Type de carburant (Camions) | ชนิดเชื้อเพลิงที่ใช้กับการขนส่ง fst_biog_pro | Biogas produced during the assessment period by each faecal sludge treatment plant managed by the undertaking | Biogás producido durante el periodo de evaluación para cado uno de los tratamiento del lodo fecal manejados por la empresa |   | ปริมาณก๊าซชีวภาพที่ผลิตได้จากระบบบำบัดสิ่งปฏิกูล ในช่วงระยะเวลาเก็บข้อมูล fst_biog_val | Biogas valorized in the treatment plant, for example to heat the digesters or the building and/or to run a Co-generator to generate heat and electricity | Biogás valorizado in la planta de tratamiento, por ejemplo, para el calentamiento de los digestores o la construcción y/ó funcionamiento de  un cogenerador para la generación de calor y energía |   | ปริมาณก๊าซชีวภาพที่นำไปใช้ประโยชน์ภายในระบบบำบัด เช่น นำไปให้ความร้อน กับถังย่อยสลาย หรือภายในอาคาร และ/หรือ นำไปผลิตกระแสไฟฟ้า หรือพลังงานความร้อนในเครื่องผลิตกระแสไฟฟ้าร่วม fst_biog_fla | Biogas flared refers to the biogas that 
is combusted by flare gas systems without electricity or heat valorisation | Biogás quemado se refiere al biogás que es quemado en una antorcha quemadora, sin la valorización de electricidad o calor. |   | ปริมาณก๊าซชีวภาพที่นำไปเผาทิ้ง หรือ ก๊าซชีวภาพที่นำไปเผาทิ้งในระบบเตาเผา โดยไม่มีการนำไปใช้ประโยชน์ เพื่อการผลิตความร้อน หรือกระแสไฟฟ้า fst_KPI_GHG | Total GHG Faecal sludge Treatment | GEI totales de la gestión de lodos fecales |   | ผลรวมปริมาณก๊าซเรือนกระจกจากระบบบำบัดสิ่งปฏิกูล fst_SL_GHG_avoided | GHG emissions avoided due to biogas valorization | Emisiones de GEI evitados debido a la valorización del biogás |   | ปริมาณก๊าซชีวภาพที่สามารถหลีกเลี่ยงการปลดปล่อยได้จากการนำมาใช้ประโยชน์ Landfilling |   |   |   fsr_mass_landfil | Dry weight sent to landfill | Lodo fecal seco enviado al relleno sanitario |   | น้ำหนักแห้งที่ส่งไปยังหลุมฝังกลบ fsr_fslu_typ_lf | Type of (faecal) sludge disposed |   |   | ชนิดของกากตะกอนที่นำไปกำจัด fsr_disp_typ | Type of the landfilling | Tipo de lodo fecal desechado |   | ชนิดของการฝังกลบ fsr_KPI_GHG_landfil_ch4 | Amount of CO2,eq emissions due to CH4  emission from (faecal) sludge applied to landfill | Cantidad de CO2 eq debido a las emisiones de CH4 de los lodos (fecales) depositados en rellenos sanitarios |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ CH4 ที่มาจากกระบวนการฝังกลบ fsr_KPI_GHG_landfil_n2o | Amount of CO2,eq emissions due to N2O  emission from (faecal) sludge applied to landfill | Cantidad de CO2 eq debido a las emisiones de N2O de los lodos (fecales) depositados en rellenos sanitarios |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ N2O ที่มาจากกระบวนการฝังกลบ fsr_KPI_GHG_landfil | Total GHG from (faecal) sludge sent to landfilling | GEI totales del envió de los  lodos fecales a rellenos sanitarios |   | ผลรวมปริมาณก๊าซเรือนกระจกจากการนำสิ่งปฏิกูลไปกำจัดด้วยวิธีฝังกลบ fsr_ghg_avoided_landfil | Amount of CO2,eq emissions avoided from carbon sequestration of landfilling | Emisiones de GEI evitadas debido al secuestro de carbono en el 
relleno sanitario |   | ปริมาณก๊าซ CO2 เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยได้จากการสะสมคาร์บอนในการฝังกลบ Land application |   |   |   fsr_mass_landapp | Amount of (faecal) sludge that is sent to land application (dry weight) | Lodo fecal seco enviado para la aplicación en el suelo |   | ปริมาณกากตะกอนที่นำไปใช้ปรับปรุงดิน (น้ำหนักแห้ง) fsr_fslu_typ_la | Type of (faecal) sludge sent to land application | Tipo de lodo fecal aplicado en el suelo |   | ชนิดของกากตะกอนที่นำไปใช้ปรับปรุงดิน fsr_soil_typ | Soil typology the sludge is applied on. Note: if you don't know the soil typology, leave it as 'Fine-textured' | Tipología del suelo donde el lodo es aplicado. Nota: si no se conoce la tipología del suelo, deje el tipo 'Fine-textured' |   | ลักษณะของดินในพื้นที่ ที่นำกากตะกอนมาปรับปรุงดิน  กรณีไม่ทราบให้เลือกเป็นลักษณะดินละเอียด fsr_KPI_GHG_landapp | Amount of CO2,eq emissions due to N2O  emission from faecal sludge sent to land application | Cantidad de CO2 eq debido a las emisiones de N2O de los lodos (fecales) aplicados en el suelo |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ N2O ที่เกิดจากการปรับปรุงดิน Dumping |   |   |   fsr_vol_dumping | The volume of faecal sludge dumped | Volumen de lodo fecal vertido |   | ปริมาณกากตะกอนที่ทิ้งไม่ถูกสุขลักษณะ fsr_KPI_GHG_dumping_ch4 | Amount of CO2,eq emissions due to CH4  emission from (faecal) sludge dumped | Cantidad de CO2 eq debido a las emisiones de CH4 de los lodos (fecales) vertidos |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ CH4 ที่เกิดจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ fsr_KPI_GHG_dumping_n2o | Amount of CO2,eq emissions due to N2O  emission from (faecal) sludge dumped | Cantidad de CO2 eq debido a las emisiones de N2O de los lodos (fecales) vertidos |   | ปริมาณก๊าซ CO2 เทียบเท่า จากก๊าซ N2O ที่เกิดจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ fsr_KPI_GHG_dumping | Total GHG missions due to (faecal) sludge dumping | Emisiones totales de GEI debido al vertimiento de lodos fecales |   | 
ผลรวมปริมาณก๊าซเรือนกระจกจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ Urine application |   |   |   fsr_N_urine | Total Nitrogen in urine applied to land | Nitrógeno total en la orina aplicada en suelos |   | ปริมาณไนโตรเจนทั้งหมด ในปัสสาวะที่นำไปใช้เพื่อการปรับปรุงดิน fsr_KPI_GHG_urine | Amount of CO2,eq emissions due to N2O  emission from land application of urine | Cantidad de CO2 eq debido a las emisiones de N2O de los lodos (fecales) vertidos de la orina aplicada en suelos |   | ปริมาณก๊าซ CO2  เทียบเท่า จากก๊าซ N2O ที่เกิดจากการนำปัสสาวะไปปรับปรุงดิน Reusing nutrients |   |   |   fsr_reused_N | Amount of total Nitrogen reused that displaces synthetic fertilizer | Nitrógeno total reusado desplazando fertilizantes sintéticos |   | ปริมาณไนโตรเจนทั้งหมด ที่ถูกนำกลับมาใช้ทดแทนปุ๋ยเคมี fsr_reused_P | Amount of total Phosphorus reused that displaces synthetic fertilizer | Fósforo total reusado desplazando fertilizantes sintéticos |   | ปริมาณฟอสฟอรัสทั้งหมด ที่ถูกนำกลับมาใช้ทดแทนปุ๋ยเคมี fsr_ghg_avoided_reuse_N | Amount of CO2,eq emissions avoided due to Nitrogen reuse | Cantidad de CO2 eq evitadas debido al reúso de nitrógeno |   | ปริมาณก๊าซ CO2 เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำไนโตรเจนกลับมาใช้ fsr_ghg_avoided_reuse_P | Amount of CO2,eq emissions avoided due to Phosphorus reuse | Cantidad de CO2 eq evitadas debido al reúso de fósforo |   | ปริมาณก๊าซ CO2 เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำฟอสฟอรัสกลับมาใช้ fsr_ghg_avoided_reuse | Amount of CO2,eq emissions avoided due to nutrients reused displacing synthetic fertilizer | Cantidad de CO2 eq evitadas debido al reúso de nutrientes desplazando fertilizantes sintéticos |   | ผลรวมปริมาณก๊าซ CO2เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำสารอาหารกลับมาใช้ทดแทนปุ๋ยเคมี
process
add missing descriptions please see the table below i am still waiting for the french translation variable english spanish french thai fsc cont emp fraction of produced faecal sludge that is emptied from containments during the assessment period if only partial emptying is done it should be reflected in the fraction fracción del lodo fecal producido que es vaciado de los contenedores durante el periodo de evaluación   สัดส่วนจำนวนครั้งของการสูบสิ่งปฏิกูลจากวัสดุกักเก็บตลอดระยะเวลาเก็บข้อมูล fsc fslu emp volume of faecal sludge emptied from the containment volumen de lodo fecal vaciado de los contenedores   ปริมาณสิ่งปฏิกูลที่สูบออกจากวัสดุกักเก็บ fsc vol trck   fst vol trck   sr vol trck volume of fuel consumed trucks volumen de consumo de combustible camiones volume de carburant consommé camions ปริมาณเชื้อเพลิงที่ใช้กับการขนส่ง fsc trck typ   fst trck typ   fsr trck typ fuel type trucks tipo de combustible camiones type de carburant camions ชนิดเชื้อเพลิงที่ใช้กับการขนส่ง fst biog pro biogas produced during the assessment period by each faecal sludge treatment plant managed by the undertaking biogás producido durante el periodo de evaluación para cado uno de los tratamiento del lodo fecal manejados por la empresa   ปริมาณก๊าซชีวภาพที่ผลิตได้จากระบบบำบัดสิ่งปฏิกูล ในช่วงระยะเวลาเก็บข้อมูล fst biog val biogas valorized in the treatment plant for example to heat the digesters or the building and or to run a co generator to generate heat and electricity biogás valorizado in la planta de tratamiento por ejemplo para el calentamiento de los digestores o la construcción y ó funcionamiento de  un cogenerador para la generación de calor y energía   ปริมาณก๊าซชีวภาพที่นำไปใช้ประโยชน์ภายในระบบบำบัด เช่น นำไปให้ความร้อน กับถังย่อยสลาย หรือภายในอาคาร และ หรือ นำไปผลิตกระแสไฟฟ้า หรือพลังงานความร้อนในเครื่องผลิตกระแสไฟฟ้าร่วม fst biog fla biogas flared refers to the biogas that is combusted by flare gas systems without electricity or heat valorisation biogás quemado se refiere 
al biogás que es quemado en una antorcha quemadora sin la valorización de electricidad o calor   ปริมาณก๊าซชีวภาพที่นำไปเผาทิ้ง หรือ ก๊าซชีวภาพที่นำไปเผาทิ้งในระบบเตาเผา โดยไม่มีการนำไปใช้ประโยชน์ เพื่อการผลิตความร้อน หรือกระแสไฟฟ้า fst kpi ghg total ghg faecal sludge treatment gei totales de la gestión de lodos fecales   ผลรวมปริมาณก๊าซเรือนกระจกจากระบบบำบัดสิ่งปฏิกูล fst sl ghg avoided ghg emissions avoided due to biogas valorization emisiones de gei evitados debido a la valorización del biogás   ปริมาณก๊าซชีวภาพที่สามารถหลีกเลี่ยงการปลดปล่อยได้จากการนำมาใช้ประโยชน์ landfilling       fsr mass landfil dry weight sent to landfill lodo fecal seco enviado al relleno sanitario   น้ำหนักแห้งที่ส่งไปยังหลุมฝังกลบ fsr fslu typ lf type of faecal sludge disposed     ชนิดของกากตะกอนที่นำไปกำจัด fsr disp typ type of the landfilling tipo de lodo fecal desechado   ชนิดของการฝังกลบ fsr kpi ghg landfil amount of eq emissions due to  emission from faecal sludge applied to landfill cantidad de eq debido a las emisiones de de los lodos fecales depositados en rellenos sanitarios   ปริมาณก๊าซ เทียบเท่า จากก๊าซ ที่มาจากกระบวนการฝังกลบ fsr kpi ghg landfil amount of eq emissions due to   emission from faecal sludge applied to landfill cantidad de eq debido a las emisiones de de los lodos fecales depositados en rellenos sanitarios   ปริมาณก๊าซ เทียบเท่า จากก๊าซ ที่มาจากกระบวนการฝังกลบ fsr kpi ghg landfil total ghg from faecal sludge sent to landfilling gei totales del envió de los  lodos fecales a rellenos sanitarios   ผลรวมปริมาณก๊าซเรือนกระจกจากการนำสิ่งปฏิกูลไปกำจัดด้วยวิธีฝังกลบ fsr ghg avoided landfil amount of eq emissions avoided from carbon sequestration of landfilling emisiones de gei evitadas debido al secuestro de carbono en el relleno sanitario   ปริมาณก๊าซ เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยได้จากการสะสมคาร์บอนในการฝังกลบ land application       fsr mass landapp amount of faecal sludge that is sent to land application dry weight lodo fecal seco enviado para la aplicación en 
el suelo   ปริมาณกากตะกอนที่นำไปใช้ปรับปรุงดิน น้ำหนักแห้ง fsr fslu typ la type of faecal sludge sent to land application tipo de lodo fecal aplicado en el suelo   ชนิดของกากตะกอนที่นำไปใช้ปรับปรุงดิน fsr soil typ soil typology the sludge is applied on note if you don t know the soil typology leave it as fine textured tipología del suelo donde el lodo es aplicado nota si no se conoce la tipología del suelo deje el tipo fine textured   ลักษณะของดินในพื้นที่ ที่นำกากตะกอนมาปรับปรุงดิน  กรณีไม่ทราบให้เลือกเป็นลักษณะดินละเอียด fsr kpi ghg landapp amount of eq emissions due to   emission from faecal sludge sent to land application cantidad de eq debido a las emisiones de de los lodos fecales aplicados en el suelo   ปริมาณก๊าซ เทียบเท่า จากก๊าซ ที่เกิดจากการปรับปรุงดิน dumping       fsr vol dumping the volume of faecal sludge dumped volumen de lodo fecal vertido   ปริมาณกากตะกอนที่ทิ้งไม่ถูกสุขลักษณะ fsr kpi ghg dumping amount of eq emissions due to   emission from faecal sludge dumped cantidad de eq debido a las emisiones de de los lodos fecales vertidos   ปริมาณก๊าซ เทียบเท่า จากก๊าซ ที่เกิดจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ fsr kpi ghg dumping amount of eq emissions due to   emission from faecal sludge dumped cantidad de eq debido a las emisiones de de los lodos fecales vertidos   ปริมาณก๊าซ เทียบเท่า จากก๊าซ ที่เกิดจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ fsr kpi ghg dumping total ghg missions due to faecal sludge dumping emisiones totales de gei debido al vertimiento de lodos fecales   ผลรวมปริมาณก๊าซเรือนกระจกจากการทิ้งกากตะกอนไม่ถูกสุขลักษณะ urine application       fsr n urine total nitrogen in urine applied to land nitrógeno total en la orina aplicada en suelos   ปริมาณไนโตรเจนทั้งหมด ในปัสสาวะที่นำไปใช้เพื่อการปรับปรุงดิน fsr kpi ghg urine amount of eq emissions due to   emission from land application of urine cantidad de eq debido a las emisiones de de los lodos fecales vertidos de la orina aplicada en suelos   ปริมาณก๊าซ   เทียบเท่า จากก๊าซ 
ที่เกิดจากการนำปัสสาวะไปปรับปรุงดิน reusing nutrients       fsr reused n amount of total nitrogen reused that displaces synthetic fertilizer nitrógeno total reusado desplazando fertilizantes sintéticos   ปริมาณไนโตรเจนทั้งหมด ที่ถูกนำกลับมาใช้ทดแทนปุ๋ยเคมี fsr reused p amount of total phosphorus reused that displaces synthetic fertilizer fósforo total reusado desplazando fertilizantes sintéticos   ปริมาณฟอสฟอรัสทั้งหมด ที่ถูกนำกลับมาใช้ทดแทนปุ๋ยเคมี fsr ghg avoided reuse n amount of eq emissions avoided due to nitrogen reuse cantidad de eq evitadas debido al reúso de nitrógeno   ปริมาณก๊าซ เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำไนโตรเจนกลับมาใช้ fsr ghg avoided reuse p amount of eq emissions avoided due to phosphorus reuse cantidad de eq evitadas debido al reúso de fósforo   ปริมาณก๊าซ เทียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำฟอสฟอรัสกลับมาใช้ fsr ghg avoided reuse amount of eq emissions avoided due to nutrients reused displacing synthetic fertilizer cantidad de eq evitadas debido al reúso de nutrientes desplazando fertilizantes sintéticos   ผลรวมปริมาณก๊าซ ียบเท่า ที่สามารถหลีกเลี่ยงการปล่อยจากการนำสารอาหารกลับมาใช้ทดแทนปุ๋ยเคมี
1
394,813
11,648,983,359
IssuesEvent
2020-03-01 23:44:22
ayumi-cloud/oc-security-module
https://api.github.com/repos/ayumi-cloud/oc-security-module
opened
Add Reddit bot to whitelist rules
Add to Whitelist Firewall Priority: Medium enhancement in-progress
### Enhancement idea - [ ] Add Reddit bot to whitelist rules.
1.0
Add Reddit bot to whitelist rules - ### Enhancement idea - [ ] Add Reddit bot to whitelist rules.
non_process
add reddit bot to whitelist rules enhancement idea add reddit bot to whitelist rules
0
22,392
19,277,408,480
IssuesEvent
2021-12-10 13:30:39
MPMG-DCC-UFMG/C01
https://api.github.com/repos/MPMG-DCC-UFMG/C01
closed
Ajustar a ordenação da listagem de coletores
usabilidade Desenvolvimento
## Comportamento Esperado Os coletores com instâncias em execução ou modificados recentimente ficam no topo da listagem ## Comportamento Atual Os coletores são ordenados apenas por ordem de criação, então se um coletor mais antigo estiver sendo utilizado, é preciso fazer um scroll e encontrar ele na lista, que pode ser longa. ## Passos para reproduzir o erro 1. Ter vários coletores cadastrados 2. Trabalhar com um coletor antigo
True
Ajustar a ordenação da listagem de coletores - ## Comportamento Esperado Os coletores com instâncias em execução ou modificados recentimente ficam no topo da listagem ## Comportamento Atual Os coletores são ordenados apenas por ordem de criação, então se um coletor mais antigo estiver sendo utilizado, é preciso fazer um scroll e encontrar ele na lista, que pode ser longa. ## Passos para reproduzir o erro 1. Ter vários coletores cadastrados 2. Trabalhar com um coletor antigo
non_process
ajustar a ordenação da listagem de coletores comportamento esperado os coletores com instâncias em execução ou modificados recentimente ficam no topo da listagem comportamento atual os coletores são ordenados apenas por ordem de criação então se um coletor mais antigo estiver sendo utilizado é preciso fazer um scroll e encontrar ele na lista que pode ser longa passos para reproduzir o erro ter vários coletores cadastrados trabalhar com um coletor antigo
0
285
2,725,124,832
IssuesEvent
2015-04-14 21:44:52
neuropoly/spinalcordtoolbox
https://api.github.com/repos/neuropoly/spinalcordtoolbox
opened
compute_csa should receive instance 'param' as a parameter in order to be callable from another script
priority: medium sct_process_segmentation
In order to call the function from a different script the instance 'param' should always been put as an argument. Original 'param' will then be applied in every script that is being called.
1.0
compute_csa should receive instance 'param' as a parameter in order to be callable from another script - In order to call the function from a different script the instance 'param' should always been put as an argument. Original 'param' will then be applied in every script that is being called.
process
compute csa should receive instance param as a parameter in order to be callable from another script in order to call the function from a different script the instance param should always been put as an argument original param will then be applied in every script that is being called
1
368,279
25,785,471,166
IssuesEvent
2022-12-09 19:56:26
ray-project/kuberay
https://api.github.com/repos/ray-project/kuberay
closed
[HelmChart] No documentation how to use customized image (for development) in HelmChart instruction
bug documentation P1
### Search before asking - [X] I searched the [issues](https://github.com/ray-project/kuberay/issues) and found no similar issues. ### KubeRay Component ray-operator ### What happened + What you expected to happen I have built my own image in my own docker hub, but not able to find instruction where to update the helm chart to use the image. ### Reproduction script N/A ### Anything else _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR!
1.0
[HelmChart] No documentation how to use customized image (for development) in HelmChart instruction - ### Search before asking - [X] I searched the [issues](https://github.com/ray-project/kuberay/issues) and found no similar issues. ### KubeRay Component ray-operator ### What happened + What you expected to happen I have built my own image in my own docker hub, but not able to find instruction where to update the helm chart to use the image. ### Reproduction script N/A ### Anything else _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR!
non_process
no documentation how to use customized image for development in helmchart instruction search before asking i searched the and found no similar issues kuberay component ray operator what happened what you expected to happen i have built my own image in my own docker hub but not able to find instruction where to update the helm chart to use the image reproduction script n a anything else no response are you willing to submit a pr yes i am willing to submit a pr
0
396,968
11,716,523,131
IssuesEvent
2020-03-09 15:46:16
lukeparser/lukeparser
https://api.github.com/repos/lukeparser/lukeparser
opened
Command to list arguments of another command
Area: View Priority: Normal Type: Feature
For example: ``` \listargs[ # blub ] ``` shall list all arguments that can be provided for a section.
1.0
Command to list arguments of another command - For example: ``` \listargs[ # blub ] ``` shall list all arguments that can be provided for a section.
non_process
command to list arguments of another command for example listargs blub shall list all arguments that can be provided for a section
0
3,600
6,630,924,434
IssuesEvent
2017-09-25 03:34:31
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
Issue of temporary table in call procedure
ADMIN CONNECTION POOL enhancement QUERY PROCESSOR
When calling a procedure from php, the procedure creates a temporary table that is used in another line of code to display information (same connection). But it seems that between two line of code the temporary table doesn't exist anymore (probably not the same connection in the back-end for the two lines of code, even if it's the same connection in the front-end). I recently installed this tool for a migration and scaling purposes, so I don't really master this tools. Thanks your for telling me if something can be done to fix this issue.
1.0
Issue of temporary table in call procedure - When calling a procedure from php, the procedure creates a temporary table that is used in another line of code to display information (same connection). But it seems that between two line of code the temporary table doesn't exist anymore (probably not the same connection in the back-end for the two lines of code, even if it's the same connection in the front-end). I recently installed this tool for a migration and scaling purposes, so I don't really master this tools. Thanks your for telling me if something can be done to fix this issue.
process
issue of temporary table in call procedure when calling a procedure from php the procedure creates a temporary table that is used in another line of code to display information same connection but it seems that between two line of code the temporary table doesn t exist anymore probably not the same connection in the back end for the two lines of code even if it s the same connection in the front end i recently installed this tool for a migration and scaling purposes so i don t really master this tools thanks your for telling me if something can be done to fix this issue
1
16,464
21,390,659,125
IssuesEvent
2022-04-21 06:42:36
zammad/zammad
https://api.github.com/repos/zammad/zammad
closed
parsing incoming mails breaks on this line "Reply-To: <>"
enhancement verified prioritised by payment mail processing
<!-- Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓 Since november 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21 Please post: - Feature requests - Development questions - Technical questions on the board -> https://community.zammad.org ! If you think you hit a bug, please continue: - Search existing issues and the CHANGELOG.md for your issue - there might be a solution already - Make sure to use the latest version of Zammad if possible - Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it! - Please write the issue in english - Don't remove the template - otherwise we will close the issue without further comments - Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted). 
* The upper textblock will be removed automatically when you submit your issue * --> ### Infos: * Used Zammad version: 4.1.x * Installation method (source, package, ..): package * Operating system: CentOS 7 * Database + version: * Elasticsearch version: * Browser + version: none, it's an cli issue ### Expected behavior: * parsing incoming mails should not break if the mail contains this line: "Reply-To: <>" ### Actual behavior: * it breaks with this error message ``` zammad run rails r 'Channel::EmailParser.process_unprocessable_mails' "ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/83e5bf7c5a1a640f057cbf0b668f40e0.eml, please create an issue at https://github.com/zammad/zammad/issues" "ERROR: #<Exceptions::UnprocessableEntity: Invalid email '@local'>" /opt/zammad/app/models/channel/email_parser.rb:135:in `rescue in process': #<Exceptions::UnprocessableEntity: Invalid email '@local'> (RuntimeError) /opt/zammad/app/models/user.rb:927:in `check_email' ... ``` Once I removed the Reply-To line it works. Yes I'm sure this is a bug and no feature request or a general question.
1.0
parsing incoming mails breaks on this line "Reply-To: <>" - <!-- Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓 Since november 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21 Please post: - Feature requests - Development questions - Technical questions on the board -> https://community.zammad.org ! If you think you hit a bug, please continue: - Search existing issues and the CHANGELOG.md for your issue - there might be a solution already - Make sure to use the latest version of Zammad if possible - Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it! - Please write the issue in english - Don't remove the template - otherwise we will close the issue without further comments - Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted). 
* The upper textblock will be removed automatically when you submit your issue * --> ### Infos: * Used Zammad version: 4.1.x * Installation method (source, package, ..): package * Operating system: CentOS 7 * Database + version: * Elasticsearch version: * Browser + version: none, it's an cli issue ### Expected behavior: * parsing incoming mails should not break if the mail contains this line: "Reply-To: <>" ### Actual behavior: * it breaks with this error message ``` zammad run rails r 'Channel::EmailParser.process_unprocessable_mails' "ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/83e5bf7c5a1a640f057cbf0b668f40e0.eml, please create an issue at https://github.com/zammad/zammad/issues" "ERROR: #<Exceptions::UnprocessableEntity: Invalid email '@local'>" /opt/zammad/app/models/channel/email_parser.rb:135:in `rescue in process': #<Exceptions::UnprocessableEntity: Invalid email '@local'> (RuntimeError) /opt/zammad/app/models/user.rb:927:in `check_email' ... ``` Once I removed the Reply-To line it works. Yes I'm sure this is a bug and no feature request or a general question.
process
parsing incoming mails breaks on this line reply to hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version x installation method source package package operating system centos database version elasticsearch version browser version none it s an cli issue expected behavior parsing incoming mails should not break if the mail contains this line reply to actual behavior it breaks with this error message zammad run rails r channel emailparser process unprocessable mails error can t process email you will find it for bug reporting under opt zammad tmp unprocessable mail eml please create an issue at error opt zammad app models channel email parser rb in rescue in process runtimeerror opt zammad app models user rb in check email once i removed the reply to line it works yes i m sure this is a bug and no feature request or a general question
1
14,537
17,643,826,562
IssuesEvent
2021-08-20 01:00:17
open-telemetry/opentelemetry-collector
https://api.github.com/repos/open-telemetry/opentelemetry-collector
closed
Merge include/exclude logic from span package
area:processor
With https://github.com/open-telemetry/opentelemetry-collector/pull/537 the config for include and exclude are under object. This issue tracks having the Matcher object(or some logical name) that checks both the include and exclude properties for the processors. It would remove functions like https://github.com/open-telemetry/opentelemetry-collector/blob/328ad6d22b37f02e8079d0e896cfc3e188263b61/processor/attributesprocessor/attributes.go#L203
1.0
Merge include/exclude logic from span package - With https://github.com/open-telemetry/opentelemetry-collector/pull/537 the config for include and exclude are under object. This issue tracks having the Matcher object(or some logical name) that checks both the include and exclude properties for the processors. It would remove functions like https://github.com/open-telemetry/opentelemetry-collector/blob/328ad6d22b37f02e8079d0e896cfc3e188263b61/processor/attributesprocessor/attributes.go#L203
process
merge include exclude logic from span package with the config for include and exclude are under object this issue tracks having the matcher object or some logical name that checks both the include and exclude properties for the processors it would remove functions like
1
17,629
23,445,647,301
IssuesEvent
2022-08-15 19:19:52
apache/arrow-datafusion
https://api.github.com/repos/apache/arrow-datafusion
closed
Remove outdated license text left over from arrow repo
enhancement development-process
I was reviewing code and noticed that https://github.com/apache/arrow-datafusion/blob/master/LICENSE.txt was a copy/paste from https://github.com/apache/arrow/blob/master/LICENSE.txt when we split out this repo. Thus it contains many incorrect and irrelevant references
1.0
Remove outdated license text left over from arrow repo - I was reviewing code and noticed that https://github.com/apache/arrow-datafusion/blob/master/LICENSE.txt was a copy/paste from https://github.com/apache/arrow/blob/master/LICENSE.txt when we split out this repo. Thus it contains many incorrect and irrelevant references
process
remove outdated license text left over from arrow repo i was reviewing code and noticed that was a copy paste from when we split out this repo thus it contains many incorrect and irrelevant references
1
161,832
12,577,233,118
IssuesEvent
2020-06-09 09:12:37
WoWManiaUK/Redemption
https://api.github.com/repos/WoWManiaUK/Redemption
closed
Can't Filter on Cloth Chest Pieces in AH
Fix - Tester Confirmed
**Links:** **What is Happening:** If you filter on Cloth hands, feet or other items it works. But if you filter on chestpieces you get all cloth items. **What Should happen:** If you filter on cloth chest pieces, it should be only be showing that.
1.0
Can't Filter on Cloth Chest Pieces in AH - **Links:** **What is Happening:** If you filter on Cloth hands, feet or other items it works. But if you filter on chestpieces you get all cloth items. **What Should happen:** If you filter on cloth chest pieces, it should be only be showing that.
non_process
can t filter on cloth chest pieces in ah links what is happening if you filter on cloth hands feet or other items it works but if you filter on chestpieces you get all cloth items what should happen if you filter on cloth chest pieces it should be only be showing that
0
11,493
14,366,720,973
IssuesEvent
2020-12-01 05:08:52
carp-lang/Carp
https://api.github.com/repos/carp-lang/Carp
opened
Release scripts broken
bug process
Seems like the release scripts broke (I noticed this when trying to add one for Windows... now none of them work). Seems to be a mixture of Stack problems and some changes to Github actions?
1.0
Release scripts broken - Seems like the release scripts broke (I noticed this when trying to add one for Windows... now none of them work). Seems to be a mixture of Stack problems and some changes to Github actions?
process
release scripts broken seems like the release scripts broke i noticed this when trying to add one for windows now none of them work seems to be a mixture of stack problems and some changes to github actions
1
204,689
15,527,911,412
IssuesEvent
2021-03-13 08:27:28
eclipse/che
https://api.github.com/repos/eclipse/che
closed
Nightly Eclipse Che E2E devfile tests are failing on creation of workspace
area/qe e2e-test/failure kind/bug severity/P1
### Describe the bug Nightly Eclipse Che E2E devfile tests are failing on creation of workspace on minikube: https://codeready-workspaces-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/basic-MultiUser-Che-check-e2e-tests-against-k8s/3231/console ``` 14:12:03 + docker run --shm-size=1g --net=host --ipc=host -p 5920:5920 -e TS_SELENIUM_BASE_URL=https://che-eclipse-che.10.0.101.42.nip.io -e TS_SELENIUM_LOG_LEVEL=DEBUG -e TS_SELENIUM_MULTIUSER=true -e TS_SELENIUM_USERNAME=admin -e TS_SELENIUM_PASSWORD=admin -e TEST_SUITE=test-all-devfiles -e NODE_TLS_REJECT_UNAUTHORIZED=0 -v /mnt/hudson_workspace/workspace/basic-MultiUser-Che-check-e2e-tests-against-k8s/tests/e2e:/tmp/e2e:Z quay.io/eclipse/che-e2e:nightly 14:12:03 WARNING: Published ports are discarded when using host network mode 14:12:04 + '[' -z https://che-eclipse-che.10.0.101.42.nip.io ']' 14:12:04 + '[' -z test-all-devfiles ']' 14:12:04 + export DISPLAY=:20 14:12:04 + DISPLAY=:20 14:12:04 14:12:04 ####################### 14:12:04 14:12:04 For remote debug connect to the VNC server 0.0.0.0:5920 14:12:04 14:12:04 ####################### 14:12:04 14:12:04 + Xvfb :20 -screen 0 1920x1080x24 14:12:04 + echo '' 14:12:04 + echo '#######################' 14:12:04 + echo '' 14:12:04 + echo 'For remote debug connect to the VNC server 0.0.0.0:5920' 14:12:04 + echo '' 14:12:04 + echo '#######################' 14:12:04 + echo '' 14:12:04 + x11vnc -display :20 -N -forever 14:12:04 + export TS_SELENIUM_REMOTE_DRIVER_URL=http://localhost:4444/wd/hub 14:12:04 + TS_SELENIUM_REMOTE_DRIVER_URL=http://localhost:4444/wd/hub 14:12:04 + expectedStatus=200 14:12:04 + currentTry=1 14:12:04 + maximumAttempts=5 14:12:04 + /usr/bin/supervisord --configuration /etc/supervisord.conf 14:12:04 ++ curl -s -o /dev/null -w '%{http_code}' --fail http://localhost:4444/wd/hub/status 14:12:04 Wait selenium server availability ... 
14:12:04 + '[' 000 '!=' 200 ']' 14:12:04 + (( currentTry > maximumAttempts )) 14:12:04 + echo 'Wait selenium server availability ...' 14:12:04 + curentTry=1 14:12:04 + sleep 1 14:12:05 ++ curl -s -o /dev/null -w '%{http_code}' --fail http://localhost:4444/wd/hub/status 14:12:05 Wait selenium server availability ... 14:12:05 + '[' 000 '!=' 200 ']' 14:12:05 + (( currentTry > maximumAttempts )) 14:12:05 + echo 'Wait selenium server availability ...' 14:12:05 + curentTry=2 14:12:05 + sleep 1 14:12:05 ++ curl -s -o /dev/null -w '%{http_code}' --fail http://localhost:4444/wd/hub/status 14:12:05 + '[' 000 '!=' 200 ']' 14:12:05 + (( currentTry > maximumAttempts )) 14:12:05 + echo 'Wait selenium server availability ...' 14:12:05 + curentTry=3 14:12:05 + sleep 1 14:12:05 Wait selenium server availability ... 14:12:07 ++ curl -s -o /dev/null -w '%{http_code}' --fail http://localhost:4444/wd/hub/status 14:12:07 + '[' 200 '!=' 200 ']' 14:12:07 + mount 14:12:07 + grep e2e 14:12:07 /dev/vda1 on /tmp/e2e type xfs (rw,relatime,seclabel,attr2,inode64,noquota) 14:12:07 + echo 'The local code is mounted. Executing local code.' 14:12:07 + cd /tmp/e2e 14:12:07 The local code is mounted. Executing local code. 14:12:07 + npm install 14:12:12 14:12:12 > chromedriver@80.0.1 install /tmp/e2e/node_modules/chromedriver 14:12:12 > node install.js 14:12:12 14:12:12 ChromeDriver binary exists. Validating... 14:12:12 ChromeDriver is already available at '/tmp/80.0.3987.16/chromedriver/chromedriver'. 14:12:12 Copying to target path /tmp/e2e/node_modules/chromedriver/lib/chromedriver 14:12:12 Fixing file permissions 14:12:12 Done. ChromeDriver binary available at /tmp/e2e/node_modules/chromedriver/lib/chromedriver/chromedriver 14:12:13 npm WARN e2e@1.0.0 No repository field. 
14:12:13 npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@2.1.2 (node_modules/fsevents): 14:12:13 npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@2.1.2: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) 14:12:13 14:12:13 added 223 packages from 282 contributors and audited 224 packages in 4.897s 14:12:13 14:12:13 18 packages are looking for funding 14:12:13 run `npm fund` for details 14:12:13 14:12:13 found 1 high severity vulnerability 14:12:13 run `npm audit fix` to fix them, or `npm audit` for details 14:12:13 + '[' test-all-devfiles == load-test ']' 14:12:13 + SCREEN_RECORDING=true 14:12:13 + '[' true == true ']' 14:12:13 + echo 'Starting ffmpeg recording...' 14:12:13 + mkdir -p /tmp/ffmpeg_report 14:12:13 Starting ffmpeg recording... 14:12:13 + ffmpeg_pid=196 14:12:13 + trap kill_ffmpeg 2 15 14:12:13 + echo 'Running TEST_SUITE: test-all-devfiles with user: admin' 14:12:13 + npm run test-all-devfiles 14:12:13 + nohup ffmpeg -y -video_size 1920x1080 -framerate 24 -f x11grab -i :20.0 /tmp/ffmpeg_report/output.mp4 14:12:13 Running TEST_SUITE: test-all-devfiles with user: admin 14:12:14 14:12:14 > e2e@1.0.0 test-all-devfiles /tmp/e2e 14:12:14 > ./generateIndex.sh && npm run lint && npm run tsc && mocha --opts mocha-all-devfiles.opts 14:12:14 14:12:14 Generating index.ts file... 14:12:15 14:12:15 > e2e@1.0.0 lint /tmp/e2e 14:12:15 > tslint --fix -p . 14:12:15 14:12:17 14:12:17 Could not find implementations for the following rules specified in the configuration: 14:12:17 label-undefined 14:12:17 no-constructor-vars 14:12:17 no-duplicate-key 14:12:17 no-trailing-comma 14:12:17 no-unreachable 14:12:17 Try upgrading TSLint and/or ensuring that you have all necessary custom rules installed. 14:12:17 If TSLint was recently upgraded, you may have old rules configured which need to be cleaned up. 
14:12:17 14:12:18 The 'no-string-literal' rule threw an error in '/tmp/e2e/utils/requestHandlers/CheApiRequestHandler.ts': 14:12:18 TypeError: ts.unescapeIdentifier is not a function 14:12:18 at cb (/tmp/e2e/node_modules/tslint/lib/rules/noStringLiteralRule.js:54:39) 14:12:18 at visitNode (/tmp/e2e/node_modules/typescript/lib/typescript.js:16505:24) 14:12:18 at Object.forEachChild (/tmp/e2e/node_modules/typescript/lib/typescript.js:16720:24) 14:12:18 at cb (/tmp/e2e/node_modules/tslint/lib/rules/noStringLiteralRule.js:60:19) 14:12:18 at visitNode (/tmp/e2e/node_modules/typescript/lib/typescript.js:16505:24) 14:12:18 at Object.forEachChild (/tmp/e2e/node_modules/typescript/lib/typescript.js:16751:24) 14:12:18 at cb (/tmp/e2e/node_modules/tslint/lib/rules/noStringLiteralRule.js:60:19) 14:12:18 at visitNodes (/tmp/e2e/node_modules/typescript/lib/typescript.js:16514:30) 14:12:18 at Object.forEachChild (/tmp/e2e/node_modules/typescript/lib/typescript.js:16740:24) 14:12:18 at cb (/tmp/e2e/node_modules/tslint/lib/rules/noStringLiteralRule.js:60:19) 14:12:21 14:12:21 > e2e@1.0.0 tsc /tmp/e2e 14:12:21 > tsc -p . 14:12:21 14:12:27 (node:283) DeprecationWarning: Configuration via mocha.opts is DEPRECATED and will be removed from a future version of Mocha. Use RC files or package.json instead. 
14:12:27 14:12:27 ################## Launch Information ################## 14:12:27 14:12:27 TS_SELENIUM_BASE_URL: https://che-eclipse-che.10.0.101.42.nip.io 14:12:27 TS_SELENIUM_HEADLESS: false 14:12:27 14:12:27 TS_SELENIUM_USERNAME: admin 14:12:27 TS_SELENIUM_PASSWORD: admin 14:12:27 14:12:27 TS_SELENIUM_HAPPY_PATH_WORKSPACE_NAME: petclinic-dev-environment 14:12:27 TS_SELENIUM_DELAY_BETWEEN_SCREENSHOTS: 1000 14:12:27 TS_SELENIUM_REPORT_FOLDER: ./report 14:12:27 TS_SELENIUM_EXECUTION_SCREENCAST: false 14:12:27 DELETE_SCREENCAST_IF_TEST_PASS: true 14:12:27 TS_SELENIUM_REMOTE_DRIVER_URL: http://localhost:4444/wd/hub 14:12:27 DELETE_WORKSPACE_ON_FAILED_TEST: false 14:12:27 TS_SELENIUM_LOG_LEVEL: DEBUG 14:12:27 14:12:27 to output timeout variables, set TS_SELENIUM_PRINT_TIMEOUT_VARIABLES to true 14:12:27 ######################################################## 14:12:27 14:12:27 ▼ PreferencesHandler.setConfirmExit to never 14:12:27 14:12:27 Login test 14:12:27 ▼ DriverHelper.navigateToUrl https://che-eclipse-che.10.0.101.42.nip.io 14:12:28 [WARN] PreferencesHandler.setPreference could not set theia-user-preferences from api/preferences response, forcing manually. 
14:12:29 ▼ PreferencesHandler.setTerminalToDom 14:12:29 ▼ MultiUserLoginPage.login 14:12:29 ▼ CheLoginPage.waitEclipseCheLoginFormPage 14:12:30 ▼ CheLoginPage.inputUserNameEclipseCheLoginPage username: "admin" 14:12:31 ▼ CheLoginPage.inputPaswordEclipseCheLoginPage password: "admin" 14:12:31 ▼ CheLoginPage.clickEclipseCheLoginButton 14:12:32 ✓ Login (4529ms) 14:12:32 14:12:32 C/C++ test 14:12:32 Create C/C++ workspace 14:12:32 ▼ Dashboard.waitPage 14:12:54 1) Open 'New Workspace' page 14:12:54 [ERROR] CheReporter runner.on.fail: C/C++ test Create C/C++ workspace Open 'New Workspace' page failed after 21286ms 14:12:54 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Create_C/C++_workspace_Open_'New_Workspace'_page' 14:12:54 at Object.mkdirSync (fs.js:757:3) 14:12:54 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:12:54 at Runner.emit (events.js:203:15) 14:12:54 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:12:54 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:12:54 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:12:54 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:12:54 at process._tickCallback (internal/process/next_tick.js:68:7) 14:12:54 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1) 14:12:54 (node:283) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code. 
14:12:54 ▼ Ide.waitAndSwitchToIdeFrame 14:19:00 ▼ Ide.waitPreloaderVisible 14:25:07 2) Wait for workspace readiness 14:25:07 [ERROR] CheReporter runner.on.fail: C/C++ test Create C/C++ workspace Wait for workspace readiness failed after 723212ms 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Create_C/C++_workspace_Wait_for_workspace_readiness' 14:25:07 at Object.mkdirSync (fs.js:757:3) 14:25:07 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:07 at Runner.emit (events.js:203:15) 14:25:07 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:07 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:07 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 2) 14:25:07 Test opening file 14:25:07 ▼ ProjectTree.expandPathAndOpenFile "cpp-hello-world" filename: hello.cpp 14:25:07 ▼ ProjectTree.expandPath "cpp-hello-world" 14:25:07 ▼ ProjectTree.expandItem "cpp-hello-world" 14:25:07 3) Expand project and open file in editor 14:25:07 [ERROR] CheReporter runner.on.fail: C/C++ test Test opening file Expand project and open file in editor failed after 3039ms 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Test_opening_file_Expand_project_and_open_file_in_editor' 14:25:07 at Object.mkdirSync (fs.js:757:3) 14:25:07 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:07 at Runner.emit (events.js:203:15) 14:25:07 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:07 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:07 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 3) 14:25:07 ▼ Editor.moveCursorToLineAndChar title: "hello.cpp" line: "6" char: "1" 14:25:07 ▼ Editor.performKeyCombination title: "hello.cpp" text: "" 14:25:07 4) Prepare file for LS tests 14:25:07 [ERROR] CheReporter runner.on.fail: C/C++ test Test opening file Prepare file for LS tests failed after 3030ms 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Test_opening_file_Prepare_file_for_LS_tests' 14:25:07 at Object.mkdirSync (fs.js:757:3) 14:25:07 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:07 at Runner.emit (events.js:203:15) 14:25:07 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:07 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:07 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 4) 14:25:07 Validation of project build 14:25:07 ▼ TopMenu.selectOption "Terminal" 14:25:07 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:25:07 ▼ Ide.closeAllNotifications 14:25:07 ▼ NotificationCenter.open 14:25:07 ▼ NotificationCenter.clickIconOnStatusBar 14:25:07 5) Run command 'build' 14:25:07 [ERROR] CheReporter runner.on.fail: C/C++ test Validation of project build Run command 'build' failed after 3039ms 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Validation_of_project_build_Run_command_'build'' 14:25:07 at Object.mkdirSync (fs.js:757:3) 14:25:07 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:07 at Runner.emit (events.js:203:15) 14:25:07 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:07 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:07 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 5) 14:25:07 ▼ TopMenu.selectOption "Terminal" 14:25:07 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:25:07 ▼ Ide.closeAllNotifications 14:25:07 ▼ NotificationCenter.open 14:25:07 ▼ NotificationCenter.clickIconOnStatusBar 14:25:09 6) Run command 'run' 14:25:09 [ERROR] CheReporter runner.on.fail: C/C++ test Validation of project build Run command 'run' failed after 3038ms 14:25:09 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Validation_of_project_build_Run_command_'run'' 14:25:09 at Object.mkdirSync (fs.js:757:3) 14:25:09 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:09 at Runner.emit (events.js:203:15) 14:25:09 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:09 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:09 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:09 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:09 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:09 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 6) 14:25:09 Language server validation 14:25:09 ▼ Editor.type title: "hello.cpp" text: "error_text;" 14:25:09 ▼ Editor.selectTab "hello.cpp" 14:25:09 ▼ Editor.waitTab "hello.cpp" 14:25:14 7) Error highlighting 14:25:14 [ERROR] CheReporter runner.on.fail: C/C++ test Language server validation Error highlighting failed after 5060ms 14:25:14 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Language_server_validation_Error_highlighting' 14:25:14 at Object.mkdirSync (fs.js:757:3) 14:25:14 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:14 at Runner.emit (events.js:203:15) 14:25:14 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:14 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:14 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:14 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:14 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:14 ▼ Ide.closeAllNotifications 14:25:14 ▼ NotificationCenter.open 14:25:14 ▼ NotificationCenter.clickIconOnStatusBar 14:25:14 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 7) 14:25:17 8) Suggestion invoking 14:25:17 [ERROR] CheReporter runner.on.fail: C/C++ test Language server validation Suggestion invoking failed after 3052ms 14:25:17 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Language_server_validation_Suggestion_invoking' 14:25:17 at Object.mkdirSync (fs.js:757:3) 14:25:17 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:17 at Runner.emit (events.js:203:15) 14:25:17 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:17 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:17 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:17 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:17 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:17 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 8) 14:25:17 ▼ Editor.moveCursorToLineAndChar title: "hello.cpp" line: "15" char: "9" 14:25:17 ▼ Editor.performKeyCombination title: "hello.cpp" text: "" 14:25:20 9) Autocomplete 14:25:20 [ERROR] CheReporter runner.on.fail: C/C++ test Language server validation Autocomplete failed after 3023ms 14:25:20 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Language_server_validation_Autocomplete' 14:25:20 at Object.mkdirSync (fs.js:757:3) 14:25:20 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:20 at Runner.emit (events.js:203:15) 14:25:20 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:20 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:20 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:20 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:20 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:20 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 9) 14:25:20 Stopping and deleting the workspace 14:25:20 ▼ Dashboard.stopWorkspaceByUI "get-started" 14:25:20 ▼ Dashboard.openDashboard 14:25:20 ▼ Dashboard.waitPage 14:25:42 10) Stop worksapce 14:25:42 [ERROR] CheReporter runner.on.fail: C/C++ test Stopping and deleting the workspace Stop worksapce failed after 20540ms 14:25:42 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Stopping_and_deleting_the_workspace_Stop_worksapce' 14:25:42 at Object.mkdirSync (fs.js:757:3) 14:25:42 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:42 at Runner.emit (events.js:203:15) 14:25:42 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:42 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:42 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:42 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:42 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:42 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 10) 14:25:42 ▼ Dashboard.deleteWorkspaceByUI "get-started" 14:25:42 ▼ Dashboard.openDashboard 14:25:42 ▼ Dashboard.waitPage 14:26:04 11) Remove workspace 14:26:04 [ERROR] CheReporter runner.on.fail: C/C++ test Stopping and deleting the workspace Remove workspace failed after 20483ms 14:26:04 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Stopping_and_deleting_the_workspace_Remove_workspace' 14:26:04 at Object.mkdirSync (fs.js:757:3) 14:26:04 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:26:04 at Runner.emit (events.js:203:15) 14:26:04 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:26:04 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:26:04 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:26:04 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:26:04 at process._tickCallback (internal/process/next_tick.js:68:7) 14:26:04 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 11) 14:26:04 14:26:04 Test .NET Core 14:26:04 Create .NET Core workspace 14:26:04 ▼ Dashboard.waitPage 14:26:22 12) Open 'New Workspace' page 14:26:22 [ERROR] CheReporter runner.on.fail: Test .NET Core Create .NET Core workspace Open 'New Workspace' page failed after 20178ms 14:26:22 ▼ Ide.waitAndSwitchToIdeFrame 14:32:29 ▼ Ide.waitPreloaderVisible 14:38:35 13) Wait for workspace readiness 14:38:35 [ERROR] CheReporter runner.on.fail: Test .NET Core Create .NET Core workspace Wait for workspace readiness failed after 722970ms 14:38:35 Test opening file 14:38:35 ▼ ProjectTree.expandPathAndOpenFile "dotnet-web-simple" filename: Program.cs 14:38:35 ▼ ProjectTree.expandPath "dotnet-web-simple" 14:38:35 ▼ ProjectTree.expandItem "dotnet-web-simple" 14:38:35 14) Expand project and open file in editor 14:38:35 [ERROR] CheReporter runner.on.fail: Test .NET Core Test opening file Expand project and open file in editor failed after 3024ms 14:38:35 ▼ Editor.moveCursorToLineAndChar title: "Program.cs" line: "18" char: "6" 14:38:35 ▼ Editor.performKeyCombination title: "Program.cs" text: "" 14:38:35 15) Prepare file for LS tests 14:38:35 [ERROR] CheReporter runner.on.fail: Test .NET Core Test opening file Prepare file for LS tests failed after 3021ms 14:38:35 Installing dependencies 14:38:35 ▼ TopMenu.selectOption "Terminal" 14:38:35 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:38:35 ▼ Ide.closeAllNotifications 14:38:35 ▼ NotificationCenter.open 14:38:35 ▼ NotificationCenter.clickIconOnStatusBar 14:38:35 16) Run command 'update dependencies' 14:38:35 [ERROR] CheReporter runner.on.fail: Test .NET Core Installing dependencies Run command 'update dependencies' failed after 3036ms 14:38:35 ▼ Ide.closeAllNotifications 14:38:35 ▼ NotificationCenter.open 14:38:35 ▼ NotificationCenter.clickIconOnStatusBar 14:38:36 17) Close the terminal tasks 14:38:36 [ERROR] CheReporter runner.on.fail: Test .NET Core Installing dependencies Close the terminal tasks failed after 3036ms 
14:38:36 Validation of workspace build 14:38:36 ▼ TopMenu.selectOption "Terminal" 14:38:36 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:38:36 ▼ Ide.closeAllNotifications 14:38:36 ▼ NotificationCenter.open 14:38:36 ▼ NotificationCenter.clickIconOnStatusBar 14:38:39 18) Run command 'build' 14:38:39 [ERROR] CheReporter runner.on.fail: Test .NET Core Validation of workspace build Run command 'build' failed after 3030ms 14:38:39 ▼ Ide.closeAllNotifications 14:38:39 ▼ NotificationCenter.open 14:38:39 ▼ NotificationCenter.clickIconOnStatusBar 14:38:43 19) Close the terminal tasks 14:38:43 [ERROR] CheReporter runner.on.fail: Test .NET Core Validation of workspace build Close the terminal tasks failed after 3039ms 14:38:43 Run .NET Core example application 14:38:43 ▼ TopMenu.selectOption "Terminal" 14:38:43 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:38:43 ▼ Ide.closeAllNotifications 14:38:43 ▼ NotificationCenter.open 14:38:43 ▼ NotificationCenter.clickIconOnStatusBar 14:38:45 20) Run command 'run' expecting notification pops up 14:38:45 [ERROR] CheReporter runner.on.fail: Test .NET Core Run .NET Core example application Run command 'run' expecting notification pops up failed after 3044ms 14:38:45 Language server validation 14:38:45 ▼ Ide.closeAllNotifications 14:38:45 ▼ NotificationCenter.open 14:38:45 ▼ NotificationCenter.clickIconOnStatusBar 14:38:49 21) Suggestion invoking 14:38:49 [ERROR] CheReporter runner.on.fail: Test .NET Core Language server validation Suggestion invoking failed after 3018ms 14:38:49 ▼ Editor.type title: "Program.cs" text: "error_text;" 14:38:49 ▼ Editor.selectTab "Program.cs" 14:38:49 ▼ Editor.waitTab "Program.cs" 14:38:54 22) Error highlighting 14:38:54 [ERROR] CheReporter runner.on.fail: Test .NET Core Language server validation Error highlighting failed after 5047ms 14:38:54 ▼ Editor.moveCursorToLineAndChar title: "Program.cs" line: "22" char: "27" 14:38:54 ▼ Editor.performKeyCombination title: "Program.cs" text: "" 14:38:56 23) 
Autocomplete 14:38:56 [ERROR] CheReporter runner.on.fail: Test .NET Core Language server validation Autocomplete failed after 3015ms 14:38:56 Stopping and deleting the workspace 14:38:57 ▼ Dashboard.stopWorkspaceByUI "get-started" 14:38:57 ▼ Dashboard.openDashboard 14:38:57 ▼ Dashboard.waitPage 14:39:19 24) Stop worksapce 14:39:19 [ERROR] CheReporter runner.on.fail: Test .NET Core Stopping and deleting the workspace Stop worksapce failed after 20561ms 14:39:19 ▼ Dashboard.deleteWorkspaceByUI "get-started" 14:39:19 ▼ Dashboard.openDashboard 14:39:19 ▼ Dashboard.waitPage 14:39:41 25) Remove workspace 14:39:41 [ERROR] CheReporter runner.on.fail: Test .NET Core Stopping and deleting the workspace Remove workspace failed after 20666ms 14:39:41 14:39:41 Go test 14:39:41 Create Go workspace 14:39:41 [WARN] Manually setting a preference for golang devfile LS based on issue: https://github.com/eclipse/che/issues/16113 14:39:41 ▼ PreferencesHandler.setUseGoLanguageServer to true. 14:39:41 ✓ Workaround for issue #16113 (293ms) 14:39:41 ▼ Dashboard.waitPage 14:39:59 26) Open 'New Workspace' page 14:39:59 [ERROR] CheReporter runner.on.fail: Go test Create Go workspace Open 'New Workspace' page failed after 20156ms 14:39:59 ▼ Ide.waitAndSwitchToIdeFrame 14:46:06 ▼ Ide.waitPreloaderVisible ... ``` ![screenshot-Open_'New_Workspace'_page](https://user-images.githubusercontent.com/1197777/105713656-11fa2680-5f24-11eb-985c-a36a360dd775.png) Fixup: https://github.com/eclipse/che/pull/18428 ### Che version <!-- (if workspace is running, version can be obtained with help/about menu) --> - [ ] latest - [x] nightly - [ ] other: please specify ### Steps to reproduce <!-- 1. Do '...' 2. Click on '....' 3. See error --> ### Expected behavior <!-- A clear and concise description of what you expected to happen. 
-->

### Runtime
- [ ] kubernetes (include output of `kubectl version`)
- [ ] Openshift (include output of `oc version`)
- [x] minikube (include output of `minikube version` and `kubectl version`)
- [ ] minishift (include output of `minishift version` and `oc version`)
- [ ] docker-desktop + K8S (include output of `docker version` and `kubectl version`)
- [ ] other: (please specify)

### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->

### Installation method
- [x] chectl
  * provide a full command that was used to deploy Eclipse Che (including the output)
  * provide an output of `chectl version` command
- [ ] OperatorHub
- [ ] I don't know

### Environment
- [ ] my computer
  - [ ] Windows
  - [ ] Linux
  - [ ] macOS
- [ ] Cloud
  - [ ] Amazon
  - [ ] Azure
  - [ ] GCE
  - [ ] other (please specify)
- [ ] other: please specify

### Eclipse Che Logs
<!-- https://www.eclipse.org/che/docs/che-7/collecting-logs-using-chectl -->

### Additional context
<!-- Add any other context about the problem here. -->
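A note on the recurring `mkdir` ENOENT failures in the log above: the reporter builds a report directory name from the raw mocha test title, and titles such as "C/C++ test …" contain `/`, so `fs.mkdirSync` is asked to create a nested path (`./report/C/C++_test_…`) whose parent directory does not exist. The sketch below illustrates the failure mode and the kind of sanitization a fix would apply; the helper name is illustrative and is not the actual code from the fixup PR.

```javascript
// Reproduces the failure mode seen in CheReporter: a test title containing
// '/' turns a single report-directory name into a nested path, and a
// non-recursive mkdir of "./report/C/C++_test_..." throws ENOENT because
// "./report/C" was never created.
//
// A minimal fix is to replace path separators in the title before mkdir:
function sanitizeReportDirName(testTitle) {
  // Replace forward and back slashes so the title maps to one directory level.
  return testTitle.replace(/[\/\\]/g, '_');
}

console.log(sanitizeReportDirName("C/C++ test Create C/C++ workspace"));
// → "C_C++ test Create C_C++ workspace"
```

With the sanitized name, `fs.mkdirSync('./report/' + sanitizeReportDirName(title))` creates a single directory level and no longer depends on intermediate parents existing.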
1.0
Nightly Eclipse Che E2E devfile tests are failing on creation of workspace - ### Describe the bug Nightly Eclipse Che E2E devfile tests are failing on creation of workspace on minikube: https://codeready-workspaces-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/basic-MultiUser-Che-check-e2e-tests-against-k8s/3231/console ``` 14:12:03 + docker run --shm-size=1g --net=host --ipc=host -p 5920:5920 -e TS_SELENIUM_BASE_URL=https://che-eclipse-che.10.0.101.42.nip.io -e TS_SELENIUM_LOG_LEVEL=DEBUG -e TS_SELENIUM_MULTIUSER=true -e TS_SELENIUM_USERNAME=admin -e TS_SELENIUM_PASSWORD=admin -e TEST_SUITE=test-all-devfiles -e NODE_TLS_REJECT_UNAUTHORIZED=0 -v /mnt/hudson_workspace/workspace/basic-MultiUser-Che-check-e2e-tests-against-k8s/tests/e2e:/tmp/e2e:Z quay.io/eclipse/che-e2e:nightly 14:12:03 WARNING: Published ports are discarded when using host network mode 14:12:04 + '[' -z https://che-eclipse-che.10.0.101.42.nip.io ']' 14:12:04 + '[' -z test-all-devfiles ']' 14:12:04 + export DISPLAY=:20 14:12:04 + DISPLAY=:20 14:12:04 14:12:04 ####################### 14:12:04 14:12:04 For remote debug connect to the VNC server 0.0.0.0:5920 14:12:04 14:12:04 ####################### 14:12:04 14:12:04 + Xvfb :20 -screen 0 1920x1080x24 14:12:04 + echo '' 14:12:04 + echo '#######################' 14:12:04 + echo '' 14:12:04 + echo 'For remote debug connect to the VNC server 0.0.0.0:5920' 14:12:04 + echo '' 14:12:04 + echo '#######################' 14:12:04 + echo '' 14:12:04 + x11vnc -display :20 -N -forever 14:12:04 + export TS_SELENIUM_REMOTE_DRIVER_URL=http://localhost:4444/wd/hub 14:12:04 + TS_SELENIUM_REMOTE_DRIVER_URL=http://localhost:4444/wd/hub 14:12:04 + expectedStatus=200 14:12:04 + currentTry=1 14:12:04 + maximumAttempts=5 14:12:04 + /usr/bin/supervisord --configuration /etc/supervisord.conf 14:12:04 ++ curl -s -o /dev/null -w '%{http_code}' --fail http://localhost:4444/wd/hub/status 14:12:04 Wait selenium server availability ... 
14:12:04 + '[' 000 '!=' 200 ']' 14:12:04 + (( currentTry > maximumAttempts )) 14:12:04 + echo 'Wait selenium server availability ...' 14:12:04 + curentTry=1 14:12:04 + sleep 1 14:12:05 ++ curl -s -o /dev/null -w '%{http_code}' --fail http://localhost:4444/wd/hub/status 14:12:05 Wait selenium server availability ... 14:12:05 + '[' 000 '!=' 200 ']' 14:12:05 + (( currentTry > maximumAttempts )) 14:12:05 + echo 'Wait selenium server availability ...' 14:12:05 + curentTry=2 14:12:05 + sleep 1 14:12:05 ++ curl -s -o /dev/null -w '%{http_code}' --fail http://localhost:4444/wd/hub/status 14:12:05 + '[' 000 '!=' 200 ']' 14:12:05 + (( currentTry > maximumAttempts )) 14:12:05 + echo 'Wait selenium server availability ...' 14:12:05 + curentTry=3 14:12:05 + sleep 1 14:12:05 Wait selenium server availability ... 14:12:07 ++ curl -s -o /dev/null -w '%{http_code}' --fail http://localhost:4444/wd/hub/status 14:12:07 + '[' 200 '!=' 200 ']' 14:12:07 + mount 14:12:07 + grep e2e 14:12:07 /dev/vda1 on /tmp/e2e type xfs (rw,relatime,seclabel,attr2,inode64,noquota) 14:12:07 + echo 'The local code is mounted. Executing local code.' 14:12:07 + cd /tmp/e2e 14:12:07 The local code is mounted. Executing local code. 14:12:07 + npm install 14:12:12 14:12:12 > chromedriver@80.0.1 install /tmp/e2e/node_modules/chromedriver 14:12:12 > node install.js 14:12:12 14:12:12 ChromeDriver binary exists. Validating... 14:12:12 ChromeDriver is already available at '/tmp/80.0.3987.16/chromedriver/chromedriver'. 14:12:12 Copying to target path /tmp/e2e/node_modules/chromedriver/lib/chromedriver 14:12:12 Fixing file permissions 14:12:12 Done. ChromeDriver binary available at /tmp/e2e/node_modules/chromedriver/lib/chromedriver/chromedriver 14:12:13 npm WARN e2e@1.0.0 No repository field. 
14:12:13 npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@2.1.2 (node_modules/fsevents): 14:12:13 npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@2.1.2: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) 14:12:13 14:12:13 added 223 packages from 282 contributors and audited 224 packages in 4.897s 14:12:13 14:12:13 18 packages are looking for funding 14:12:13 run `npm fund` for details 14:12:13 14:12:13 found 1 high severity vulnerability 14:12:13 run `npm audit fix` to fix them, or `npm audit` for details 14:12:13 + '[' test-all-devfiles == load-test ']' 14:12:13 + SCREEN_RECORDING=true 14:12:13 + '[' true == true ']' 14:12:13 + echo 'Starting ffmpeg recording...' 14:12:13 + mkdir -p /tmp/ffmpeg_report 14:12:13 Starting ffmpeg recording... 14:12:13 + ffmpeg_pid=196 14:12:13 + trap kill_ffmpeg 2 15 14:12:13 + echo 'Running TEST_SUITE: test-all-devfiles with user: admin' 14:12:13 + npm run test-all-devfiles 14:12:13 + nohup ffmpeg -y -video_size 1920x1080 -framerate 24 -f x11grab -i :20.0 /tmp/ffmpeg_report/output.mp4 14:12:13 Running TEST_SUITE: test-all-devfiles with user: admin 14:12:14 14:12:14 > e2e@1.0.0 test-all-devfiles /tmp/e2e 14:12:14 > ./generateIndex.sh && npm run lint && npm run tsc && mocha --opts mocha-all-devfiles.opts 14:12:14 14:12:14 Generating index.ts file... 14:12:15 14:12:15 > e2e@1.0.0 lint /tmp/e2e 14:12:15 > tslint --fix -p . 14:12:15 14:12:17 14:12:17 Could not find implementations for the following rules specified in the configuration: 14:12:17 label-undefined 14:12:17 no-constructor-vars 14:12:17 no-duplicate-key 14:12:17 no-trailing-comma 14:12:17 no-unreachable 14:12:17 Try upgrading TSLint and/or ensuring that you have all necessary custom rules installed. 14:12:17 If TSLint was recently upgraded, you may have old rules configured which need to be cleaned up. 
14:12:17 14:12:18 The 'no-string-literal' rule threw an error in '/tmp/e2e/utils/requestHandlers/CheApiRequestHandler.ts': 14:12:18 TypeError: ts.unescapeIdentifier is not a function 14:12:18 at cb (/tmp/e2e/node_modules/tslint/lib/rules/noStringLiteralRule.js:54:39) 14:12:18 at visitNode (/tmp/e2e/node_modules/typescript/lib/typescript.js:16505:24) 14:12:18 at Object.forEachChild (/tmp/e2e/node_modules/typescript/lib/typescript.js:16720:24) 14:12:18 at cb (/tmp/e2e/node_modules/tslint/lib/rules/noStringLiteralRule.js:60:19) 14:12:18 at visitNode (/tmp/e2e/node_modules/typescript/lib/typescript.js:16505:24) 14:12:18 at Object.forEachChild (/tmp/e2e/node_modules/typescript/lib/typescript.js:16751:24) 14:12:18 at cb (/tmp/e2e/node_modules/tslint/lib/rules/noStringLiteralRule.js:60:19) 14:12:18 at visitNodes (/tmp/e2e/node_modules/typescript/lib/typescript.js:16514:30) 14:12:18 at Object.forEachChild (/tmp/e2e/node_modules/typescript/lib/typescript.js:16740:24) 14:12:18 at cb (/tmp/e2e/node_modules/tslint/lib/rules/noStringLiteralRule.js:60:19) 14:12:21 14:12:21 > e2e@1.0.0 tsc /tmp/e2e 14:12:21 > tsc -p . 14:12:21 14:12:27 (node:283) DeprecationWarning: Configuration via mocha.opts is DEPRECATED and will be removed from a future version of Mocha. Use RC files or package.json instead. 
14:12:27 14:12:27 ################## Launch Information ################## 14:12:27 14:12:27 TS_SELENIUM_BASE_URL: https://che-eclipse-che.10.0.101.42.nip.io 14:12:27 TS_SELENIUM_HEADLESS: false 14:12:27 14:12:27 TS_SELENIUM_USERNAME: admin 14:12:27 TS_SELENIUM_PASSWORD: admin 14:12:27 14:12:27 TS_SELENIUM_HAPPY_PATH_WORKSPACE_NAME: petclinic-dev-environment 14:12:27 TS_SELENIUM_DELAY_BETWEEN_SCREENSHOTS: 1000 14:12:27 TS_SELENIUM_REPORT_FOLDER: ./report 14:12:27 TS_SELENIUM_EXECUTION_SCREENCAST: false 14:12:27 DELETE_SCREENCAST_IF_TEST_PASS: true 14:12:27 TS_SELENIUM_REMOTE_DRIVER_URL: http://localhost:4444/wd/hub 14:12:27 DELETE_WORKSPACE_ON_FAILED_TEST: false 14:12:27 TS_SELENIUM_LOG_LEVEL: DEBUG 14:12:27 14:12:27 to output timeout variables, set TS_SELENIUM_PRINT_TIMEOUT_VARIABLES to true 14:12:27 ######################################################## 14:12:27 14:12:27 ▼ PreferencesHandler.setConfirmExit to never 14:12:27 14:12:27 Login test 14:12:27 ▼ DriverHelper.navigateToUrl https://che-eclipse-che.10.0.101.42.nip.io 14:12:28 [WARN] PreferencesHandler.setPreference could not set theia-user-preferences from api/preferences response, forcing manually. 
14:12:29 ▼ PreferencesHandler.setTerminalToDom 14:12:29 ▼ MultiUserLoginPage.login 14:12:29 ▼ CheLoginPage.waitEclipseCheLoginFormPage 14:12:30 ▼ CheLoginPage.inputUserNameEclipseCheLoginPage username: "admin" 14:12:31 ▼ CheLoginPage.inputPaswordEclipseCheLoginPage password: "admin" 14:12:31 ▼ CheLoginPage.clickEclipseCheLoginButton 14:12:32 ✓ Login (4529ms) 14:12:32 14:12:32 C/C++ test 14:12:32 Create C/C++ workspace 14:12:32 ▼ Dashboard.waitPage 14:12:54 1) Open 'New Workspace' page 14:12:54 [ERROR] CheReporter runner.on.fail: C/C++ test Create C/C++ workspace Open 'New Workspace' page failed after 21286ms 14:12:54 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Create_C/C++_workspace_Open_'New_Workspace'_page' 14:12:54 at Object.mkdirSync (fs.js:757:3) 14:12:54 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:12:54 at Runner.emit (events.js:203:15) 14:12:54 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:12:54 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:12:54 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:12:54 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:12:54 at process._tickCallback (internal/process/next_tick.js:68:7) 14:12:54 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1) 14:12:54 (node:283) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code. 
14:12:54 ▼ Ide.waitAndSwitchToIdeFrame 14:19:00 ▼ Ide.waitPreloaderVisible 14:25:07 2) Wait for workspace readiness 14:25:07 [ERROR] CheReporter runner.on.fail: C/C++ test Create C/C++ workspace Wait for workspace readiness failed after 723212ms 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Create_C/C++_workspace_Wait_for_workspace_readiness' 14:25:07 at Object.mkdirSync (fs.js:757:3) 14:25:07 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:07 at Runner.emit (events.js:203:15) 14:25:07 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:07 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:07 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 2) 14:25:07 Test opening file 14:25:07 ▼ ProjectTree.expandPathAndOpenFile "cpp-hello-world" filename: hello.cpp 14:25:07 ▼ ProjectTree.expandPath "cpp-hello-world" 14:25:07 ▼ ProjectTree.expandItem "cpp-hello-world" 14:25:07 3) Expand project and open file in editor 14:25:07 [ERROR] CheReporter runner.on.fail: C/C++ test Test opening file Expand project and open file in editor failed after 3039ms 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Test_opening_file_Expand_project_and_open_file_in_editor' 14:25:07 at Object.mkdirSync (fs.js:757:3) 14:25:07 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:07 at Runner.emit (events.js:203:15) 14:25:07 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:07 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:07 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 3) 14:25:07 ▼ Editor.moveCursorToLineAndChar title: "hello.cpp" line: "6" char: "1" 14:25:07 ▼ Editor.performKeyCombination title: "hello.cpp" text: "" 14:25:07 4) Prepare file for LS tests 14:25:07 [ERROR] CheReporter runner.on.fail: C/C++ test Test opening file Prepare file for LS tests failed after 3030ms 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Test_opening_file_Prepare_file_for_LS_tests' 14:25:07 at Object.mkdirSync (fs.js:757:3) 14:25:07 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:07 at Runner.emit (events.js:203:15) 14:25:07 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:07 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:07 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 4) 14:25:07 Validation of project build 14:25:07 ▼ TopMenu.selectOption "Terminal" 14:25:07 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:25:07 ▼ Ide.closeAllNotifications 14:25:07 ▼ NotificationCenter.open 14:25:07 ▼ NotificationCenter.clickIconOnStatusBar 14:25:07 5) Run command 'build' 14:25:07 [ERROR] CheReporter runner.on.fail: C/C++ test Validation of project build Run command 'build' failed after 3039ms 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Validation_of_project_build_Run_command_'build'' 14:25:07 at Object.mkdirSync (fs.js:757:3) 14:25:07 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:07 at Runner.emit (events.js:203:15) 14:25:07 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:07 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:07 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:07 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:07 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 5) 14:25:07 ▼ TopMenu.selectOption "Terminal" 14:25:07 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:25:07 ▼ Ide.closeAllNotifications 14:25:07 ▼ NotificationCenter.open 14:25:07 ▼ NotificationCenter.clickIconOnStatusBar 14:25:09 6) Run command 'run' 14:25:09 [ERROR] CheReporter runner.on.fail: C/C++ test Validation of project build Run command 'run' failed after 3038ms 14:25:09 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Validation_of_project_build_Run_command_'run'' 14:25:09 at Object.mkdirSync (fs.js:757:3) 14:25:09 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:09 at Runner.emit (events.js:203:15) 14:25:09 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:09 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:09 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:09 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:09 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:09 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 6) 14:25:09 Language server validation 14:25:09 ▼ Editor.type title: "hello.cpp" text: "error_text;" 14:25:09 ▼ Editor.selectTab "hello.cpp" 14:25:09 ▼ Editor.waitTab "hello.cpp" 14:25:14 7) Error highlighting 14:25:14 [ERROR] CheReporter runner.on.fail: C/C++ test Language server validation Error highlighting failed after 5060ms 14:25:14 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Language_server_validation_Error_highlighting' 14:25:14 at Object.mkdirSync (fs.js:757:3) 14:25:14 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:14 at Runner.emit (events.js:203:15) 14:25:14 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:14 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:14 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:14 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:14 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:14 ▼ Ide.closeAllNotifications 14:25:14 ▼ NotificationCenter.open 14:25:14 ▼ NotificationCenter.clickIconOnStatusBar 14:25:14 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 7) 14:25:17 8) Suggestion invoking 14:25:17 [ERROR] CheReporter runner.on.fail: C/C++ test Language server validation Suggestion invoking failed after 3052ms 14:25:17 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Language_server_validation_Suggestion_invoking' 14:25:17 at Object.mkdirSync (fs.js:757:3) 14:25:17 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:17 at Runner.emit (events.js:203:15) 14:25:17 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:17 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:17 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:17 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:17 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:17 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 8) 14:25:17 ▼ Editor.moveCursorToLineAndChar title: "hello.cpp" line: "15" char: "9" 14:25:17 ▼ Editor.performKeyCombination title: "hello.cpp" text: "" 14:25:20 9) Autocomplete 14:25:20 [ERROR] CheReporter runner.on.fail: C/C++ test Language server validation Autocomplete failed after 3023ms 14:25:20 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Language_server_validation_Autocomplete' 14:25:20 at Object.mkdirSync (fs.js:757:3) 14:25:20 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:20 at Runner.emit (events.js:203:15) 14:25:20 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:20 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:20 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:20 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:20 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:20 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 9) 14:25:20 Stopping and deleting the workspace 14:25:20 ▼ Dashboard.stopWorkspaceByUI "get-started" 14:25:20 ▼ Dashboard.openDashboard 14:25:20 ▼ Dashboard.waitPage 14:25:42 10) Stop worksapce 14:25:42 [ERROR] CheReporter runner.on.fail: C/C++ test Stopping and deleting the workspace Stop worksapce failed after 20540ms 14:25:42 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Stopping_and_deleting_the_workspace_Stop_worksapce' 14:25:42 at Object.mkdirSync (fs.js:757:3) 14:25:42 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:25:42 at Runner.emit (events.js:203:15) 14:25:42 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:25:42 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:25:42 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:25:42 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:25:42 at process._tickCallback (internal/process/next_tick.js:68:7) 14:25:42 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 10) 14:25:42 ▼ Dashboard.deleteWorkspaceByUI "get-started" 14:25:42 ▼ Dashboard.openDashboard 14:25:42 ▼ Dashboard.waitPage 14:26:04 11) Remove workspace 14:26:04 [ERROR] CheReporter runner.on.fail: C/C++ test Stopping and deleting the workspace Remove workspace failed after 20483ms 14:26:04 (node:283) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, mkdir './report/C/C++_test_Stopping_and_deleting_the_workspace_Remove_workspace' 14:26:04 at Object.mkdirSync (fs.js:757:3) 14:26:04 at Runner.<anonymous> (/tmp/e2e/driver/CheReporter.ts:148:12) 14:26:04 at Runner.emit (events.js:203:15) 14:26:04 at Runner.fail (/tmp/e2e/node_modules/mocha/lib/runner.js:310:8) 14:26:04 at /tmp/e2e/node_modules/mocha/lib/runner.js:698:18 14:26:04 at done (/tmp/e2e/node_modules/mocha/lib/runnable.js:335:5) 14:26:04 at /tmp/e2e/node_modules/mocha/lib/runnable.js:406:11 14:26:04 at process._tickCallback (internal/process/next_tick.js:68:7) 14:26:04 (node:283) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). 
(rejection id: 11) 14:26:04 14:26:04 Test .NET Core 14:26:04 Create .NET Core workspace 14:26:04 ▼ Dashboard.waitPage 14:26:22 12) Open 'New Workspace' page 14:26:22 [ERROR] CheReporter runner.on.fail: Test .NET Core Create .NET Core workspace Open 'New Workspace' page failed after 20178ms 14:26:22 ▼ Ide.waitAndSwitchToIdeFrame 14:32:29 ▼ Ide.waitPreloaderVisible 14:38:35 13) Wait for workspace readiness 14:38:35 [ERROR] CheReporter runner.on.fail: Test .NET Core Create .NET Core workspace Wait for workspace readiness failed after 722970ms 14:38:35 Test opening file 14:38:35 ▼ ProjectTree.expandPathAndOpenFile "dotnet-web-simple" filename: Program.cs 14:38:35 ▼ ProjectTree.expandPath "dotnet-web-simple" 14:38:35 ▼ ProjectTree.expandItem "dotnet-web-simple" 14:38:35 14) Expand project and open file in editor 14:38:35 [ERROR] CheReporter runner.on.fail: Test .NET Core Test opening file Expand project and open file in editor failed after 3024ms 14:38:35 ▼ Editor.moveCursorToLineAndChar title: "Program.cs" line: "18" char: "6" 14:38:35 ▼ Editor.performKeyCombination title: "Program.cs" text: "" 14:38:35 15) Prepare file for LS tests 14:38:35 [ERROR] CheReporter runner.on.fail: Test .NET Core Test opening file Prepare file for LS tests failed after 3021ms 14:38:35 Installing dependencies 14:38:35 ▼ TopMenu.selectOption "Terminal" 14:38:35 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:38:35 ▼ Ide.closeAllNotifications 14:38:35 ▼ NotificationCenter.open 14:38:35 ▼ NotificationCenter.clickIconOnStatusBar 14:38:35 16) Run command 'update dependencies' 14:38:35 [ERROR] CheReporter runner.on.fail: Test .NET Core Installing dependencies Run command 'update dependencies' failed after 3036ms 14:38:35 ▼ Ide.closeAllNotifications 14:38:35 ▼ NotificationCenter.open 14:38:35 ▼ NotificationCenter.clickIconOnStatusBar 14:38:36 17) Close the terminal tasks 14:38:36 [ERROR] CheReporter runner.on.fail: Test .NET Core Installing dependencies Close the terminal tasks failed after 3036ms 
14:38:36 Validation of workspace build 14:38:36 ▼ TopMenu.selectOption "Terminal" 14:38:36 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:38:36 ▼ Ide.closeAllNotifications 14:38:36 ▼ NotificationCenter.open 14:38:36 ▼ NotificationCenter.clickIconOnStatusBar 14:38:39 18) Run command 'build' 14:38:39 [ERROR] CheReporter runner.on.fail: Test .NET Core Validation of workspace build Run command 'build' failed after 3030ms 14:38:39 ▼ Ide.closeAllNotifications 14:38:39 ▼ NotificationCenter.open 14:38:39 ▼ NotificationCenter.clickIconOnStatusBar 14:38:43 19) Close the terminal tasks 14:38:43 [ERROR] CheReporter runner.on.fail: Test .NET Core Validation of workspace build Close the terminal tasks failed after 3039ms 14:38:43 Run .NET Core example application 14:38:43 ▼ TopMenu.selectOption "Terminal" 14:38:43 ▼ TopMenu.clickOnTopMenuButton "Terminal" 14:38:43 ▼ Ide.closeAllNotifications 14:38:43 ▼ NotificationCenter.open 14:38:43 ▼ NotificationCenter.clickIconOnStatusBar 14:38:45 20) Run command 'run' expecting notification pops up 14:38:45 [ERROR] CheReporter runner.on.fail: Test .NET Core Run .NET Core example application Run command 'run' expecting notification pops up failed after 3044ms 14:38:45 Language server validation 14:38:45 ▼ Ide.closeAllNotifications 14:38:45 ▼ NotificationCenter.open 14:38:45 ▼ NotificationCenter.clickIconOnStatusBar 14:38:49 21) Suggestion invoking 14:38:49 [ERROR] CheReporter runner.on.fail: Test .NET Core Language server validation Suggestion invoking failed after 3018ms 14:38:49 ▼ Editor.type title: "Program.cs" text: "error_text;" 14:38:49 ▼ Editor.selectTab "Program.cs" 14:38:49 ▼ Editor.waitTab "Program.cs" 14:38:54 22) Error highlighting 14:38:54 [ERROR] CheReporter runner.on.fail: Test .NET Core Language server validation Error highlighting failed after 5047ms 14:38:54 ▼ Editor.moveCursorToLineAndChar title: "Program.cs" line: "22" char: "27" 14:38:54 ▼ Editor.performKeyCombination title: "Program.cs" text: "" 14:38:56 23) 
Autocomplete 14:38:56 [ERROR] CheReporter runner.on.fail: Test .NET Core Language server validation Autocomplete failed after 3015ms 14:38:56 Stopping and deleting the workspace 14:38:57 ▼ Dashboard.stopWorkspaceByUI "get-started" 14:38:57 ▼ Dashboard.openDashboard 14:38:57 ▼ Dashboard.waitPage 14:39:19 24) Stop worksapce 14:39:19 [ERROR] CheReporter runner.on.fail: Test .NET Core Stopping and deleting the workspace Stop worksapce failed after 20561ms 14:39:19 ▼ Dashboard.deleteWorkspaceByUI "get-started" 14:39:19 ▼ Dashboard.openDashboard 14:39:19 ▼ Dashboard.waitPage 14:39:41 25) Remove workspace 14:39:41 [ERROR] CheReporter runner.on.fail: Test .NET Core Stopping and deleting the workspace Remove workspace failed after 20666ms 14:39:41 14:39:41 Go test 14:39:41 Create Go workspace 14:39:41 [WARN] Manually setting a preference for golang devfile LS based on issue: https://github.com/eclipse/che/issues/16113 14:39:41 ▼ PreferencesHandler.setUseGoLanguageServer to true. 14:39:41 ✓ Workaround for issue #16113 (293ms) 14:39:41 ▼ Dashboard.waitPage 14:39:59 26) Open 'New Workspace' page 14:39:59 [ERROR] CheReporter runner.on.fail: Go test Create Go workspace Open 'New Workspace' page failed after 20156ms 14:39:59 ▼ Ide.waitAndSwitchToIdeFrame 14:46:06 ▼ Ide.waitPreloaderVisible ... ``` ![screenshot-Open_'New_Workspace'_page](https://user-images.githubusercontent.com/1197777/105713656-11fa2680-5f24-11eb-985c-a36a360dd775.png) Fixup: https://github.com/eclipse/che/pull/18428 ### Che version <!-- (if workspace is running, version can be obtained with help/about menu) --> - [ ] latest - [x] nightly - [ ] other: please specify ### Steps to reproduce <!-- 1. Do '...' 2. Click on '....' 3. See error --> ### Expected behavior <!-- A clear and concise description of what you expected to happen. 
--> ### Runtime - [ ] kubernetes (include output of `kubectl version`) - [ ] Openshift (include output of `oc version`) - [x] minikube (include output of `minikube version` and `kubectl version`) - [ ] minishift (include output of `minishift version` and `oc version`) - [ ] docker-desktop + K8S (include output of `docker version` and `kubectl version`) - [ ] other: (please specify) ### Screenshots <!-- If applicable, add screenshots to help explain your problem. --> ### Installation method - [x] chectl * provide a full command that was used to deploy Eclipse Che (including the output) * provide an output of `chectl version` command - [ ] OperatorHub - [ ] I don't know ### Environment - [ ] my computer - [ ] Windows - [ ] Linux - [ ] macOS - [ ] Cloud - [ ] Amazon - [ ] Azure - [ ] GCE - [ ] other (please specify) - [ ] other: please specify ### Eclipse Che Logs <!-- https://www.eclipse.org/che/docs/che-7/collecting-logs-using-chectl --> ### Additional context <!-- Add any other context about the problem here. -->
non_process
nightly eclipse che devfile tests are failing on creation of workspace describe the bug nightly eclipse che devfile tests are failing on creation of workspace on minikube docker run shm size net host ipc host p e ts selenium base url e ts selenium log level debug e ts selenium multiuser true e ts selenium username admin e ts selenium password admin e test suite test all devfiles e node tls reject unauthorized v mnt hudson workspace workspace basic multiuser che check tests against tests tmp z quay io eclipse che nightly warning published ports are discarded when using host network mode export display display for remote debug connect to the vnc server xvfb screen echo echo echo echo for remote debug connect to the vnc server echo echo echo display n forever export ts selenium remote driver url ts selenium remote driver url expectedstatus currenttry maximumattempts usr bin supervisord configuration etc supervisord conf curl s o dev null w http code fail wait selenium server availability currenttry maximumattempts echo wait selenium server availability curenttry sleep curl s o dev null w http code fail wait selenium server availability currenttry maximumattempts echo wait selenium server availability curenttry sleep curl s o dev null w http code fail currenttry maximumattempts echo wait selenium server availability curenttry sleep wait selenium server availability curl s o dev null w http code fail mount grep dev on tmp type xfs rw relatime seclabel noquota echo the local code is mounted executing local code cd tmp the local code is mounted executing local code npm install chromedriver install tmp node modules chromedriver node install js chromedriver binary exists validating chromedriver is already available at tmp chromedriver chromedriver copying to target path tmp node modules chromedriver lib chromedriver fixing file permissions done chromedriver binary available at tmp node modules chromedriver lib chromedriver chromedriver npm warn no repository field npm warn 
optional skipping optional dependency fsevents node modules fsevents npm warn notsup skipping optional dependency unsupported platform for fsevents wanted os darwin arch any current os linux arch added packages from contributors and audited packages in packages are looking for funding run npm fund for details found high severity vulnerability run npm audit fix to fix them or npm audit for details screen recording true echo starting ffmpeg recording mkdir p tmp ffmpeg report starting ffmpeg recording ffmpeg pid trap kill ffmpeg echo running test suite test all devfiles with user admin npm run test all devfiles nohup ffmpeg y video size framerate f i tmp ffmpeg report output running test suite test all devfiles with user admin test all devfiles tmp generateindex sh npm run lint npm run tsc mocha opts mocha all devfiles opts generating index ts file lint tmp tslint fix p could not find implementations for the following rules specified in the configuration label undefined no constructor vars no duplicate key no trailing comma no unreachable try upgrading tslint and or ensuring that you have all necessary custom rules installed if tslint was recently upgraded you may have old rules configured which need to be cleaned up the no string literal rule threw an error in tmp utils requesthandlers cheapirequesthandler ts typeerror ts unescapeidentifier is not a function at cb tmp node modules tslint lib rules nostringliteralrule js at visitnode tmp node modules typescript lib typescript js at object foreachchild tmp node modules typescript lib typescript js at cb tmp node modules tslint lib rules nostringliteralrule js at visitnode tmp node modules typescript lib typescript js at object foreachchild tmp node modules typescript lib typescript js at cb tmp node modules tslint lib rules nostringliteralrule js at visitnodes tmp node modules typescript lib typescript js at object foreachchild tmp node modules typescript lib typescript js at cb tmp node modules tslint lib rules 
nostringliteralrule js tsc tmp tsc p node deprecationwarning configuration via mocha opts is deprecated and will be removed from a future version of mocha use rc files or package json instead launch information ts selenium base url ts selenium headless false ts selenium username admin ts selenium password admin ts selenium happy path workspace name petclinic dev environment ts selenium delay between screenshots ts selenium report folder report ts selenium execution screencast false delete screencast if test pass true ts selenium remote driver url delete workspace on failed test false ts selenium log level debug to output timeout variables set ts selenium print timeout variables to true ▼ preferenceshandler setconfirmexit to never login test ▼ driverhelper navigatetourl preferenceshandler setpreference could not set theia user preferences from api preferences response forcing manually ▼ preferenceshandler setterminaltodom ▼ multiuserloginpage login ▼ cheloginpage waiteclipsecheloginformpage ▼ cheloginpage inputusernameeclipsecheloginpage username admin ▼ cheloginpage inputpaswordeclipsecheloginpage password admin ▼ cheloginpage clickeclipsecheloginbutton ✓ login c c test create c c workspace ▼ dashboard waitpage open new workspace page chereporter runner on fail c c test create c c workspace open new workspace page failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test create c c workspace open new workspace page at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise 
which was not handled with catch rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code ▼ ide waitandswitchtoideframe ▼ ide waitpreloadervisible wait for workspace readiness chereporter runner on fail c c test create c c workspace wait for workspace readiness failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test create c c workspace wait for workspace readiness at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id test opening file ▼ projecttree expandpathandopenfile cpp hello world filename hello cpp ▼ projecttree expandpath cpp hello world ▼ projecttree expanditem cpp hello world expand project and open file in editor chereporter runner on fail c c test test opening file expand project and open file in editor failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test test opening file expand project and open file in editor at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated 
either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id ▼ editor movecursortolineandchar title hello cpp line char ▼ editor performkeycombination title hello cpp text  prepare file for ls tests chereporter runner on fail c c test test opening file prepare file for ls tests failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test test opening file prepare file for ls tests at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id validation of project build ▼ topmenu selectoption terminal ▼ topmenu clickontopmenubutton terminal ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar run command build chereporter runner on fail c c test validation of project build run command build failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test validation of project build run command build at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch 
block or by rejecting a promise which was not handled with catch rejection id ▼ topmenu selectoption terminal ▼ topmenu clickontopmenubutton terminal ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar run command run chereporter runner on fail c c test validation of project build run command run failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test validation of project build run command run at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id language server validation ▼ editor type title hello cpp text error text ▼ editor selecttab hello cpp ▼ editor waittab hello cpp error highlighting chereporter runner on fail c c test language server validation error highlighting failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test language server validation error highlighting at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a 
catch block or by rejecting a promise which was not handled with catch rejection id suggestion invoking chereporter runner on fail c c test language server validation suggestion invoking failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test language server validation suggestion invoking at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id ▼ editor movecursortolineandchar title hello cpp line char ▼ editor performkeycombination title hello cpp text  autocomplete chereporter runner on fail c c test language server validation autocomplete failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test language server validation autocomplete at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id stopping and deleting the workspace ▼ dashboard stopworkspacebyui get started ▼ dashboard opendashboard ▼ dashboard waitpage stop worksapce chereporter runner on fail c c test stopping and 
deleting the workspace stop worksapce failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test stopping and deleting the workspace stop worksapce at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id ▼ dashboard deleteworkspacebyui get started ▼ dashboard opendashboard ▼ dashboard waitpage remove workspace chereporter runner on fail c c test stopping and deleting the workspace remove workspace failed after node unhandledpromiserejectionwarning error enoent no such file or directory mkdir report c c test stopping and deleting the workspace remove workspace at object mkdirsync fs js at runner tmp driver chereporter ts at runner emit events js at runner fail tmp node modules mocha lib runner js at tmp node modules mocha lib runner js at done tmp node modules mocha lib runnable js at tmp node modules mocha lib runnable js at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id test net core create net core workspace ▼ dashboard waitpage open new workspace page chereporter runner on fail test net core create net core workspace open new workspace page failed after ▼ ide waitandswitchtoideframe ▼ ide waitpreloadervisible wait for workspace readiness chereporter runner on fail test net core create net core 
workspace wait for workspace readiness failed after test opening file ▼ projecttree expandpathandopenfile dotnet web simple filename program cs ▼ projecttree expandpath dotnet web simple ▼ projecttree expanditem dotnet web simple expand project and open file in editor chereporter runner on fail test net core test opening file expand project and open file in editor failed after ▼ editor movecursortolineandchar title program cs line char ▼ editor performkeycombination title program cs text  prepare file for ls tests chereporter runner on fail test net core test opening file prepare file for ls tests failed after installing dependencies ▼ topmenu selectoption terminal ▼ topmenu clickontopmenubutton terminal ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar run command update dependencies chereporter runner on fail test net core installing dependencies run command update dependencies failed after ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar close the terminal tasks chereporter runner on fail test net core installing dependencies close the terminal tasks failed after validation of workspace build ▼ topmenu selectoption terminal ▼ topmenu clickontopmenubutton terminal ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar run command build chereporter runner on fail test net core validation of workspace build run command build failed after ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar close the terminal tasks chereporter runner on fail test net core validation of workspace build close the terminal tasks failed after run net core example application ▼ topmenu selectoption terminal ▼ topmenu clickontopmenubutton terminal ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar run command run expecting notification pops up chereporter runner on 
fail test net core run net core example application run command run expecting notification pops up failed after language server validation ▼ ide closeallnotifications ▼ notificationcenter open ▼ notificationcenter clickicononstatusbar suggestion invoking chereporter runner on fail test net core language server validation suggestion invoking failed after ▼ editor type title program cs text error text ▼ editor selecttab program cs ▼ editor waittab program cs error highlighting chereporter runner on fail test net core language server validation error highlighting failed after ▼ editor movecursortolineandchar title program cs line char ▼ editor performkeycombination title program cs text  autocomplete chereporter runner on fail test net core language server validation autocomplete failed after stopping and deleting the workspace ▼ dashboard stopworkspacebyui get started ▼ dashboard opendashboard ▼ dashboard waitpage stop worksapce chereporter runner on fail test net core stopping and deleting the workspace stop worksapce failed after ▼ dashboard deleteworkspacebyui get started ▼ dashboard opendashboard ▼ dashboard waitpage remove workspace chereporter runner on fail test net core stopping and deleting the workspace remove workspace failed after go test create go workspace manually setting a preference for golang devfile ls based on issue ▼ preferenceshandler setusegolanguageserver to true ✓ workaround for issue ▼ dashboard waitpage open new workspace page chereporter runner on fail go test create go workspace open new workspace page failed after ▼ ide waitandswitchtoideframe ▼ ide waitpreloadervisible fixup che version latest nightly other please specify steps to reproduce do click on see error expected behavior runtime kubernetes include output of kubectl version openshift include output of oc version minikube include output of minikube version and kubectl version minishift include output of minishift version and oc version docker desktop include output of docker 
version and kubectl version other please specify screenshots installation method chectl provide a full command that was used to deploy eclipse che including the output provide an output of chectl version command operatorhub i don t know environment my computer windows linux macos cloud amazon azure gce other please specify other please specify eclipse che logs additional context
0
13,787
16,549,426,065
IssuesEvent
2021-05-28 06:42:19
rladies/rladiesguide
https://api.github.com/repos/rladies/rladiesguide
closed
add info on dead name updates for directory
improvements needed :point_up: rladies processes :bullettrain_side:
draft text: ## Updating Dead Names When updating an existing R-Ladies member directory profile, we usually edit the entry but this will not fix the URL. As a result, the dead name will still appear in the URL for the directory entry. Therefore, when requested to update a dead name in an existing directory entry, we will need to completely delete the old entry and add the entry as a new entry. This will create a new URL with the updated name.
1.0
add info on dead name updates for directory - draft text: ## Updating Dead Names When updating an existing R-Ladies member directory profile, we usually edit the entry but this will not fix the URL. As a result, the dead name will still appear in the URL for the directory entry. Therefore, when requested to update a dead name in an existing directory entry, we will need to completely delete the old entry and add the entry as a new entry. This will create a new URL with the updated name.
process
add info on dead name updates for directory draft text updating dead names when updating an existing r ladies member directory profile we usually edit the entry but this will not fix the url as a result the dead name will still appear in the url for the directory entry therefore when requested to update a dead name in an existing directory entry we will need to completely delete the old entry and add the entry as a new entry this will create a new url with the updated name
1
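Each record above pairs a string `label` field with a trailing integer `binary_label` — this rladies record carries "process" and 1, while the Eclipse Che record further up carries "non_process" and 0. A minimal sketch of that apparent mapping; it is inferred from the records shown here, not from any documented schema:

```python
def to_binary(label: str) -> int:
    # Apparent mapping observed in these records:
    # "process" -> 1, anything else (e.g. "non_process") -> 0.
    return 1 if label == "process" else 0
```

For example, the rladies record's "process" label maps to 1, matching the binary_label that closes the record.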
7,722
10,826,179,250
IssuesEvent
2019-11-09 20:52:41
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Windows service crash when stop it
area-System.ServiceProcess
_From @dospat07 on Wednesday, September 25, 2019 2:18:37 PM_ ### To Reproduce Steps to reproduce the behavior: 1. Using this version of ASP.NET Core 3.0 and Visual Studio 2019 version 16.3 2. Create new ASP.NET core API web application 3. Add Microsoft.Extensions.Hosting.WindowsServices 4. Change method CreateHostBuilder to ``` public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .UseWindowsService() .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); }); } ``` 5. change appsettings.json to ``` { "Logging": { "EventLog": { "LogLevel": { "Default": "Warning" } } }, "AllowedHosts": "*" } ``` 6. create windows service via sc or powershell 7. start service 8. stop service 9. Error found in Application log in EventViewer Faulting application name: WebApplication3.exe, version: 1.0.0.0, time stamp: 0x5d7bb03a Faulting module name: ntdll.dll, version: 6.3.9600.19304, time stamp: 0x5c7f684f Exception code: 0xc0000374 Fault offset: 0x00000000000f1cd0 Faulting process ID: 0x3eb0 Faulting application start time: 0x01d573a4eb992a85 Faulting application path: C:\Users\xxx\Source\repos\WebApplication3\WebApplication3\bin\Debug\netcoreapp3.0\WebApplication3.exe Faulting module path: C:\Windows\SYSTEM32\ntdll.dll Report ID: 4cb3564a-df98-11e9-82b2-082e5f215ab6 Faulting package full name: Faulting package-relative application ID: _Copied from original issue: aspnet/Extensions#2396_
1.0
Windows service crash when stop it - _From @dospat07 on Wednesday, September 25, 2019 2:18:37 PM_ ### To Reproduce Steps to reproduce the behavior: 1. Using this version of ASP.NET Core 3.0 and Visual Studio 2019 version 16.3 2. Create new ASP.NET core API web application 3. Add Microsoft.Extensions.Hosting.WindowsServices 4. Change method CreateHostBuilder to ``` public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .UseWindowsService() .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); }); } ``` 5. change appsettings.json to ``` { "Logging": { "EventLog": { "LogLevel": { "Default": "Warning" } } }, "AllowedHosts": "*" } ``` 6. create windows service via sc or powershell 7. start service 8. stop service 9. Error found in Application log in EventViewer Faulting application name: WebApplication3.exe, version: 1.0.0.0, time stamp: 0x5d7bb03a Faulting module name: ntdll.dll, version: 6.3.9600.19304, time stamp: 0x5c7f684f Exception code: 0xc0000374 Fault offset: 0x00000000000f1cd0 Faulting process ID: 0x3eb0 Faulting application start time: 0x01d573a4eb992a85 Faulting application path: C:\Users\xxx\Source\repos\WebApplication3\WebApplication3\bin\Debug\netcoreapp3.0\WebApplication3.exe Faulting module path: C:\Windows\SYSTEM32\ntdll.dll Report ID: 4cb3564a-df98-11e9-82b2-082e5f215ab6 Faulting package full name: Faulting package-relative application ID: _Copied from original issue: aspnet/Extensions#2396_
process
windows service crash when stop it from on wednesday september pm to reproduce steps to reproduce the behavior using this version of asp net core and visual studio version create new asp net core api web application add microsoft extensions hosting windowsservices change method createhostbuilder to public static ihostbuilder createhostbuilder string args host createdefaultbuilder args usewindowsservice configurewebhostdefaults webbuilder webbuilder usestartup change appsettings json to logging eventlog loglevel default warning allowedhosts create windows service via sc or powershell start service stop service error found in application log in eventviewer faulting application name exe version time stamp faulting module name ntdll dll version time stamp exception code fault offset faulting process id faulting application start time faulting application path c users xxx source repos bin debug exe faulting module path c windows ntdll dll report id faulting package full name faulting package relative application id copied from original issue aspnet extensions
1
406,856
27,585,348,322
IssuesEvent
2023-03-08 19:17:42
kokkos/kokkos
https://api.github.com/repos/kokkos/kokkos
closed
Do ScatterViews support subviews?
Documentation InDevelop
Simple question: Do ScatterViews support subviews? If not, consider this a feature request. See #1393.
1.0
Do ScatterViews support subviews? - Simple question: Do ScatterViews support subviews? If not, consider this a feature request. See #1393.
non_process
do scatterviews support subviews simple question do scatterviews support subviews if not consider this a feature request see
0
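In every record the `text` field is a lowercased, de-punctuated copy of `text_combine`, with digit-bearing tokens dropped entirely (compare the kokkos entry just above: "See #1393." becomes "see", "R-Ladies" in the earlier record becomes "r ladies"). A hedged sketch of that normalization, reverse-engineered from the records rather than taken from the dataset's actual pipeline:

```python
import re

def normalize(text_combine: str) -> str:
    # Replace runs of non-alphanumeric characters with spaces, lowercase,
    # then drop any token containing a digit ("1393", "WebApplication3")
    # and collapse the remaining whitespace.
    tokens = re.sub(r"[^A-Za-z0-9]+", " ", text_combine).lower().split()
    return " ".join(t for t in tokens if not any(c.isdigit() for c in t))
```

Applied to the kokkos record's `text_combine`, this reproduces its `text` field.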
6,384
9,449,377,661
IssuesEvent
2019-04-16 01:37:56
googleapis/google-auth-library-nodejs
https://api.github.com/repos/googleapis/google-auth-library-nodejs
closed
Add a test to cover #605
type: process
There's a PR over in #605 to fix a bug with not throwing on refreshAccessToken when no refresh_token is set. We... should have a test to cover that.
1.0
process
1
7,237
10,384,749,738
IssuesEvent
2019-09-10 12:42:39
googleapis/google-cloud-python
https://api.github.com/repos/googleapis/google-cloud-python
closed
Synthesis failed for phishingprotection
api: phishingprotection autosynth failure type: process
Hello! Autosynth couldn't regenerate phishingprotection. :broken_heart: Here's the output from running `synth.py`:

```
Cloning into 'working_repo'...
Switched to branch 'autosynth-phishingprotection'
Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/phishingprotection/synth.py.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:0e6f3a668cd68afc768ecbe08817cf6e56a0e64fcbdb1c58c3b97492d12418a1
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/phishingprotection/artman_phishingprotection_v1beta1.yaml.
synthtool > Failed executing docker run --name artman-docker --rm -i -e HOST_USER_ID=1000 -e HOST_GROUP_ID=1000 -e RUNNING_IN_ARTMAN_DOCKER=True -v /home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis -v /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles -w /home/kbuilder/.cache/synthtool/googleapis googleapis/artman:latest /bin/bash -c artman --local --config google/cloud/phishingprotection/artman_phishingprotection_v1beta1.yaml --generator-args='--dev_samples' generate python_gapic:
artman> Final args:
artman> api_name: phishingprotection
artman> api_version: v1beta1
artman> artifact_type: GAPIC
artman> aspect: ALL
artman> gapic_code_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/phishingprotection-v1beta1
artman> gapic_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/phishingprotection/v1beta1/phishingprotection_gapic.yaml
artman> generator_args: --dev_samples
artman> import_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis
artman> language: python
artman> organization_name: google-cloud
artman> output_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles
artman> proto_deps:
artman> - name: google-common-protos
artman> proto_package: ''
artman> root_dir: /home/kbuilder/.cache/synthtool/googleapis
artman> samples: ''
artman> service_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/phishingprotection/phishingprotection.yaml
artman> src_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis/google/cloud/phishingprotection/v1beta1
artman> toolkit_path: /toolkit
artman>
artman> Creating GapicClientPipeline.
artman.output > WARNING: toplevel: (lint) control-presence: Service phishingprotection.googleapis.com does not have control environment configured.
ERROR: toplevel: Found single resource name "project" in GAPIC config that has no corresponding annotation
ERROR: toplevel: Unexpected exception: java.lang.NullPointerException
    at com.google.api.codegen.config.GapicProductConfig.createResourceNameConfigsFromAnnotationsAndGapicConfig(GapicProductConfig.java:817)
    at com.google.api.codegen.config.GapicProductConfig.create(GapicProductConfig.java:253)
    at com.google.api.codegen.gapic.GapicGeneratorApp.process(GapicGeneratorApp.java:212)
    at com.google.api.tools.framework.tools.GenericToolDriverBase.run(GenericToolDriverBase.java:90)
    at com.google.api.tools.framework.tools.ToolDriverBase.run(ToolDriverBase.java:73)
    at com.google.api.codegen.GeneratorMain.gapicGeneratorMain(GeneratorMain.java:338)
    at com.google.api.codegen.GeneratorMain.main(GeneratorMain.java:190)
artman> Traceback (most recent call last):
  File "/artman/artman/cli/main.py", line 72, in main
    engine.run()
  File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py", line 247, in run
    for _state in self.run_iter(timeout=timeout):
  File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter
    failure.Failure.reraise_if_any(er_failures)
  File "/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py", line 339, in reraise_if_any
    failures[0].reraise()
  File "/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py", line 346, in reraise
    six.reraise(*self._exc_info)
  File "/usr/local/lib/python3.5/dist-packages/six.py", line 693, in reraise
    raise value
  File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
    result = task.execute(**arguments)
  File "/artman/artman/tasks/gapic_tasks.py", line 146, in execute
    task_utils.gapic_gen_task(toolkit_path, [gapic_artifact] + args))
  File "/artman/artman/tasks/task_base.py", line 64, in exec_command
    raise e
  File "/artman/artman/tasks/task_base.py", line 56, in exec_command
    output = subprocess.check_output(args, stderr=subprocess.STDOUT)
  File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.5/subprocess.py", line 708, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['java', '-cp', '/toolkit/build/libs/gapic-generator-latest-fatjar.jar', 'com.google.api.codegen.GeneratorMain', 'LEGACY_GAPIC_AND_PACKAGE', '--descriptor_set=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/google-cloud-phishingprotection-v1beta1_updated_py_docs.desc', '--package_yaml2=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python_google-cloud-phishingprotection-v1beta1_package2.yaml', '--output=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/phishingprotection-v1beta1', '--language=python', '--service_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/phishingprotection/phishingprotection.yaml', '--gapic_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/phishingprotection/v1beta1/phishingprotection_gapic.yaml', '--dev_samples']' returned non-zero exit status 1
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 87, in <module>
    main()
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 79, in main
    spec.loader.exec_module(synth_module)  # type: ignore
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
  File "/tmpfs/src/git/autosynth/working_repo/phishingprotection/synth.py", line 38, in <module>
    generator_args=["--dev_samples"],
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py", line 50, in py_library
    return self._generate_code(service, version, "python", **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py", line 138, in _generate_code
    generator_args=generator_args,
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/artman.py", line 141, in run
    shell.run(cmd, cwd=root_dir)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run
    raise exc
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
    encoding="utf-8",
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 418, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--name', 'artman-docker', '--rm', '-i', '-e', 'HOST_USER_ID=1000', '-e', 'HOST_GROUP_ID=1000', '-e', 'RUNNING_IN_ARTMAN_DOCKER=True', '-v', '/home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis', '-v', '/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles', '-w', PosixPath('/home/kbuilder/.cache/synthtool/googleapis'), 'googleapis/artman:latest', '/bin/bash', '-c', "artman --local --config google/cloud/phishingprotection/artman_phishingprotection_v1beta1.yaml --generator-args='--dev_samples' generate python_gapic"]' returned non-zero exit status 32.
synthtool > Cleaned up 1 temporary directories.
synthtool > Wrote metadata to synth.metadata.
Synthesis failed
```

Google internal developers can see the full log [here](https://sponge/d9f7b3e4-0380-4b7f-ae58-dc06066daabf).
1.0
process
1
22,742
32,056,366,254
IssuesEvent
2023-09-24 05:49:35
subspace/status
https://api.github.com/repos/subspace/status
closed
🛑 Gemini-3f block explorer squid Processor service is down
status gemini-3f-block-explorer-squid-processor-service
In [`36994fc`](https://github.com/subspace/status/commit/36994fc98b99a80cabd122c01df81ce173f23532), Gemini-3f block explorer squid Processor service (https://squid.gemini-3f.subspace.network/processor-health) was **down**:
- HTTP code: 0
- Response time: 0 ms
1.0
process
1
248,993
26,870,972,579
IssuesEvent
2023-02-04 13:06:04
MatBenfield/news
https://api.github.com/repos/MatBenfield/news
closed
[SecurityWeek] Exploitation of Oracle E-Business Suite Vulnerability Starts After PoC Publication
SecurityWeek Stale
**Exploitation attempts targeting a critical-severity Oracle E-Business Suite vulnerability have been observed shortly after proof-of-concept (PoC) code was published.**

One of the major Oracle product lines, the E-Business Suite is a set of enterprise applications that help organizations automate processes such as supply chain management (SCM), enterprise resource planning (ERP), and customer relationship management (CRM).

Tracked as **CVE-2022-21587** (CVSS score of 9.8), the exploited flaw was identified in the Web Applications Desktop Integrator of Oracle’s enterprise product and was addressed as part of Oracle’s October 2022 Critical Patch Update.

According to [a NIST advisory](https://nvd.nist.gov/vuln/detail/CVE-2022-21587), unauthenticated attackers with network access via HTTP can easily exploit the security defect to compromise the Web Applications Desktop Integrator and take it over.

This week, CISA added CVE-2022-21587 to its Known Exploited Vulnerabilities (KEV) [catalog](https://www.cisa.gov/known-exploited-vulnerabilities-catalog), urging Oracle customers to apply the available patches as soon as possible.

The first exploitation attempts, however, were observed on January 21, Shadowserver warned last week.

“Since Jan 21st we are seeing exploitation attempts in our honeypot sensors for Oracle E-Business Suite CVE-2022-21587 (CVSS 9.8 RCE) shortly after a PoC was published,” Shadowserver [said](https://twitter.com/Shadowserver/status/1618249951322210304).

The PoC came from Vietnam-based cybersecurity firm Viettel Cyber Security, which on January 16 published [a detailed analysis](https://blog.viettelcybersecurity.com/cve-2022-21587-oracle-e-business-suite-unauth-rce/) of the vulnerability and potential exploitation venues.

According to [Shadowserver data](https://dashboard.shadowserver.org/statistics/honeypot/monitoring/vulnerability/?category=monitoring&statistic=unique_ips), the number of observed exploitation attempts is currently low. However, threat actors are known to target unpatched Oracle products, and the number of attacks may increase shortly.

This week, CISA also warned of observed exploitation of [CVE-2023-22952](https://nvd.nist.gov/vuln/detail/CVE-2023-22952), a high-severity remote code execution flaw in SugarCRM.

Impacting the EmailTemplates, [the vulnerability is described](https://support.sugarcrm.com/Resources/Security/sugarcrm-sa-2023-001/) as a missing input validation defect that allows an attacker to inject custom PHP code using crafted requests. Patches for this vulnerability were released on January 11, 2023.

In January, shortly after exploitation began, Censys reported seeing hundreds of SugarCRM servers being [hacked using CVE-2023-22952](https://www.securityweek.com/recently-disclosed-vulnerability-exploited-hack-hundreds-sugarcrm-servers/).

**Related:** [Exploited Control Web Panel Flaw Added to CISA ‘Must-Patch’ List](https://www.securityweek.com/exploited-control-web-panel-flaw-added-to-cisa-must-patch-list/)

**Related:** [CISA Says Two Old JasperReports Vulnerabilities Exploited in Attacks](https://www.securityweek.com/cisa-says-two-old-jasperreports-vulnerabilities-exploited-attacks/)

**Related:** [CISA Warns Veeam Backup & Replication Vulnerabilities Exploited in Attacks](https://www.securityweek.com/cisa-warns-veeam-backup-replication-vulnerabilities-exploited-attacks/)

The post [Exploitation of Oracle E-Business Suite Vulnerability Starts After PoC Publication](https://www.securityweek.com/exploitation-of-oracle-e-business-suite-vulnerability-starts-after-poc-publication/) appeared first on [SecurityWeek](https://www.securityweek.com). <https://www.securityweek.com/exploitation-of-oracle-e-business-suite-vulnerability-starts-after-poc-publication/>
True
non_process
exploitation of oracle e business suite vulnerability starts after poc publication exploitation attempts targeting a critical severity oracle e business suite vulnerability have been observed shortly after proof of concept poc code was published one of the major oracle product lines the e business suite is a set of enterprise applications that help organizations automate processes such as supply chain management scm enterprise resource planning erp and customer relationship management crm tracked as cve cvss score of the exploited flaw was identified in the web applications desktop integrator of oracle’s enterprise product and was addressed as part of oracle’s october critical patch update according to unauthenticated attackers with network access via http can easily exploit the security defect to compromise the web applications desktop integrator and take it over this week cisa added cve to its known exploited vulnerabilities kev urging oracle customers to apply the available patches as soon as possible the first exploitation attempts however were observed on january shadowserver warned last week “since jan we are seeing exploitation attempts in our honeypot sensors for oracle e business suite cve cvss rce shortly after a poc was published ” shadowserver the poc came from vietnam based cybersecurity firm viettel cyber security which on january published of the vulnerability and potential exploitation venues according to the number of observed exploitation attempts is currently low however threat actors are known to target unpatched oracle products and the number of attacks may increase shortly this week cisa also warned of observed exploitation of a high severity remote code execution flaw in sugarcrm impacting the emailtemplates as a missing input validation defect that allows an attacker to inject custom php code using crafted requests patches for this vulnerability were released on january in january shortly after exploitation began censys reported seeing hundreds of sugarcrm servers being related related related the post appeared first on
0
9,911
12,950,613,526
IssuesEvent
2020-07-19 13:52:35
cyfile/Matlab-miscellanies
https://api.github.com/repos/cyfile/Matlab-miscellanies
opened
用 ipcam 对接 ffmpeg 提供的桌面流
Image Processing
ipcam 是 MATLAB Support Package for IP Cameras (MATLAB Add-on附加功能)提供的一个命令 先在电脑端 下载ffmpeg 并添加到cmd 里. 然后用其抓取桌面 并按摄像头格式​​​推流​ matlab 命令里用 ipcam 接收
1.0
用 ipcam 对接 ffmpeg 提供的桌面流 - ipcam 是 MATLAB Support Package for IP Cameras (MATLAB Add-on附加功能)提供的一个命令 先在电脑端 下载ffmpeg 并添加到cmd 里. 然后用其抓取桌面 并按摄像头格式​​​推流​ matlab 命令里用 ipcam 接收
process
用 ipcam 对接 ffmpeg 提供的桌面流 ipcam 是 matlab support package for ip cameras (matlab add on附加功能)提供的一个命令 先在电脑端 下载ffmpeg 并添加到cmd 里 然后用其抓取桌面 并按摄像头格式​​​推流​ matlab 命令里用 ipcam 接收
1
367,748
25,760,137,333
IssuesEvent
2022-12-08 19:45:12
skeletonlabs/skeleton
https://api.github.com/repos/skeletonlabs/skeleton
closed
Docs: set CLI as the new SvelteKit onboarding default
documentation
### Link to the Page https://www.skeleton.dev/guides/install ### Describe the Issue Let's update and tailor the onboarding documentation to recommend the CLI by default. We'll want to clearly indicate it's in beta, however, we've had zero issues reported since it went live. It seems to be doing everything we need it to do. The challenge here will be balancing the onboarding user path. Essentially there's two options now: 1. SvelteKit users that use the CLI - they can skip straight to building their apps 2. SvelteKit/Vite/Astros that either bypass or can't use the CLI - they still need the written instruction Additionally, some of the information in the written guide is beneficial to CLI users - such as how to set or create a custom theme, how to override styles and use component props, and just generally know what the CLI is doing for them. ### Are you able to create a Pull Request with the fix? Yes
1.0
Docs: set CLI as the new SvelteKit onboarding default - ### Link to the Page https://www.skeleton.dev/guides/install ### Describe the Issue Let's update and tailor the onboarding documentation to recommend the CLI by default. We'll want to clearly indicate it's in beta, however, we've had zero issues reported since it went live. It seems to be doing everything we need it to do. The challenge here will be balancing the onboarding user path. Essentially there's two options now: 1. SvelteKit users that use the CLI - they can skip straight to building their apps 2. SvelteKit/Vite/Astros that either bypass or can't use the CLI - they still need the written instruction Additionally, some of the information in the written guide is beneficial to CLI users - such as how to set or create a custom theme, how to override styles and use component props, and just generally know what the CLI is doing for them. ### Are you able to create a Pull Request with the fix? Yes
non_process
docs set cli as the new sveltekit onboarding default link to the page describe the issue let s update and tailor the onboarding documentation to recommend the cli by default we ll want to clearly indicate it s in beta however we ve had zero issues reported since it went live it seems to be doing everything we need it to do the challenge here will be balancing the onboarding user path essentially there s two options now sveltekit users that use the cli they can skip straight to building their apps sveltekit vite astros that either bypass or can t use the cli they still need the written instruction additionally some of the information in the written guide is beneficial to cli users such as how to set or create a custom theme how to override styles and use component props and just generally know what the cli is doing for them are you able to create a pull request with the fix yes
0
137,973
11,171,890,361
IssuesEvent
2019-12-29 00:04:24
HRSEVESOSK/cide-ionic-app
https://api.github.com/repos/HRSEVESOSK/cide-ionic-app
opened
ROLE_CIDE_COORDINATOR Upload of documents
testing
Only one document can be uploaded in “Zapisnik”. It was required at least three of them on meeting on 29th of May 2019. ![obrázok](https://user-images.githubusercontent.com/6195936/71550655-0c0adc80-29d6-11ea-9532-7f8bfa277b3c.png) Note 14.6.2019. Information required if three documents can be uploaded.
1.0
ROLE_CIDE_COORDINATOR Upload of documents - Only one document can be uploaded in “Zapisnik”. It was required at least three of them on meeting on 29th of May 2019. ![obrázok](https://user-images.githubusercontent.com/6195936/71550655-0c0adc80-29d6-11ea-9532-7f8bfa277b3c.png) Note 14.6.2019. Information required if three documents can be uploaded.
non_process
role cide coordinator upload of documents only one document can be uploaded in “zapisnik” it was required at least three of them on meeting on of may note information required if three documents can be uploaded
0
1,484
4,058,960,195
IssuesEvent
2016-05-25 07:42:07
e-government-ua/iBP
https://api.github.com/repos/e-government-ua/iBP
closed
Нетішин Хмельницька обл - Дозвіл на розробку проекту землеустрою щодо відведення земельної ділянки
In process of testing in work
Опрацьовувати їх буде та ж людина - начальник ЦНАПу Кушта Галина Галина Кушта - начальник ЦНАП (їхня робоча пошта) cnap_netishyn@ukr.net neteshin_user1 - логин и пароль Олена Матросова (менеджер від iGov) olena.boichuk@gmail.com neteshin_user2 - логин и пароль Руслан Рудомський (менеджер від iGov) nebajduzhyj@gmail.com neteshin_user3 - логин и пароль
1.0
Нетішин Хмельницька обл - Дозвіл на розробку проекту землеустрою щодо відведення земельної ділянки - Опрацьовувати їх буде та ж людина - начальник ЦНАПу Кушта Галина Галина Кушта - начальник ЦНАП (їхня робоча пошта) cnap_netishyn@ukr.net neteshin_user1 - логин и пароль Олена Матросова (менеджер від iGov) olena.boichuk@gmail.com neteshin_user2 - логин и пароль Руслан Рудомський (менеджер від iGov) nebajduzhyj@gmail.com neteshin_user3 - логин и пароль
process
нетішин хмельницька обл дозвіл на розробку проекту землеустрою щодо відведення земельної ділянки опрацьовувати їх буде та ж людина начальник цнапу кушта галина галина кушта начальник цнап їхня робоча пошта cnap netishyn ukr net neteshin логин и пароль олена матросова менеджер від igov olena boichuk gmail com neteshin логин и пароль руслан рудомський менеджер від igov nebajduzhyj gmail com neteshin логин и пароль
1
742,979
25,881,432,103
IssuesEvent
2022-12-14 11:32:48
thoth-station/opendatahub-cnbi
https://api.github.com/repos/thoth-station/opendatahub-cnbi
closed
Make the deployment compatible with ODH and ArgoCD for op1st / OSC cluster
kind/feature sig/devsecops thoth/group-programming wg/cre priority/critical-urgent
## Problem statement <!-- Is your feature request related to a problem? Please provide a clear and concise description of what the problem is. Ex. I'm always frustrated when [...] This should be a user story! --> Continuation of https://github.com/open-services-group/scrum/issues/37 As a BYON/CNBi developer, I want ArgoCD to manage all the required components in the dev environment so that when I contribute changes they are automatically applied to the environment and become available for testing, demos, etc ## Acceptance criteria The following components should be deployed by ArgoCD to the `odh-cl1` cluster, in the `opf-jupyterhub-stage namespace`: - [x] the BYON development version of the ODH dashboard - [x] all the tekton pipelines and tasks required: https://github.com/thoth-station/opendatahub-cnbi/issues/10 - [x] the CNBi controller: https://github.com/thoth-station/meteor-operator/issues/88 Also: - [x] a process is in place to make sure the components are updated when devs implement changes to them - [x] documentation for the CD environment is available and reflects the process and components mentioned above
1.0
Make the deployment compatible with ODH and ArgoCD for op1st / OSC cluster - ## Problem statement <!-- Is your feature request related to a problem? Please provide a clear and concise description of what the problem is. Ex. I'm always frustrated when [...] This should be a user story! --> Continuation of https://github.com/open-services-group/scrum/issues/37 As a BYON/CNBi developer, I want ArgoCD to manage all the required components in the dev environment so that when I contribute changes they are automatically applied to the environment and become available for testing, demos, etc ## Acceptance criteria The following components should be deployed by ArgoCD to the `odh-cl1` cluster, in the `opf-jupyterhub-stage namespace`: - [x] the BYON development version of the ODH dashboard - [x] all the tekton pipelines and tasks required: https://github.com/thoth-station/opendatahub-cnbi/issues/10 - [x] the CNBi controller: https://github.com/thoth-station/meteor-operator/issues/88 Also: - [x] a process is in place to make sure the components are updated when devs implement changes to them - [x] documentation for the CD environment is available and reflects the process and components mentioned above
non_process
make the deployment compatible with odh and argocd for osc cluster problem statement is your feature request related to a problem please provide a clear and concise description of what the problem is ex i m always frustrated when this should be a user story continuation of as a byon cnbi developer i want argocd to manage all the required components in the dev environment so that when i contribute changes they are automatically applied to the environment and become available for testing demos etc acceptance criteria the following components should be deployed by argocd to the odh cluster in the opf jupyterhub stage namespace the byon development version of the odh dashboard all the tekton pipelines and tasks required the cnbi controller also a process is in place to make sure the components are updated when devs implement changes to them documentation for the cd environment is available and reflects the process and components mentioned above
0
508,329
14,698,523,277
IssuesEvent
2021-01-04 06:39:21
teamforus/general
https://api.github.com/repos/teamforus/general
closed
Provider signup: add back button in beginning (after info steps)
Approval: Granted Priority: Must have Scope: Small Status: Planned project-100
Learn more about change requests here: https://bit.ly/39CWeEE ### Requested by: Jamal ### Change description As a provider I would like to go back to read the information again after clicking next on the info pages. **Sidenote:** Being "stuck" on this page if you quickly skipped over the information but are now curious what it said is a pretty bad UX, especially since you can also not use the browser back button or reload. There seems to be no way (even clearing cookies) except for opening a new (private) tab 😓 <img width="1127" alt="Screenshot 2020-12-05 at 16 04 52" src="https://user-images.githubusercontent.com/30194799/101246445-a78eec00-3713-11eb-8a47-cfb324d397e7.png">
1.0
Provider signup: add back button in beginning (after info steps) - Learn more about change requests here: https://bit.ly/39CWeEE ### Requested by: Jamal ### Change description As a provider I would like to go back to read the information again after clicking next on the info pages. **Sidenote:** Being "stuck" on this page if you quickly skipped over the information but are now curious what it said is a pretty bad UX, especially since you can also not use the browser back button or reload. There seems to be no way (even clearing cookies) except for opening a new (private) tab 😓 <img width="1127" alt="Screenshot 2020-12-05 at 16 04 52" src="https://user-images.githubusercontent.com/30194799/101246445-a78eec00-3713-11eb-8a47-cfb324d397e7.png">
non_process
provider signup add back button in beginning after info steps learn more about change requests here requested by jamal change description as a provider i would like to go back to read the information again after clicking next on the info pages sidenote being stuck on this page if you quickly skipped over the information but are now curious what it said is a pretty bad ux especially since you can also not use the browser back button or reload there seems to be no way even clearing cookies except for opening a new private tab 😓 img width alt screenshot at src
0
12,878
15,268,227,031
IssuesEvent
2021-02-22 11:06:27
alphagov/govuk-design-system
https://api.github.com/repos/alphagov/govuk-design-system
opened
Send contributions to working group for February / March review
process 🕔 days
## What Send updated link styles and hover states to working group for review. ## Why To get feedback on iterations. ## Who needs to know about this Designer, Community Manager ## Done when - [ ] Contribution prepared - [ ] Contribution sent to working group - [ ] Folder created for November review and spreadsheet/form moved
1.0
Send contributions to working group for February / March review - ## What Send updated link styles and hover states to working group for review. ## Why To get feedback on iterations. ## Who needs to know about this Designer, Community Manager ## Done when - [ ] Contribution prepared - [ ] Contribution sent to working group - [ ] Folder created for November review and spreadsheet/form moved
process
send contributions to working group for february march review what send updated link styles and hover states to working group for review why to get feedback on iterations who needs to know about this designer community manager done when contribution prepared contribution sent to working group folder created for november review and spreadsheet form moved
1
2,083
4,912,478,581
IssuesEvent
2016-11-23 09:14:08
CERNDocumentServer/cds
https://api.github.com/repos/CERNDocumentServer/cds
closed
webhooks: faster transcoded files commit
avc_processing enhancement in progress
The `video_transcode` task generates the transcoded videos, but they are commited in the database only when the task completes. Therefore, the UI cannot get hold of them until then, so we need to commit in the middle of the task.
1.0
webhooks: faster transcoded files commit - The `video_transcode` task generates the transcoded videos, but they are commited in the database only when the task completes. Therefore, the UI cannot get hold of them until then, so we need to commit in the middle of the task.
process
webhooks faster transcoded files commit the video transcode task generates the transcoded videos but they are commited in the database only when the task completes therefore the ui cannot get hold of them until then so we need to commit in the middle of the task
1
12,761
15,115,943,489
IssuesEvent
2021-02-09 05:43:14
bitia-ru/gekkon
https://api.github.com/repos/bitia-ru/gekkon
opened
Загрузка фотографии по клику на плюс (когда фотки нет)
enhancement frontend-desktop frontend-mobile processed
Про то, касается ли этот тикет мобильника — проверить.
1.0
Загрузка фотографии по клику на плюс (когда фотки нет) - Про то, касается ли этот тикет мобильника — проверить.
process
загрузка фотографии по клику на плюс когда фотки нет про то касается ли этот тикет мобильника — проверить
1
2,243
5,088,645,007
IssuesEvent
2016-12-31 23:55:00
sw4j-org/tool-jpa-processor
https://api.github.com/repos/sw4j-org/tool-jpa-processor
opened
Handle @MapKeyColumn Annotation
annotation processor task
Handle the `@MapKeyColumn` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.33 MapKeyColumn Annotation
1.0
Handle @MapKeyColumn Annotation - Handle the `@MapKeyColumn` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.33 MapKeyColumn Annotation
process
handle mapkeycolumn annotation handle the mapkeycolumn annotation for a property or field see mapkeycolumn annotation
1
131,997
12,496,276,283
IssuesEvent
2020-06-01 14:35:07
go-bdd/gobdd
https://api.github.com/repos/go-bdd/gobdd
closed
Add a short gif of using the lib to readme
documentation good first issue
The goal is to add a short gif where we show how the library works. It's a good developer-experience and more developer-friendly :)
1.0
Add a short gif of using the lib to readme - The goal is to add a short gif where we show how the library works. It's a good developer-experience and more developer-friendly :)
non_process
add a short gif of using the lib to readme the goal is to add a short gif where we show how the library works it s a good developer experience and more developer friendly
0
4,171
7,107,919,811
IssuesEvent
2018-01-16 21:46:05
18F/product-guide
https://api.github.com/repos/18F/product-guide
closed
SECTION UPDATE (Project Comms) - Product storytellers
help wanted process change
Product storytellers are something we may be implementing to handle project communications. It has not been fully socialized yet, but if we move forward with it, we will want to add to and perhaps edit the Project Communications section of the guide to fold this in.
1.0
SECTION UPDATE (Project Comms) - Product storytellers - Product storytellers are something we may be implementing to handle project communications. It has not been fully socialized yet, but if we move forward with it, we will want to add to and perhaps edit the Project Communications section of the guide to fold this in.
process
section update project comms product storytellers product storytellers are something we may be implementing to handle project communications it has not been fully socialized yet but if we move forward with it we will want to add to and perhaps edit the project communications section of the guide to fold this in
1
12,834
15,214,381,342
IssuesEvent
2021-02-17 13:08:19
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Some exported types no longer available in Prisma namespace in 2.17
process/candidate team/client tech/typescript
<!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description I use Prisma generated types such as `CreateManyInput`, `UpdateManyWithWhereWithout` etc for validation and type safety. Earlier these types were available in Prisma namespace and were accessible using `Prisma.CreateManyInput` and so on. Now after updating to 2.17, I am not getting these types. Though I am able to import them directly without Prisma namespace. My concern is that what's the longer term approach being taken by Prisma since changing these post every update is really difficult. Long term, will these types be available under namespace or without or will no longer be made available? If they won't be available, what should be the best way to validate the variables since these auto generated types offer a lot of ease. <!-- A clear and concise description of what the bug is. --> ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> ## Expected behavior Import shouldn't give an error <!-- A clear and concise description of what you expected to happen. --> ## Prisma information <!-- Your Prisma schema, Prisma Client queries, ... Do not include your database credentials when sharing your Prisma schema! --> ## Environment & setup <!-- In which environment does the problem occur --> - OS: MacOS Catalina 10.15.1, Running inside docker `node:14.15.4-alpine`<!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> - Database: PostgreSQL <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> - Node.js version: v14.15.4 <!--[Run `node -v` to see your Node.js version]--> - Prisma version: <!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]--> ``` prisma : 2.17.0 @prisma/client : 2.17.0 Current platform : linux-musl Query Engine : query-engine 3c463ebd78b1d21d8fdacdd27899e280cf686223 (at node_modules/@prisma/engines/query-engine-linux-musl) Migration Engine : migration-engine-cli 3c463ebd78b1d21d8fdacdd27899e280cf686223 (at node_modules/@prisma/engines/migration-engine-linux-musl) Introspection Engine : introspection-core 3c463ebd78b1d21d8fdacdd27899e280cf686223 (at node_modules/@prisma/engines/introspection-engine-linux-musl) Format Binary : prisma-fmt 3c463ebd78b1d21d8fdacdd27899e280cf686223 (at node_modules/@prisma/engines/prisma-fmt-linux-musl) Studio : 0.353.0 Preview Features : createMany ```
1.0
Some exported types no longer available in Prisma namespace in 2.17 - <!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description I use Prisma generated types such as `CreateManyInput`, `UpdateManyWithWhereWithout` etc for validation and type safety. Earlier these types were available in Prisma namespace and were accessible using `Prisma.CreateManyInput` and so on. Now after updating to 2.17, I am not getting these types. Though I am able to import them directly without Prisma namespace. My concern is that what's the longer term approach being taken by Prisma since changing these post every update is really difficult. Long term, will these types be available under namespace or without or will no longer be made available? If they won't be available, what should be the best way to validate the variables since these auto generated types offer a lot of ease. <!-- A clear and concise description of what the bug is. --> ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> ## Expected behavior Import shouldn't give an error <!-- A clear and concise description of what you expected to happen. --> ## Prisma information <!-- Your Prisma schema, Prisma Client queries, ... Do not include your database credentials when sharing your Prisma schema! --> ## Environment & setup <!-- In which environment does the problem occur --> - OS: MacOS Catalina 10.15.1, Running inside docker `node:14.15.4-alpine`<!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> - Database: PostgreSQL <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> - Node.js version: v14.15.4 <!--[Run `node -v` to see your Node.js version]--> - Prisma version: <!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]--> ``` prisma : 2.17.0 @prisma/client : 2.17.0 Current platform : linux-musl Query Engine : query-engine 3c463ebd78b1d21d8fdacdd27899e280cf686223 (at node_modules/@prisma/engines/query-engine-linux-musl) Migration Engine : migration-engine-cli 3c463ebd78b1d21d8fdacdd27899e280cf686223 (at node_modules/@prisma/engines/migration-engine-linux-musl) Introspection Engine : introspection-core 3c463ebd78b1d21d8fdacdd27899e280cf686223 (at node_modules/@prisma/engines/introspection-engine-linux-musl) Format Binary : prisma-fmt 3c463ebd78b1d21d8fdacdd27899e280cf686223 (at node_modules/@prisma/engines/prisma-fmt-linux-musl) Studio : 0.353.0 Preview Features : createMany ```
process
some exported types no longer available in prisma namespace in thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description i use prisma generated types such as createmanyinput updatemanywithwherewithout etc for validation and type safety earlier these types were available in prisma namespace and were accessible using prisma createmanyinput and so on now after updating to i am not getting these types though i am able to import them directly without prisma namespace my concern is that what s the longer term approach being taken by prisma since changing these post every update is really difficult long term will these types be available under namespace or without or will no longer be made available if they won t be available what should be the best way to validate the variables since these auto generated types offer a lot of ease how to reproduce steps to reproduce the behavior go to change run see error expected behavior import shouldn t give an error prisma information your prisma schema prisma client queries do not include your database credentials when sharing your prisma schema environment setup os macos catalina running inside docker node alpine database postgresql node js version prisma version prisma prisma client current platform linux musl query engine query engine at node modules prisma engines query engine linux musl migration engine migration engine cli at node modules prisma engines migration engine linux musl introspection engine introspection core at node modules prisma engines introspection engine linux musl format binary prisma fmt at node modules prisma engines prisma fmt linux musl studio preview features createmany
1
14,335
17,366,342,055
IssuesEvent
2021-07-30 07:51:55
tokio-rs/tokio
https://api.github.com/repos/tokio-rs/tokio
closed
How can I get signal from ExitStatus
A-tokio C-question M-process
I want to get the **signal** from `ExitStatus`, like `std::process::ExitStatus` provided.
1.0
How can I get signal from ExitStatus - I want to get the **signal** from `ExitStatus`, like `std::process::ExitStatus` provided.
process
how can i get signal from exitstatus i want to get the signal from exitstatus like std process exitstatus provided
1
9,256
12,291,991,262
IssuesEvent
2020-05-10 12:44:04
Ultimate-Hosts-Blacklist/whitelist
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
opened
[FALSE-POSITIVE?]
whitelisting process
**Domains or links** pochta.ru **More Information** I use hosts file, and when I tried to check delivery status on pochta.ru, name was not resolved. **Have you requested removal from other sources?** No **Additional context** This is a Russian Post web site.
1.0
[FALSE-POSITIVE?] - **Domains or links** pochta.ru **More Information** I use hosts file, and when I tried to check delivery status on pochta.ru, name was not resolved. **Have you requested removal from other sources?** No **Additional context** This is a Russian Post web site.
process
domains or links pochta ru more information i use hosts file and when i tried to check delivery status on pochta ru name was not resolved have you requested removal from other sources no additional context this is a russian post web site
1
268
2,699,134,537
IssuesEvent
2015-04-03 14:41:06
appsgate2015/appsgate
https://api.github.com/repos/appsgate2015/appsgate
opened
Cohérence des NotificationMsg
appsgate-server P1 PROCESSING
Pour certains équipements qui surchargent CoreNotficationMsg ou qui implémentent NotificationMsg (CoreObjectSpec). L'état renvoyé par le JSON (objectId, varName, value, oldValue) -> utilisé par le client n'est pas cohérent avec les méthode correspondantes de l'objet java (respectivement getSource, getVarName, getNewValue, getOldValue) -> utilisé par le serveur, notamment par EHMI, TraceMan, EUDEInterpreter... !!! A refaire au cas par cas !!! !!! Sauf besoin particulier, lors de l'implémentation d'un nouveau message, utiliser directement la classe de référence CoreNotificationMsg !!!
1.0
Cohérence des NotificationMsg - Pour certains équipements qui surchargent CoreNotficationMsg ou qui implémentent NotificationMsg (CoreObjectSpec). L'état renvoyé par le JSON (objectId, varName, value, oldValue) -> utilisé par le client n'est pas cohérent avec les méthode correspondantes de l'objet java (respectivement getSource, getVarName, getNewValue, getOldValue) -> utilisé par le serveur, notamment par EHMI, TraceMan, EUDEInterpreter... !!! A refaire au cas par cas !!! !!! Sauf besoin particulier, lors de l'implémentation d'un nouveau message, utiliser directement la classe de référence CoreNotificationMsg !!!
process
cohérence des notificationmsg pour certains équipements qui surchargent corenotficationmsg ou qui implémentent notificationmsg coreobjectspec l état renvoyé par le json objectid varname value oldvalue utilisé par le client n est pas cohérent avec les méthode correspondantes de l objet java respectivement getsource getvarname getnewvalue getoldvalue utilisé par le serveur notamment par ehmi traceman eudeinterpreter a refaire au cas par cas sauf besoin particulier lors de l implémentation d un nouveau message utiliser directement la classe de référence corenotificationmsg
1
348,344
10,441,307,581
IssuesEvent
2019-09-18 10:31:39
gardener/etcd-backup-restore
https://api.github.com/repos/gardener/etcd-backup-restore
opened
Optimise the graceful deletion time in case of active snapshot upload
area/performance component/etcd-backup-restore exp/intermediate kind/enhancement platform/all priority/normal status/accepted
**Motivation (Why is this needed?):** Currently on receiving stop signal, snapshotter wait untill current snapshot upload is finished. Specifically in case of full snapshot upload it takes too much time end the etcd-backup-restore process gracefully. Same with processing initialisation request. Initialisation request handler waits until snapshotter is stopped i.e snapshot upload is finished. This result in addition time before beginning of initialization.
1.0
Optimise the graceful deletion time in case of active snapshot upload - **Motivation (Why is this needed?):** Currently on receiving stop signal, snapshotter wait untill current snapshot upload is finished. Specifically in case of full snapshot upload it takes too much time end the etcd-backup-restore process gracefully. Same with processing initialisation request. Initialisation request handler waits until snapshotter is stopped i.e snapshot upload is finished. This result in addition time before beginning of initialization.
non_process
optimise the graceful deletion time in case of active snapshot upload motivation why is this needed currently on receiving stop signal snapshotter wait untill current snapshot upload is finished specifically in case of full snapshot upload it takes too much time end the etcd backup restore process gracefully same with processing initialisation request initialisation request handler waits until snapshotter is stopped i e snapshot upload is finished this result in addition time before beginning of initialization
0
7,996
11,188,128,428
IssuesEvent
2020-01-02 02:55:36
52ABP/Documents
https://api.github.com/repos/52ABP/Documents
opened
ASP.NET Core In-Process (InProcess) Hosting | 52ABP Official Technical Documentation and Blog
ASP.NET Core In-Process (InProcess) Hosting | 52ABP Official Technical Documentation and Blog Gitalk
https://docs.52abp.com/mvc/6-In-ProcessHosting.html A learning site built to take complete beginners from getting started to hands-on practice, covering: ASP.NET Core, Angular, .NET Core, 52ABP, and other enterprise-grade solutions
1.0
ASP.NET Core In-Process (InProcess) Hosting | 52ABP Official Technical Documentation and Blog - https://docs.52abp.com/mvc/6-In-ProcessHosting.html A learning site built to take complete beginners from getting started to hands-on practice, covering: ASP.NET Core, Angular, .NET Core, 52ABP, and other enterprise-grade solutions
process
asp net core in process inprocess hosting a learning site built to take complete beginners from getting started to hands on practice covering asp net core angular net core and other enterprise grade solutions
1
44,849
7,132,881,400
IssuesEvent
2018-01-22 15:52:33
capistrano/bundler
https://api.github.com/repos/capistrano/bundler
closed
Explain that .bundle should be added to linked_dirs
documentation you can help!
As explained in #95, the `bundle check` behavior used by this gem depends on `.bundle/config` from previous deployments. If this directory is not linked, `bundle check` will be ineffective and a slower `bundle install` will be used. This unnecessarily slows down deployments. We should add this explanation to the README.
1.0
Explain that .bundle should be added to linked_dirs - As explained in #95, the `bundle check` behavior used by this gem depends on `.bundle/config` from previous deployments. If this directory is not linked, `bundle check` will be ineffective and a slower `bundle install` will be used. This unnecessarily slows down deployments. We should add this explanation to the README.
non_process
explain that bundle should be added to linked dirs as explained in the bundle check behavior used by this gem depends on bundle config from previous deployments if this directory is not linked bundle check will be ineffective and a slower bundle install will be used this unnecessarily slows down deployments we should add this explanation to the readme
0
814,262
30,497,522,833
IssuesEvent
2023-07-18 11:55:57
Polarts/feel-tracker
https://api.github.com/repos/Polarts/feel-tracker
closed
Day Selector
high priority
# Requirements: The day selector consists of two components: 1. Day Item - a single day item that represents a calendary date the user can pick. 2. Day Carousel - a carousel displaying 7 days at a time. - [ ] Add a `DayItem` component to `src/components/day`. The component should have the following states: - [ ] Empty state - displays the day of week and the date number in the month. ![image](https://user-images.githubusercontent.com/30803298/230161222-48a50d22-6a78-4b82-813f-6c0c97cda60a.png) - [ ] Active state - displays the number of activities in that day as a badge above, and the average day rating as a border below. ![image](https://user-images.githubusercontent.com/30803298/230161585-f227c600-0bfd-410c-a41a-3539868b807a.png) - [ ] Selected state - the day item's background turns into the primary gradient, it no longer displays the rating border. ![image](https://user-images.githubusercontent.com/30803298/230161770-fd6c0c4e-7a4a-4657-b58c-e9d591ba0ffe.png) - [ ] Add a `DayCarousel` component to `src/components/day`: - [ ] The component displays a list of `DayItem` components each representing a calendary day of the week. - [ ] Days that haven't passed yet should be disabled - "empty" state with lower opacity. - [ ] The element should be scrollable by swiping. I recommend using CSS scroll snapping. - [ ] "<" and ">" buttons should scroll exactly 7 days at a time. The user shouldn't be allowed to scroll into the future. - [ ] Clicking on a day item should set a local state. There can only be one selected day at a time. # Design: ![image](https://user-images.githubusercontent.com/30803298/230162399-190fc394-0070-4b4d-9356-d59e8e0484de.png)
1.0
Day Selector - # Requirements: The day selector consists of two components: 1. Day Item - a single day item that represents a calendary date the user can pick. 2. Day Carousel - a carousel displaying 7 days at a time. - [ ] Add a `DayItem` component to `src/components/day`. The component should have the following states: - [ ] Empty state - displays the day of week and the date number in the month. ![image](https://user-images.githubusercontent.com/30803298/230161222-48a50d22-6a78-4b82-813f-6c0c97cda60a.png) - [ ] Active state - displays the number of activities in that day as a badge above, and the average day rating as a border below. ![image](https://user-images.githubusercontent.com/30803298/230161585-f227c600-0bfd-410c-a41a-3539868b807a.png) - [ ] Selected state - the day item's background turns into the primary gradient, it no longer displays the rating border. ![image](https://user-images.githubusercontent.com/30803298/230161770-fd6c0c4e-7a4a-4657-b58c-e9d591ba0ffe.png) - [ ] Add a `DayCarousel` component to `src/components/day`: - [ ] The component displays a list of `DayItem` components each representing a calendary day of the week. - [ ] Days that haven't passed yet should be disabled - "empty" state with lower opacity. - [ ] The element should be scrollable by swiping. I recommend using CSS scroll snapping. - [ ] "<" and ">" buttons should scroll exactly 7 days at a time. The user shouldn't be allowed to scroll into the future. - [ ] Clicking on a day item should set a local state. There can only be one selected day at a time. # Design: ![image](https://user-images.githubusercontent.com/30803298/230162399-190fc394-0070-4b4d-9356-d59e8e0484de.png)
non_process
day selector requirements the day selector consists of two components day item a single day item that represents a calendary date the user can pick day carousel a carousel displaying days at a time add a dayitem component to src components day the component should have the following states empty state displays the day of week and the date number in the month active state displays the number of activities in that day as a badge above and the average day rating as a border below selected state the day item s background turns into the primary gradient it no longer displays the rating border add a daycarousel component to src components day the component displays a list of dayitem components each representing a calendary day of the week days that haven t passed yet should be disabled empty state with lower opacity the element should be scrollable by swiping i recommend using css scroll snapping buttons should scroll exactly days at a time the user shouldn t be allowed to scroll into the future clicking on a day item should set a local state there can only be one selected day at a time design
0
66,210
3,251,173,214
IssuesEvent
2015-10-19 08:18:14
cs2103aug2015-w14-3j/main
https://api.github.com/repos/cs2103aug2015-w14-3j/main
closed
As a user, I can view all my schedule in a planner/calendar
priority.low type.story
so that I can find empty slots easily
1.0
As a user, I can view all my schedule in a planner/calendar - so that I can find empty slots easily
non_process
as a user i can view all my schedule in a planner calendar so that i can find empty slots easily
0
54,850
13,933,381,176
IssuesEvent
2020-10-22 08:37:38
jinuem/PDFJsAnnotations
https://api.github.com/repos/jinuem/PDFJsAnnotations
opened
CVE-2018-14042 (Medium) detected in bootstrap-4.0.0.min.js
security vulnerability
## CVE-2018-14042 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-4.0.0.min.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0/js/bootstrap.min.js</a></p> <p>Path to dependency file: PDFJsAnnotations/index.html</p> <p>Path to vulnerable library: PDFJsAnnotations/index.html</p> <p> Dependency Hierarchy: - :x: **bootstrap-4.0.0.min.js** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip. <p>Publish Date: 2018-07-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042>CVE-2018-14042</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p> <p>Release Date: 2018-07-13</p> <p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-14042 (Medium) detected in bootstrap-4.0.0.min.js - ## CVE-2018-14042 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-4.0.0.min.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0/js/bootstrap.min.js</a></p> <p>Path to dependency file: PDFJsAnnotations/index.html</p> <p>Path to vulnerable library: PDFJsAnnotations/index.html</p> <p> Dependency Hierarchy: - :x: **bootstrap-4.0.0.min.js** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip. <p>Publish Date: 2018-07-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042>CVE-2018-14042</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p> <p>Release Date: 2018-07-13</p> <p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file pdfjsannotations index html path to vulnerable library pdfjsannotations index html dependency hierarchy x bootstrap min js vulnerable library vulnerability details in bootstrap before xss is possible in the data container property of tooltip publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org webjars npm bootstrap org webjars bootstrap step up your open source security game with whitesource
0
12,149
14,741,397,920
IssuesEvent
2021-01-07 10:33:33
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Edit Account - Rebillibles and Vendors permission bug
anc-process anp-1 ant-bug ant-enhancement
In GitLab by @kdjstudios on Jan 16, 2019, 10:38 **Submitted by:** @amishra **Helpdesk:** NA **Server:** All **Client/Site:** ALL **Account:** ALL **Issue:** While tesing we found another issue in rebillable section under account **Edit Page.** - suppose a user has security level "NCMS3" or "NCSM5". - goto account edit page. - inside Rebillables secton click on "add vendor servise button." - choose on Vendor from dropdown - system will throw an error/alert message because the user is not "o6"/"o5"/ "manager"/"admin" . In comment http://gitlab.aavaz.biz/AnswerNet/SABilling/issues/1267#note_32262 we did not include this. **This could impact rebillable report.** This is fixed on "New Account Page" but need to be fixed on "Account edit page" Estimate: | Size | Lower Time Bound | Variability | Upper Time Bound | Testing | Time for Testing | Time for Dev & Testing | |------|------------------|-------------|------------------|---------|------------------|------------------------| | R-Small | 2 | Concepts and Implementation Well Understood | 2.5 | Low Impact | 0.5 | 3 | Thanks
1.0
Edit Account - Rebillibles and Vendors permission bug - In GitLab by @kdjstudios on Jan 16, 2019, 10:38 **Submitted by:** @amishra **Helpdesk:** NA **Server:** All **Client/Site:** ALL **Account:** ALL **Issue:** While tesing we found another issue in rebillable section under account **Edit Page.** - suppose a user has security level "NCMS3" or "NCSM5". - goto account edit page. - inside Rebillables secton click on "add vendor servise button." - choose on Vendor from dropdown - system will throw an error/alert message because the user is not "o6"/"o5"/ "manager"/"admin" . In comment http://gitlab.aavaz.biz/AnswerNet/SABilling/issues/1267#note_32262 we did not include this. **This could impact rebillable report.** This is fixed on "New Account Page" but need to be fixed on "Account edit page" Estimate: | Size | Lower Time Bound | Variability | Upper Time Bound | Testing | Time for Testing | Time for Dev & Testing | |------|------------------|-------------|------------------|---------|------------------|------------------------| | R-Small | 2 | Concepts and Implementation Well Understood | 2.5 | Low Impact | 0.5 | 3 | Thanks
process
edit account rebillibles and vendors permission bug in gitlab by kdjstudios on jan submitted by amishra helpdesk na server all client site all account all issue while tesing we found another issue in rebillable section under account edit page suppose a user has security level or goto account edit page inside rebillables secton click on add vendor servise button choose on vendor from dropdown system will throw an error alert message because the user is not manager admin in comment we did not include this this could impact rebillable report this is fixed on new account page but need to be fixed on account edit page estimate size lower time bound variability upper time bound testing time for testing time for dev testing r small concepts and implementation well understood low impact thanks
1
3,850
6,808,544,516
IssuesEvent
2017-11-04 04:22:31
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
reopened
--dollar not correct for getTokenBal.
status-inprocess tools-getTokenBal type-bug
getTokenBal needs a price list for the wei / token conversion to dollars. This is not correct currently because it reports dollars as a converstion at the eth/dollar rate. Also-- dollars should be swapped out of 'fiat' and the user should be able to specify the price source in any denomination they want.
1.0
--dollar not correct for getTokenBal. - getTokenBal needs a price list for the wei / token conversion to dollars. This is not correct currently because it reports dollars as a converstion at the eth/dollar rate. Also-- dollars should be swapped out of 'fiat' and the user should be able to specify the price source in any denomination they want.
process
dollar not correct for gettokenbal gettokenbal needs a price list for the wei token conversion to dollars this is not correct currently because it reports dollars as a converstion at the eth dollar rate also dollars should be swapped out of fiat and the user should be able to specify the price source in any denomination they want
1
13,015
15,371,038,267
IssuesEvent
2021-03-02 09:32:25
googleapis/google-cloud-dotnet
https://api.github.com/repos/googleapis/google-cloud-dotnet
closed
Diff detection PR step is failing
priority: p1 type: process
This started today - I wonder whether it's because the "ubuntu-latest" GitHub action environment has updated to 20.04. Will try updating the version of LibGit2Sharp we use.
1.0
Diff detection PR step is failing - This started today - I wonder whether it's because the "ubuntu-latest" GitHub action environment has updated to 20.04. Will try updating the version of LibGit2Sharp we use.
process
diff detection pr step is failing this started today i wonder whether it s because the ubuntu latest github action environment has updated to will try updating the version of we use
1
11,943
14,707,933,025
IssuesEvent
2021-01-04 22:32:12
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Batch processing "Select files..." in Autofill raise python error with QgsProcessingParameterFile
Bug Processing
<!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> ``` AttributeError: 'QgsProcessingParameterFile' object has no attribute 'createFileFilter' Traceback (most recent call last): File "C:/OSGEO4~1/apps/qgis/./python/plugins\processing\gui\BatchPanel.py", line 241, in showFileSelectionDialog self, self.tr('Select Files'), path, self.parameterDefinition.createFileFilter() AttributeError: 'QgsProcessingParameterFile' object has no attribute 'createFileFilter' ``` QgsProcessingParameterFile does not inherite from QgsFileFilterGenerator according to [this](https://qgis.org/api/classQgsProcessingParameterFile.html) As I setup the type to File, I would except the autofill feature to work properly with a file filter set to the file filter choosen in the parameter definition. Bug ? Missing feature ? Oversight ? **How to Reproduce** <!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome --> 1. Create a new model 2. Add a "File/Forder" named INPUT for example ![image](https://user-images.githubusercontent.com/39594821/102792156-67cd3300-43a8-11eb-8719-df46ea16b876.png) 3. 
Run the model ![image](https://user-images.githubusercontent.com/39594821/102792179-6f8cd780-43a8-11eb-92fc-9f30f1427c72.png) ![image](https://user-images.githubusercontent.com/39594821/102792210-7a476c80-43a8-11eb-9d05-a48ff09bb028.png) 4. Run as Batch Process ![image](https://user-images.githubusercontent.com/39594821/102792309-a06d0c80-43a8-11eb-9c78-caa0c5deedd5.png) 5. Autofill ![image](https://user-images.githubusercontent.com/39594821/102792393-bd094480-43a8-11eb-9c5d-e2616e02600a.png) 6. See python error ![image](https://user-images.githubusercontent.com/39594821/102792438-d01c1480-43a8-11eb-9354-d61b7c66e59f.png) **QGIS and OS versions** 3.16.1 Win 10 3.16.2 is not available on OSGeo4W <!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
1.0
Batch processing "Select files..." in Autofill raise python error with QgsProcessingParameterFile - <!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> ``` AttributeError: 'QgsProcessingParameterFile' object has no attribute 'createFileFilter' Traceback (most recent call last): File "C:/OSGEO4~1/apps/qgis/./python/plugins\processing\gui\BatchPanel.py", line 241, in showFileSelectionDialog self, self.tr('Select Files'), path, self.parameterDefinition.createFileFilter() AttributeError: 'QgsProcessingParameterFile' object has no attribute 'createFileFilter' ``` QgsProcessingParameterFile does not inherite from QgsFileFilterGenerator according to [this](https://qgis.org/api/classQgsProcessingParameterFile.html) As I setup the type to File, I would except the autofill feature to work properly with a file filter set to the file filter choosen in the parameter definition. Bug ? Missing feature ? Oversight ? **How to Reproduce** <!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome --> 1. Create a new model 2. Add a "File/Forder" named INPUT for example ![image](https://user-images.githubusercontent.com/39594821/102792156-67cd3300-43a8-11eb-8719-df46ea16b876.png) 3. 
Run the model ![image](https://user-images.githubusercontent.com/39594821/102792179-6f8cd780-43a8-11eb-92fc-9f30f1427c72.png) ![image](https://user-images.githubusercontent.com/39594821/102792210-7a476c80-43a8-11eb-9d05-a48ff09bb028.png) 4. Run as Batch Process ![image](https://user-images.githubusercontent.com/39594821/102792309-a06d0c80-43a8-11eb-9c78-caa0c5deedd5.png) 5. Autofill ![image](https://user-images.githubusercontent.com/39594821/102792393-bd094480-43a8-11eb-9c5d-e2616e02600a.png) 6. See python error ![image](https://user-images.githubusercontent.com/39594821/102792438-d01c1480-43a8-11eb-9354-d61b7c66e59f.png) **QGIS and OS versions** 3.16.1 Win 10 3.16.2 is not available on OSGeo4W <!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
process
batch processing select files in autofill raise python error with qgsprocessingparameterfile bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug attributeerror qgsprocessingparameterfile object has no attribute createfilefilter traceback most recent call last file c apps qgis python plugins processing gui batchpanel py line in showfileselectiondialog self self tr select files path self parameterdefinition createfilefilter attributeerror qgsprocessingparameterfile object has no attribute createfilefilter qgsprocessingparameterfile does not inherite from qgsfilefiltergenerator according to as i setup the type to file i would except the autofill feature to work properly with a file filter set to the file filter choosen in the parameter definition bug missing feature oversight how to reproduce create a new model add a file forder named input for example run the model run as batch process autofill see python error qgis and os versions win is not available on about click in the table ctrl a and then ctrl c finally paste here
1
15,418
19,605,488,624
IssuesEvent
2022-01-06 08:56:27
plazi/community
https://api.github.com/repos/plazi/community
opened
to be processed
process request
here is one more new species from the CAS press release please process, including holotype and GBIF transfer [ichtyhology&herpetology.109.3.806-835.pdf](https://github.com/plazi/community/files/7820575/ichtyhology.herpetology.109.3.806-835.pdf)
1.0
to be processed - here is one more new species from the CAS press release please process, including holotype and GBIF transfer [ichtyhology&herpetology.109.3.806-835.pdf](https://github.com/plazi/community/files/7820575/ichtyhology.herpetology.109.3.806-835.pdf)
process
to be processed here is one more new species from the cas press release please process including holotype and gbif transfer
1
129,576
10,578,576,363
IssuesEvent
2019-10-07 23:10:39
MicrosoftDocs/visualstudio-docs
https://api.github.com/repos/MicrosoftDocs/visualstudio-docs
closed
xUnit Test Generator extension not updated for VS 2019
Pri1 doc-bug visual-studio-windows/prod vs-ide-test/tech
There is another one at https://marketplace.visualstudio.com/items?itemName=YowkoTsai.xUnitnetTestGenerator. Perhaps should be linked as well. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 30b8edaf-7667-a3e6-38dc-371c53bcd386 * Version Independent ID: 580e757e-8ce7-1437-3843-c88ff4f3ab54 * Content: [Generate unit tests for your code with IntelliTest - Visual Studio](https://docs.microsoft.com/en-us/visualstudio/test/generate-unit-tests-for-your-code-with-intellitest?view=vs-2019#feedback) * Content Source: [docs/test/generate-unit-tests-for-your-code-with-intellitest.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/master/docs/test/generate-unit-tests-for-your-code-with-intellitest.md) * Product: **visual-studio-windows** * Technology: **vs-ide-test** * GitHub Login: @gewarren * Microsoft Alias: **gewarren**
1.0
xUnit Test Generator extension not updated for VS 2019 - There is another one at https://marketplace.visualstudio.com/items?itemName=YowkoTsai.xUnitnetTestGenerator. Perhaps should be linked as well. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 30b8edaf-7667-a3e6-38dc-371c53bcd386 * Version Independent ID: 580e757e-8ce7-1437-3843-c88ff4f3ab54 * Content: [Generate unit tests for your code with IntelliTest - Visual Studio](https://docs.microsoft.com/en-us/visualstudio/test/generate-unit-tests-for-your-code-with-intellitest?view=vs-2019#feedback) * Content Source: [docs/test/generate-unit-tests-for-your-code-with-intellitest.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/master/docs/test/generate-unit-tests-for-your-code-with-intellitest.md) * Product: **visual-studio-windows** * Technology: **vs-ide-test** * GitHub Login: @gewarren * Microsoft Alias: **gewarren**
non_process
xunit test generator extension not updated for vs there is another one at perhaps should be linked as well document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product visual studio windows technology vs ide test github login gewarren microsoft alias gewarren
0
3,570
6,612,058,473
IssuesEvent
2017-09-20 01:12:57
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Date and Time filter in GoAccess
duplicate log-processing question
Hi, I have configured go access dashboard with HTML page. /usr/local/bin/goaccess -a -f /tmp/dis2.log -o /var/www/html/example.com.html Is there any way to filter date and time in that dashboard ??? Thanks, Bipin Bahuguna
1.0
Date and Time filter in GoAccess - Hi, I have configured go access dashboard with HTML page. /usr/local/bin/goaccess -a -f /tmp/dis2.log -o /var/www/html/example.com.html Is there any way to filter date and time in that dashboard ??? Thanks, Bipin Bahuguna
process
date and time filter in goaccess hi i have configured go access dashboard with html page usr local bin goaccess a f tmp log o var www html example com html is there any way to filter date and time in that dashboard thanks bipin bahuguna
1
56,965
13,958,823,832
IssuesEvent
2020-10-24 13:47:48
yungyuc/turgon
https://api.github.com/repos/yungyuc/turgon
closed
Configure github codespaces for the turgon repository
build
GitHub Codespaces: https://github.com/features/codespaces Codespaces document: https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces
1.0
Configure github codespaces for the turgon repository - GitHub Codespaces: https://github.com/features/codespaces Codespaces document: https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces
non_process
configure github codespaces for the turgon repository github codespaces codespaces document
0
20,027
5,964,776,314
IssuesEvent
2017-05-30 09:45:33
elastic/logstash
https://api.github.com/repos/elastic/logstash
closed
ensure api code is compatible with the existence of multiple pipelines
api code cleanup
There needs to be an investigation and fix of any obstacles that stop logstash from working properly with multi pipelines. At least one problem has been detected: currently there are two explicit references to the id "main" of the pipelines in the api code: ``` logstash-core/lib/logstash/api/commands/node.rb: [:stats, :pipelines, :main, :config], logstash-core/lib/logstash/api/commands/stats.rb: stats = stats[:main] ``` Further investigation can be accomplished by using the `feature/multi_pipeline` branch and testing out the api or simply by changing the `pipeline.id` in the yaml config file. This is a blocker for https://github.com/elastic/logstash/pull/6525
1.0
ensure api code is compatible with the existence of multiple pipelines - There needs to be an investigation and fix of any obstacles that stop logstash from working properly with multi pipelines. At least one problem has been detected: currently there are two explicit references to the id "main" of the pipelines in the api code: ``` logstash-core/lib/logstash/api/commands/node.rb: [:stats, :pipelines, :main, :config], logstash-core/lib/logstash/api/commands/stats.rb: stats = stats[:main] ``` Further investigation can be accomplished by using the `feature/multi_pipeline` branch and testing out the api or simply by changing the `pipeline.id` in the yaml config file. This is a blocker for https://github.com/elastic/logstash/pull/6525
non_process
ensure api code is compatible with the existence of multiple pipelines there needs to be an investigation and fix of any obstacles that stop logstash from working properly with multi pipelines at least one problem has been detected currently there are two explicit references to the id main of the pipelines in the api code logstash core lib logstash api commands node rb logstash core lib logstash api commands stats rb stats stats further investigation can be accomplished by using the feature multi pipeline branch and testing out the api or simply by changing the pipeline id in the yaml config file this is a blocker for
0
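A minimal Ruby sketch of the fix direction this record describes — walking whatever pipeline ids exist instead of hard-coding `:main` (the stats hash below is illustrative, not Logstash's real API shape):

```ruby
# Illustrative node-stats hash; real Logstash stats carry far more fields.
stats = {
  :pipelines => {
    :main   => { :events => { :out => 10 } },
    :ingest => { :events => { :out => 5 } }
  }
}

# Instead of reaching for stats[:pipelines][:main], iterate every pipeline id.
per_pipeline = stats[:pipelines].each_with_object({}) do |(id, s), acc|
  acc[id] = s[:events][:out]
end

puts per_pipeline.keys.inspect
```

With this shape, a renamed `pipeline.id` in the yaml config simply shows up as another key rather than breaking the two hard-coded `:main` lookups quoted above.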
130,849
10,667,553,275
IssuesEvent
2019-10-19 13:20:06
libigl/libigl
https://api.github.com/repos/libigl/libigl
closed
Reveal name of failing mesh during unit test
question unit-tests
In a unit test that applies the pattern: ``` TEST_CASE("my test", "[igl]") { const auto test_case = [](const std::string &param) { ... }; test_common::run_test_cases(test_common::all_meshes(), test_case); } ``` If I run `ctest -R "my test"` I see the output for a failure as: ``` ctest -R "my test" Test project /usr/local/libigl/build Start 48: my test 1/1 Test #48: my test .......................***Failed 0.02 sec ``` How can I get the name of the failing mesh(es)? (Running `ctest` with the `--verbose` flag doesn't reveal it)
1.0
Reveal name of failing mesh during unit test - In a unit test that applies the pattern: ``` TEST_CASE("my test", "[igl]") { const auto test_case = [](const std::string &param) { ... }; test_common::run_test_cases(test_common::all_meshes(), test_case); } ``` If I run `ctest -R "my test"` I see the output for a failure as: ``` ctest -R "my test" Test project /usr/local/libigl/build Start 48: my test 1/1 Test #48: my test .......................***Failed 0.02 sec ``` How can I get the name of the failing mesh(es)? (Running `ctest` with the `--verbose` flag doesn't reveal it)
non_process
reveal name of failing mesh during unit test in a unit test that applies the pattern test case my test const auto test case const std string param test common run test cases test common all meshes test case if i run ctest r my test i see the output for a failure as ctest r my test test project usr local libigl build start my test test my test failed sec how can i get the name of the failing mesh es running ctest with the verbose flag doesn t reveal it
0
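For the question this record asks, a hedged sketch using flags that CMake's ctest does provide (whether the mesh name actually appears still depends on the test body logging it, e.g. via Catch2's INFO()/CAPTURE()):

```shell
# Wrapped in a function because these commands need a configured CMake build
# tree to do anything; call run_in_build_tree from such a directory.
run_in_build_tree() {
  # Echo a failing test's full output, which is where a Catch2 runner prints
  # INFO()/CAPTURE() context such as the current mesh name.
  ctest -R "my test" --output-on-failure
  # Re-run only the failures, with everything logged.
  ctest --rerun-failed --verbose
}
```

If the test case itself never logs the current `param`, no ctest flag can recover it; adding an `INFO(param)` inside the lambda is the complementary half of the fix.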
9,488
12,480,345,113
IssuesEvent
2020-05-29 20:07:14
sct-pipeline/spine-generic
https://api.github.com/repos/sct-pipeline/spine-generic
closed
Make sure the spineGeneric single and multi subjects are 100% BIDS compatible
processing urgent
Ultimately they should pass the BIDS validator of OpenNeuro
1.0
Make sure the spineGeneric single and multi subjects are 100% BIDS compatible - Ultimately they should pass the BIDS validator of OpenNeuro
process
make sure the spinegeneric single and multi subjects are bids compatible ultimately they should pass the bids validator of openneuro
1
248,635
21,047,763,573
IssuesEvent
2022-03-31 17:40:44
searchspring/snap
https://api.github.com/repos/searchspring/snap
closed
Testing: snap-controller
testing open source critical
Current 63.67% get over 80%. Test without network connection to ensure that mock data is always being used.
1.0
Testing: snap-controller - Current 63.67% get over 80%. Test without network connection to ensure that mock data is always being used.
non_process
testing snap controller current get over test without network connection to ensure that mock data is always being used
0
241,833
7,834,886,023
IssuesEvent
2018-06-16 19:53:32
DarkPacks/SevTech-Ages
https://api.github.com/repos/DarkPacks/SevTech-Ages
closed
Using "To the Bat Poles" on a server forces all players into third-person mode.
Category: Mod Priority: Low Status: Reported To Mod Status: Stale Type: Bug
<!-- Instructions on how to do issues like your boy darkosto --> <!-- NOTE: If you have other mods installed or you have changed versions; please revert to a clean install of the modpack and try to replicate the crash/issue otherwise we can ignore the crash due to a "modded" pack. --> <!-- Before anything else, use the *search* feature! --> <!-- * Maybe someone already reported the issue you're experiencing? --> <!-- * Maybe you can find the answer to your question by looking at older or closed issues? --> <!-- * Have a go at it and see! --> <!-- * Please search on the [issue track](../) before creating one. --> ## Issue / Bug <!--- If you're describing a bug, describe the current behavior --> <!--- If you're suggesting a change/improvement, tell us how it should work --> <!--- MAKE SURE TO ADD LOGS! --> <!--- If possible add a video/gif of the issue/bug (makes it easier for darkosto to understand you) --> Sliding down poles in SMP forces all players into third-person mode regardless of whether or not they are using the pole. ![ezgif com-video-to-gif](https://user-images.githubusercontent.com/17113297/39884683-a6c4fac6-5458-11e8-908c-713ac774efde.gif) ## Expected Behavior <!--- If describing a bug, tell us what happens instead of the expected behavior --> <!--- If suggesting a change/improvement, explain the difference from current behavior --> I would expect no third person at all, maybe third person if you're using the pole, but definitely no third person for players not using the pole. ## Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug, --> <!--- or ideas how to implement the addition or change --> No clue. ## Steps to Reproduce (for bugs) <!--- Provide a link to a live example, or an unambiguous set of steps to --> 1. Create pole of iron bars in SMP. 2. Slide down pole with other players online. <!--- add more if needed --> ## Context <!--- How has this issue affected you? What are you trying to accomplish? 
--> <!--- Providing context helps us come up with a solution that is most useful in the real world --> Sudden perspective shifts cause motion sickness, disorientation, or even permanent shift of perspective in SMP until corrected userside. ## Client Information <!--- Include as many relevant details about the environment you experienced the bug in --> * Modpack Version: 3.0.7 * Java Version: 8.171 * Launcher Used: Twitch <!-- Please tell us how much memory you have allocated to the game. For Twitch/ATLauncher look in the settings --> * Memory Allocated: 4GB <!--- If you're using a server please fill the additional information below --> * Server/LAN/Single Player: Server * Resourcepack Enabled?: No * Optifine Installed?: No <!--- Additional Information if you are using a server setup (DELETE THIS SECTION IF YOUR ISSUE IS CLIENT ONLY) --> ## Server Information * Java Version: N/A * Operating System: N/A * Hoster/Hosting Solution: CubedHost 4GB * Sponge (Non-Vanilla Forge) Server?: N/A <!--- If YES please list the installed content (Mods/Plugins) --> * Additional Content Installed?: No
1.0
Using "To the Bat Poles" on a server forces all players into third-person mode. - <!-- Instructions on how to do issues like your boy darkosto --> <!-- NOTE: If you have other mods installed or you have changed versions; please revert to a clean install of the modpack and try to replicate the crash/issue otherwise we can ignore the crash due to a "modded" pack. --> <!-- Before anything else, use the *search* feature! --> <!-- * Maybe someone already reported the issue you're experiencing? --> <!-- * Maybe you can find the answer to your question by looking at older or closed issues? --> <!-- * Have a go at it and see! --> <!-- * Please search on the [issue track](../) before creating one. --> ## Issue / Bug <!--- If you're describing a bug, describe the current behavior --> <!--- If you're suggesting a change/improvement, tell us how it should work --> <!--- MAKE SURE TO ADD LOGS! --> <!--- If possible add a video/gif of the issue/bug (makes it easier for darkosto to understand you) --> Sliding down poles in SMP forces all players into third-person mode regardless of whether or not they are using the pole. ![ezgif com-video-to-gif](https://user-images.githubusercontent.com/17113297/39884683-a6c4fac6-5458-11e8-908c-713ac774efde.gif) ## Expected Behavior <!--- If describing a bug, tell us what happens instead of the expected behavior --> <!--- If suggesting a change/improvement, explain the difference from current behavior --> I would expect no third person at all, maybe third person if you're using the pole, but definitely no third person for players not using the pole. ## Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug, --> <!--- or ideas how to implement the addition or change --> No clue. ## Steps to Reproduce (for bugs) <!--- Provide a link to a live example, or an unambiguous set of steps to --> 1. Create pole of iron bars in SMP. 2. Slide down pole with other players online. 
<!--- add more if needed --> ## Context <!--- How has this issue affected you? What are you trying to accomplish? --> <!--- Providing context helps us come up with a solution that is most useful in the real world --> Sudden perspective shifts cause motion sickness, disorientation, or even permanent shift of perspective in SMP until corrected userside. ## Client Information <!--- Include as many relevant details about the environment you experienced the bug in --> * Modpack Version: 3.0.7 * Java Version: 8.171 * Launcher Used: Twitch <!-- Please tell us how much memory you have allocated to the game. For Twitch/ATLauncher look in the settings --> * Memory Allocated: 4GB <!--- If you're using a server please fill the additional information below --> * Server/LAN/Single Player: Server * Resourcepack Enabled?: No * Optifine Installed?: No <!--- Additional Information if you are using a server setup (DELETE THIS SECTION IF YOUR ISSUE IS CLIENT ONLY) --> ## Server Information * Java Version: N/A * Operating System: N/A * Hoster/Hosting Solution: CubedHost 4GB * Sponge (Non-Vanilla Forge) Server?: N/A <!--- If YES please list the installed content (Mods/Plugins) --> * Additional Content Installed?: No
non_process
using to the bat poles on a server forces all players into third person mode note if you have other mods installed or you have changed versions please revert to a clean install of the modpack and try to replicate the crash issue otherwise we can ignore the crash due to a modded pack issue bug sliding down poles in smp forces all players into third person mode regardless of whether or not they are using the pole expected behavior i would expect no third person at all maybe third person if you re using the pole but definitely no third person for players not using the pole possible solution no clue steps to reproduce for bugs create pole of iron bars in smp slide down pole with other players online context sudden perspective shifts cause motion sickness disorientation or even permanent shift of perspective in smp until corrected userside client information modpack version java version launcher used twitch memory allocated server lan single player server resourcepack enabled no optifine installed no server information java version n a operating system n a hoster hosting solution cubedhost sponge non vanilla forge server n a additional content installed no
0
64,885
26,901,397,659
IssuesEvent
2023-02-06 15:53:25
hashicorp/terraform-provider-azurerm
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
closed
azurerm_function_app_function doesn't support multiple files
enhancement service/functions
### Is there an existing issue for this? - [X] I have searched the existing issues ### Community Note <!--- Please keep this note for the community ---> * Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform Version 1.2.1 ### AzureRM Provider Version 3.9.0 ### Affected Resource(s)/Data Source(s) azurerm_function_app_function ### Terraform Configuration Files ```hcl resource "azurerm_function_app_function" "function" { name = ${var.name} function_app_id = azurerm_linux_function_app.function_app.id language = "Python" // Only deploys a single file: use VSCode Function Deployment for code file { name = "__init__.py" content = file("${var.function_location}/__init__.py") } config_json = jsonencode({ "scriptFile": "__init__.py", "bindings": [ { "type": "eventGridTrigger", "name": "event", "direction": "in" } ] }) lifecycle { ignore_changes = [ file ] } } ``` ### Debug Output/Panic Output ```shell N/A ``` ### Expected Behaviour It should be possible to specify a zip file of function code, which will contain multiple files. This is particularly important for interpreted languages like Python where you don't provide a compiled artifact, instead a collection of .py and a requirements.txt. ### Actual Behaviour If you specify multiple `file` blocks (eg. for the `__init__.py` and `requirements.txt`), the contents of the last file will be used for both file names when uploaded to Azure. Alternatively, the function has to be deployed independently of Terraform (eg. 
with Azure CLI) and then imported back into the state, so the entire function folder can be uploaded as a zip. ### Steps to Reproduce _No response_ ### Important Factoids _No response_ ### References _No response_
1.0
azurerm_function_app_function doesn't support multiple files - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Community Note <!--- Please keep this note for the community ---> * Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform Version 1.2.1 ### AzureRM Provider Version 3.9.0 ### Affected Resource(s)/Data Source(s) azurerm_function_app_function ### Terraform Configuration Files ```hcl resource "azurerm_function_app_function" "function" { name = ${var.name} function_app_id = azurerm_linux_function_app.function_app.id language = "Python" // Only deploys a single file: use VSCode Function Deployment for code file { name = "__init__.py" content = file("${var.function_location}/__init__.py") } config_json = jsonencode({ "scriptFile": "__init__.py", "bindings": [ { "type": "eventGridTrigger", "name": "event", "direction": "in" } ] }) lifecycle { ignore_changes = [ file ] } } ``` ### Debug Output/Panic Output ```shell N/A ``` ### Expected Behaviour It should be possible to specify a zip file of function code, which will contain multiple files. This is particularly important for interpreted languages like Python where you don't provide a compiled artifact, instead a collection of .py and a requirements.txt. ### Actual Behaviour If you specify multiple `file` blocks (eg. for the `__init__.py` and `requirements.txt`), the contents of the last file will be used for both file names when uploaded to Azure. 
Alternatively, the function has to be deployed independently of Terraform (eg. with Azure CLI) and then imported back into the state, so the entire function folder can be uploaded as a zip. ### Steps to Reproduce _No response_ ### Important Factoids _No response_ ### References _No response_
non_process
azurerm function app function doesn t support multiple files is there an existing issue for this i have searched the existing issues community note please vote on this issue by adding a thumbsup to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version azurerm provider version affected resource s data source s azurerm function app function terraform configuration files hcl resource azurerm function app function function name var name function app id azurerm linux function app function app id language python only deploys a single file use vscode function deployment for code file name init py content file var function location init py config json jsonencode scriptfile init py bindings type eventgridtrigger name event direction in lifecycle ignore changes file debug output panic output shell n a expected behaviour it should be possible to specify a zip file of function code which will contain multiple files this is particularly important for interpreted languages like python where you don t provide a compiled artifact instead a collection of py and a requirements txt actual behaviour if you specify multiple file blocks eg for the init py and requirements txt the contents of the last file will be used for both file names when uploaded to azure alternatively the function has to be deployed independently of terraform eg with azure cli and then imported back into the state so the entire function folder can be uploaded as a zip steps to reproduce no response important factoids no response references no response
0
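As a sketch of the behaviour this record asks for (the record reports that on the affected versions the last `file` block's contents overwrite the others, so this is the desired shape, not a working configuration there):

```hcl
resource "azurerm_function_app_function" "function" {
  name            = var.name
  function_app_id = azurerm_linux_function_app.function_app.id
  language        = "Python"

  # Desired: each file block becomes its own file in the deployed function.
  file {
    name    = "__init__.py"
    content = file("${var.function_location}/__init__.py")
  }
  file {
    name    = "requirements.txt"
    content = file("${var.function_location}/requirements.txt")
  }
}
```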
43,815
5,713,846,500
IssuesEvent
2017-04-19 08:53:30
geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
https://api.github.com/repos/geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
reopened
hrXcPmRvjhTObmCxhK2nSXwyFACIefqBShekBPM3cINblBJCmUhhPqO5y/ABcnbYGYYE84JfReh+TakhQxLfVNAcqnNlCWDGoDKQj/iXk5d1wAXmYvEoKlhFnMWsfArrHHoxIDIjeISwh6DMh5LTYdGhgASSfa9ySKXJ2BxgLhE=
design
IvKPsvEiKNnHEYi9+sWMvYv4hPYU1QfIuWHmPw8qV4r0OSjdvBWCuzN8rpvm4hx0YUuB9jAsAWxgOHKbHHB/24aWeD/Fm2ZQtGPw6IPJUg+osrtWTzdl5MSS9gxCt/KwYMziFHHnSyZEF6Z9n/z7mbbo9RHL8xDFsvehKtMjotByWPWgzchfVR4kFRTHW0KcJMD+9ebYIQbhrETa9L/kpubJFlwTxyzWNSWgV6n/Wll+G9yEbpxgp7ifIl/sp9lWjBsm5z+/dbW7L4LUj5IgVCTA/vXm2CEG4axE2vS/5Kbl6X8XunhbnFKNj5zSxGFyJIPujvmOH5QtostLF/OaKn1iU4StWCdOKf5auxytD8p6dMFjtA0YxKCODnT9nrH8JMD+9ebYIQbhrETa9L/kpiTA/vXm2CEG4axE2vS/5KZIc/bHjHDwmLDAjVXExNgeRXgiAG/Gn46RcUM5TW7XIb/NovBP2ulOD+EbA68vlypaIPjP/BX3BHxUS/rSxj5wJMD+9ebYIQbhrETa9L/kpjXlw8sfdYeLSvcKMEzKsXVC++dYbBsz9SN70x0bozkBJMD+9ebYIQbhrETa9L/kpoP7mZXOU8ASedwZ+Kz6pFl+NBVguAbd2atYpXRv6AXkJMD+9ebYIQbhrETa9L/kpnD87yV7Hn9dZ9vV4g+SI8ApKxHSiNc7c1vzyNpWREqYxEaSZto2dLj1eXkuPVxR3yTA/vXm2CEG4axE2vS/5KZsaV5PdkrPUrMUZy6C9KdPOVvcXoS3YqEF+Ffif3xfJgMSYKyXxDv6xhUz4Q6iddcu6QevfFTBNkwz02vz9av0XTKhHavRoV2VLaVq+irrrGq1RzLONFDKfF9yeGMeEkss/eHChs172LT66Lw8BeHkJMD+9ebYIQbhrETa9L/kptvsDzHGQVaksZFmpYHg0d30FJh1+D6Ze5Vjgti3V1KX1gVMULPIDY/NPq2EvgPDGCTA/vXm2CEG4axE2vS/5KZDABoyGtB3Y/Oe4KY4Y8+oJMo7st9ZPzUzTGnCVsHwYMl/73gjJkKYhRSY9nWAFHYkwP715tghBuGsRNr0v+SmapJw3eB2aQjAfbAh/vYErq1eOfIIp4/sp+jMVyw331CabasIXws/4VU2aiGSzGSB4oDO4z3JYSmnxn1wbD/EBpGRLJCciAo5qigaYXiYGj5iRspez8QMEowYIevEWIyHJMD+9ebYIQbhrETa9L/kptwTvxk/Qy6ZdPU8OcD62syU/83wHYQOFoiNFVnv1+Stj6xje3a3/DdUX5OMqw5xt8ZeFn5gstoyZ5iU9n/DZIU5lM4P6sW4N1365XIreGrQLmX5hl8aaSVVEJ1UK3hlKyTA/vXm2CEG4axE2vS/5KbK2gBWR3eG5g9QDlWYaP4pwblPv/3I2E2KGXBuWksskSTA/vXm2CEG4axE2vS/5KbRygI5lAMVXjdUa+aNhJVv8jkXvaWVNN0nawUCxIv/USTA/vXm2CEG4axE2vS/5KYcc/9O9IO+m6Pq2SC257GTmvAF3tbItfNPkpZxNabdpWVH7gsznCwgDd53vqMaB75M7kOKk0TYj6k2QYSuJdD7JMD+9ebYIQbhrETa9L/kpojvS26HwOPKXzbi5VqfL+CzrMQu2lwLFMjzAlviWJffayn8rDE3lF5RbOSe5C9L1A6GgCNO7qw9P6SohSMoJPPWjRTrHiOSPt+X6fkbNbR1w88wYlUJBwVpajrWWlgmNNRgWVMT9u34QgeiJnlHyO8iFgImJL8R2FcXTgUzf2rtJPaI+2WeFprOP4k4IZI+JSTA/vXm2CEG4axE2vS/5KYkwP715tghBuGsRNr0v+Sm/sMsrGkJr86vro6wZZh9IgrV8wSYjZQW7Ez23D4ZAa0SF0Koqu5tU8fbW1YYxgy3JMD+9ebYIQbhrETa9L/kpiTA/vXm2CEG4axE2vS/5Kbbo8mFjignAK3/WO23XDQAlquahgSgslYea1oi
ueZ+SSTA/vXm2CEG4axE2vS/5KYkwP715tghBuGsRNr0v+SmlAuQSnDWHXy7C5DURMu2y9FT1H0MG2mfR8+Lhxqs0d6tMBDavNE+YLZZwNrIvcuIbqo/OFk70n3yji6evtdzQZnc8a2fAWNIUsZba09Hf9Jjw/cqXl0y/f8w/SHDRyqIktC6mnEKDNrRE5+sEBFGbaoD/PjsKPFzZ2N1QjumYZLK7E5MWw1ivLP/sqVMm51h
1.0
hrXcPmRvjhTObmCxhK2nSXwyFACIefqBShekBPM3cINblBJCmUhhPqO5y/ABcnbYGYYE84JfReh+TakhQxLfVNAcqnNlCWDGoDKQj/iXk5d1wAXmYvEoKlhFnMWsfArrHHoxIDIjeISwh6DMh5LTYdGhgASSfa9ySKXJ2BxgLhE= - IvKPsvEiKNnHEYi9+sWMvYv4hPYU1QfIuWHmPw8qV4r0OSjdvBWCuzN8rpvm4hx0YUuB9jAsAWxgOHKbHHB/24aWeD/Fm2ZQtGPw6IPJUg+osrtWTzdl5MSS9gxCt/KwYMziFHHnSyZEF6Z9n/z7mbbo9RHL8xDFsvehKtMjotByWPWgzchfVR4kFRTHW0KcJMD+9ebYIQbhrETa9L/kpubJFlwTxyzWNSWgV6n/Wll+G9yEbpxgp7ifIl/sp9lWjBsm5z+/dbW7L4LUj5IgVCTA/vXm2CEG4axE2vS/5Kbl6X8XunhbnFKNj5zSxGFyJIPujvmOH5QtostLF/OaKn1iU4StWCdOKf5auxytD8p6dMFjtA0YxKCODnT9nrH8JMD+9ebYIQbhrETa9L/kpiTA/vXm2CEG4axE2vS/5KZIc/bHjHDwmLDAjVXExNgeRXgiAG/Gn46RcUM5TW7XIb/NovBP2ulOD+EbA68vlypaIPjP/BX3BHxUS/rSxj5wJMD+9ebYIQbhrETa9L/kpjXlw8sfdYeLSvcKMEzKsXVC++dYbBsz9SN70x0bozkBJMD+9ebYIQbhrETa9L/kpoP7mZXOU8ASedwZ+Kz6pFl+NBVguAbd2atYpXRv6AXkJMD+9ebYIQbhrETa9L/kpnD87yV7Hn9dZ9vV4g+SI8ApKxHSiNc7c1vzyNpWREqYxEaSZto2dLj1eXkuPVxR3yTA/vXm2CEG4axE2vS/5KZsaV5PdkrPUrMUZy6C9KdPOVvcXoS3YqEF+Ffif3xfJgMSYKyXxDv6xhUz4Q6iddcu6QevfFTBNkwz02vz9av0XTKhHavRoV2VLaVq+irrrGq1RzLONFDKfF9yeGMeEkss/eHChs172LT66Lw8BeHkJMD+9ebYIQbhrETa9L/kptvsDzHGQVaksZFmpYHg0d30FJh1+D6Ze5Vjgti3V1KX1gVMULPIDY/NPq2EvgPDGCTA/vXm2CEG4axE2vS/5KZDABoyGtB3Y/Oe4KY4Y8+oJMo7st9ZPzUzTGnCVsHwYMl/73gjJkKYhRSY9nWAFHYkwP715tghBuGsRNr0v+SmapJw3eB2aQjAfbAh/vYErq1eOfIIp4/sp+jMVyw331CabasIXws/4VU2aiGSzGSB4oDO4z3JYSmnxn1wbD/EBpGRLJCciAo5qigaYXiYGj5iRspez8QMEowYIevEWIyHJMD+9ebYIQbhrETa9L/kptwTvxk/Qy6ZdPU8OcD62syU/83wHYQOFoiNFVnv1+Stj6xje3a3/DdUX5OMqw5xt8ZeFn5gstoyZ5iU9n/DZIU5lM4P6sW4N1365XIreGrQLmX5hl8aaSVVEJ1UK3hlKyTA/vXm2CEG4axE2vS/5KbK2gBWR3eG5g9QDlWYaP4pwblPv/3I2E2KGXBuWksskSTA/vXm2CEG4axE2vS/5KbRygI5lAMVXjdUa+aNhJVv8jkXvaWVNN0nawUCxIv/USTA/vXm2CEG4axE2vS/5KYcc/9O9IO+m6Pq2SC257GTmvAF3tbItfNPkpZxNabdpWVH7gsznCwgDd53vqMaB75M7kOKk0TYj6k2QYSuJdD7JMD+9ebYIQbhrETa9L/kpojvS26HwOPKXzbi5VqfL+CzrMQu2lwLFMjzAlviWJffayn8rDE3lF5RbOSe5C9L1A6GgCNO7qw9P6SohSMoJPPWjRTrHiOSPt+X6fkbNbR1w88wYlUJBwVpajrWWlgmNNRgWVMT9u34QgeiJnlHyO8iFgImJL8R2FcXTgUzf2rtJPaI+2WeFprOP4k4IZI+JSTA/vXm2CEG4
axE2vS/5KYkwP715tghBuGsRNr0v+Sm/sMsrGkJr86vro6wZZh9IgrV8wSYjZQW7Ez23D4ZAa0SF0Koqu5tU8fbW1YYxgy3JMD+9ebYIQbhrETa9L/kpiTA/vXm2CEG4axE2vS/5Kbbo8mFjignAK3/WO23XDQAlquahgSgslYea1oiueZ+SSTA/vXm2CEG4axE2vS/5KYkwP715tghBuGsRNr0v+SmlAuQSnDWHXy7C5DURMu2y9FT1H0MG2mfR8+Lhxqs0d6tMBDavNE+YLZZwNrIvcuIbqo/OFk70n3yji6evtdzQZnc8a2fAWNIUsZba09Hf9Jjw/cqXl0y/f8w/SHDRyqIktC6mnEKDNrRE5+sEBFGbaoD/PjsKPFzZ2N1QjumYZLK7E5MWw1ivLP/sqVMm51h
non_process
takhqxlfvnacqnnlcwdgodkqj wll kpita bhjhdwmldajvxexngerxgiag sp kptwtvxk usta jsta sm kpita ssta ylzzwnrivcuibqo sebfgbaod
0
1,931
4,761,401,136
IssuesEvent
2016-10-25 08:07:34
CERNDocumentServer/cds
https://api.github.com/repos/CERNDocumentServer/cds
opened
deposit: SSE channel endpoint
avc_processing enhancement
Add a new endpoint on each deposit that corresponds to the SSE channel, where clients should subscribe in order to receive messages about this particular deposit.
1.0
deposit: SSE channel endpoint - Add a new endpoint on each deposit that corresponds to the SSE channel, where clients should subscribe in order to receive messages about this particular deposit.
process
deposit sse channel endpoint add a new endpoint on each deposit that corresponds to the sse channel where clients should subscribe in order to receive messages about this particular deposit
1
250,523
7,978,326,644
IssuesEvent
2018-07-17 17:58:04
inverse-inc/packetfence
https://api.github.com/repos/inverse-inc/packetfence
closed
"make vendor" fails on devel
Priority: Medium Type: Bug
Failing with: ``` # make vendor npm install npm WARN pfappserver@7.0.0 No description npm WARN pfappserver@7.0.0 No repository field. npm WARN pfappserver@7.0.0 No license field. bower ace-builds#1.3.x EINVRES Request to https://bower.herokuapp.com/packages/ace-builds failed with 502 make: *** [vendor] Error 1 ``` If I visit the URL, it outputs the following: ``` This Bower version is deprecated. Please update it: npm install -g bower. The new registry address is https://registry.bower.io ``` I did then update bower to 1.8.4 and it started working again. So, my guess is that we should put bower >= 1.8.4 in package.json but I just wanted to check with @cgx that it won't break anything for the new admin
1.0
"make vendor" fails on devel - Failing with: ``` # make vendor npm install npm WARN pfappserver@7.0.0 No description npm WARN pfappserver@7.0.0 No repository field. npm WARN pfappserver@7.0.0 No license field. bower ace-builds#1.3.x EINVRES Request to https://bower.herokuapp.com/packages/ace-builds failed with 502 make: *** [vendor] Error 1 ``` If I visit the URL, it outputs the following: ``` This Bower version is deprecated. Please update it: npm install -g bower. The new registry address is https://registry.bower.io ``` I did then update bower to 1.8.4 and it started working again. So, my guess is that we should put bower >= 1.8.4 in package.json but I just wanted to check with @cgx that it won't break anything for the new admin
non_process
make vendor fails on devel failing with make vendor npm install npm warn pfappserver no description npm warn pfappserver no repository field npm warn pfappserver no license field bower ace builds x einvres request to failed with make error if i visit the url it outputs the following this bower version is deprecated please update it npm install g bower the new registry address is i did then update bower to and it started working again so my guess is that we should put bower in package json but i just wanted to check with cgx that it won t break anything for the new admin
0
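A minimal sketch of the workaround this record converges on — the registry URL is quoted from bower's own deprecation notice in the log, and the version floor mirrors the "updated bower to 1.8.4 and it started working" observation:

```shell
# Point bower at the replacement registry via a project-local .bowerrc.
cat > .bowerrc <<'EOF'
{
  "registry": "https://registry.bower.io"
}
EOF

# Pin the tool itself to a post-deprecation release; commented out here
# because it needs network access and global install rights.
# npm install -g bower@^1.8.4

grep '"registry"' .bowerrc
```

Committing the `.bowerrc` (or the `bower >= 1.8.4` engine pin in package.json the reporter suggests) keeps `make vendor` reproducible for other contributors.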
15,018
10,239,656,603
IssuesEvent
2019-08-19 18:47:45
microsoft/vscode-cpptools
https://api.github.com/repos/microsoft/vscode-cpptools
closed
#include errors detected with WSL
Language Service duplicate
I met the #include errors detected error with my simple hello world and configuration with WSL background: Version: 1.26.1 Commit: 493869ee8e8a846b0855873886fc79d480d342de Date: 2018-08-16T18:38:57.434Z Electron: 2.0.5 Chrome: 61.0.3163.100 Node.js: 8.9.3 V8: 6.1.534.41 Architecture: x64 C/C++ plugin:0.18.1 ``` "#include errors detected. Please update your includePath. IntelliSense features for this translation unit (C:\\workspace\\test\\helloworld.cpp) will be provided by the Tag Parser.", cannot open source file \"**asm/errno.h**\" (dependency of \"person.h\") ``` #### I found vs code always complain about the **asm/error.h" for **all the first #include file** even #### it can locate(ctrl+mouse click) that file correctly. c_cpp_properties.json ``` { "configurations": [ { "name": "WSL", "intelliSenseMode": "clang-x64", "compilerPath": "/usr/bin/c++", "includePath": [ "/usr/include/c++/4.8", "/usr/include/c++/4.8/x86_64-suse-linux", "/usr/include/c++/4.8/backward", "/usr/lib64/gcc/x86_64-suse-linux/4.8/include", "/usr/local/include", "/usr/lib64/gcc/x86_64-suse-linux/4.8/include-fixed", "/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/include", "/usr/include" ], "defines": [], "browse": { "path": [ "/usr/include/c++/4.8", "/usr/include/c++/4.8/x86_64-suse-linux", "/usr/include/c++/4.8/backward", "/usr/lib64/gcc/x86_64-suse-linux/4.8/include", "/usr/local/include", "/usr/lib64/gcc/x86_64-suse-linux/4.8/include-fixed", "/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/include", "/usr/include" ], "limitSymbolsToIncludedHeaders": true, "databaseFilename": "" }, "cStandard": "c11", "cppStandard": "c++11" } ], "version": 4 } ``` ![helloworld](https://user-images.githubusercontent.com/5364806/44570618-0df4d800-a7b1-11e8-91df-5e24fce7ddb9.png) ![person](https://user-images.githubusercontent.com/5364806/44570625-13522280-a7b1-11e8-861a-76c9125fdce8.png)
1.0
#include errors detected with WSL - I met the #include errors detected error with my simple hello world and configuration with WSL background: Version: 1.26.1 Commit: 493869ee8e8a846b0855873886fc79d480d342de Date: 2018-08-16T18:38:57.434Z Electron: 2.0.5 Chrome: 61.0.3163.100 Node.js: 8.9.3 V8: 6.1.534.41 Architecture: x64 C/C++ plugin:0.18.1 ``` "#include errors detected. Please update your includePath. IntelliSense features for this translation unit (C:\\workspace\\test\\helloworld.cpp) will be provided by the Tag Parser.", cannot open source file \"**asm/errno.h**\" (dependency of \"person.h\") ``` #### I found vs code always complain about the **asm/error.h" for **all the first #include file** even #### it can locate(ctrl+mouse click) that file correctly. c_cpp_properties.json ``` { "configurations": [ { "name": "WSL", "intelliSenseMode": "clang-x64", "compilerPath": "/usr/bin/c++", "includePath": [ "/usr/include/c++/4.8", "/usr/include/c++/4.8/x86_64-suse-linux", "/usr/include/c++/4.8/backward", "/usr/lib64/gcc/x86_64-suse-linux/4.8/include", "/usr/local/include", "/usr/lib64/gcc/x86_64-suse-linux/4.8/include-fixed", "/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/include", "/usr/include" ], "defines": [], "browse": { "path": [ "/usr/include/c++/4.8", "/usr/include/c++/4.8/x86_64-suse-linux", "/usr/include/c++/4.8/backward", "/usr/lib64/gcc/x86_64-suse-linux/4.8/include", "/usr/local/include", "/usr/lib64/gcc/x86_64-suse-linux/4.8/include-fixed", "/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/include", "/usr/include" ], "limitSymbolsToIncludedHeaders": true, "databaseFilename": "" }, "cStandard": "c11", "cppStandard": "c++11" } ], "version": 4 } ``` ![helloworld](https://user-images.githubusercontent.com/5364806/44570618-0df4d800-a7b1-11e8-91df-5e24fce7ddb9.png) ![person](https://user-images.githubusercontent.com/5364806/44570625-13522280-a7b1-11e8-861a-76c9125fdce8.png)
non_process
include errors detected with wsl i met the include errors detected error with my simple hello world and configuration with wsl background version commit date electron chrome node js architecture c c plugin include errors detected please update your includepath intellisense features for this translation unit c workspace test helloworld cpp will be provided by the tag parser cannot open source file asm errno h dependency of person h i found vs code always complain about the asm error h for all the first include file even it can locate ctrl mouse click that file correctly c cpp properties json configurations name wsl intellisensemode clang compilerpath usr bin c includepath usr include c usr include c suse linux usr include c backward usr gcc suse linux include usr local include usr gcc suse linux include fixed usr gcc suse linux suse linux include usr include defines browse path usr include c usr include c suse linux usr include c backward usr gcc suse linux include usr local include usr gcc suse linux include fixed usr gcc suse linux suse linux include usr include limitsymbolstoincludedheaders true databasefilename cstandard cppstandard c version
0
17,004
22,366,305,586
IssuesEvent
2022-06-16 04:42:54
streamnative/pulsar-spark
https://api.github.com/repos/streamnative/pulsar-spark
closed
[FEATURE] Upgrade Pulsar client lib version to 2.9.2
type/feature compute/data-processing
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when \[...] **Describe the solution you'd like** The current pulsar version used is 2.4.2, it's quite old. We need to upgrade to newer pulsar releases. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
1.0
[FEATURE] Upgrade Pulsar client lib version to 2.9.2 - **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when \[...] **Describe the solution you'd like** The current pulsar version used is 2.4.2, it's quite old. We need to upgrade to newer pulsar releases. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
process
upgrade pulsar client lib version to is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when describe the solution you d like the current pulsar version used is it s quite old we need to upgrade to newer pulsar releases describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
1
19,817
11,298,806,954
IssuesEvent
2020-01-17 09:49:42
kyma-project/kyma
https://api.github.com/repos/kyma-project/kyma
closed
Job for istioctl install
area/service-mesh enhancement
**Description** Currently we have two charts : istio-init and istio. Istioctl is a cli meaning that from kyma installer pov we need to provide a job which at least for now can download istioctl (eventually it should have it probably embedded) and execute install command . Additonally job must have kubectl as we need to mark couple of namespaced with istio injection disabled. Job should be in istio chart. AC: - Job definition present in the chart with everything need to perform operations - SA,RBAC etc. - ServiceAccount is least privileged -> has access to only istio resources + allows labeling namespaces + patching deployment Note that profiles(overrides) will be handled in a separate issue
1.0
Job for istioctl install - **Description** Currently we have two charts : istio-init and istio. Istioctl is a cli meaning that from kyma installer pov we need to provide a job which at least for now can download istioctl (eventually it should have it probably embedded) and execute install command . Additonally job must have kubectl as we need to mark couple of namespaced with istio injection disabled. Job should be in istio chart. AC: - Job definition present in the chart with everything need to perform operations - SA,RBAC etc. - ServiceAccount is least privileged -> has access to only istio resources + allows labeling namespaces + patching deployment Note that profiles(overrides) will be handled in a separate issue
non_process
job for istioctl install description currently we have two charts istio init and istio istioctl is a cli meaning that from kyma installer pov we need to provide a job which at least for now can download istioctl eventually it should have it probably embedded and execute install command additonally job must have kubectl as we need to mark couple of namespaced with istio injection disabled job should be in istio chart ac job definition present in the chart with everything need to perform operations sa rbac etc serviceaccount is least privileged has access to only istio resources allows labeling namespaces patching deployment note that profiles overrides will be handled in a separate issue
0
768,148
26,955,306,379
IssuesEvent
2023-02-08 14:32:30
flipt-io/flipt
https://api.github.com/repos/flipt-io/flipt
closed
[FLI-196] OTel: Support OTLP export
enhancement High priority
[https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/protocol/otlp.md](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/protocol/otlp.md) [https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/protocol/exporter.md](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/protocol/exporter.md) Potentially starting with just traces, but we could also move to supporting OTLP for metrics exporting as well We could/should likely also support both grpc and http protocols for the collector <sub>From [SyncLinear.com](https://synclinear.com) | [FLI-196](https://linear.app/flipt/issue/FLI-196/otel-support-otlp-export)</sub>
1.0
[FLI-196] OTel: Support OTLP export - [https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/protocol/otlp.md](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/protocol/otlp.md) [https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/protocol/exporter.md](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/protocol/exporter.md) Potentially starting with just traces, but we could also move to supporting OTLP for metrics exporting as well We could/should likely also support both grpc and http protocols for the collector <sub>From [SyncLinear.com](https://synclinear.com) | [FLI-196](https://linear.app/flipt/issue/FLI-196/otel-support-otlp-export)</sub>
non_process
otel support otlp export potentially starting with just traces but we could also move to supporting otlp for metrics exporting as well we could should likely also support both grpc and http protocols for the collector from
0
18,583
24,565,975,133
IssuesEvent
2022-10-13 03:04:38
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
Nixie Tubes mod linked improperly
bug progression mod:pypostprocessing compatibility
### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [X] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [ ] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [X] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [ ] Progression - [ ] Balance - [ ] Pypostprocessing failure - [ ] Other ### What is the problem? Placement of the Nixie Tubes mod is not tuned for the current tech tree. It currently uses 'Advanced Electronics' as its pre-requisite. It should have Circuit Network as its pre-requisite, and probably doesn't need logistics science. Cost should be on par with circuit network, and component costs should be on par with a constant combinator. ![image](https://user-images.githubusercontent.com/60377024/185261024-90c3db8d-b310-45cb-8492-f749613ec064.png) ### Steps to reproduce Mod is nixie-tubes 1.1.3 https://mods.factorio.com/mod/nixie-tubes ### Additional context _No response_ ### Log file _No response_
1.0
Nixie Tubes mod linked improperly - ### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [X] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [ ] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [X] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [ ] Progression - [ ] Balance - [ ] Pypostprocessing failure - [ ] Other ### What is the problem? Placement of the Nixie Tubes mod is not tuned for the current tech tree. It currently uses 'Advanced Electronics' as its pre-requisite. It should have Circuit Network as its pre-requisite, and probably doesn't need logistics science. Cost should be on par with circuit network, and component costs should be on par with a constant combinator. ![image](https://user-images.githubusercontent.com/60377024/185261024-90c3db8d-b310-45cb-8492-f749613ec064.png) ### Steps to reproduce Mod is nixie-tubes 1.1.3 https://mods.factorio.com/mod/nixie-tubes ### Additional context _No response_ ### Log file _No response_
process
nixie tubes mod linked improperly mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem placement of the nixie tubes mod is not tuned for the current tech tree it currently uses advanced electronics as its pre requisite it should have circuit network as its pre requisite and probably doesn t need logistics science cost should be on par with circuit network and component costs should be on par with a constant combinator steps to reproduce mod is nixie tubes additional context no response log file no response
1
11,497
17,290,409,382
IssuesEvent
2021-07-24 16:20:26
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
opened
Release Notes - Only showing most recent version notes
priority-5-triage status:requirements type:bug
<!-- PLEASE DO NOT REPORT ANY SECURITY CONCERNS THIS WAY Email renovate-disclosure@whitesourcesoftware.com instead. --> **How are you running Renovate?** - [x] WhiteSource Renovate hosted app on github.com - [ ] Self hosted If using the hosted app, please skip to the next section. Otherwise, if self-hosted, please complete the following: Please select which platform you are using: - [ ] Azure DevOps (dev.azure.com) - [ ] Azure DevOps Server - [ ] Bitbucket Cloud (bitbucket.org) - [ ] Bitbucket Server - [ ] Gitea - [ ] github.com - [ ] GitHub Enterprise Server - [ ] gitlab.com - [ ] GitLab self-hosted Renovate version: ... **Describe the bug** When testing renovate against [apollographql/federation-jvm](https://github.com/apollographql/federation-jvm), it correctly identified that the `pom.xml` should be updated from version 0.5.0 to 0.6.4. However, the release notes added to the PR only have the most recent entry for v0.6.4 and has not added any previous ones. Example renovate PR - https://github.com/setchy/renovate-bot-testbed/pull/1 ... **Relevant debug logs** <!-- Try not to raise a bug report unless you've looked at the logs first. If you're running self-hosted, run with `LOG_LEVEL=debug` in your environment variables and search for whatever dependency/branch/PR that is causing the problem. If you are using the Renovate App, log into https://app.renovatebot.com/dashboard and locate the correct job log for when the problem occurred (e.g. when the PR was created). Paste the *relevant* logs here, not the entire thing and not just a link to the dashboard (others do not have permissions to view them). 
--> <details><summary>Click me to see logs</summary> ``` Copy/paste any log here, between the starting and ending backticks ``` </details> **Have you created a minimal reproduction repository?** Please read the [minimal reproductions documentation](https://github.com/renovatebot/renovate/blob/main/docs/development/minimal-reproductions.md) to learn how to make a good minimal reproduction repository. - [x] I have provided a minimal reproduction repository - [ ] I don't have time for that, but it happens in a public repository I have linked to - [ ] I don't have time for that, and cannot share my private repository - [ ] The nature of this bug means it's impossible to reproduce publicly **Additional context** <!-- Add any other context about the problem here, including your own debugging or ideas on what went wrong. --> ...
1.0
Release Notes - Only showing most recent version notes - <!-- PLEASE DO NOT REPORT ANY SECURITY CONCERNS THIS WAY Email renovate-disclosure@whitesourcesoftware.com instead. --> **How are you running Renovate?** - [x] WhiteSource Renovate hosted app on github.com - [ ] Self hosted If using the hosted app, please skip to the next section. Otherwise, if self-hosted, please complete the following: Please select which platform you are using: - [ ] Azure DevOps (dev.azure.com) - [ ] Azure DevOps Server - [ ] Bitbucket Cloud (bitbucket.org) - [ ] Bitbucket Server - [ ] Gitea - [ ] github.com - [ ] GitHub Enterprise Server - [ ] gitlab.com - [ ] GitLab self-hosted Renovate version: ... **Describe the bug** When testing renovate against [apollographql/federation-jvm](https://github.com/apollographql/federation-jvm), it correctly identified that the `pom.xml` should be updated from version 0.5.0 to 0.6.4. However, the release notes added to the PR only have the most recent entry for v0.6.4 and has not added any previous ones. Example renovate PR - https://github.com/setchy/renovate-bot-testbed/pull/1 ... **Relevant debug logs** <!-- Try not to raise a bug report unless you've looked at the logs first. If you're running self-hosted, run with `LOG_LEVEL=debug` in your environment variables and search for whatever dependency/branch/PR that is causing the problem. If you are using the Renovate App, log into https://app.renovatebot.com/dashboard and locate the correct job log for when the problem occurred (e.g. when the PR was created). Paste the *relevant* logs here, not the entire thing and not just a link to the dashboard (others do not have permissions to view them). 
--> <details><summary>Click me to see logs</summary> ``` Copy/paste any log here, between the starting and ending backticks ``` </details> **Have you created a minimal reproduction repository?** Please read the [minimal reproductions documentation](https://github.com/renovatebot/renovate/blob/main/docs/development/minimal-reproductions.md) to learn how to make a good minimal reproduction repository. - [x] I have provided a minimal reproduction repository - [ ] I don't have time for that, but it happens in a public repository I have linked to - [ ] I don't have time for that, and cannot share my private repository - [ ] The nature of this bug means it's impossible to reproduce publicly **Additional context** <!-- Add any other context about the problem here, including your own debugging or ideas on what went wrong. --> ...
non_process
release notes only showing most recent version notes please do not report any security concerns this way email renovate disclosure whitesourcesoftware com instead how are you running renovate whitesource renovate hosted app on github com self hosted if using the hosted app please skip to the next section otherwise if self hosted please complete the following please select which platform you are using azure devops dev azure com azure devops server bitbucket cloud bitbucket org bitbucket server gitea github com github enterprise server gitlab com gitlab self hosted renovate version describe the bug when testing renovate against it correctly identified that the pom xml should be updated from version to however the release notes added to the pr only have the most recent entry for and has not added any previous ones example renovate pr relevant debug logs try not to raise a bug report unless you ve looked at the logs first if you re running self hosted run with log level debug in your environment variables and search for whatever dependency branch pr that is causing the problem if you are using the renovate app log into and locate the correct job log for when the problem occurred e g when the pr was created paste the relevant logs here not the entire thing and not just a link to the dashboard others do not have permissions to view them click me to see logs copy paste any log here between the starting and ending backticks have you created a minimal reproduction repository please read the to learn how to make a good minimal reproduction repository i have provided a minimal reproduction repository i don t have time for that but it happens in a public repository i have linked to i don t have time for that and cannot share my private repository the nature of this bug means it s impossible to reproduce publicly additional context
0
18,128
24,167,527,572
IssuesEvent
2022-09-22 16:11:12
GoogleCloudPlatform/terraform-mean-cloudrun-mongodb
https://api.github.com/repos/GoogleCloudPlatform/terraform-mean-cloudrun-mongodb
closed
Identify tasks to be automated
process
Referring to #3, identify the steps that need to be automated by Terraform. MVP prerequisites: 1. User has a single app container image already pushed to an image repository. On container deployment, the app reads the `ATLAS_URI` environment variable to connect to the database. 2. User has a GCP billing account ID and an organization ID. 3. User has installed [gcloud](https://cloud.google.com/sdk/docs/install-sdk). 4. User has a MongoDB Atlas account organization ID and API keys. 5. User has installed [Terraform](https://www.terraform.io/downloads). 6. User has identified the GCP and Atlas region for deployment. High level script task requirements: - [x] Ability to create app/database infrastructure, deploy app, and provide user with app URL. - [x] Verify successful deployment. See #12 (app URL probably enough verification for single container deploy) - [x] Ability to tear down app/database infrastructure.
1.0
Identify tasks to be automated - Referring to #3, identify the steps that need to be automated by Terraform. MVP prerequisites: 1. User has a single app container image already pushed to an image repository. On container deployment, the app reads the `ATLAS_URI` environment variable to connect to the database. 2. User has a GCP billing account ID and an organization ID. 3. User has installed [gcloud](https://cloud.google.com/sdk/docs/install-sdk). 4. User has a MongoDB Atlas account organization ID and API keys. 5. User has installed [Terraform](https://www.terraform.io/downloads). 6. User has identified the GCP and Atlas region for deployment. High level script task requirements: - [x] Ability to create app/database infrastructure, deploy app, and provide user with app URL. - [x] Verify successful deployment. See #12 (app URL probably enough verification for single container deploy) - [x] Ability to tear down app/database infrastructure.
process
identify tasks to be automated referring to identify the steps that need to be automated by terraform mvp prerequisites user has a single app container image already pushed to an image repository on container deployment the app reads the atlas uri environment variable to connect to the database user has a gcp billing account id and an organization id user has installed user has a mongodb atlas account organization id and api keys user has installed user has identified the gcp and atlas region for deployment high level script task requirements ability to create app database infrastructure deploy app and provide user with app url verify successful deployment see app url probably enough verification for single container deploy ability to tear down app database infrastructure
1
661,571
22,060,975,290
IssuesEvent
2022-05-30 17:47:02
bcgov/entity
https://api.github.com/repos/bcgov/entity
closed
INC0167964 - Github and Zenhub access for Megan Fedora
Priority1 ENTITY SRE
### ServiceNow incident: INC0167964 ### Contact information Staff Name: Maribeth Wilson Staff Email: ### Description Hi there, can you please assist us in providing access to Github for Megan Fedora? Megan is the new Manager, Registries Operations. Please let me know if you need any other information. Thank you, Maribeth Wilson Project Coordinator | BC Registries and Online Services Service BC Ministry of Citizens’ Services T: 778 405 1525 Web: http://www.servicebc.gov.bc.ca ### Ops Process - [ ] Add **Entity** or **Relationships** label to zenhub ticket - [ ] Add **Ops** label - [ ] Add **Priority1** label to zenhub ticket, if: - If the business says it is a priority - BA can use their business knowledge and best judgement if it is a priority or not - How long the ticket has been open - If we are still unsure, reach out to other BAs in the guild - [ ] Add ticket to "Ops" column - [ ] When ticket has been created, post the ticket in RocketChat '#Operations Tasks' channel - [ ] Reply All to IT Ops email (CC BA Inbox) and provide zenhub ticket number opened - [ ] Dev/BAs to resolve issue - [ ] **DEV**: Enter the time you worked on the ticket in the estimate - [ ] **DEV**: Tell BA it is ready to review - [ ] BAs review - [ ] Add ticket to the current milestone of the code base the ops ticket was relating to (i.e., if it's a Name Request issue, assign to current entities milestone, etc.) - [ ] Close Zenhub ticket - [ ] Tell IT Ops to close the ServiceNow Incident
1.0
INC0167964 - Github and Zenhub access for Megan Fedora - ### ServiceNow incident: INC0167964 ### Contact information Staff Name: Maribeth Wilson Staff Email: ### Description Hi there, can you please assist us in providing access to Github for Megan Fedora? Megan is the new Manager, Registries Operations. Please let me know if you need any other information. Thank you, Maribeth Wilson Project Coordinator | BC Registries and Online Services Service BC Ministry of Citizens’ Services T: 778 405 1525 Web: http://www.servicebc.gov.bc.ca ### Ops Process - [ ] Add **Entity** or **Relationships** label to zenhub ticket - [ ] Add **Ops** label - [ ] Add **Priority1** label to zenhub ticket, if: - If the business says it is a priority - BA can use their business knowledge and best judgement if it is a priority or not - How long the ticket has been open - If we are still unsure, reach out to other BAs in the guild - [ ] Add ticket to "Ops" column - [ ] When ticket has been created, post the ticket in RocketChat '#Operations Tasks' channel - [ ] Reply All to IT Ops email (CC BA Inbox) and provide zenhub ticket number opened - [ ] Dev/BAs to resolve issue - [ ] **DEV**: Enter the time you worked on the ticket in the estimate - [ ] **DEV**: Tell BA it is ready to review - [ ] BAs review - [ ] Add ticket to the current milestone of the code base the ops ticket was relating to (i.e., if it's a Name Request issue, assign to current entities milestone, etc.) - [ ] Close Zenhub ticket - [ ] Tell IT Ops to close the ServiceNow Incident
non_process
github and zenhub access for megan fedora servicenow incident contact information staff name maribeth wilson staff email description hi there can you please assist us in providing access to github for megan fedora megan is the new manager registries operations please let me know if you need any other information thank you maribeth wilson project coordinator bc registries and online services service bc ministry of citizens’ services t web ops process add entity or relationships label to zenhub ticket add ops label add label to zenhub ticket if if the business says it is a priority ba can use their business knowledge and best judgement if it is a priority or not how long the ticket has been open if we are still unsure reach out to other bas in the guild add ticket to ops column when ticket has been created post the ticket in rocketchat operations tasks channel reply all to it ops email cc ba inbox and provide zenhub ticket number opened dev bas to resolve issue dev enter the time you worked on the ticket in the estimate dev tell ba it is ready to review bas review add ticket to the current milestone of the code base the ops ticket was relating to i e if it s a name request issue assign to current entities milestone etc close zenhub ticket tell it ops to close the servicenow incident
0
24,778
4,108,762,469
IssuesEvent
2016-06-06 17:11:42
albaizq/NBAMovements2
https://api.github.com/repos/albaizq/NBAMovements2
closed
OOPS! Evaluation for territorio.owl
Important Inference Unit test bug
OOPS! has encountered some pitfalls related to inference. The Pitfalls are the following: 1. "Missing equivalent classes". Importance level: Important 2. "Missing domain or range in properties". Importance level: Important
1.0
OOPS! Evaluation for territorio.owl - OOPS! has encountered some pitfalls related to inference. The Pitfalls are the following: 1. "Missing equivalent classes". Importance level: Important 2. "Missing domain or range in properties". Importance level: Important
non_process
oops evaluation for territorio owl oops has encountered some pitfalls related to inference the pitfalls are the following missing equivalent classes importance level important missing domain or range in properties importance level important
0
670,170
22,678,521,676
IssuesEvent
2022-07-04 07:46:55
ooni/probe
https://api.github.com/repos/ooni/probe
closed
android: crash after clean install w/ default settings using Android 12
bug ooni/probe-mobile priority/medium
This is a special case of https://github.com/ooni/probe/issues/1897. I've had a subsequent exchange with the user that reported https://github.com/ooni/probe/issues/1897 to us. It seems the app is _still_ crashing after a clean install. After that, the app instead works as intended. The presence of an "Error" test in the results suggests there's something wrong when we start tests.
1.0
android: crash after clean install w/ default settings using Android 12 - This is a special case of https://github.com/ooni/probe/issues/1897. I've had a subsequent exchange with the user that reported https://github.com/ooni/probe/issues/1897 to us. It seems the app is _still_ crashing after a clean install. After that, the app instead works as intended. The presence of an "Error" test in the results suggests there's something wrong when we start tests.
non_process
android crash after clean install w default settings using android this is a special case of i ve had a subsequent exchange with the user that reported to us it seems the app is still crashing after a clean install after that the app instead works as intended the presence of an error test in the results suggests there s something wrong when we start tests
0
19,996
26,470,032,388
IssuesEvent
2023-01-17 06:10:33
nion-software/nionswift
https://api.github.com/repos/nion-software/nionswift
opened
Processing data should also retain some graphic properties
type - enhancement f - user-interface f - processing
BPS 2021-06-17: I think things would go smoother if the display properties (log display, color map) are maintained on a new data item that is generated from applying a function. CM note: I'm not sure this can happen in all cases, but like metadata, there should be a defined set of processing operations where it does apply. - #915
1.0
Processing data should also retain some graphic properties - BPS 2021-06-17: I think things would go smoother if the display properties (log display, color map) are maintained on a new data item that is generated from applying a function. CM note: I'm not sure this can happen in all cases, but like metadata, there should be a defined set of processing operations where it does apply. - #915
process
processing data should also retain some graphic properties bps i think things would go smoother if the display properties log display color map are maintained on a new data item that is generated from applying a function cm note i m not sure this can happen in all cases but like metadata there should be a defined set of processing operations where it does apply
1
449,532
12,970,291,391
IssuesEvent
2020-07-21 09:06:58
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
closed
Bandwidth level throttling policies cannot be created
Priority/High Type/Bug
### Description: Bandwidth level throttling policies cannot be created from the admin portal. In the console, we can see the following error. ``` [2020-07-06 22:21:26,520] ERROR - GlobalThrowableMapper Unrecognized property 'type' com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "type" (class org.wso2.carbon.apimgt.rest.api.admin.v1.dto.BandwidthLimitDTO), not marked as ignorable (4 known properties: "timeUnit", "dataAmount", "dataUnit", "unitTime"]) at [Source: (org.apache.cxf.transport.http.AbstractHTTPDestination$1); line: 1, column: 224] (through reference chain: org.wso2.carbon.apimgt.rest.api.admin.v1.dto.AdvancedThrottlePolicyDTO["defaultLimit"]->org.wso2.carbon.apimgt.rest.api.admin.v1.dto.ThrottleLimitDTO["bandwidth"]->org.wso2.carbon.apimgt.rest.api.admin.v1.dto.BandwidthLimitDTO["type"]) at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:61) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:823) ~[jackson-databind-2.9.9.3.jar:2.9.9.2] at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1153) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1589) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1567) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:294) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129) 
~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:1574) ~[jackson-databind-2.9.9.3.jar:2.9.9.2] at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:965) ~[jackson-databind-2.9.9.3.jar:2.9.9.2] at com.fasterxml.jackson.jaxrs.base.ProviderBase.readFrom(ProviderBase.java:815) ~[jackson-jaxrs-base-2.9.9.jar:2.9.9] at org.apache.cxf.jaxrs.utils.JAXRSUtils.readFromMessageBodyReader(JAXRSUtils.java:1397) ~[cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.utils.JAXRSUtils.readFromMessageBody(JAXRSUtils.java:1349) ~[cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.utils.JAXRSUtils.processRequestBodyParameter(JAXRSUtils.java:865) ~[cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.utils.JAXRSUtils.processParameters(JAXRSUtils.java:810) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:214) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.handleMessage(JAXRSInInterceptor.java:78) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) [cxf-core-3.2.8.jar:3.2.8] at 
org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:216) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:301) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:220) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at javax.servlet.http.HttpServlet.service(HttpServlet.java:660) [tomcat-servlet-api_9.0.31.wso2v1.jar:?] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:276) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.31.wso2v1.jar:?] 
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat_9.0.31.wso2v1.jar:?] at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:86) [org.wso2.carbon.identity.context.rewrite.valve_1.4.0.jar:?] at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) [org.wso2.carbon.identity.authz.valve_1.4.0.jar:?] at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:75) [org.wso2.carbon.identity.auth.valve_1.4.0.jar:?] at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:688) [tomcat_9.0.31.wso2v1.jar:?] at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] 
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:119) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:367) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1639) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat_9.0.31.wso2v1.jar:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat_9.0.31.wso2v1.jar:?] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] ``` ### Steps to reproduce: Login to admin portal. Then try to create an advanced policy based on bandwidth. ### Affected Product Version: Current Master Branch ### Environment details (with versions): - OS: MacOS Mojave - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
1.0
Bandwidth level throttling policies cannot be created - ### Description: Bandwidth level throttling policies cannot be created from the admin portal. In the console, we can see the following error. ``` [2020-07-06 22:21:26,520] ERROR - GlobalThrowableMapper Unrecognized property 'type' com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "type" (class org.wso2.carbon.apimgt.rest.api.admin.v1.dto.BandwidthLimitDTO), not marked as ignorable (4 known properties: "timeUnit", "dataAmount", "dataUnit", "unitTime"]) at [Source: (org.apache.cxf.transport.http.AbstractHTTPDestination$1); line: 1, column: 224] (through reference chain: org.wso2.carbon.apimgt.rest.api.admin.v1.dto.AdvancedThrottlePolicyDTO["defaultLimit"]->org.wso2.carbon.apimgt.rest.api.admin.v1.dto.ThrottleLimitDTO["bandwidth"]->org.wso2.carbon.apimgt.rest.api.admin.v1.dto.BandwidthLimitDTO["type"]) at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:61) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:823) ~[jackson-databind-2.9.9.3.jar:2.9.9.2] at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1153) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1589) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1567) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:294) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at 
com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151) ~[jackson-databind-2.9.9.3.jar:2.9.9.3] at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:1574) ~[jackson-databind-2.9.9.3.jar:2.9.9.2] at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:965) ~[jackson-databind-2.9.9.3.jar:2.9.9.2] at com.fasterxml.jackson.jaxrs.base.ProviderBase.readFrom(ProviderBase.java:815) ~[jackson-jaxrs-base-2.9.9.jar:2.9.9] at org.apache.cxf.jaxrs.utils.JAXRSUtils.readFromMessageBodyReader(JAXRSUtils.java:1397) ~[cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.utils.JAXRSUtils.readFromMessageBody(JAXRSUtils.java:1349) ~[cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.utils.JAXRSUtils.processRequestBodyParameter(JAXRSUtils.java:865) ~[cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.utils.JAXRSUtils.processParameters(JAXRSUtils.java:810) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:214) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.handleMessage(JAXRSInInterceptor.java:78) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8] at 
org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) [cxf-core-3.2.8.jar:3.2.8] at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:216) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:301) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:220) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at javax.servlet.http.HttpServlet.service(HttpServlet.java:660) [tomcat-servlet-api_9.0.31.wso2v1.jar:?] at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:276) [cxf-rt-transports-http-3.2.8.jar:3.2.8] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat_9.0.31.wso2v1.jar:?] 
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat_9.0.31.wso2v1.jar:?] at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:86) [org.wso2.carbon.identity.context.rewrite.valve_1.4.0.jar:?] at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) [org.wso2.carbon.identity.authz.valve_1.4.0.jar:?] at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:75) [org.wso2.carbon.identity.auth.valve_1.4.0.jar:?] at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:688) [tomcat_9.0.31.wso2v1.jar:?] at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] 
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:119) [org.wso2.carbon.tomcat.ext_4.6.0.jar:?] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:367) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1639) [tomcat_9.0.31.wso2v1.jar:?] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat_9.0.31.wso2v1.jar:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat_9.0.31.wso2v1.jar:?] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] ``` ### Steps to reproduce: Login to admin portal. Then try to create an advanced policy based on bandwidth. ### Affected Product Version: Current Master Branch ### Environment details (with versions): - OS: MacOS Mojave - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
non_process
bandwidth level throttling policies cannot be created description bandwidth level throttling policies cannot be created from the admin portal in the console we can see the following error error globalthrowablemapper unrecognized property type com fasterxml jackson databind exc unrecognizedpropertyexception unrecognized field type class org carbon apimgt rest api admin dto bandwidthlimitdto not marked as ignorable known properties timeunit dataamount dataunit unittime at through reference chain org carbon apimgt rest api admin dto advancedthrottlepolicydto org carbon apimgt rest api admin dto throttlelimitdto org carbon apimgt rest api admin dto bandwidthlimitdto at com fasterxml jackson databind exc unrecognizedpropertyexception from unrecognizedpropertyexception java at com fasterxml jackson databind deserializationcontext handleunknownproperty deserializationcontext java at com fasterxml jackson databind deser std stddeserializer handleunknownproperty stddeserializer java at com fasterxml jackson databind deser beandeserializerbase handleunknownproperty beandeserializerbase java at com fasterxml jackson databind deser beandeserializerbase handleunknownvanilla beandeserializerbase java at com fasterxml jackson databind deser beandeserializer vanilladeserialize beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser impl methodproperty deserializeandset methodproperty java at com fasterxml jackson databind deser beandeserializer vanilladeserialize beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser impl methodproperty deserializeandset methodproperty java at com fasterxml jackson databind deser beandeserializer vanilladeserialize beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson 
databind objectreader bind objectreader java at com fasterxml jackson databind objectreader readvalue objectreader java at com fasterxml jackson jaxrs base providerbase readfrom providerbase java at org apache cxf jaxrs utils jaxrsutils readfrommessagebodyreader jaxrsutils java at org apache cxf jaxrs utils jaxrsutils readfrommessagebody jaxrsutils java at org apache cxf jaxrs utils jaxrsutils processrequestbodyparameter jaxrsutils java at org apache cxf jaxrs utils jaxrsutils processparameters jaxrsutils java at org apache cxf jaxrs interceptor jaxrsininterceptor processrequest jaxrsininterceptor java at org apache cxf jaxrs interceptor jaxrsininterceptor handlemessage jaxrsininterceptor java at org apache cxf phase phaseinterceptorchain dointercept phaseinterceptorchain java at org apache cxf transport chaininitiationobserver onmessage chaininitiationobserver java at org apache cxf transport http abstracthttpdestination invoke abstracthttpdestination java at org apache cxf transport servlet servletcontroller invokedestination servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet cxfnonspringservlet invoke cxfnonspringservlet java at org apache cxf transport servlet abstracthttpservlet handlerequest abstracthttpservlet java at org apache cxf transport servlet abstracthttpservlet dopost abstracthttpservlet java at javax servlet http httpservlet service httpservlet java at org apache cxf transport servlet abstracthttpservlet service abstracthttpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain 
java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java util concurrent threadpoolexecutor runworker threadpoolexecutor 
java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java steps to reproduce login to admin portal then try to create an advanced policy based on bandwidth affected product version current master branch environment details with versions os macos mojave client env docker optional fields related issues suggested labels suggested assignees
0
13,430
15,881,226,571
IssuesEvent
2021-04-09 14:35:02
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PD[] Reorganize consent documents in Cloud Storage
Bug P1 Participant datastore Process: Fixed Process: Tested QA Process: Tested dev
Consent documents in Cloud Storage must be organized as Study > Participant-specific ID (different from Participant ID) > Consent Documents We also need a way for existing deployments to take the update
3.0
[PD[] Reorganize consent documents in Cloud Storage - Consent documents in Cloud Storage must be organized as Study > Participant-specific ID (different from Participant ID) > Consent Documents We also need a way for existing deployments to take the update
process
reorganize consent documents in cloud storage consent documents in cloud storage must be organized as study participant specific id different from participant id consent documents we also need a way for existing deployments to take the update
1
19,301
25,466,501,500
IssuesEvent
2022-11-25 05:18:22
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[IDP] [PM] Admin account should not be added in the PM in the following scenario
Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
**Pre-condition:** mfa should be enabled in the PM **Steps:** 1. Login to PM 2. Click on 'Admins' tab 3. Click on 'Add new admin' button 4. Add admin in the application without phone number and Verify **AR:** Admin account is getting added in the PM **ER:** Admin account should not be added in the PM ,without entering phone number field. **Note:** Phone number is mandatory when mfa is enabled in the PM
3.0
[IDP] [PM] Admin account should not be added in the PM in the following scenario - **Pre-condition:** mfa should be enabled in the PM **Steps:** 1. Login to PM 2. Click on 'Admins' tab 3. Click on 'Add new admin' button 4. Add admin in the application without phone number and Verify **AR:** Admin account is getting added in the PM **ER:** Admin account should not be added in the PM ,without entering phone number field. **Note:** Phone number is mandatory when mfa is enabled in the PM
process
admin account should not be added in the pm in the following scenario pre condition mfa should be enabled in the pm steps login to pm click on admins tab click on add new admin button add admin in the application without phone number and verify ar admin account is getting added in the pm er admin account should not be added in the pm without entering phone number field note phone number is mandatory when mfa is enabled in the pm
1
11,245
14,015,429,862
IssuesEvent
2020-10-29 13:19:19
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change term - georeferenceSources
Class - Location Process - implement Term - change
## Change term * Submitter: @ekrimmel * Justification (why is this change necessary?): For clarification and compliance with best practices [Chapman & Wieczorek 2020](https://doi.org/10.35035/e09p-h128) * Proponents (who needs this change): compliance change needed by users of Best Practices. Proposed new attributes of the term: * Examples: Change "USGS 1:24000 Florence Montana Quad" to "USGS 1:24000 Florence Montana Quad 1967" See source issue at https://github.com/gbif/doc-georeferencing-quick-reference-guide/issues/3
1.0
Change term - georeferenceSources - ## Change term * Submitter: @ekrimmel * Justification (why is this change necessary?): For clarification and compliance with best practices [Chapman & Wieczorek 2020](https://doi.org/10.35035/e09p-h128) * Proponents (who needs this change): compliance change needed by users of Best Practices. Proposed new attributes of the term: * Examples: Change "USGS 1:24000 Florence Montana Quad" to "USGS 1:24000 Florence Montana Quad 1967" See source issue at https://github.com/gbif/doc-georeferencing-quick-reference-guide/issues/3
process
change term georeferencesources change term submitter ekrimmel justification why is this change necessary for clarification and compliance with best practices proponents who needs this change compliance change needed by users of best practices proposed new attributes of the term examples change usgs florence montana quad to usgs florence montana quad see source issue at
1
2,004
4,819,348,854
IssuesEvent
2016-11-04 18:58:37
Azure/azure-event-hubs-java
https://api.github.com/repos/Azure/azure-event-hubs-java
closed
EPH crashes due to high memory usage by large number of threads
EventProcessorHost
_From @serkantkaraca on July 18, 2016 18:46_ EPH client process memory usage hit almost 3GB and then process crashed due to not able to allocate more resources. Attached screenshot shows the memory footprint during the run. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard pages failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard pages failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to unguard stack red zone failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard pages failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to unguard stack red zone failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to unguard stack red zone failed. ![capture](https://cloud.githubusercontent.com/assets/16924470/16926315/fbccd7f8-4cdc-11e6-978f-fd204468eced.PNG) _Copied from original issue: Azure/azure-event-hubs#192_
1.0
EPH crashes due to high memory usage by large number of threads - _From @serkantkaraca on July 18, 2016 18:46_ EPH client process memory usage hit almost 3GB and then process crashed due to not able to allocate more resources. Attached screenshot shows the memory footprint during the run. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard pages failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard pages failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to unguard stack red zone failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard pages failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to unguard stack red zone failed. <2790868.22> 07/18/2016 18:24:16 - Error: JAVA-Receiver_1<stderr>: Java HotSpot(TM) 64-Bit Server VM warning: Attempt to unguard stack red zone failed. ![capture](https://cloud.githubusercontent.com/assets/16924470/16926315/fbccd7f8-4cdc-11e6-978f-fd204468eced.PNG) _Copied from original issue: Azure/azure-event-hubs#192_
process
eph crashes due to high memory usage by large number of threads from serkantkaraca on july eph client process memory usage hit almost and then process crashed due to not able to allocate more resources attached screenshot shows the memory footprint during the run error java receiver java hotspot tm bit server vm warning attempt to allocate stack guard pages failed error java receiver java hotspot tm bit server vm warning attempt to allocate stack guard pages failed error java receiver java hotspot tm bit server vm warning attempt to unguard stack red zone failed error java receiver java hotspot tm bit server vm warning attempt to allocate stack guard pages failed error java receiver java hotspot tm bit server vm warning attempt to unguard stack red zone failed error java receiver java hotspot tm bit server vm warning attempt to unguard stack red zone failed copied from original issue azure azure event hubs
1
7,783
10,924,427,552
IssuesEvent
2019-11-22 10:06:48
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
closed
UnhandledPromiseRejectionWarning: Error: Photon binary for current platform linux-glibc-libssl1.1.0 could not be found.
bug/2-confirmed kind/bug process/next-milestone
I'm getting `UnhandledPromiseRejectionWarning: Error: Photon binary for current platform linux-glibc-libssl1.1.0 could not be found.` errors when running the generated code. Any idea how I can fix this? When I checked similar issues they seemed to be fixed and closed, so I reported this one. ## Reproduction: 1. Install Prisma2 `npm install -g prisma2`, results in version `2.0.0-preview014.2` 2. Run `prisma2 init` and pick sqlite, blank project and Javascript 3. Make a sample schema 4. run `prisma2 generate` ``` > Sockify@0.0.1 prisma2 /home/ruben/Datasprong/sokken/sockpreview/sockify > prisma2 "generate" > Downloading linux-glibc-libssl1.1.0 binary for query-engine [====================] 100% Generating Photon.js to /home/ruben/Datasprong/sokken/sockpreview/sockify/node_modules/@generated/photon Done in 2.05s ``` 5. Run my code (simply connects via Photon to the database) ``` Use of eval is strongly discouraged, as it poses security risks and may cause issues with minification 12961: } 12962: this.platform = this.platform || platform; 12963: const fileName = eval(`require('path').basename(__filename)`); ^ 12964: if (fileName === 'NodeEngine.js') { 12965: return this.getQueryEnginePath(this.platform, path_1.default.resolve(__dirname, `..`)); (node:16314) UnhandledPromiseRejectionWarning: Error: Photon binary for current platform linux-glibc-libssl1.1.0 could not be found. Make sure to adjust the generator configuration in the schema.prisma file: generator photon { provider = "photonjs" binaryTargets = ["native"] } Please run prisma2 generate for your changes to take effect. Note, that by providing `native`, Photon automatically resolves `linux-glibc-libssl1.1.0`. 
Read more about deploying Photon: https://github.com/prisma/prisma2/blob/master/docs/core/generators/photonjs.md at NodeEngine.getPrismaPath (/home/ruben/Datasprong/sokken/sockpreview/sockify/__sapper__/dev/server/server.js:17081:23) (node:16314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1) (node:16314) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code. ``` 6. Following the sugestion to add `'native'` does not help.
1.0
UnhandledPromiseRejectionWarning: Error: Photon binary for current platform linux-glibc-libssl1.1.0 could not be found. - I'm getting `UnhandledPromiseRejectionWarning: Error: Photon binary for current platform linux-glibc-libssl1.1.0 could not be found.` errors when running the generated code. Any idea how I can fix this? When I checked similar issues they seemed to be fixed and closed, so I reported this one. ## Reproduction: 1. Install Prisma2 `npm install -g prisma2`, results in version `2.0.0-preview014.2` 2. Run `prisma2 init` and pick sqlite, blank project and Javascript 3. Make a sample schema 4. run `prisma2 generate` ``` > Sockify@0.0.1 prisma2 /home/ruben/Datasprong/sokken/sockpreview/sockify > prisma2 "generate" > Downloading linux-glibc-libssl1.1.0 binary for query-engine [====================] 100% Generating Photon.js to /home/ruben/Datasprong/sokken/sockpreview/sockify/node_modules/@generated/photon Done in 2.05s ``` 5. Run my code (simply connects via Photon to the database) ``` Use of eval is strongly discouraged, as it poses security risks and may cause issues with minification 12961: } 12962: this.platform = this.platform || platform; 12963: const fileName = eval(`require('path').basename(__filename)`); ^ 12964: if (fileName === 'NodeEngine.js') { 12965: return this.getQueryEnginePath(this.platform, path_1.default.resolve(__dirname, `..`)); (node:16314) UnhandledPromiseRejectionWarning: Error: Photon binary for current platform linux-glibc-libssl1.1.0 could not be found. Make sure to adjust the generator configuration in the schema.prisma file: generator photon { provider = "photonjs" binaryTargets = ["native"] } Please run prisma2 generate for your changes to take effect. Note, that by providing `native`, Photon automatically resolves `linux-glibc-libssl1.1.0`. 
Read more about deploying Photon: https://github.com/prisma/prisma2/blob/master/docs/core/generators/photonjs.md at NodeEngine.getPrismaPath (/home/ruben/Datasprong/sokken/sockpreview/sockify/__sapper__/dev/server/server.js:17081:23) (node:16314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1) (node:16314) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code. ``` 6. Following the suggestion to add `'native'` does not help.
process
unhandledpromiserejectionwarning error photon binary for current platform linux glibc could not be found i m getting unhandledpromiserejectionwarning error photon binary for current platform linux glibc could not be found errors when running the generated code any idea how i can fix this when i checked similar issues they seemed to be fixed and closed so i reported this one reproduction install npm install g results in version run init and pick sqlite blank project and javascript make a sample schema run generate sockify home ruben datasprong sokken sockpreview sockify generate downloading linux glibc binary for query engine generating photon js to home ruben datasprong sokken sockpreview sockify node modules generated photon done in run my code simply connects via photon to the database use of eval is strongly discouraged as it poses security risks and may cause issues with minification this platform this platform platform const filename eval require path basename filename if filename nodeengine js return this getqueryenginepath this platform path default resolve dirname node unhandledpromiserejectionwarning error photon binary for current platform linux glibc could not be found make sure to adjust the generator configuration in the schema prisma file generator photon provider photonjs binarytargets please run generate for your changes to take effect note that by providing native photon automatically resolves linux glibc read more about deploying photon at nodeengine getprismapath home ruben datasprong sokken sockpreview sockify sapper dev server server js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code 
following the suggestion to add native does not help
1
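The warning in the record above points at a bundling problem rather than a schema problem: the stack trace shows `NodeEngine.getPrismaPath` running from Sapper's bundled `server.js`, and the quoted code resolves the binary path from `__filename` — once webpack rewrites that, the lookup can no longer find the downloaded query engine next to the generated client. A hedged workaround, assuming webpack is doing the bundling, is to keep the generated Photon client out of the server bundle. This is a minimal sketch: the file name is hypothetical, and `@generated/photon` is taken from the log output above, not confirmed against the repo.

```javascript
// webpack.server.config.js (hypothetical) — minimal sketch.
// Listing the generated Photon client in `externals` keeps it out of the
// server bundle, so it is require()d from node_modules at runtime and its
// __filename-based lookup still sits next to the downloaded binary.
const config = {
  target: 'node',
  externals: ['@generated/photon'],
};

module.exports = config;
```

With the client left external, `prisma2 generate` keeps placing the platform binary inside `node_modules/@generated/photon`, which is exactly where the unbundled code looks for it.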
20,704
27,392,413,274
IssuesEvent
2023-02-28 17:07:17
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Goaccess does not parse all the file
question log-processing
I updated goaccess yesterday. However, I found that goaccess@1.7 can no longer parse all my log file. My log has 830k lines while goaccess only parse the first about 90k lines. I wonder if there's any limitation in goaccess@1.7. ![image](https://user-images.githubusercontent.com/70561268/212608472-53736a3c-36a9-4921-8faf-41f7bd0bea2c.png)
1.0
Goaccess does not parse all the file - I updated goaccess yesterday. However, I found that goaccess@1.7 can no longer parse all my log file. My log has 830k lines while goaccess only parse the first about 90k lines. I wonder if there's any limitation in goaccess@1.7. ![image](https://user-images.githubusercontent.com/70561268/212608472-53736a3c-36a9-4921-8faf-41f7bd0bea2c.png)
process
goaccess does not parse all the file i updated goaccess yesterday however i found that goaccess can no longer parse all my log file my log has lines while goaccess only parse the first about lines i wonder if there s any limitation in goaccess
1
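When GoAccess appears to stop partway through a large file, one common cause is a log-format change mid-file: lines that stop matching `--log-format` are discarded as invalid rather than parsed, which can look like a hard cutoff at ~90k of 830k lines. A quick hedged check is to inspect the lines straddling the point where the parsed count stops growing (the boundary below is simulated — substitute your own log and line numbers); GoAccess can also dump rejected lines via `--invalid-requests` for comparison.

```shell
# Simulate a log whose format changes mid-file, then inspect the boundary.
# In a real case, replace access.sample.log with your log and '3,4p' with
# the range around the point where GoAccess's parsed count stops growing.
printf 'old-format line %s\n' 1 2 3 > access.sample.log
printf 'NEW-FORMAT line %s\n' 4 5 >> access.sample.log
sed -n '3,4p' access.sample.log   # lines straddling the suspected boundary
```

If the lines after the boundary look different (rotated log appended, changed server config, different format), adjusting `--log-format` — or splitting the file and parsing each half with its own format — is usually enough; there is no documented ~90k-line cap to work around.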
91,528
15,856,486,024
IssuesEvent
2021-04-08 02:27:10
benchabot/abp
https://api.github.com/repos/benchabot/abp
opened
WS-2020-0208 (Medium) detected in highlight.js-9.18.1.tgz
security vulnerability
## WS-2020-0208 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>highlight.js-9.18.1.tgz</b></p></summary> <p>Syntax highlighting with language autodetection.</p> <p>Library home page: <a href="https://registry.npmjs.org/highlight.js/-/highlight.js-9.18.1.tgz">https://registry.npmjs.org/highlight.js/-/highlight.js-9.18.1.tgz</a></p> <p>Path to dependency file: abp/samples/MicroserviceDemo/applications/PublicWebSite.Host/package.json</p> <p>Path to vulnerable library: abp/modules/blogging/app/Volo.BloggingTestApp/node_modules/highlight.js/package.json,abp/modules/blogging/app/Volo.BloggingTestApp/node_modules/highlight.js/package.json,abp/modules/blogging/app/Volo.BloggingTestApp/node_modules/highlight.js/package.json,abp/modules/blogging/app/Volo.BloggingTestApp/node_modules/highlight.js/package.json</p> <p> Dependency Hierarchy: - blogging-1.1.1.tgz (Root Library) - tui-editor-1.1.1.tgz - tui-editor-1.4.10.tgz - :x: **highlight.js-9.18.1.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> If are you are using Highlight.js to highlight user-provided data you are possibly vulnerable. On the client-side (in a browser or Electron environment) risks could include lengthy freezes or crashes... On the server-side infinite freezes could occur... effectively preventing users from accessing your app or service (ie, Denial of Service). This is an issue with grammars shipped with the parser (and potentially 3rd party grammars also), not the parser itself. If you are using Highlight.js with any of the following grammars you are vulnerable. If you are using highlightAuto to detect the language (and have any of these grammars registered) you are vulnerable. 
<p>Publish Date: 2020-12-04 <p>URL: <a href=https://github.com/highlightjs/highlight.js/commit/373b9d862401162e832ce77305e49b859e110f9c>WS-2020-0208</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/highlightjs/highlight.js/tree/10.4.1">https://github.com/highlightjs/highlight.js/tree/10.4.1</a></p> <p>Release Date: 2020-12-04</p> <p>Fix Resolution: 10.4.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2020-0208 (Medium) detected in highlight.js-9.18.1.tgz - ## WS-2020-0208 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>highlight.js-9.18.1.tgz</b></p></summary> <p>Syntax highlighting with language autodetection.</p> <p>Library home page: <a href="https://registry.npmjs.org/highlight.js/-/highlight.js-9.18.1.tgz">https://registry.npmjs.org/highlight.js/-/highlight.js-9.18.1.tgz</a></p> <p>Path to dependency file: abp/samples/MicroserviceDemo/applications/PublicWebSite.Host/package.json</p> <p>Path to vulnerable library: abp/modules/blogging/app/Volo.BloggingTestApp/node_modules/highlight.js/package.json,abp/modules/blogging/app/Volo.BloggingTestApp/node_modules/highlight.js/package.json,abp/modules/blogging/app/Volo.BloggingTestApp/node_modules/highlight.js/package.json,abp/modules/blogging/app/Volo.BloggingTestApp/node_modules/highlight.js/package.json</p> <p> Dependency Hierarchy: - blogging-1.1.1.tgz (Root Library) - tui-editor-1.1.1.tgz - tui-editor-1.4.10.tgz - :x: **highlight.js-9.18.1.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> If are you are using Highlight.js to highlight user-provided data you are possibly vulnerable. On the client-side (in a browser or Electron environment) risks could include lengthy freezes or crashes... On the server-side infinite freezes could occur... effectively preventing users from accessing your app or service (ie, Denial of Service). This is an issue with grammars shipped with the parser (and potentially 3rd party grammars also), not the parser itself. If you are using Highlight.js with any of the following grammars you are vulnerable. 
If you are using highlightAuto to detect the language (and have any of these grammars registered) you are vulnerable. <p>Publish Date: 2020-12-04 <p>URL: <a href=https://github.com/highlightjs/highlight.js/commit/373b9d862401162e832ce77305e49b859e110f9c>WS-2020-0208</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/highlightjs/highlight.js/tree/10.4.1">https://github.com/highlightjs/highlight.js/tree/10.4.1</a></p> <p>Release Date: 2020-12-04</p> <p>Fix Resolution: 10.4.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws medium detected in highlight js tgz ws medium severity vulnerability vulnerable library highlight js tgz syntax highlighting with language autodetection library home page a href path to dependency file abp samples microservicedemo applications publicwebsite host package json path to vulnerable library abp modules blogging app volo bloggingtestapp node modules highlight js package json abp modules blogging app volo bloggingtestapp node modules highlight js package json abp modules blogging app volo bloggingtestapp node modules highlight js package json abp modules blogging app volo bloggingtestapp node modules highlight js package json dependency hierarchy blogging tgz root library tui editor tgz tui editor tgz x highlight js tgz vulnerable library vulnerability details if are you are using highlight js to highlight user provided data you are possibly vulnerable on the client side in a browser or electron environment risks could include lengthy freezes or crashes on the server side infinite freezes could occur effectively preventing users from accessing your app or service ie denial of service this is an issue with grammars shipped with the parser and potentially party grammars also not the parser itself if you are using highlight js with any of the following grammars you are vulnerable if you are using highlightauto to detect the language and have any of these grammars registered you are vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
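Because the vulnerable highlight.js in this report arrives transitively (blogging → tui-editor → highlight.js), bumping it directly in the project's own dependencies has no effect. One hedged option, assuming npm 8+ (yarn users would use the analogous `resolutions` field), is to force the patched release noted in the fix section with a `package.json` override — a sketch, not a change verified against this repo:

```json
{
  "overrides": {
    "highlight.js": "^10.4.1"
  }
}
```

After adding the override, re-running `npm install` re-resolves the tree; `npm ls highlight.js` should then report 10.4.1 (the version the report names as the fix resolution) at every occurrence.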