| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 5-112) | repo_url (string, length 34-141) | action (string, 3 classes) | title (string, length 1-844) | labels (string, length 4-721) | body (string, length 1-261k) | index (string, 12 classes) | text_combine (string, length 96-261k) | label (string, 2 classes) | text (string, length 96-248k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
410,255 | 11,985,489,682 | IssuesEvent | 2020-04-07 17:35:54 | melonproject/protocol | https://api.github.com/repos/melonproject/protocol | closed | More sophisticated MLN premium pricing function in Engine | idea low priority | Right now the `premiumPercent` function in Engine is just stepwise and applies to the entire supply of eth in the contract (@SeanJCasey brought attention to this in chat a while ago).
This was a rough preliminary approach since the goal was just to be a sink, but we could come up with something that applies to the supply more dynamically, or has some other interesting property. | 1.0 | More sophisticated MLN premium pricing function in Engine - Right now the `premiumPercent` function in Engine is just stepwise and applies to the entire supply of eth in the contract (@SeanJCasey brought attention to this in chat a while ago).
This was a rough preliminary approach since the goal was just to be a sink, but we could come up with something that applies to the supply more dynamically, or has some other interesting property. | priority | more sophisticated mln premium pricing function in engine right now the premiumpercent function in engine is just stepwise and applies to the entire supply of eth in the contract seanjcasey brought attention to this in chat a while ago this was a rough preliminary approach since the goal was just to be a sink but we could come up with something that applies to the supply more dynamically or has some other interesting property | 1 |
496,635 | 14,350,725,579 | IssuesEvent | 2020-11-29 22:09:07 | GrandDynamo/OneDrive-Cloud-Player | https://api.github.com/repos/GrandDynamo/OneDrive-Cloud-Player | closed | Sanitize file names | low priority | Who thinks of these names... VideoPlayerPageViewModel.cs already resides in the ViewModels folder, so IMO it's unnecessary to have these long names. | 1.0 | Sanitize file names - Who thinks of these names... VideoPlayerPageViewModel.cs already resides in the ViewModels folder, so IMO it's unnecessary to have these long names. | priority | sanitize file names who thinks of these names videoplayerpageviewmodel cs already resides in the viewmodels folder so imo it s unnecessary to have these long names | 1 |
586,967 | 17,600,710,019 | IssuesEvent | 2021-08-17 11:29:37 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | opened | Change tooltip in magic lifestyles about traits | suggestion :question: priority low :grey_exclamation: | <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
@ValianBlue suggestion
> I would personally change it to "Remaining [Magic] Perks to get [Level]: [X]." In other words, have it count down with each perk the player acquires as opposed to listing the threshold.
I suggest `[Character] needs [N] more unlocked [Magic] perks to become [trait] (current/total)` | 1.0 | Change tooltip in magic lifestyles about traits - <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
@ValianBlue suggestion
> I would personally change it to "Remaining [Magic] Perks to get [Level]: [X]." In other words, have it count down with each perk the player acquires as opposed to listing the threshold.
I suggest `[Character] needs [N] more unlocked [Magic] perks to become [trait] (current/total)` | priority | change tooltip in magic lifestyles about traits do not remove pre existing lines describe your suggestion in full detail below valianblue suggestion i would personally change it to remaining perks to get in other words have it count down with each perk the player acquires as opposed to listing the threshold i suggest needs more unlocked perks to become current total | 1 |
504,416 | 14,618,153,963 | IssuesEvent | 2020-12-22 15:49:10 | super-cooper/memebot | https://api.github.com/repos/super-cooper/memebot | closed | Include user's name in !hello output | feature low-priority | **What do you dislike about the feature in its current state?**
!hello is too generic. I want the bot to say hello *to me*.
**Describe the solution you'd like**
Output is instead "Hello, [user]!" where [user] is the nickname of the user who called the command. If the user has no nickname, use their username.
**What tradeoffs are made by implementing your improvement?**
the hello command needs to actually have logic, but it makes the command more fun.
**Describe alternatives you've considered**
leaving hello as is
| 1.0 | Include user's name in !hello output - **What do you dislike about the feature in its current state?**
!hello is too generic. I want the bot to say hello *to me*.
**Describe the solution you'd like**
Output is instead "Hello, [user]!" where [user] is the nickname of the user who called the command. If the user has no nickname, use their username.
**What tradeoffs are made by implementing your improvement?**
the hello command needs to actually have logic, but it makes the command more fun.
**Describe alternatives you've considered**
leaving hello as is
| priority | include user s name in hello output what do you dislike about the feature in its current state hello is too generic i want the bot to say hello to me describe the solution you d like output is instead hello where is the nickname of the user who called the command if the user has no nickname use their username what tradeoffs are made by implementing your improvement the hello command needs to actually have logic but it makes the command more fun describe alternatives you ve considered leaving hello as is | 1 |
539,194 | 15,784,861,137 | IssuesEvent | 2021-04-01 15:37:41 | faktaoklimatu/web-core | https://api.github.com/repos/faktaoklimatu/web-core | opened | Non-existent authors on explainers also link to "about us" | 1: low priority free-to-take | The corals explainer is written by Tereza Jarníková, the name links to `/o-nas#clenove` but she is not listed. What should be done?
1. Nothing.
2. Only create a link if the person exists (logistically somewhat involved).
3. Ask all explainer creators to add a profile.
Any opinions, @jankrcal, @mgrabovsky? | 1.0 | Non-existent authors on explainers also link to "about us" - The corals explainer is written by Tereza Jarníková, the name links to `/o-nas#clenove` but she is not listed. What should be done?
1. Nothing.
2. Only create a link if the person exists (logistically somewhat involved).
3. Ask all explainer creators to add a profile.
Any opinions, @jankrcal, @mgrabovsky? | priority | non existent authors on explainers also link to about us the corals explainer is written by tereza jarníková the name links to o nas clenove but she is not listed what should be done nothing only create a link if the person exists logistically somewhat involved ask all explainer creators to add a profile any opinions jankrcal mgrabovsky | 1 |
765,485 | 26,848,454,133 | IssuesEvent | 2023-02-03 09:11:47 | canonical/ubuntu.com | https://api.github.com/repos/canonical/ubuntu.com | closed | /advantage table row anchors hide the purpose of the table | Priority: Low | 1\. Go to any of these URLs:
- https://ubuntu.com/advantage/#esm
- https://ubuntu.com/advantage/#livepatch
- https://ubuntu.com/advantage/#fips
What happens:
👍 The relevant row in the “Plans for enterprise use” table is highlighted.
👎 The page is scrolled so that nothing is visible above that table row.
What’s wrong with this:
- Linking to an anchor on a separate page is unusual, so we need to be more careful than usual about ensuring people understand where they’ve ended up.
- Unless you scroll, you can’t even see the table headers, so you don’t know what the table columns are for.
- Unless you scroll, you can’t see the section heading, so you don’t know what the entire table is for.
What should happen:
👍 The relevant row in the “Plans for enterprise use” table is highlighted.
👍 The page is scrolled so that the “Plans for enterprise use” heading and table headers are visible.
[Encountered while reviewing #6085.] | 1.0 | /advantage table row anchors hide the purpose of the table - 1\. Go to any of these URLs:
- https://ubuntu.com/advantage/#esm
- https://ubuntu.com/advantage/#livepatch
- https://ubuntu.com/advantage/#fips
What happens:
👍 The relevant row in the “Plans for enterprise use” table is highlighted.
👎 The page is scrolled so that nothing is visible above that table row.
What’s wrong with this:
- Linking to an anchor on a separate page is unusual, so we need to be more careful than usual about ensuring people understand where they’ve ended up.
- Unless you scroll, you can’t even see the table headers, so you don’t know what the table columns are for.
- Unless you scroll, you can’t see the section heading, so you don’t know what the entire table is for.
What should happen:
👍 The relevant row in the “Plans for enterprise use” table is highlighted.
👍 The page is scrolled so that the “Plans for enterprise use” heading and table headers are visible.
[Encountered while reviewing #6085.] | priority | advantage table row anchors hide the purpose of the table go to any of these urls what happens 👍 the relevant row in the “plans for enterprise use” table is highlighted 👎 the page is scrolled so that nothing is visible above that table row what’s wrong with this linking to an anchor on a separate page is unusual so we need to be more careful than usual about ensuring people understand where they’ve ended up unless you scroll you can’t even see the table headers so you don’t know what the table columns are for unless you scroll you can’t see the section heading so you don’t know what the entire table is for what should happen 👍 the relevant row in the “plans for enterprise use” table is highlighted 👍 the page is scrolled so that the “plans for enterprise use” heading and table headers are visible | 1 |
133,289 | 5,200,303,837 | IssuesEvent | 2017-01-23 23:24:41 | IBMDataScience/datascix | https://api.github.com/repos/IBMDataScience/datascix | closed | Add the ability to validate URL syntax for "image_url", "blog_url" properties | priority-low type-enhancement |
e.g. Bad:
```
"image_url" : "https://github.com/IBMDataScience/datascix/blob/master/public/prod/changelog/img/github.png?raw=true?raw=true",
``` | 1.0 | Add the ability to validate URL syntax for "image_url", "blog_url" properties -
e.g. Bad:
```
"image_url" : "https://github.com/IBMDataScience/datascix/blob/master/public/prod/changelog/img/github.png?raw=true?raw=true",
``` | priority | add the ability to validate url syntax for image url blog url properties e g bad image url | 1 |
175,830 | 6,554,336,001 | IssuesEvent | 2017-09-06 05:07:41 | hacksu/2017-kenthackenough-ui-main | https://api.github.com/repos/hacksu/2017-kenthackenough-ui-main | closed | Powered by Hacksu Icon @ the bottom of the site | Low Priority | Add Powered by Hacksu logo to the bottom of the site (Or somewhere) | 1.0 | Powered by Hacksu Icon @ the bottom of the site - Add Powered by Hacksu logo to the bottom of the site (Or somewhere) | priority | powered by hacksu icon the bottom of the site add powered by hacksu logo to the bottom of the site or somewhere | 1 |
276,941 | 8,614,771,282 | IssuesEvent | 2018-11-19 18:29:29 | MontrealCorpusTools/iscan-server | https://api.github.com/repos/MontrealCorpusTools/iscan-server | opened | Add default stop subsets to phone subset enrichment | UI enrichment low priority | Currently the phone subset enrichment has defaults for syllabics, etc. There should also be buttons for voiced and voiceless stops just to save time. Maybe this could be generalised to just add a few more phonetic categories in general to the subset enrichment. | 1.0 | Add default stop subsets to phone subset enrichment - Currently the phone subset enrichment has defaults for syllabics, etc. There should also be buttons for voiced and voiceless stops just to save time. Maybe this could be generalised to just add a few more phonetic categories in general to the subset enrichment. | priority | add default stop subsets to phone subset enrichment currently the phone subset enrichment has defaults for syllabics etc there should also be buttons for voiced and voiceless stops just to save time maybe this could be generalised to just add a few more phonetic categories in general to the subset enrichment | 1 |
700,958 | 24,080,496,823 | IssuesEvent | 2022-09-19 05:55:57 | tensorchord/envd | https://api.github.com/repos/tensorchord/envd | closed | enhancement(network error): Make error message more informative | priority/3-low 💙 type/enhancement 💭 | ## Description
When the image pulling process is affected by the network issue, the error message is not friendly.

| 1.0 | enhancement(network error): Make error message more informative - ## Description
When the image pulling process is affected by the network issue, the error message is not friendly.

| priority | enhancement network error make error message more informative description when the image pulling process is affected by the network issue the error message is not friendly | 1 |
172,797 | 6,516,315,421 | IssuesEvent | 2017-08-27 06:55:12 | python/mypy | https://api.github.com/repos/python/mypy | closed | Type alias problems with typeshed's `bytes` | crash priority-2-low | Running mypy on typeshed's stdlib/2/builtins.pyi violates the assertion in semanal.py:931 - `sym.node` is a Var instead of a TypeInfo for `builtins.bytes`. This is related to the statement `bytes = str` and type promotion of `str` to `bytes`.
```
...
File ".../mypy/mypy/build.py", line 1425, in semantic_analysis
self.manager.semantic_analyzer.visit_file(self.tree, self.xpath, self.options)
File ".../mypy/mypy/semanal.py", line 246, in visit_file
self.accept(d)
File ".../mypy/mypy/semanal.py", line 2719, in accept
node.accept(self)
File ".../mypy/mypy/nodes.py", line 709, in accept
return visitor.visit_class_def(self)
File ".../mypy/mypy/semanal.py", line 579, in visit_class_def
self.setup_type_promotion(defn)
File ".../mypy/mypy/semanal.py", line 663, in setup_type_promotion
promote_target = self.named_type_or_none(promotions[defn.fullname])
File ".../mypy/mypy/semanal.py", line 931, in named_type_or_none
assert isinstance(sym.node, TypeInfo)
``` | 1.0 | Type alias problems with typeshed's `bytes` - Running mypy on typeshed's stdlib/2/builtins.pyi violates the assertion in semanal.py:931 - `sym.node` is a Var instead of a TypeInfo for `builtins.bytes`. This is related to the statement `bytes = str` and type promotion of `str` to `bytes`.
```
...
File ".../mypy/mypy/build.py", line 1425, in semantic_analysis
self.manager.semantic_analyzer.visit_file(self.tree, self.xpath, self.options)
File ".../mypy/mypy/semanal.py", line 246, in visit_file
self.accept(d)
File ".../mypy/mypy/semanal.py", line 2719, in accept
node.accept(self)
File ".../mypy/mypy/nodes.py", line 709, in accept
return visitor.visit_class_def(self)
File ".../mypy/mypy/semanal.py", line 579, in visit_class_def
self.setup_type_promotion(defn)
File ".../mypy/mypy/semanal.py", line 663, in setup_type_promotion
promote_target = self.named_type_or_none(promotions[defn.fullname])
File ".../mypy/mypy/semanal.py", line 931, in named_type_or_none
assert isinstance(sym.node, TypeInfo)
``` | priority | type alias problems with typeshed s bytes running mypy on typeshed s stdlib builtins pyi violates the assertion in semanal py sym node is a var instead of a typeinfo for builtins bytes this is related to the statement bytes str and type promotion of str to bytes file mypy mypy build py line in semantic analysis self manager semantic analyzer visit file self tree self xpath self options file mypy mypy semanal py line in visit file self accept d file mypy mypy semanal py line in accept node accept self file mypy mypy nodes py line in accept return visitor visit class def self file mypy mypy semanal py line in visit class def self setup type promotion defn file mypy mypy semanal py line in setup type promotion promote target self named type or none promotions file mypy mypy semanal py line in named type or none assert isinstance sym node typeinfo | 1 |
734,451 | 25,349,726,614 | IssuesEvent | 2022-11-19 16:19:05 | es-ude/elastic-ai.creator | https://api.github.com/repos/es-ude/elastic-ai.creator | closed | Check GitHub Workflow deprecation warnings | major priority gh-workflow | During the runtime of our GitHub workflow, we receive the following [deprecation warnings](https://github.com/es-ude/elastic-ai.creator/actions/runs/3454812472):

I guess we need to fix them soon to make sure our workflows are working as expected. | 1.0 | Check GitHub Workflow deprecation warnings - During the runtime of our GitHub workflow, we receive the following [deprecation warnings](https://github.com/es-ude/elastic-ai.creator/actions/runs/3454812472):

I guess we need to fix them soon to make sure our workflows are working as expected. | priority | check github workflow deprecation warnings during the runtime of our github workflow we receive the following i guess we need to fix them soon to make sure our workflows are working as expected | 1 |
123,311 | 4,860,030,331 | IssuesEvent | 2016-11-13 22:54:28 | gravityview/GravityView | https://api.github.com/repos/gravityview/GravityView | closed | "You don't have any active Views" message shown when performing a Posts screen search | Core: Administration Difficulty: Low Priority: Low | It should use the standard "No Views found" message instead of the "Lost in space?" message.
| 1.0 | "You don't have any active Views" message shown when performing a Posts screen search - It should use the standard "No Views found" message instead of the "Lost in space?" message.
| priority | you don t have any active views message shown when performing a posts screen search it should use the standard no views found message instead of the lost in space message | 1 |
228,796 | 7,567,875,787 | IssuesEvent | 2018-04-22 14:40:23 | openbabel/openbabel | https://api.github.com/repos/openbabel/openbabel | closed | Error in Installing Open Babel | auto-migrated bug low priority | I wanted to install Open Babel but every time I tried installing, a window came up stating as the attachment. I hope someone could help me with this issue. Please..
Reported by: wannursyakilla
Original Ticket: [openbabel/bugs/978](https://sourceforge.net/p/openbabel/bugs/978) | 1.0 | Error in Installing Open Babel - I wanted to install Open Babel but every time I tried installing, a window came up stating as the attachment. I hope someone could help me with this issue. Please..
Reported by: wannursyakilla
Original Ticket: [openbabel/bugs/978](https://sourceforge.net/p/openbabel/bugs/978) | priority | error in installing open babel i wanted to install open babel but everytime i tried installing a window came up stating as the attachment i hope someone could help me with this issue please reported by wannursyakilla original ticket | 1 |
504,921 | 14,623,683,276 | IssuesEvent | 2020-12-23 04:08:00 | AtlasOfLivingAustralia/volunteer-portal | https://api.github.com/repos/AtlasOfLivingAustralia/volunteer-portal | closed | Tutorials link to ///data/volunteer//tutorials | Priority - low | The tutorial page links to ``///data/volunteer//tutorials``. These links manage to render, but it makes it more difficult to setup a useful ``robots.txt`` file to tell bots which paths under ``/data`` (and ``/``) they should and shouldn't index. The links on the tutorial page seem to work if changed to remove the redundant slashes.
For example the following works,:
```
https://volunteer.ala.org.au///data/volunteer//tutorials/ANIC-2018-expeditions.pdf
```
but it also works without the redundant slashes:
```
https://volunteer.ala.org.au/data/volunteer/tutorials/ANIC-2018-expeditions.pdf
```
To relieve some of the pressure on the server from bot accesses, the robots.txt file now attempts to stop all access to ``/data``, ``//data`` (which is also observed), ``///data``, and ``////data`` (which isn't observed, but added to prevent other potential issues) | 1.0 | Tutorials link to ///data/volunteer//tutorials - The tutorial page links to ``///data/volunteer//tutorials``. These links manage to render, but it makes it more difficult to setup a useful ``robots.txt`` file to tell bots which paths under ``/data`` (and ``/``) they should and shouldn't index. The links on the tutorial page seem to work if changed to remove the redundant slashes.
For example the following works,:
```
https://volunteer.ala.org.au///data/volunteer//tutorials/ANIC-2018-expeditions.pdf
```
but it also works without the redundant slashes:
```
https://volunteer.ala.org.au/data/volunteer/tutorials/ANIC-2018-expeditions.pdf
```
To relieve some of the pressure on the server from bot accesses, the robots.txt file now attempts to stop all access to ``/data``, ``//data`` (which is also observed), ``///data``, and ``////data`` (which isn't observed, but added to prevent other potential issues) | priority | tutorials link to data volunteer tutorials the tutorial page links to data volunteer tutorials these links manage to render but it makes it more difficult to setup a useful robots txt file to tell bots which paths under data and they should and shouldn t index the links on the tutorial page seem to work if changed to remove the redundant slashes for example the following works but it also works without the redundant slashes to relieve some of the pressure on the server from bot accesses the robots txt file now attempts to stop all access to data data which is also observed data and data which isn t observed but added to prevent other potential issues | 1 |
506,841 | 14,674,204,400 | IssuesEvent | 2020-12-30 14:50:19 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | opened | Adapt Horse Archers and Camels | :books: lore :books: :grey_exclamation: priority low :ice_cream: vanilla modification :icecream: :question: suggestion :question: | <!--
DO NOT REMOVE PRE-EXISTING LINES
IF YOU WANT TO SUGGEST A FEW THINGS, OPEN A NEW ISSUE PER EVERY SUGGESTION
----------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
We should adapt vanilla horse archers and camels.
Centaurs should be having horse archers and cultures in Tanaris and Uldum should have camels. | 1.0 | Adapt Horse Archers and Camels - <!--
DO NOT REMOVE PRE-EXISTING LINES
IF YOU WANT TO SUGGEST A FEW THINGS, OPEN A NEW ISSUE PER EVERY SUGGESTION
----------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
We should adapt vanilla horse archers and camels.
Centaurs should be having horse archers and cultures in Tanaris and Uldum should have camels. | priority | adapt horse archers and camels do not remove pre existing lines if you want to suggest a few things open a new issue per every suggestion describe your suggestion in full detail below we should adapt vanilla horse archers and camels centaurs should be having horse archers and cultures in tanaris and uldum should have camels | 1 |
724,469 | 24,931,607,455 | IssuesEvent | 2022-10-31 12:07:25 | ignite/cli | https://api.github.com/repos/ignite/cli | opened | Update Module Dependencies | request priority/low | This issue tracks recently updated PRs for updating `ignite`'s module dependencies:
- [ ] #3004
- [ ] #3005
- [ ] #3006
- [ ] #3007
- [ ] #3008 | 1.0 | Update Module Dependencies - This issue tracks recently updated PRs for updating `ignite`'s module dependencies:
- [ ] #3004
- [ ] #3005
- [ ] #3006
- [ ] #3007
- [ ] #3008 | priority | update module dependencies this issue tracks recently updated prs for updating ignite s module dependencies | 1 |
82,530 | 3,614,580,509 | IssuesEvent | 2016-02-06 03:51:44 | MenoData/Time4J | https://api.github.com/repos/MenoData/Time4J | closed | Add equals/hashCode-support for platform formatter | bug fixed priority: low | Following test fails if the i18n-module is not available:
```java
assertThat(
PlainDate.localFormatter(DisplayMode.FULL),
is(PlainDate.formatter(DisplayMode.FULL, Locale.getDefault()))
);
``` | 1.0 | Add equals/hashCode-support for platform formatter - Following test fails if the i18n-module is not available:
```java
assertThat(
PlainDate.localFormatter(DisplayMode.FULL),
is(PlainDate.formatter(DisplayMode.FULL, Locale.getDefault()))
);
``` | priority | add equals hashcode support for platform formatter following test fails if the module is not available java assertthat plaindate localformatter displaymode full is plaindate formatter displaymode full locale getdefault | 1 |
413,711 | 12,090,831,874 | IssuesEvent | 2020-04-19 08:41:27 | Matteas-Eden/roll-for-reaction | https://api.github.com/repos/Matteas-Eden/roll-for-reaction | closed | Hotkey to equip items | Low Priority usability | **User Story**
As a gamer, I'd like to be able to press 'E' to equip an item, so that I can equip items without using the mouse
**Acceptance Criteria**
- When the notification for receiving a new item comes up, you can press 'E' to equip it
- Pressing 'E' also removes the notification
---
**Why is this feature needed? Please describe the problem your requested feature wants to solve**
Unable to eqip using key commands
<!-- Describe what the problem is. Ex. I'm always frustrated when ... -->
**Describe the solution you'd like**
When the notification for receiving a new item comes up, you can press 'E' to equip it and the notification then disappears.
<!--Describe what you want to happen -->
| 1.0 | Hotkey to equip items - **User Story**
As a gamer, I'd like to be able to press 'E' to equip an item, so that I can equip items without using the mouse
**Acceptance Criteria**
- When the notification for receiving a new item comes up, you can press 'E' to equip it
- Pressing 'E' also removes the notification
---
**Why is this feature needed? Please describe the problem your requested feature wants to solve**
Unable to eqip using key commands
<!-- Describe what the problem is. Ex. I'm always frustrated when ... -->
**Describe the solution you'd like**
When the notification for receiving a new item comes up, you can press 'E' to equip it and the notification then disappears.
<!--Describe what you want to happen -->
| priority | hotkey to equip items user story as a gamer i d like to be able to press e to equip an item so that i can equip items without usig the mouse acceptance criteria when the notification for receiving a new item comes up you can press e to eqip it pressing e also removes the notification why is this feature needed please describe the problem your requested feature wants to solve unable to eqip using key commands describe the solution you d like when the notification for receiving a new item comes up you can press e to eqip it and the notification then disappears | 1 |
129,143 | 5,089,262,109 | IssuesEvent | 2017-01-01 13:38:47 | Geeklog-Core/geeklog | https://api.github.com/repos/Geeklog-Core/geeklog | closed | [Feature Requests] Eliminate the need for COM_stripslashes | low priority minor | **Reported by jmucchiello on 4 Jul 2008 18:08**
**Version:** Future
**Description:**
This was discussed on the mailing list last summer/fall:
http://eight.pairlist.net/pipermail/geeklog-devel/2007-September/002318.html
Not sure how this interacts with Web Services which for some reason call COM_applyBasicFilter which just doesn't have COM_stripslashes in it. Why doesn't a webservice call obey the magic_quotes_gpc() setting?
**Additional Information:**
// add this to lib-common.php somewhere near the top:
``` PHP
if (get_magic_quotes_gpc() == 1) {
if (!function_exists('array_walk_recursive')) {
require_once 'PHP/Compat.php';
PHP_Compat::loadFunction('array_walk_recursive');
}
$_STRIP_SLASHES = create_function('&$v,$k', '$v = stripslashes($v);');
array_walk_recursive($_POST, $_STRIP_SLASHES);
array_walk_recursive($_GET, $_STRIP_SLASHES);
array_walk_recursive($_REQUEST, $_STRIP_SLASHES);
array_walk_recursive($_COOKIE, $_STRIP_SLASHES);
unset($_STRIP_SLASHES);
}
```
// and then turn COM_stripslashes into
``` PHP
/**
* DEPRECATED: You do not need to call this any more
* Strip slashes from a string only when magic_quotes_gpc = on.
*
* @param string $text The text
* @return string The text, possibly without slashes.
*/
function COM_stripslashes($text)
{
return $text;
}
```
COM_applyFilter and COM_checkHTML would be good places to remove COM_stripslashes from the core code as a first pass change. Later all calls to it would be removed.
[Mantis Bugtracker #679](http://project.geeklog.net/tracking/view.php?id=679)
| 1.0 | [Feature Requests] Eliminate the need for COM_stripslashes - **Reported by jmucchiello on 4 Jul 2008 18:08**
**Version:** Future
**Description:**
This was discussed on the mailing list last summer/fall:
http://eight.pairlist.net/pipermail/geeklog-devel/2007-September/002318.html
Not sure how this interacts with Web Services which for some reason call COM_applyBasicFilter which just doesn't have COM_stripslashes in it. Why doesn't a webservice call obey the magic_quotes_gpc() setting?
**Additional Information:**
// add this to lib-common.php somewhere near the top:
``` PHP
if (get_magic_quotes_gpc() == 1) {
if (!function_exists('array_walk_recursive')) {
require_once 'PHP/Compat.php';
PHP_Compat::loadFunction('array_walk_recursive');
}
$_STRIP_SLASHES = create_function('&$v,$k', '$v = stripslashes($v);');
array_walk_recursive($_POST, $_STRIP_SLASHES);
array_walk_recursive($_GET, $_STRIP_SLASHES);
array_walk_recursive($_REQUEST, $_STRIP_SLASHES);
array_walk_recursive($_COOKIE, $_STRIP_SLASHES);
unset($_STRIP_SLASHES);
}
```
// and then turn COM_stripslashes into
``` PHP
/**
* DEPRECATED: You do not need to call this any more
* Strip slashes from a string only when magic_quotes_gpc = on.
*
* @param string $text The text
* @return string The text, possibly without slashes.
*/
function COM_stripslashes($text)
{
return $text;
}
```
COM_applyFilter and COM_checkHTML would be good places to remove COM_stripslashes from the core code as a first pass change. Later all calls to it would be removed.
[Mantis Bugtracker #679](http://project.geeklog.net/tracking/view.php?id=679)
| priority | eliminate the need for com stripslashes reported by jmucchiello on jul version future description this was discussed on the mailing list last summer fall not sure how this interacts with web services which for some reason call com applybasicfilter which just doesn t have com stripslashes in it why doesn t a webservice call obey the magic quotes gpc setting additional information add this to lib common php somewhere near the top php if get magic quotes gpc if function exists array walk recursive require once php compat php php compat loadfunction array walk recursive strip slashes create function v k v stripslashes v array walk recursive post strip slashes array walk recursive get strip slashes array walk recursive request strip slashes array walk recursive cookie strip slashes unset strip slashes and the turn com stripslashes into php depricated you do not need to call this any more strip slashes from a string only when magic quotes gpc on param string text the text return string the text possibly without slashes function com stripslashes text return text com applyfilter and com checkhtml would be good places to remove com stripslashes from the core code as a first pass change later all calls to it would be removed | 1 |
755,131 | 26,418,351,345 | IssuesEvent | 2023-01-13 17:52:41 | geopm/geopm | https://api.github.com/repos/geopm/geopm | closed | geopmd hanging on shutdown | bug bug-priority-high bug-exposure-high bug-quality-low | **Describe the bug**
I tried to shut the service down with systemctl and I expected it to shutdown gracefully instead it hangs, and will eventually be killed with SIGKILL.
**GEOPM version**
5e829c4c2
**Expected behavior**
```
$ sudo systemctl stop geopm
$ journalctl -u geopm
...
Jan 12 16:52:42 mcfly1 systemd[1]: Started Global Extensible Open Power Manager Service.
Jan 12 16:52:43 mcfly1 systemd[1]: Stopping Global Extensible Open Power Manager Service...
Jan 12 16:52:43 mcfly1 systemd[1]: Stopped Global Extensible Open Power Manager Service.
```
**Actual behavior**
```
Jan 12 16:43:07 mcfly1 systemd[1]: Starting Global Extensible Open Power Manager Service...
Jan 12 16:43:08 mcfly1 systemd[1]: Started Global Extensible Open Power Manager Service.
Jan 12 16:43:19 mcfly1 systemd[1]: Stopping Global Extensible Open Power Manager Service...
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: State 'stop-sigterm' timed out. Killing.
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: Killing process 5581 (geopmd) with signal SIGKILL.
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: Main process exited, code=killed, status=9/KILL
Jan 12 16:44:50 mcfly1 systemd[1]: Stopped Global Extensible Open Power Manager Service.
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: Unit entered failed state.
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: Failed with result 'timeout'.
```
**Additional context**
This only started happening since 09c039976 was merged recently. It looks like I missed handling the event loop shutdown. | 1.0 | geopmd hanging on shutdown - **Describe the bug**
I tried to shut the service down with systemctl and I expected it to shutdown gracefully instead it hangs, and will eventually be killed with SIGKILL.
**GEOPM version**
5e829c4c2
**Expected behavior**
```
$ sudo systemctl stop geopm
$ journalctl -u geopm
...
Jan 12 16:52:42 mcfly1 systemd[1]: Started Global Extensible Open Power Manager Service.
Jan 12 16:52:43 mcfly1 systemd[1]: Stopping Global Extensible Open Power Manager Service...
Jan 12 16:52:43 mcfly1 systemd[1]: Stopped Global Extensible Open Power Manager Service.
```
**Actual behavior**
```
Jan 12 16:43:07 mcfly1 systemd[1]: Starting Global Extensible Open Power Manager Service...
Jan 12 16:43:08 mcfly1 systemd[1]: Started Global Extensible Open Power Manager Service.
Jan 12 16:43:19 mcfly1 systemd[1]: Stopping Global Extensible Open Power Manager Service...
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: State 'stop-sigterm' timed out. Killing.
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: Killing process 5581 (geopmd) with signal SIGKILL.
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: Main process exited, code=killed, status=9/KILL
Jan 12 16:44:50 mcfly1 systemd[1]: Stopped Global Extensible Open Power Manager Service.
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: Unit entered failed state.
Jan 12 16:44:50 mcfly1 systemd[1]: geopm.service: Failed with result 'timeout'.
```
**Additional context**
This only started happening since 09c039976 was merged recently. It looks like I missed handling the event loop shutdown. | priority | geopmd hanging on shutdown describe the bug i tried to shut the service down with systemctl and i expected it to shutdown gracefully instead it hangs and will eventually be killed with sigkill geopm version expected behavior sudo systemctl stop geopm journalctl u geopm jan systemd started global extensible open power manager service jan systemd stopping global extensible open power manager service jan systemd stopped global extensible open power manager service actual behavior jan systemd starting global extensible open power manager service jan systemd started global extensible open power manager service jan systemd stopping global extensible open power manager service jan systemd geopm service state stop sigterm timed out killing jan systemd geopm service killing process geopmd with signal sigkill jan systemd geopm service main process exited code killed status kill jan systemd stopped global extensible open power manager service jan systemd geopm service unit entered failed state jan systemd geopm service failed with result timeout additional context this only started happening since was merged recently it looks like i missed handling the event loop shutdown | 1 |
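geopmd's actual service loop is D-Bus based, so the sketch below is only a generic asyncio analogue of the missing piece described above: registering a SIGTERM handler that stops the event loop, so `systemctl stop` can finish cleanly instead of timing out and escalating to SIGKILL. The `os.kill` call stands in for systemd delivering the signal.

```python
import asyncio
import os
import signal

async def service_main():
    loop = asyncio.get_running_loop()
    stop = loop.create_future()
    # Without this handler the loop never notices SIGTERM and systemd
    # eventually escalates to SIGKILL, as in the journal output above.
    loop.add_signal_handler(signal.SIGTERM, stop.set_result, None)
    # Stand-in for systemd sending SIGTERM shortly after startup.
    loop.call_later(0.1, os.kill, os.getpid(), signal.SIGTERM)
    await stop             # normal service work would run until this fires
    return "clean-exit"    # graceful shutdown path
```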
538,495 | 15,770,035,522 | IssuesEvent | 2021-03-31 18:59:50 | vacuumlabs/adalite | https://api.github.com/repos/vacuumlabs/adalite | closed | Separate circleCI into separate jobs | low priority | Currently, the workflow doesn't allow other checks to run if one fails, which might be annoying for development | 1.0 | Separate circleCI into separate jobs - Currently, the workflow doesn't allow other checks to run if one fails, which might be annoying for development | priority | separate circleci into separate jobs currently the workflow doesn t allow other checks to run if one fails which might be annoying for development | 1 |
167,411 | 6,337,618,176 | IssuesEvent | 2017-07-27 00:39:25 | redox-os/ion | https://api.github.com/repos/redox-os/ion | closed | Parsing Issue w/ && Operator | bug high-priority low-hanging fruit | I'm not sure why this is at the moment, but:
```sh
matches Foo '([A-Z])\w+' && echo true
```
Gives the following output:
```
ion: syntax error: '(' at position 13 is out of place
```
But this:
```sh
matches Foo '([A-Z])\w+'; echo true
```
Gives:
```
true
```
I've also noticed that this:
```sh
matches Foo '[A-Z]\w+' && echo true
```
Gives no output, but this does:
```sh
matches Foo '[A-Z]\w+'; echo true
```
Likely the same issue, and likely in the `StatementSplitter` logic. | 1.0 | Parsing Issue w/ && Operator - I'm not sure why this is at the moment, but:
```sh
matches Foo '([A-Z])\w+' && echo true
```
Gives the following output:
```
ion: syntax error: '(' at position 13 is out of place
```
But this:
```sh
matches Foo '([A-Z])\w+'; echo true
```
Gives:
```
true
```
I've also noticed that this:
```sh
matches Foo '[A-Z]\w+' && echo true
```
Gives no output, but this does:
```sh
matches Foo '[A-Z]\w+'; echo true
```
Likely the same issue, and likely in the `StatementSplitter` logic. | priority | parsing issue w operator i m not sure why this is at the moment but sh matches foo w echo true gives the following output ion syntax error at position is out of place but this sh matches foo w echo true gives true i ve also noticed that this sh matches foo w echo true gives no output but this does sh matches foo w echo true likely the same issue and likely in the statementsplitter logic | 1 |
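Ion's real `StatementSplitter` is written in Rust, but the bug class in the record above is easy to illustrate: the splitter must track quote state, otherwise characters like `(` inside single quotes get treated as syntax and `&&` handling misfires. A minimal quote-aware splitter sketch (not ion's actual logic):

```python
def split_statements(line):
    """Split a command line on ';' and '&&' only outside quotes, so that
    '(' and '&' inside a quoted regex are treated as plain text."""
    statements, buf, quote, i = [], [], None, 0
    while i < len(line):
        ch = line[i]
        if quote:                      # inside '...' or "..."
            buf.append(ch)
            if ch == quote:
                quote = None
        elif ch in ("'", '"'):         # opening quote
            quote = ch
            buf.append(ch)
        elif ch == ';':
            statements.append(''.join(buf).strip())
            buf = []
        elif line.startswith('&&', i):
            statements.append(''.join(buf).strip())
            buf = []
            i += 1                     # consume the second '&'
        else:
            buf.append(ch)
        i += 1
    tail = ''.join(buf).strip()
    if tail:
        statements.append(tail)
    return statements
```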
718,420 | 24,716,532,825 | IssuesEvent | 2022-10-20 07:26:39 | eth-cscs/DLA-Future | https://api.github.com/repos/eth-cscs/DLA-Future | opened | Should `reference_wrapper` and other things be automatically be unwrapped in more contexts than `transform` and `transformMPI` | enhancement Task Priority:Low | We currently automatically unwrap futures and reference wrappers in `transform` and `transformMPI`. @albestro had a use case where he expected e.g. `withTemporaryTile` to do the same. One might expect this to happen in `let_value`, and probably many other places. Should we expand the number of places where we do it automatically? Should we expose a custom version of `unwrapping` (pretty much `TransformCallHelper` + `unwrapping`) for use outside of `transform`? | 1.0 | Should `reference_wrapper` and other things be automatically be unwrapped in more contexts than `transform` and `transformMPI` - We currently automatically unwrap futures and reference wrappers in `transform` and `transformMPI`. @albestro had a use case where he expected e.g. `withTemporaryTile` to do the same. One might expect this to happen in `let_value`, and probably many other places. Should we expand the number of places where we do it automatically? Should we expose a custom version of `unwrapping` (pretty much `TransformCallHelper` + `unwrapping`) for use outside of `transform`? | priority | should reference wrapper and other things be automatically be unwrapped in more contexts than transform and transformmpi we currently automatically unwrap futures and reference wrappers in transform and transformmpi albestro had a use case where he expected e g withtemporarytile to do the same one might expect this to happen in let value and probably many other places should we expand the number of places where we do it automatically should we expose a custom version of unwrapping pretty much transformcallhelper unwrapping for use outside of transform | 1 |
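As a language-neutral illustration of what "automatic unwrapping" means in the record above (the real C++ lives in `TransformCallHelper` plus `unwrapping`), a Python sketch with a stand-in `Ref` type shows the behavior `transform`/`transformMPI` already have, and which the question asks whether adapters like `let_value` should share:

```python
from functools import wraps

class Ref:
    """Stand-in for std::reference_wrapper<T>."""
    def __init__(self, value):
        self.value = value
    def get(self):
        return self.value

def unwrapping(fn):
    """Resolve Ref arguments before fn sees them, the way transform and
    transformMPI unwrap futures and reference wrappers automatically."""
    @wraps(fn)
    def wrapper(*args):
        return fn(*(a.get() if isinstance(a, Ref) else a for a in args))
    return wrapper

@unwrapping
def add(a, b):
    return a + b
```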
612,420 | 19,012,268,902 | IssuesEvent | 2021-11-23 10:35:58 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | closed | Install One CLI Tools on Mac OS | Category: CLI Community Type: Bug Status: Accepted Priority: Low | **Description**
It would be great to be able to install the CLI Tools on Mac OS.
**Use case**
I want to use some of the CLI binaries to execute API calls against the OpenNebula API.
I'm using a Mac laptop and it's annoying to have to connect to a Linux machine in order to be able to use the CLI Tools.
**Changes**
Is there a way to build the CLI Tools only, from source, on Mac OS ?
I saw that the CLI binaries were ruby code/gems, so I think it should be doable.
If so, could someone update the documentation to explain how to install it ?
If it's not possible yet to install those on Mac OS, how complicated would that be to implement such functionality ?
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
| 1.0 | Install One CLI Tools on Mac OS - **Description**
It would be great to be able to install the CLI Tools on Mac OS.
**Use case**
I want to use some of the CLI binaries to execute API calls against the OpenNebula API.
I'm using a Mac laptop and it's annoying to have to connect to a Linux machine in order to be able to use the CLI Tools.
**Changes**
Is there a way to build the CLI Tools only, from source, on Mac OS ?
I saw that the CLI binaries were ruby code/gems, so I think it should be doable.
If so, could someone update the documentation to explain how to install it ?
If it's not possible yet to install those on Mac OS, how complicated would that be to implement such functionality ?
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
| priority | install one cli tools on mac os description it would be great to be able to install the cli tools on mac os use case i want to use some of the cli binaries to execute api calls against the opennebula api i m using a mac laptop and it s annoying to have to connect to a linux machine in order to be able to use the cli tools changes is there a way to build the cli tools only from source on mac os i saw that the cli binaries were ruby code gems so i think it should be doable if so could someone update the documentation to explain how to install it if it s not possible yet to install those on mac os how complicated would that be to implement such functionality progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches | 1 |
97,078 | 3,984,919,665 | IssuesEvent | 2016-05-07 14:36:57 | Brickimedia/brickimedia | https://api.github.com/repos/Brickimedia/brickimedia | closed | Browsers, operating systems, and device | [feedback] Question [feedback] RFC [priority] Mid-low | @Brickimedia/developers
Can we get a real quick list of what everyone uses mainly? Its nice to know so when there's a problem that pertains to one of the specific things we can ping that certain dev, and also to know what our full testing environment as a whole team looks like as a whole, and see if we need to expand it (our team is pretty small, only about 10-15)
Note: I'll close this issue once everyone / majority of everyone has commented and added | 1.0 | Browsers, operating systems, and device - @Brickimedia/developers
Can we get a real quick list of what everyone uses mainly? Its nice to know so when there's a problem that pertains to one of the specific things we can ping that certain dev, and also to know what our full testing environment as a whole team looks like as a whole, and see if we need to expand it (our team is pretty small, only about 10-15)
Note: I'll close this issue once everyone / majority of everyone has commented and added | priority | browsers operating systems and device brickimedia developers can we get a real quick list of what everyone uses mainly its nice to know so when there s a problem that pertains to one of the specific things we can ping that certain dev and also to know what our full testing environment as a whole team looks like as a whole and see if we need to expand it our team is pretty small only about note i ll close this issue once everyone majority of everyone has commented and added | 1 |
149,659 | 5,723,108,219 | IssuesEvent | 2017-04-20 11:17:26 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | unit tests: pmempool_info/TEST18: SETUP (all/pmem/debug) fails | Exposure: Low OS: Linux Priority: 4 low Type: Bug | Found on a492d408739b33ba79681e74c8942ce1c4f62167:
> pmempool_info/TEST18: SETUP (all/pmem/debug)
> [MATCHING FAILED, COMPLETE FILE (out18.log) BELOW]
> Poolset structure:
> Number of replicas : 1
> Replica 0 (master) - local, 1 part(s):
> part 0:
> path : /dev/dax0.0
> type : device dax
> size : 4225761280
>
> POOL Header:
> Signature : PMEMOBJ
> Major : 3
> Mandatory features : 0x0
> Not mandatory features : 0x0
> Forced RO : 0x0
> Pool set UUID : c05b266e-5f47-411a-8ba0-24e7f3390a7e
> UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Previous part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Next part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Previous replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Next replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Creation Time : Sat Mar 18 2017 06:44:41
> Alignment Descriptor : 0x000007f737777310[OK]
> Class : ELF64
> Data : 2's complement, little endian
> Machine : AMD X86-64
> Checksum : 0xb974bc8973773004 [OK]
>
> PMEM OBJ Header:
> Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> Lanes offset : 0x2000
> Number of lanes : 1024
> Heap offset : 0x302000
> Heap size : 4222607360
> Checksum : 0x9d564c515dd6ef76 [OK]
> Root offset : 0x0
> Part file:
> path : /dev/dax0.0
> type : device dax
> size : 4225761280
>
> POOL Header:
> Signature : PMEMOBJ
> Major : 3
> Mandatory features : 0x0
> Not mandatory features : 0x0
> Forced RO : 0x0
> Pool set UUID : c05b266e-5f47-411a-8ba0-24e7f3390a7e
> UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Previous part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Next part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Previous replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Next replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Creation Time : Sat Mar 18 2017 06:44:41
> Alignment Descriptor : 0x000007f737777310[OK]
> Class : ELF64
> Data : 2's complement, little endian
> Machine : AMD X86-64
> Checksum : 0xb974bc8973773004 [OK]
>
> PMEM OBJ Header:
> Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> Lanes offset : 0x2000
> Number of lanes : 1024
> Heap offset : 0x302000
> Heap size : 4222607360
> Checksum : 0x9d564c515dd6ef76 [OK]
> Root offset : 0x0
>
> [EOF]
> out18.log.match:1 Poolset structure:
> out18.log:1 Poolset structure:
> out18.log.match:2 Number of replicas : 1
> out18.log:2 Number of replicas : 1
> out18.log.match:3 Replica 0 (master) - local, 1 part(s):
> out18.log:3 Replica 0 (master) - local, 1 part(s):
> out18.log.match:4 part 0:
> out18.log:4 part 0:
> out18.log.match:5 path : $(nW)
> out18.log:5 path : /dev/dax0.0
> out18.log.match:6 type : device dax
> out18.log:6 type : device dax
> out18.log.match:7 size : $(nW)
> out18.log:7 size : 4225761280
> out18.log.match:8
> out18.log:8
> out18.log.match:9 POOL Header:
> out18.log:9 POOL Header:
> out18.log.match:10 Signature : PMEMOBJ
> out18.log:10 Signature : PMEMOBJ
> out18.log.match:11 Major : $(nW)
> out18.log:11 Major : 3
> out18.log.match:12 Mandatory features : 0x0
> out18.log:12 Mandatory features : 0x0
> out18.log.match:13 Not mandatory features : 0x0
> out18.log:13 Not mandatory features : 0x0
> out18.log.match:14 Forced RO : 0x0
> out18.log:14 Forced RO : 0x0
> out18.log.match:15 Pool set UUID : $(nW)
> out18.log:15 Pool set UUID : c05b266e-5f47-411a-8ba0-24e7f3390a7e
> out18.log.match:16 UUID : $(nW)
> out18.log:16 UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:17 Previous part UUID : $(nW)
> out18.log:17 Previous part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:18 Next part UUID : $(nW)
> out18.log:18 Next part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:19 Previous replica UUID : $(nW)
> out18.log:19 Previous replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:20 Next replica UUID : $(nW)
> out18.log:20 Next replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:21 Creation Time : $(*)
> out18.log:21 Creation Time : Sat Mar 18 2017 06:44:41
> out18.log.match:22 Alignment Descriptor : $(nW)
> out18.log:22 Alignment Descriptor : 0x000007f737777310[OK]
> out18.log.match:23 Class : ELF64
> out18.log:23 Class : ELF64
> out18.log.match:24 Data : 2's complement, little endian
> out18.log:24 Data : 2's complement, little endian
> out18.log.match:25 Machine : AMD X86-64
> out18.log:25 Machine : AMD X86-64
> out18.log.match:26 Checksum : $(*)
> out18.log:26 Checksum : 0xb974bc8973773004 [OK]
> out18.log.match:27
> out18.log:27
> out18.log.match:28 PMEM OBJ Header:
> out18.log:28 PMEM OBJ Header:
> out18.log.match:29 $(OPT)Layout : pmempool
> out18.log:29 Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> out18.log:29 [skipping optional line]
> out18.log.match:30 $(OPT)Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> out18.log:29 Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> out18.log.match:31 Lanes offset : $(nW)
> out18.log:30 Lanes offset : 0x2000
> out18.log.match:32 Number of lanes : $(nW)
> out18.log:31 Number of lanes : 1024
> out18.log.match:33 Heap offset : $(nW)
> out18.log:32 Heap offset : 0x302000
> out18.log.match:34 Heap size : $(nW)
> out18.log:33 Heap size : 4222607360
> out18.log.match:35 Checksum : $(*)
> out18.log:34 Checksum : 0x9d564c515dd6ef76 [OK]
> out18.log.match:36 Root offset : $(nW)
> out18.log:35 Root offset : 0x0
> out18.log.match:37 Part file:
> out18.log:36 Part file:
> out18.log.match:38 path : $(nW)
> out18.log:37 path : /dev/dax0.0
> out18.log.match:39 type : device dax
> out18.log:38 type : device dax
> out18.log.match:40 size : $(nW)
> out18.log:39 size : 4225761280
> out18.log.match:41
> out18.log:40
> out18.log.match:42 POOL Header:
> out18.log:41 POOL Header:
> out18.log.match:43 Signature : PMEMOBJ
> out18.log:42 Signature : PMEMOBJ
> out18.log.match:44 Major : $(nW)
> out18.log:43 Major : 3
> out18.log.match:45 Mandatory features : 0x0
> out18.log:44 Mandatory features : 0x0
> out18.log.match:46 Not mandatory features : 0x0
> out18.log:45 Not mandatory features : 0x0
> out18.log.match:47 Forced RO : 0x0
> out18.log:46 Forced RO : 0x0
> out18.log.match:48 Pool set UUID : $(nW)
> out18.log:47 Pool set UUID : c05b266e-5f47-411a-8ba0-24e7f3390a7e
> out18.log.match:49 UUID : $(nW)
> out18.log:48 UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:50 Previous part UUID : $(nW)
> out18.log:49 Previous part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:51 Next part UUID : $(nW)
> out18.log:50 Next part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:52 Previous replica UUID : $(nW)
> out18.log:51 Previous replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:53 Next replica UUID : $(nW)
> out18.log:52 Next replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:54 Creation Time : $(*)
> out18.log:53 Creation Time : Sat Mar 18 2017 06:44:41
> out18.log.match:55 Alignment Descriptor : $(nW)
> out18.log:54 Alignment Descriptor : 0x000007f737777310[OK]
> out18.log.match:56 Class : $(nW)
> out18.log:55 Class : ELF64
> out18.log.match:57 Data : 2's complement, little endian
> out18.log:56 Data : 2's complement, little endian
> out18.log.match:58 Machine : AMD X86-64
> out18.log:57 Machine : AMD X86-64
> out18.log.match:59 Checksum : $(*)
> out18.log:58 Checksum : 0xb974bc8973773004 [OK]
> out18.log.match:60
> out18.log:59
> out18.log.match:61 PMEM OBJ Header:
> out18.log:60 PMEM OBJ Header:
> out18.log.match:62 Layout : pmempool
> out18.log:61 Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> out18.log:61 [skipping optional line]
> out18.log.match:63 Lanes offset : $(nW)
> out18.log:61 Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> FAIL: match: out18.log.match:63 did not match pattern
> RUNTESTS: stopping: pmempool_info/TEST18 failed, TEST=all FS=any BUILD=debug
> ../Makefile.inc:328: recipe for target 'TEST18' failed | 1.0 | unit tests: pmempool_info/TEST18: SETUP (all/pmem/debug) fails - Found on a492d408739b33ba79681e74c8942ce1c4f62167:
> pmempool_info/TEST18: SETUP (all/pmem/debug)
> [MATCHING FAILED, COMPLETE FILE (out18.log) BELOW]
> Poolset structure:
> Number of replicas : 1
> Replica 0 (master) - local, 1 part(s):
> part 0:
> path : /dev/dax0.0
> type : device dax
> size : 4225761280
>
> POOL Header:
> Signature : PMEMOBJ
> Major : 3
> Mandatory features : 0x0
> Not mandatory features : 0x0
> Forced RO : 0x0
> Pool set UUID : c05b266e-5f47-411a-8ba0-24e7f3390a7e
> UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Previous part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Next part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Previous replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Next replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Creation Time : Sat Mar 18 2017 06:44:41
> Alignment Descriptor : 0x000007f737777310[OK]
> Class : ELF64
> Data : 2's complement, little endian
> Machine : AMD X86-64
> Checksum : 0xb974bc8973773004 [OK]
>
> PMEM OBJ Header:
> Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> Lanes offset : 0x2000
> Number of lanes : 1024
> Heap offset : 0x302000
> Heap size : 4222607360
> Checksum : 0x9d564c515dd6ef76 [OK]
> Root offset : 0x0
> Part file:
> path : /dev/dax0.0
> type : device dax
> size : 4225761280
>
> POOL Header:
> Signature : PMEMOBJ
> Major : 3
> Mandatory features : 0x0
> Not mandatory features : 0x0
> Forced RO : 0x0
> Pool set UUID : c05b266e-5f47-411a-8ba0-24e7f3390a7e
> UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Previous part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Next part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Previous replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Next replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> Creation Time : Sat Mar 18 2017 06:44:41
> Alignment Descriptor : 0x000007f737777310[OK]
> Class : ELF64
> Data : 2's complement, little endian
> Machine : AMD X86-64
> Checksum : 0xb974bc8973773004 [OK]
>
> PMEM OBJ Header:
> Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> Lanes offset : 0x2000
> Number of lanes : 1024
> Heap offset : 0x302000
> Heap size : 4222607360
> Checksum : 0x9d564c515dd6ef76 [OK]
> Root offset : 0x0
>
> [EOF]
> out18.log.match:1 Poolset structure:
> out18.log:1 Poolset structure:
> out18.log.match:2 Number of replicas : 1
> out18.log:2 Number of replicas : 1
> out18.log.match:3 Replica 0 (master) - local, 1 part(s):
> out18.log:3 Replica 0 (master) - local, 1 part(s):
> out18.log.match:4 part 0:
> out18.log:4 part 0:
> out18.log.match:5 path : $(nW)
> out18.log:5 path : /dev/dax0.0
> out18.log.match:6 type : device dax
> out18.log:6 type : device dax
> out18.log.match:7 size : $(nW)
> out18.log:7 size : 4225761280
> out18.log.match:8
> out18.log:8
> out18.log.match:9 POOL Header:
> out18.log:9 POOL Header:
> out18.log.match:10 Signature : PMEMOBJ
> out18.log:10 Signature : PMEMOBJ
> out18.log.match:11 Major : $(nW)
> out18.log:11 Major : 3
> out18.log.match:12 Mandatory features : 0x0
> out18.log:12 Mandatory features : 0x0
> out18.log.match:13 Not mandatory features : 0x0
> out18.log:13 Not mandatory features : 0x0
> out18.log.match:14 Forced RO : 0x0
> out18.log:14 Forced RO : 0x0
> out18.log.match:15 Pool set UUID : $(nW)
> out18.log:15 Pool set UUID : c05b266e-5f47-411a-8ba0-24e7f3390a7e
> out18.log.match:16 UUID : $(nW)
> out18.log:16 UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:17 Previous part UUID : $(nW)
> out18.log:17 Previous part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:18 Next part UUID : $(nW)
> out18.log:18 Next part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:19 Previous replica UUID : $(nW)
> out18.log:19 Previous replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:20 Next replica UUID : $(nW)
> out18.log:20 Next replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:21 Creation Time : $(*)
> out18.log:21 Creation Time : Sat Mar 18 2017 06:44:41
> out18.log.match:22 Alignment Descriptor : $(nW)
> out18.log:22 Alignment Descriptor : 0x000007f737777310[OK]
> out18.log.match:23 Class : ELF64
> out18.log:23 Class : ELF64
> out18.log.match:24 Data : 2's complement, little endian
> out18.log:24 Data : 2's complement, little endian
> out18.log.match:25 Machine : AMD X86-64
> out18.log:25 Machine : AMD X86-64
> out18.log.match:26 Checksum : $(*)
> out18.log:26 Checksum : 0xb974bc8973773004 [OK]
> out18.log.match:27
> out18.log:27
> out18.log.match:28 PMEM OBJ Header:
> out18.log:28 PMEM OBJ Header:
> out18.log.match:29 $(OPT)Layout : pmempool
> out18.log:29 Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> out18.log:29 [skipping optional line]
> out18.log.match:30 $(OPT)Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> out18.log:29 Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> out18.log.match:31 Lanes offset : $(nW)
> out18.log:30 Lanes offset : 0x2000
> out18.log.match:32 Number of lanes : $(nW)
> out18.log:31 Number of lanes : 1024
> out18.log.match:33 Heap offset : $(nW)
> out18.log:32 Heap offset : 0x302000
> out18.log.match:34 Heap size : $(nW)
> out18.log:33 Heap size : 4222607360
> out18.log.match:35 Checksum : $(*)
> out18.log:34 Checksum : 0x9d564c515dd6ef76 [OK]
> out18.log.match:36 Root offset : $(nW)
> out18.log:35 Root offset : 0x0
> out18.log.match:37 Part file:
> out18.log:36 Part file:
> out18.log.match:38 path : $(nW)
> out18.log:37 path : /dev/dax0.0
> out18.log.match:39 type : device dax
> out18.log:38 type : device dax
> out18.log.match:40 size : $(nW)
> out18.log:39 size : 4225761280
> out18.log.match:41
> out18.log:40
> out18.log.match:42 POOL Header:
> out18.log:41 POOL Header:
> out18.log.match:43 Signature : PMEMOBJ
> out18.log:42 Signature : PMEMOBJ
> out18.log.match:44 Major : $(nW)
> out18.log:43 Major : 3
> out18.log.match:45 Mandatory features : 0x0
> out18.log:44 Mandatory features : 0x0
> out18.log.match:46 Not mandatory features : 0x0
> out18.log:45 Not mandatory features : 0x0
> out18.log.match:47 Forced RO : 0x0
> out18.log:46 Forced RO : 0x0
> out18.log.match:48 Pool set UUID : $(nW)
> out18.log:47 Pool set UUID : c05b266e-5f47-411a-8ba0-24e7f3390a7e
> out18.log.match:49 UUID : $(nW)
> out18.log:48 UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:50 Previous part UUID : $(nW)
> out18.log:49 Previous part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:51 Next part UUID : $(nW)
> out18.log:50 Next part UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:52 Previous replica UUID : $(nW)
> out18.log:51 Previous replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:53 Next replica UUID : $(nW)
> out18.log:52 Next replica UUID : ba8a0e4d-06f8-4b75-89b6-636d1b252ac9
> out18.log.match:54 Creation Time : $(*)
> out18.log:53 Creation Time : Sat Mar 18 2017 06:44:41
> out18.log.match:55 Alignment Descriptor : $(nW)
> out18.log:54 Alignment Descriptor : 0x000007f737777310[OK]
> out18.log.match:56 Class : $(nW)
> out18.log:55 Class : ELF64
> out18.log.match:57 Data : 2's complement, little endian
> out18.log:56 Data : 2's complement, little endian
> out18.log.match:58 Machine : AMD X86-64
> out18.log:57 Machine : AMD X86-64
> out18.log.match:59 Checksum : $(*)
> out18.log:58 Checksum : 0xb974bc8973773004 [OK]
> out18.log.match:60
> out18.log:59
> out18.log.match:61 PMEM OBJ Header:
> out18.log:60 PMEM OBJ Header:
> out18.log.match:62 Layout : pmempool
> out18.log:61 Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> out18.log:61 [skipping optional line]
> out18.log.match:63 Lanes offset : $(nW)
> out18.log:61 Layout : pmempool😘⠝⠧⠍⠇ɗNVMLӜ⥺🙋
> FAIL: match: out18.log.match:63 did not match pattern
> RUNTESTS: stopping: pmempool_info/TEST18 failed, TEST=all FS=any BUILD=debug
> ../Makefile.inc:328: recipe for target 'TEST18' failed | priority | unit tests pmempool info setup all pmem debug fails — [truncated record: the normalized-text field is a line-by-line expected-vs-actual match log of `pmempool info` output for a device-dax poolset (poolset structure, replica/part paths and sizes, pool header signature PMEMOBJ, pool set/part/replica UUIDs, pmemobj layout name, lanes/heap offsets, checksums); the run fails to match on the layout line and ends with "pmempool info failed", "test all fs any build debug", and "recipe for target failed"]
715,087 | 24,586,134,673 | IssuesEvent | 2022-10-13 19:54:11 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Malformed Page-URL's can pass through Studio's validation check by using the browsers auto-correct | bug priority: low CI validate | ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [x] The issue is in the latest released 4.0.x
- [X] The issue is in the latest released 3.1.x
### Describe the issue
Modifying the Page-URL in the content form to a value that the browser auto-corrects to a capitalized form passes Studio's check that the value is valid.
### Steps to reproduce
Steps:
1. Create a site from Empty Blueprint
2. Edit the Home Page
3. Edit the Page-URL
4. Enter in the following value "test-africa". Your browser will highlight "africa", right-click it and select the corrected capitalized version of the value "Africa". Select save&close.
5. See the content form is able to be saved and results in the page being broken.
### Relevant log output
_No response_
### Screenshots and/or videos
https://www.loom.com/share/d63176e3845542d49e3d4dff71680750
| 1.0 | [studio-ui] Malformed Page-URL's can pass through Studio's validation check by using the browsers auto-correct - ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [x] The issue is in the latest released 4.0.x
- [X] The issue is in the latest released 3.1.x
### Describe the issue
Modifying the Page-URL in the content form to a value that the browser auto-corrects to a capitalized form passes Studio's check that the value is valid.
### Steps to reproduce
Steps:
1. Create a site from Empty Blueprint
2. Edit the Home Page
3. Edit the Page-URL
4. Enter in the following value "test-africa". Your browser will highlight "africa", right-click it and select the corrected capitalized version of the value "Africa". Select save&close.
5. See the content form is able to be saved and results in the page being broken.
### Relevant log output
_No response_
### Screenshots and/or videos
https://www.loom.com/share/d63176e3845542d49e3d4dff71680750
| priority | malformed page url s can pass through studio s validation check by using the browsers auto correct duplicates i have searched the existing issues latest version the issue is in the latest released x the issue is in the latest released x describe the issue modifying the page url in the content form to a name that suggests the name be capitalized passes through studio s check if the value is valid steps to reproduce steps create a site from empty blueprint edit the home page edit the page url enter in the following value test africa your browser will highlight africa right click it and select the corrected capitalized version of the value africa select save close see the content form is able to be saved and results in the page being broken relevant log output no response screenshots and or videos | 1 |
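The record above describes capitalized Page-URL values slipping past validation. A lowercase-slug check of the kind the content form presumably needs can be sketched as follows (a minimal illustration — the regex and function name are assumptions for this example, not CrafterCMS Studio's actual validator):

```python
import re

# Hypothetical slug rule: lowercase letters, digits, and hyphens only,
# with no leading/trailing or doubled hyphens. An assumption for
# illustration, not Studio's actual Page-URL policy.
SLUG_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_page_url(value: str) -> bool:
    """Return True only for lowercase hyphen-separated slugs."""
    return bool(SLUG_RE.fullmatch(value))
```

Under this rule the auto-corrected "test-Africa" is rejected while the typed "test-africa" passes, which is exactly the gap the report exercises.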
587,363 | 17,613,959,242 | IssuesEvent | 2021-08-18 07:21:14 | BAMWelDX/weldx | https://api.github.com/repos/BAMWelDX/weldx | closed | add weldx extension manifest | ASDF low priority | while I don't think we have to switch over to the new manifest style of loading our extension, maybe it would still be good to create such a manifest for the weldx extension as a place to collect the relevant extension metadata (and prepare for switching later)
reference: https://asdf.readthedocs.io/en/al-503-document-2.8-features/asdf/extending/manifests.html#extension-manifests | 1.0 | add weldx extension manifest - while I don't think we have to switch over to the new manifest style of loading our extension, maybe it would still be good to create such a manifest for the weldx extension as a place to collect the relevant extension metadata (and prepare for switching later)
reference: https://asdf.readthedocs.io/en/al-503-document-2.8-features/asdf/extending/manifests.html#extension-manifests | priority | add weldx extension manifest while i don t think we have to switch over to the new manifest style of loading our extension maybe it would still be good to create such a manifest for the weldx extension as a place to collect the relevant extension metadata and prepare for switching later reference | 1 |
192,353 | 6,848,989,056 | IssuesEvent | 2017-11-13 20:27:05 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Spinner/state doesn't work for dependencies | bug priority: low | When publishing items the spinner appears on the items indicating they're being published. However, this doesn't appear on items that are dependencies of other items.
Steps to Reproduce
===============
* Create a site using Editorial BP
* Create an article called `one` under `articles`
* Copy `articles/2017` and paste it under `articles/one`
* Publish the item `articles/one/2017/3/Top Clubs in Virginia`
Note the spinner on the article `Top Clubs in Virginia` but not on the parent `one`. Also note that `one` is indeed published correctly, but the UI doesn't reflect that. | 1.0 | [studio-ui] Spinner/state doesn't work for dependencies - When publishing items the spinner appears on the items indicating they're being published. However, this doesn't appear on items that are dependencies of other items.
Steps to Reproduce
===============
* Create a site using Editorial BP
* Create an article called `one` under `articles`
* Copy `articles/2017` and paste it under `articles/one`
* Publish the item `articles/one/2017/3/Top Clubs in Virginia`
Note the spinner on the article `Top Clubs in Virginia` but not on the parent `one`. Also note that `one` is indeed published correctly, but the UI doesn't reflect that. | priority | spinner state doesn t work for dependencies when publishing items the spinner appears on the items indicating they re being published however this doesn t appear on items that are dependencies of other items steps to reproduce create a site using editorial bp create an article called one under articles copy articles and paste it under articles one publish the item articles one top clubs in virginia note the spinner on the article top clubs in virginia but not on the parent one also note that one is indeed published correctly but the ui doesn t reflect that | 1 |
403,098 | 11,835,737,233 | IssuesEvent | 2020-03-23 11:10:34 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | missing redraw for local copy | bug: pending priority: low scope: UI understood: clear | <!-- IMPORTANT
Bug reports that do not make an effort to help the developers will be closed without notice.
Make sure that this bug has not already been opened and/or closed by searching the issues on GitHub, as duplicate bug reports will be closed.
A bug report simply stating that Darktable crashes is unhelpful, so please fill in most of the items below and provide detailed information.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Creating a local copy does not show the white triangle at the top-right of the thumbnail. Likewise, resyncing a local copy does not remove the white triangle.
**To Reproduce**
<!-- Provide detailed steps that can reproduce the behavior, such as:
1. Go to lt
2. Click on 'Local copy'
4. See no local copy flag in top-right corner
-->
This is due to the last rewrite of the lt. Note that changing the collection and back to the previous one properly shows the flag. So there is **just a missing redraw**.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Local copy flag is shown or hidden when needed.
**Platform (please complete the following information):**
- Darktable Version: current master
- OS: Debian Linux
| 1.0 | missing redraw for local copy - <!-- IMPORTANT
Bug reports that do not make an effort to help the developers will be closed without notice.
Make sure that this bug has not already been opened and/or closed by searching the issues on GitHub, as duplicate bug reports will be closed.
A bug report simply stating that Darktable crashes is unhelpful, so please fill in most of the items below and provide detailed information.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Creating a local copy does not show the white triangle at the top-right of the thumbnail. Likewise, resyncing a local copy does not remove the white triangle.
**To Reproduce**
<!-- Provide detailed steps that can reproduce the behavior, such as:
1. Go to lt
2. Click on 'Local copy'
4. See no local copy flag in top-right corner
-->
This is due to the last rewrite of the lt. Note that changing the collection and back to the previous one properly shows the flag. So there is **just a missing redraw**.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Local copy flag is shown or hidden when needed.
**Platform (please complete the following information):**
- Darktable Version: current master
- OS: Debian Linux
| priority | missing redraw for local copy important bug reports that do not make an effort to help the developers will be closed without notice make sure that this bug has not already been opened and or closed by searching the issues on github as duplicate bug reports will be closed a bug report simply stating that darktable crashes is unhelpful so please fill in most of the items below and provide detailed information describe the bug creating a local copy does not show the white triangle on top right of the thumb likewise resync a local copy does not remove the white triangle to reproduce provide detailed steps that can reproduce the behavior such as go to lt click on local copy see no local copy flag in top right corner this is due to the last rewrite of the lt note that changing the collection and back to the previous one properly shows the flag so there is just a missing redraw expected behavior local copy flag is shown or hidden when needed platform please complete the following information darktable version current master os debian linux | 1 |
624,462 | 19,698,572,439 | IssuesEvent | 2022-01-12 14:35:16 | BeccaLyria/discord-documentation | https://api.github.com/repos/BeccaLyria/discord-documentation | closed | [DOC] - Add documentation for the "Ban Appeal config" feature | 🏁 status: ready for dev 🟩 priority: low ⭐ goal: addition 📄 aspect: text good first issue | ### What work needs to be performed?
This would involve adding details similar to this on the ["Configure your Server" page](https://docs.beccalyria.com/#/configure-server):
| Setting | Value | Description |
| :-: | :-: | :-: |
| Ban Appeal Config | appeal_link: string | You can provide a link to a Google Form (or another service) for appealing a ban. This link is included in the DM sent to the banned user by Becca. |
### Additional information
_No response_ | 1.0 | [DOC] - Add documentation for the "Ban Appeal config" feature - ### What work needs to be performed?
This would involve adding details similar to this on the ["Configure your Server" page](https://docs.beccalyria.com/#/configure-server):
| Setting | Value | Description |
| :-: | :-: | :-: |
| Ban Appeal Config | appeal_link: string | You can provide a link to a Google Form (or another service) for appealing a ban. This link is included in the DM sent to the banned user by Becca. |
### Additional information
_No response_ | priority | add documentation for the ban appeal config feature what work needs to be performed this would involve adding details similar to this on the setting value description ban appeal config appeal link string you can provide a link to a google form or another service for appealing a ban this link is included in the dm sent to the banned user by becca additional information no response | 1 |
540,190 | 15,802,435,994 | IssuesEvent | 2021-04-03 09:43:44 | DeadlyBossMods/DBM-TBC | https://api.github.com/repos/DeadlyBossMods/DBM-TBC | closed | [TASK] Convert classic mods back to spellId | Low Priority On Hold | Switch classic raids back to spellIds. Not a lazy retail mod merge in either, there are still some differences in a few places, especially since retail mods fell behind sync of classic ones quite a bit later on.
This is a super slow priority task.
- [x] AQ20
- [x] AQ40
- [x] Azeroth
- [x] BWL
- [x] MC
- [x] Naxx
- [x] Onyxia
- [x] Party-Classic
- [x] ZG | 1.0 | [TASK] Convert classic mods back to spellId - Switch classic raids back to spellIds. Not a lazy retail mod merge in either, there are still some differences in a few places, especially since retail mods fell behind sync of classic ones quite a bit later on.
This is a super slow priority task.
- [x] AQ20
- [x] AQ40
- [x] Azeroth
- [x] BWL
- [x] MC
- [x] Naxx
- [x] Onyxia
- [x] Party-Classic
- [x] ZG | priority | convert classic mods back to spellid switch classic raids back to spellids not a lazy retail mod merge in either there are still some differences in a few places especially since retail mods fell behind sync of classic ones quite a bit later on this is a super slow priority task azeroth bwl mc naxx onyxia party classic zg | 1
812,014 | 30,311,911,563 | IssuesEvent | 2023-07-10 13:20:31 | IGS/gEAR | https://api.github.com/repos/IGS/gEAR | opened | Remove toggle for enabling/disabling projection learning | Low Priority | Sometime last year, @hertzron had requested we have a toggle to enable or disable displaying of the "Transfer Learning" link until stability issues were worked out. I feel we are at a point where the toggle can be removed and the "Transfer Learning" link is permanent. One reason is also that @carlocolantuoni noted the link sometimes does not load instantaneously since gEAR has to go through a server-side check to know if the link should be displayed or hidden. | 1.0 | Remove toggle for enabling/disabling projection learning - Sometime last year, @hertzron had requested we have a toggle to enable or disable displaying of the "Transfer Learning" link until stability issues were worked out. I feel we are at a point where the toggle can be removed and the "Transfer Learning" link is permanent. One reason is also that @carlocolantuoni noted the link sometimes does not load instantaneously since gEAR has to go through a server-side check to know if the link should be displayed or hidden. | priority | remove toggle for enabling disabling projection learning sometime last year hertzron had requested we have a toggle to enable or disable displaying of the transfer learning link until stability issues were worked out i feel we are at a point where the toggle can be removed and the transfer learning link is permanent one reason is also that carlocolantuoni noted the link sometimes does not load instantaneously since gear has to go through a server side check to know if the link should be displayed or hidden | 1 |
144,779 | 5,545,816,783 | IssuesEvent | 2017-03-22 22:37:45 | ponylang/ponyc | https://api.github.com/repos/ponylang/ponyc | closed | Identity comparison for boxed types is weird | bug: 4 - in progress difficulty: 2 - medium priority: 1 - low | ```pony
actor Main
new create(env: Env) =>
let a = U8(2)
let b = U8(2)
env.out.print((a is b).string())
foo(env, a, b)
fun foo(env: Env, a: Any, b: Any) =>
env.out.print((a is b).string())
```
This code prints `true` then `false`, but I'd expect it to print `true` then `true`. This is because
- the numbers are unboxed in `create`, `is` compares their values, which compare equal
- the numbers are boxed in `foo` (i.e. objects with type descriptors are allocated to handle them through the `Any` interface), `is` compares the address of the boxes, which don't compare equal since a different box is allocated for each number
The `digestof` operator behaves in the same way. This means that a boxed value in a `SetIs` or a similar collection can't be retrieved.
I think we should modify the `is` and `digestof` operators to work on values when they handle boxed values. I'm going to categorise the issue as a bug since IMO this is a principle of least surprise bug. | 1.0 | Identity comparison for boxed types is weird - ```pony
actor Main
new create(env: Env) =>
let a = U8(2)
let b = U8(2)
env.out.print((a is b).string())
foo(env, a, b)
fun foo(env: Env, a: Any, b: Any) =>
env.out.print((a is b).string())
```
This code prints `true` then `false`, but I'd expect it to print `true` then `true`. This is because
- the numbers are unboxed in `create`, `is` compares their values, which compare equal
- the numbers are boxed in `foo` (i.e. objects with type descriptors are allocated to handle them through the `Any` interface), `is` compares the address of the boxes, which don't compare equal since a different box is allocated for each number
The `digestof` operator behaves in the same way. This means that a boxed value in a `SetIs` or a similar collection can't be retrieved.
I think we should modify the `is` and `digestof` operators to work on values when they handle a boxed values. I'm going to categorise the issue as a bug since IMO this is a principle of least surprise bug. | priority | identity comparison for boxed types is weird pony actor main new create env env let a let b env out print a is b string foo env a b fun foo env env a any b any env out print a is b string this code prints true then false but i d expect it to print true then true this is because the numbers are unboxed in create is compares their values which compare equal the numbers are boxed in foo i e objects with type descriptors are allocated to handle them through the any interface is compares the address of the boxes which don t compare equal since a different box is allocated for each number the digestof operator behaves in the same way this means that a boxed value in a setis or a similar collection can t be retrieved i think we should modify the is and digestof operators to work on values when they handle a boxed values i m going to categorise the issue as a bug since imo this is a principle of least surprise bug | 1 |
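The boxed-identity behaviour described in the record above can be illustrated outside Pony. A hedged sketch in Python (the `Box` class is invented for this example; it stands in for a boxed `U8`): value equality holds across two freshly allocated boxes while identity does not, because each box is a distinct heap object — the same split the report observes between the unboxed `create` case and the boxed `foo` case.

```python
class Box:
    """Minimal value wrapper standing in for a boxed U8 (hypothetical)."""
    def __init__(self, value: int) -> None:
        self.value = value

    def __eq__(self, other: object) -> bool:
        # Value comparison: looks through the box at the payload.
        return isinstance(other, Box) and self.value == other.value

a, b = Box(2), Box(2)
# Two boxes holding equal values: value comparison agrees, identity
# does not, since each Box(2) is a separate allocation.
same_value = (a == b)    # True
same_object = (a is b)   # False
```

This is also why an identity-keyed collection (like Pony's `SetIs`) can't retrieve a boxed value: a re-boxed copy never compares identical to the stored one.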
539,549 | 15,790,635,956 | IssuesEvent | 2021-04-02 02:04:36 | AH-64D-Apache-Official-Project/AH-64D | https://api.github.com/repos/AH-64D-Apache-Official-Project/AH-64D | closed | AFM rotor brake | enhancement low priority | https://github.com/SachaOropeza/AH64D-Project/issues/53
The rotor brake switch doesn't actually enable the brake on the aircraft.
Shouldn't be too many lines of code:
[Here](https://community.bistudio.com/wiki/setRotorBrakeRTD) is the function for it | 1.0 | AFM rotor brake - https://github.com/SachaOropeza/AH64D-Project/issues/53
The rotor brake switch doesn't actually enable the brake on the aircraft.
Shouldn't be too many lines of code:
[Here](https://community.bistudio.com/wiki/setRotorBrakeRTD) is the function for it | priority | afm rotor brake the rotor brake switch doesn t actually enable the brake on the aircraft shouldn t be too many lines of code is the function for it | 1 |
778,949 | 27,334,380,791 | IssuesEvent | 2023-02-26 02:09:52 | noisy/portfolio | https://api.github.com/repos/noisy/portfolio | opened | "How I bankrupt" images have fixed sizes 640px | bug Priority: Low | Images in the blog post "How I bankrupt" have a fixed size of 640px and don't adapt to responsive layout changes.
Steps:
1. Open page https://krzysztofszumny.com/post/how-i-bankrupt-my-first-startup-by-not-understanding-the-definition-of-mvp-minimum-viable-product
2. Change page's width below 640px
Expected result: Images are changing their sizes to fit the page
Actual result: Images don't change their sizes
Screenshot

Desktop
Ubuntu 20.04.4 LTS Chrome 110.0.5481.100 | 1.0 | "How I bankrupt" images have fixed sizes 640px - Images in the blog post "How I bankrupt" have a fixed size of 640px and don't adapt to responsive layout changes.
Steps:
1. Open page https://krzysztofszumny.com/post/how-i-bankrupt-my-first-startup-by-not-understanding-the-definition-of-mvp-minimum-viable-product
2. Change page's width below 640px
Expected result: Images are changing their sizes to fit the page
Actual result: Images don't change their sizes
Screenshot

Desktop
Ubuntu 20.04.4 LTS Chrome 110.0.5481.100 | priority | how i bankrupt images have fixed sizes images in blog s post how i bankrupt have fixed sizes and don t react to responsive changing steps open page change pages s width below expected result images are changing their sizes to fit the page actual result images don t change their sizes screenshot desktop ubuntu lts chrome | 1 |
247,967 | 7,926,166,757 | IssuesEvent | 2018-07-06 00:13:39 | UTAS-HealthSciences/mylo-mate | https://api.github.com/repos/UTAS-HealthSciences/mylo-mate | closed | Manage Files nav bar icon disappearance | bug low priority |

Disappears when in Unit Admin screen.

| 1.0 | Manage Files nav bar icon disappearance -

Disappears when in Unit Admin screen.

| priority | manage files nav bar icon disappearance disappears when in unit admin screen | 1 |
331,220 | 10,062,071,525 | IssuesEvent | 2019-07-22 23:31:29 | lightingft/appinventor-sources | https://api.github.com/repos/lightingft/appinventor-sources | opened | Scatter Chart Point Style | Part: Designer Priority: Low Status: To Do Type: Feature | The Scatter Chart should have a style option to select the circle shape of the Scatter Chart points. | 1.0 | Scatter Chart Point Style - The Scatter Chart should have a style option to select the circle shape of the Scatter Chart points. | priority | scatter chart point style the scatter chart should have a style option to select the circle shape of the scatter chart points | 1 |
410,649 | 11,995,017,980 | IssuesEvent | 2020-04-08 14:35:15 | GoSecure/pyrdp | https://api.github.com/repos/GoSecure/pyrdp | closed | 24bpp makes the PyRDP player crash | bug low-priority | Tested with Liveplayer with FreeRDP --> win10 and win8 --> win10.
It's a free() error, so most likely due to a bug in rle.c when decompressing 24bpp bitmaps.
Crashes after an undefined amount of time (seconds to minutes) | 1.0 | 24bpp makes the PyRDP player crash - Tested with Liveplayer with FreeRDP --> win10 and win8 --> win10.
It's a free() error, so most likely due to a bug in rle.c when decompressing 24bpp bitmaps.
Crashes after an undefined amount of time (seconds to minutes) | priority | makes the pyrdp player crash tested with liveplayer with freerdp and its a free error so most likely due to a bug in rle c when decompressing bitmaps crashes after an undefined amount of time seconds to minutes | 1 |
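A free() crash during bitmap decompression is typically an RLE decoder writing past its output buffer. The defensive pattern — bounds-check every run before writing — can be sketched generically (a toy (count, byte) codec for illustration, not PyRDP's actual RDP 24bpp RLE from rle.c):

```python
def rle_decode(data: bytes, expected_len: int) -> bytes:
    """Decode (count, byte) pairs, refusing to overrun the output buffer."""
    out = bytearray()
    if len(data) % 2:
        raise ValueError("truncated (count, byte) pair")
    for i in range(0, len(data), 2):
        count, value = data[i], data[i + 1]
        if len(out) + count > expected_len:
            # The kind of bounds check a C decoder needs: stop instead of
            # writing past the destination, which corrupts the heap and
            # surfaces later as a free() error.
            raise ValueError("run overflows output buffer")
        out.extend(bytes([value]) * count)
    if len(out) != expected_len:
        raise ValueError("short decode")
    return bytes(out)
```

In C the equivalent check is a comparison against the destination size before each memset/memcpy; its absence matches the symptom of crashing only after an unpredictable amount of time.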
776,135 | 27,248,343,901 | IssuesEvent | 2023-02-22 05:19:06 | curiouslearning/FeedTheMonsterJS | https://api.github.com/repos/curiouslearning/FeedTheMonsterJS | closed | PWA app update | Low Priority | How should we handle if there is a newer version of the app available to install? | 1.0 | PWA app update - How should we handle if there is a newer version of the app available to install? | priority | pwa app update how should we handle if there is a newer version of the app available to install | 1 |
627,994 | 19,958,692,800 | IssuesEvent | 2022-01-28 04:37:06 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | [UX] APIM carbon console - Add user flow - UI issues | Priority/Low Affected/2.1.0 | **Description:**
***Not fulfilling [checklist items](https://docs.google.com/spreadsheets/d/1l6YKXSbmtykvvn_NvX6uJbXSsZvpT8jn72Qoi_FoJq8/edit#gid=1221574205):***
Error recognition - Is it precisely indicate the problem
Flexibility and efficiency of use - Does the task cater to both experienced and inexperienced users
Match between system and the real world - Is design match with real world conventions, concepts
***Related task:***
Create users and assign roles to users
***Issues and proposed solutions:***
APIM Carbon console - Add New User screen
- There should be a guideline about the username policy pattern. The error message reads as a password policy violation; the policy needs to be defined.
- If you click Next without entering a username, password, etc., the error message is "Username pattern policy violated". This can be reworded to "Enter all required fields".
- When a username that is already existing in the system is entered, the error message is "Could not add user PRIMARY/minoli. Error is: UserAlreadyExisting:Username already exists in the system. Pick another username."
Reword the error message to "Could not add user PRIMARY/minoli. The username already exists in the system. Enter another username".
- The word 'user name' should be one word.
- Change Password screen - When a wrong Current Password is entered, the error message is "Could not change password of admin. Error is: Error while updating password. Wrong old credential provided". This can be reworded to "Could not change the password of <username>. The current password you entered is incorrect".
- Assign Roles screen - There is a section called "Unassigned Roles". This is empty if there are no unassigned roles. When this is empty, it should have a message saying "No unassigned roles found".
**Suggested Labels**
UX, Improvements, 2.1.0 | 1.0 | [UX] APIM carbon console - Add user flow - UI issues - **Description:**
***Not fulfilling [checklist items](https://docs.google.com/spreadsheets/d/1l6YKXSbmtykvvn_NvX6uJbXSsZvpT8jn72Qoi_FoJq8/edit#gid=1221574205):***
Error recognition - Is it precisely indicate the problem
Flexibility and efficiency of use - Does the task cater to both experienced and inexperienced users
Match between system and the real world - Is design match with real world conventions, concepts
***Related task:***
Create users and assign roles to users
***Issues and proposed solutions:***
APIM Carbon console - Add New User screen
- There should be a guideline about the username policy pattern. The error message reads as a password policy violation; the policy needs to be defined.
- If you click Next without entering a username, password, etc., the error message is "Username pattern policy violated". This can be reworded to "Enter all required fields".
- When a username that is already existing in the system is entered, the error message is "Could not add user PRIMARY/minoli. Error is: UserAlreadyExisting:Username already exists in the system. Pick another username."
Reword the error message to "Could not add user PRIMARY/minoli. The username already exists in the system. Enter another username".
- The word 'user name' should be one word.
- Change Password screen - When a wrong Current Password is entered, the error message is "Could not change password of admin. Error is: Error while updating password. Wrong old credential provided". This can be reworded to "Could not change the password of <username>. The current password you entered is incorrect".
- Assign Roles screen - There is a section called "Unassigned Roles". This is empty if there are no unassigned roles. When this is empty, it should have a message saying "No unassigned roles found".
**Suggested Labels**
UX, Improvements, 2.1.0 | priority | apim carbon console add user flow ui issues description not fulfilling error recognition is it precisely indicate the problem flexibility and efficiency of use does the task cater to both experienced and inexperienced users match between system and the real world is design match with real world conventions concepts related task create users and assign roles to users issues and proposed solutions apim carbon console add new user screen should have a guideline about username policy pattern the error message appears as password policy violated need to define the policy if you click next without entering username password etc the error message is username pattern policy violated this can be reworded to enter all required fields when a username that is already existing in the system is entered the error message is could not add user primary minoli error is useralreadyexisting username already exists in the system pick another username reword the error message to could not add user primary minoli the username already exists in the system enter another username the word user name should be one word change password screen when a wrong current password is entered the error message is could not change password of admin error is error while updating password wrong old credential provided this can be reworded to could not change the password of the current password you entered is incorrect assign roles screen there is a section called unassigned roles this is empty if there are no unassigned roles when this is empty it should have a message saying no unassigned roles found suggested labels ux improvements | 1 |
24,542 | 2,668,835,714 | IssuesEvent | 2015-03-23 11:52:38 | Araq/Nim | https://api.github.com/repos/Araq/Nim | closed | Compiler SIGSEGV on illegal 'type.name' usage. | Low Priority Semcheck | The following incorrect code causes the compiler to crash:
```Nimrod
echo type.name
```
The complete traceback:
```
Traceback (most recent call last)
nimrod.nim(91) nimrod
nimrod.nim(55) handleCmdLine
main.nim(308) mainCommand
main.nim(73) commandCompileToC
modules.nim(194) compileProject
modules.nim(152) compileModule
passes.nim(193) processModule
passes.nim(137) processTopLevelStmt
sem.nim(405) myProcess
sem.nim(379) semStmtAndGenerateGenerics
semstmts.nim(1370) semStmt
semexprs.nim(860) semExprNoType
semexprs.nim(1991) semExpr
semexprs.nim(1599) semMagic
semexprs.nim(832) semEcho
semexprs.nim(41) semExprWithType
semexprs.nim(1955) semExpr
semtypes.nim(1074) semTypeNode
semdata.nim(307) checkSonsLen
semdata.nim(304) illFormedAst
renderer.nim(1302) renderTree
renderer.nim(496) gsub
renderer.nim(1013) gsub
SIGSEGV: Illegal storage access. (Attempt to read from nil?)
```
With parenthesis:
```Nimrod
echo(type.name)
```
The compiler manages to exit correctly, printing the line number (albeit with a misguided error message):
```
/tmp/aporia/a7.nim(1, 10) Error: ')' expected
Traceback (most recent call last)
nimrod.nim(91) nimrod
nimrod.nim(55) handleCmdLine
main.nim(308) mainCommand
main.nim(73) commandCompileToC
modules.nim(194) compileProject
modules.nim(152) compileModule
passes.nim(191) processModule
syntaxes.nim(69) parseTopLevelStmt
parser.nim(1982) parseTopLevelStmt
parser.nim(1904) complexOrSimpleStmt
parser.nim(1851) simpleStmt
parser.nim(1163) parseExprStmt
parser.nim(752) simpleExpr
parser.nim(748) simpleExprAux
parser.nim(1099) primary
parser.nim(680) primarySuffix
parser.nim(664) namedParams
parser.nim(411) exprColonEqExprListAux
parser.nim(147) eat
lexer.nim(226) lexMessage
msgs.nim(832) message
msgs.nim(814) liMessage
msgs.nim(730) handleError
> Process terminated with exit code 256
``` | 1.0 | Compiler SIGSEGV on illegal 'type.name' usage. - The following incorrect code causes the compiler to crash:
```Nimrod
echo type.name
```
The complete traceback:
```
Traceback (most recent call last)
nimrod.nim(91) nimrod
nimrod.nim(55) handleCmdLine
main.nim(308) mainCommand
main.nim(73) commandCompileToC
modules.nim(194) compileProject
modules.nim(152) compileModule
passes.nim(193) processModule
passes.nim(137) processTopLevelStmt
sem.nim(405) myProcess
sem.nim(379) semStmtAndGenerateGenerics
semstmts.nim(1370) semStmt
semexprs.nim(860) semExprNoType
semexprs.nim(1991) semExpr
semexprs.nim(1599) semMagic
semexprs.nim(832) semEcho
semexprs.nim(41) semExprWithType
semexprs.nim(1955) semExpr
semtypes.nim(1074) semTypeNode
semdata.nim(307) checkSonsLen
semdata.nim(304) illFormedAst
renderer.nim(1302) renderTree
renderer.nim(496) gsub
renderer.nim(1013) gsub
SIGSEGV: Illegal storage access. (Attempt to read from nil?)
```
With parenthesis:
```Nimrod
echo(type.name)
```
The compiler manages to exit correctly, printing the line number (albeit with a misguided error message):
```
/tmp/aporia/a7.nim(1, 10) Error: ')' expected
Traceback (most recent call last)
nimrod.nim(91) nimrod
nimrod.nim(55) handleCmdLine
main.nim(308) mainCommand
main.nim(73) commandCompileToC
modules.nim(194) compileProject
modules.nim(152) compileModule
passes.nim(191) processModule
syntaxes.nim(69) parseTopLevelStmt
parser.nim(1982) parseTopLevelStmt
parser.nim(1904) complexOrSimpleStmt
parser.nim(1851) simpleStmt
parser.nim(1163) parseExprStmt
parser.nim(752) simpleExpr
parser.nim(748) simpleExprAux
parser.nim(1099) primary
parser.nim(680) primarySuffix
parser.nim(664) namedParams
parser.nim(411) exprColonEqExprListAux
parser.nim(147) eat
lexer.nim(226) lexMessage
msgs.nim(832) message
msgs.nim(814) liMessage
msgs.nim(730) handleError
> Process terminated with exit code 256
``` | priority | compiler sigsegv on illegal type name usage the following incorrect code causes the compiler to crash nimrod echo type name the complete traceback traceback most recent call last nimrod nim nimrod nimrod nim handlecmdline main nim maincommand main nim commandcompiletoc modules nim compileproject modules nim compilemodule passes nim processmodule passes nim processtoplevelstmt sem nim myprocess sem nim semstmtandgenerategenerics semstmts nim semstmt semexprs nim semexprnotype semexprs nim semexpr semexprs nim semmagic semexprs nim semecho semexprs nim semexprwithtype semexprs nim semexpr semtypes nim semtypenode semdata nim checksonslen semdata nim illformedast renderer nim rendertree renderer nim gsub renderer nim gsub sigsegv illegal storage access attempt to read from nil with parenthesis nimrod echo type name the compiler manages to exit correctly printing the line number albeit with an misguided error message tmp aporia nim error expected traceback most recent call last nimrod nim nimrod nimrod nim handlecmdline main nim maincommand main nim commandcompiletoc modules nim compileproject modules nim compilemodule passes nim processmodule syntaxes nim parsetoplevelstmt parser nim parsetoplevelstmt parser nim complexorsimplestmt parser nim simplestmt parser nim parseexprstmt parser nim simpleexpr parser nim simpleexpraux parser nim primary parser nim primarysuffix parser nim namedparams parser nim exprcoloneqexprlistaux parser nim eat lexer nim lexmessage msgs nim message msgs nim limessage msgs nim handleerror process terminated with exit code | 1 |
719,207 | 24,751,262,388 | IssuesEvent | 2022-10-21 13:55:17 | qutebrowser/qutebrowser | https://api.github.com/repos/qutebrowser/qutebrowser | opened | More advanced session commands | priority: 2 - low | Some inspiration from [tab-manager/tab-manager.py at master - tab-manager - Codeberg.org](https://codeberg.org/mister_monster/tab-manager/src/branch/master/tab-manager.py):
- Adding a tab to a session (#3853, #4346)
- Removing a tab from a session
- Renaming a session
- Exporting a session to HTML
- Merging multiple sessions | 1.0 | More advanced session commands - Some inspiration from [tab-manager/tab-manager.py at master - tab-manager - Codeberg.org](https://codeberg.org/mister_monster/tab-manager/src/branch/master/tab-manager.py):
- Adding a tab to a session (#3853, #4346)
- Removing a tab from a session
- Renaming a session
- Exporting a session to HTML
- Merging multiple sessions | priority | more advanced session commands some inspiration from adding a tab to a session removing a tab from a session renaming a session exporting a session to html merging multiple sessions | 1 |
265,924 | 8,360,555,516 | IssuesEvent | 2018-10-03 11:57:15 | angular/angular-cli | https://api.github.com/repos/angular/angular-cli | closed | typeChecking: false has no effect | effort1: easy (hours) freq1: low priority: 2 (required) severity3: broken | <!--
IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION YOUR ISSUE MIGHT BE CLOSED WITHOUT INVESTIGATING
-->
### Bug Report or Feature Request (mark with an `x`)
```
- [x] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
<!--
Output from: `ng --version`.
If nothing, output from: `node --version` and `npm --version`.
Windows (7/8/10). Linux (incl. distribution). macOS (El Capitan? Sierra?)
-->
node: v6.9.5
npm: 3.10.10
### Repro steps.
<!--
Simple steps to reproduce this bug.
Please include: commands run, packages added, related code changes.
A link to a sample repo would help too.
-->
Set `typeChecking: false` in AotPlugin options.
### Desired functionality.
<!--
What would like to see implemented?
What is the usecase?
-->
Skip type check.
### Mention any other details that might be useful.
<!-- Please include a link to the repo if this is related to an OSS project. -->
I am using `@ngtools/webpack` directly in my own setup. The issue was introduced with the `1.3.0` release. `1.2.13` is the last release where this works. | 1.0 | typeChecking: false has no effect - <!--
IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION YOUR ISSUE MIGHT BE CLOSED WITHOUT INVESTIGATING
-->
### Bug Report or Feature Request (mark with an `x`)
```
- [x] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
<!--
Output from: `ng --version`.
If nothing, output from: `node --version` and `npm --version`.
Windows (7/8/10). Linux (incl. distribution). macOS (El Capitan? Sierra?)
-->
node: v6.9.5
npm: 3.10.10
### Repro steps.
<!--
Simple steps to reproduce this bug.
Please include: commands run, packages added, related code changes.
A link to a sample repo would help too.
-->
Set `typeChecking: false` in AotPlugin options.
### Desired functionality.
<!--
What would like to see implemented?
What is the usecase?
-->
Skip type check.
### Mention any other details that might be useful.
<!-- Please include a link to the repo if this is related to an OSS project. -->
I am using `@ngtools/webpack` directly in my own setup. The issue was introduced with the `1.3.0` release. `1.2.13` is the last release where this works. | priority | typechecking false has no effect if you don t fill out the following information your issue might be closed without investigating bug report or feature request mark with an x bug report please search issues before submitting feature request versions output from ng version if nothing output from node version and npm version windows linux incl distribution macos el capitan sierra node npm repro steps simple steps to reproduce this bug please include commands run packages added related code changes a link to a sample repo would help too set typechecking false in aotplugin options desired functionality what would like to see implemented what is the usecase skip type check mention any other details that might be useful i am using ngtools webpack directly in my own setup the issue was introduced with the release is the last release where this works | 1 |
213,892 | 7,261,311,721 | IssuesEvent | 2018-02-18 19:31:53 | SmartlyDressedGames/Unturned-4.x-Community | https://api.github.com/repos/SmartlyDressedGames/Unturned-4.x-Community | closed | Splash screen | Priority: Low Status: Complete Type: Cleanup | One requirement for UE4 is the splash screen, so we need the SDG + UE4 intro during loading | 1.0 | Splash screen - One requirement for UE4 is the splash screen, so we need the SDG + UE4 intro during loading | priority | splash screen one requirement for is the splash screen so we need the sdg intro during loading | 1 |
738,059 | 25,543,670,647 | IssuesEvent | 2022-11-29 17:01:59 | SlimeVR/SlimeVR-Server | https://api.github.com/repos/SlimeVR/SlimeVR-Server | opened | OpenXR Support | Type: Feature Request Priority: Low | OpenXR support for SlimeVR would be nice for future-proofing and extended compatibility. | 1.0 | OpenXR Support - OpenXR support for SlimeVR would be nice for future-proofing and extended compatibility. | priority | openxr support openxr support for slimevr would be nice for future proofing and extended compatibility | 1
221,969 | 7,404,099,131 | IssuesEvent | 2018-03-20 02:37:25 | QueensRideshare/Qshare | https://api.github.com/repos/QueensRideshare/Qshare | closed | Filter: Text Search Bar 💡 | enhancement priority: nice-to-have (low) | • Add text filter at top of ride table
• Filter against ride parameters ```name, origin, destination, description```
• Keep list of **all** rides in state, but only show rides matching filter in table
• Live update table with each new character typed | 1.0 | Filter: Text Search Bar 💡 - • Add text filter at top of ride table
• Filter against ride parameters ```name, origin, destination, description```
• Keep list of **all** rides in state, but only show rides matching filter in table
• Live update table with each new character typed | priority | filter text search bar 💡 • add text filter at top of ride table • filter against ride parameters name origin destination description • keep list of all rides in state but only show rides matching filter in table • live update table with each new character typed | 1 |
415,279 | 12,127,291,507 | IssuesEvent | 2020-04-22 18:28:22 | Reese2596/CPW215-Spring2020-DataTracker | https://api.github.com/repos/Reese2596/CPW215-Spring2020-DataTracker | opened | Fill out Features paragraph | interface low priority | The "Features" paragraph on the home page was not filled out because we don't have/know all the features that will be implemented. Near the end of our project this quarter, I believe that we will be able to properly fill this out with what features we have implemented. | 1.0 | Fill out Features paragraph - The "Features" paragraph on the home page was not filled out because we don't have/know all the features that will be implemented. Near the end of our project this quarter, I believe that we will be able to properly fill this out with what features we have implemented. | priority | fill out features paragraph the features paragraph on the home page was not filled out because we don t have know all the features that will be implemented near the end of our project this quarter i believe that we will be able to properly fill this out with what features we have implemented | 1 |
596,364 | 18,104,126,786 | IssuesEvent | 2021-09-22 17:12:45 | NOAA-GSL/VxLegacyIngest | https://api.github.com/repos/NOAA-GSL/VxLegacyIngest | closed | Verify sub-hourly RTMA | Type: Task Priority: Low | ---
Author Name: **jeffrey.a.hamilton** (jeffrey.a.hamilton)
Original Redmine Issue: 72725, https://vlab.ncep.noaa.gov/redmine/issues/72725
Original Date: 2019-12-19
Original Assignee: jeffrey.a.hamilton
---
Guoqing has requested verification of the sub-hourly RTMA fields in the ceiling and visibility apps. Work on making this happen, the grids are located on Jet in the following directory:
lfs1/projects/nrtrr/workflow/RTMA_3D_RU/run
| 1.0 | Verify sub-hourly RTMA - ---
Author Name: **jeffrey.a.hamilton** (jeffrey.a.hamilton)
Original Redmine Issue: 72725, https://vlab.ncep.noaa.gov/redmine/issues/72725
Original Date: 2019-12-19
Original Assignee: jeffrey.a.hamilton
---
Guoqing has requested verification of the sub-hourly RTMA fields in the ceiling and visibility apps. Work on making this happen, the grids are located on Jet in the following directory:
lfs1/projects/nrtrr/workflow/RTMA_3D_RU/run
| priority | verify sub hourly rtma author name jeffrey a hamilton jeffrey a hamilton original redmine issue original date original assignee jeffrey a hamilton guoqing has requested verification of the sub hourly rtma fields in the ceiling and visibility apps work on making this happen the grids are located on jet in the following directory projects nrtrr workflow rtma ru run | 1 |
460,732 | 13,217,463,594 | IssuesEvent | 2020-08-17 06:48:31 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | The source address is not fully displayed | area/console kind/bug kind/need-to-verify priority/low |
**Describe the Bug**
The source address is not fully displayed, and the full information is not displayed when the mouse moves to this point
<img width="823" alt="display" src="https://user-images.githubusercontent.com/36271543/87771392-de429380-c852-11ea-9416-11f2deac662e.png">
**Versions Used**
KubeSphere:3.0.0
Kubernetes:
host-v1.16.12
member1-v1.18.5
member2-v1.17.8
**Environment**
host: 1node /ubuntu 16.04 4cpu/16g
member1: 2 nodes /ubuntu 16.04 4cpu/16g
member1: 2 nodes /centos7 8cpu/16g
/kind bug
/area console
/assign @harrisonliu5
/milestone 3.0.0
/priority low
| 1.0 | The source address is not fully displayed -
**Describe the Bug**
The source address is not fully displayed, and the full information is not displayed when the mouse moves to this point
<img width="823" alt="display" src="https://user-images.githubusercontent.com/36271543/87771392-de429380-c852-11ea-9416-11f2deac662e.png">
**Versions Used**
KubeSphere:3.0.0
Kubernetes:
host-v1.16.12
member1-v1.18.5
member2-v1.17.8
**Environment**
host: 1node /ubuntu 16.04 4cpu/16g
member1: 2 nodes /ubuntu 16.04 4cpu/16g
member1: 2 nodes /centos7 8cpu/16g
/kind bug
/area console
/assign @harrisonliu5
/milestone 3.0.0
/priority low
| priority | the source address is not fully displayed describe the bug the source address is not fully displayed and the full information is not displayed when the mouse moves to this point img width alt display src versions used kubesphere kubernetes host environment host ubuntu nodes ubuntu nodes kind bug area console assign milestone priority low | 1 |
551,242 | 16,165,284,469 | IssuesEvent | 2021-05-01 11:07:57 | SunstriderEmu/BugTracker | https://api.github.com/repos/SunstriderEmu/BugTracker | closed | Auction House Mail Bug | confirmed low priority | **Describe the bug**
<!--- A clear and concise description of what the bug is. -->
As an Alliance player, I am receiving mail that says "Horde Auction House".
**To Reproduce**
<!--- Steps to reproduce the behavior. Note that providing as much details as possible will help us fix it faster! -->
1. Receive mail from the Alliance Auction House
2. Notice that it says "Horde Auction House"
<!--- Please include ids of affected creatures / items / quests with a link to the relevant wowhead-like page. -->
Mail sent from the Alliance Auction House.
**Expected behavior**
<!--- A clear and concise description of what you expected to happen. -->
Mail from the Alliance Auction House to say "Alliance Auction House".
**Screenshots/videos**
<!--- Adding screenshots/videos to help explain your problem is *extremely* appreciated. -->
**Additional context**
<!--- Add any other context about the problem here. -->
| 1.0 | Auction House Mail Bug - **Describe the bug**
<!--- A clear and concise description of what the bug is. -->
As an Alliance player, I am receiving mail that says "Horde Auction House".
**To Reproduce**
<!--- Steps to reproduce the behavior. Note that providing as much details as possible will help us fix it faster! -->
1. Receive mail from the Alliance Auction House
2. Notice that it says "Horde Auction House"
<!--- Please include ids of affected creatures / items / quests with a link to the relevant wowhead-like page. -->
Mail sent from the Alliance Auction House.
**Expected behavior**
<!--- A clear and concise description of what you expected to happen. -->
Mail from the Alliance Auction House to say "Alliance Auction House".
**Screenshots/videos**
<!--- Adding screenshots/videos to help explain your problem is *extremely* appreciated. -->
**Additional context**
<!--- Add any other context about the problem here. -->
| priority | auction house mail bug describe the bug as an alliance player i am receiving mail that says horde auction house to reproduce receive mail from the alliance auction house noice that it says horde auction house mail sent from the alliance auction house expected behavior mail from the alliance auction house to say alliance auction house screenshots videos additional context | 1 |
621,221 | 19,580,474,262 | IssuesEvent | 2022-01-04 20:34:45 | oxen-io/lokinet | https://api.github.com/repos/oxen-io/lokinet | closed | Network Level PoW? | enhancement question low priority | Popular Hidden services on Tor often experience DoS and DDoS attacks, unlike traditional web services, hidden services cannot deploy intermediate services like Cloudflare which can check browser fingerprints (Tor browser prevents this) and IP address reputation. Tor Hidden services also lack the ability to block spammers via IP addresses since Tor cannot reveal the IP address of the person requesting the content.
The common approach to deal with such DoS attempts usually involves load balancing (Onion balance) or serving a CAPTCHA that users must complete. Neither of which provide very effective protection.
Adding protection at the Application layer with a CAPTCHA adds a new vector for spammers, adding protections at higher layers for Lokinet would require each SNApp operator to run additional software rather than being easily fixed inside Lokinet.
Although Lokinet operates many layers below Tor browser and a single layer below Tor itself, I think it would still be a good idea to look at adding protection against DoS and DDoS attacks at the network layer.
What I'm proposing is that any SNApp operator should be able to enable a setting called SNApp PoW in lokinet.ini (By default it would be turned off).
“SNApp PoW = True”
“SNApp PoW = Hard” (Allows the user to tune the difficulty of such a PoW)
If SNApp PoW is turned on then any introduction sets that the SNApp now publishes should require the introducer and the endpoint to verify the specified amount of PoW.
This means that before the attacker can establish a connection to the SNApp he must have a valid PoW, which is checked by the introducer before they handout the endpoints for a SNApp and by the endpoints themselves. | 1.0 | Network Level PoW? - Popular Hidden services on Tor often experience DoS and DDoS attacks, unlike traditional web services, hidden services cannot deploy intermediate services like Cloudflare which can check browser fingerprints (Tor browser prevents this) and IP address reputation. Tor Hidden services also lack the ability to block spammers via IP addresses since Tor cannot reveal the IP address of the person requesting the content.
The common approach to deal with such DoS attempts usually involves load balancing (Onion balance) or serving a CAPTCHA that users must complete. Neither of which provide very effective protection.
Adding protection at the Application layer with a CAPTCHA adds a new vector for spammers, adding protections at higher layers for Lokinet would require each SNApp operator to run additional software rather than being easily fixed inside Lokinet.
Although Lokinet operates many layers below Tor browser and a single layer below Tor itself, I think it would still be a good idea to look at adding protection against DoS and DDoS attacks at the network layer.
What I'm proposing is that any SNApp operator should be able to enable a setting called SNApp PoW in lokinet.ini (By default it would be turned off).
“SNApp PoW = True”
“SNApp PoW = Hard” (Allows the user to tune the difficulty of such a PoW)
If SNApp PoW is turned on then any introduction sets that the SNApp now publishes should require the introducer and the endpoint to verify the specified amount of PoW.
This means that before the attacker can establish a connection to the SNApp he must have a valid PoW, which is checked by the introducer before they handout the endpoints for a SNApp and by the endpoints themselves. | priority | network level pow popular hidden services on tor often experience dos and ddos attacks unlike traditional web services hidden services cannot deploy intermediate services like cloudflare which can check browser fingerprints tor browser prevents this and ip address reputation tor hidden services also lack the ability to block spammers via ip addresses since tor cannot reveal the ip address of the person requesting the content the common approach to deal with such dos attempts usually involves load balancing onion balance or serving a captcha that users must complete neither of which provide very effective protection adding protection at the application layer with a captcha adds a new vector for spammers adding protections at higher layers for lokinet would require each snapp operator to run additional software rather than being easily fixed inside lokinet although lokinet operates many layers below tor browser and a single layer below tor itself i think it would still be a good idea to look at adding protection against dos and ddos attacks at the network layer what im proposing is that any snapp operator should be able to enable a setting called snapp pow in lokinet ini by default it would be turned off “snapp pow true” “snapp pow hard” allows the user to tune the difficulty of such a pow if snapp pow is turned on then any introduction sets that the snapp now publishes should require the introducer and the endpoint to verify the specified amount of pow this means that before the attacker can establish a connection to the snapp he must have a valid pow which is checked by the introducer before they handout the endpoints for a snapp and by the endpoints themselves | 1 |
132,047 | 5,168,651,838 | IssuesEvent | 2017-01-17 22:10:14 | TechReborn/TechReborn | https://api.github.com/repos/TechReborn/TechReborn | closed | Config Missing Some Formatting | LOW PRIORITY | **Mod version:** [2.0.6.71](https://minecraft.curseforge.com/projects/techreborn/files/2356177)
**Forge version:** [1.11-13.19.1.2188](http://files.minecraftforge.net/maven/net/minecraftforge/forge/1.11-13.19.1.2188/forge-1.11-13.19.1.2188-changelog.txt)
There is an entry with missing localization in `main.cfg`.

There is also an entry with inconsistent formatting:

That's all there is to this issue, really. | 1.0 | Config Missing Some Formatting - **Mod version:** [2.0.6.71](https://minecraft.curseforge.com/projects/techreborn/files/2356177)
**Forge version:** [1.11-13.19.1.2188](http://files.minecraftforge.net/maven/net/minecraftforge/forge/1.11-13.19.1.2188/forge-1.11-13.19.1.2188-changelog.txt)
There is an entry with missing localization in `main.cfg`.

There is also an entry with inconsistent formatting:

That's all there is to this issue, really. | priority | config missing some formatting mod version forge version there is an entry with missing localization in main cfg there is also an entry with inconsistent formatting that s all there is to this issue really | 1 |
189,346 | 6,796,843,221 | IssuesEvent | 2017-11-01 20:27:54 | techx/quill | https://api.github.com/repos/techx/quill | closed | Autocomplete / dropdown school name | contributor friendly Priority: Low Status: Available Type: Enhancement | Forcing standard school names would make it easier for organizers to search for schools. Semantic-UI has the perfect [autocomplete dropdown](https://semantic-ui.com/modules/dropdown.html#search-selection) module for this, so this should be a relatively simple change. | 1.0 | Autocomplete / dropdown school name - Forcing standard school names would make it easier for organizers to search for schools. Semantic-UI has the perfect [autocomplete dropdown](https://semantic-ui.com/modules/dropdown.html#search-selection) module for this, so this should be a relatively simple change. | priority | autocomplete dropdown school name forcing standard school names would make it easier for organizers to search for schools semantic ui has the perfect module for this so this should be a relatively simple change | 1 |
34,408 | 2,780,302,957 | IssuesEvent | 2015-05-06 02:53:15 | broadinstitute/hellbender | https://api.github.com/repos/broadinstitute/hellbender | closed | See which Picard tools can be easily converted to ReadWalkers | enhancement Picard PRIORITY_LOW question ReadWalker tools | To help unify things, we should start moving Picard tools to ReadWalkers, where possible. Most of the tools that came from picard.sam should fall under this category.
For tools that can't easily be converted, we should make a note of the traversal pattern so we can consider how to implement it later. | 1.0 | See which Picard tools can be easily converted to ReadWalkers - To help unify things, we should start moving Picard tools to ReadWalkers, where possible. Most of the tools that came from picard.sam should fall under this category.
For tools that can't easily be converted, we should make a note of the traversal pattern so we can consider how to implement it later. | priority | see which picard tools can be easily converted to readwalkers to help unify things we should start moving picard tools to readwalkers where possible most of the tools that came from picard sam should fall under this category for tools that can t easily be converted we should make a note of the traversal pattern so we can consider how to implement it later | 1 |
398,894 | 11,742,475,149 | IssuesEvent | 2020-03-12 00:55:45 | thaliawww/concrexit | https://api.github.com/repos/thaliawww/concrexit | closed | Change mentions of 'supporter' to 'benefactor' | priority: low technical change | In GitLab by @se-bastiaan on Sep 8, 2018, 16:40
### One-sentence description
Change mentions of 'supporter' to 'benefactor' for begunstigers
### Why?
It is the translation we use in all official documents that was decided upon by the Translacie.
### Current implementation
We use several different names for the 'begunstiger' membership type.
### Desired implementation
Always use 'benefactor' as translation. | 1.0 | Change mentions of 'supporter' to 'benefactor' - In GitLab by @se-bastiaan on Sep 8, 2018, 16:40
### One-sentence description
Change mentions of 'supporter' to 'benefactor' for begunstigers
### Why?
It is the translation we use in all official documents that was decided upon by the Translacie.
### Current implementation
We use several different names for the 'begunstiger' membership type.
### Desired implementation
Always use 'benefactor' as translation. | priority | change mentions of supporter to benefactor in gitlab by se bastiaan on sep one sentence description change mentions of supporter to benefactor for begunstigers why it is the translation we use in all official documents that was decided upon by the translacie current implementation we use several different names for the begunstiger membership type desired implementation always use benefactor as translation | 1 |
490,516 | 14,135,332,136 | IssuesEvent | 2020-11-10 01:26:20 | drashland/sinco | https://api.github.com/repos/drashland/sinco | opened | Add Support For Web Scraping | Priority: Low Type: Enhancement | ## Summary
What:
Unsure what this entails, but one use case for using these tools for scraping is running a script on a browser
Why:
There definitely seems a use case for this feature
## Acceptance Criteria
Below is a list of tasks that must be completed before this issue can be closed.
- [ ] Write documentation
- [ ] Write unit tests
- [ ] Write integration tests
- [ ] develop feature
## Example Pseudo Code (for implementation)
```typescript
// Add example pseudo code for implementation
```
| 1.0 | Add Support For Web Scraping - ## Summary
What:
Unsure what this entails, but one use case for using these tools for scraping is running a script on a browser
Why:
There definitely seems a use case for this feature
## Acceptance Criteria
Below is a list of tasks that must be completed before this issue can be closed.
- [ ] Write documentation
- [ ] Write unit tests
- [ ] Write integration tests
- [ ] develop feature
## Example Pseudo Code (for implementation)
```typescript
// Add example pseudo code for implementation
```
| priority | add support for web scraping summary what unsure what this entails but one use case for using these tools for scraping is running a script on a browser why there definitely seems a use case for this feature acceptance criteria below is a list of tasks that must be completed before this issue can be closed write documentation write unit tests write integration tests develop feature example pseudo code for implementation typescript add example pseudo code for implementation | 1 |
527,094 | 15,308,460,010 | IssuesEvent | 2021-02-24 22:30:04 | konveyor/forklift-ui | https://api.github.com/repos/konveyor/forklift-ui | closed | Determine and show the default migration network for hosts before they are configured | blocked enhancement low-priority | Related RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1908034
From the discussion here: https://github.com/konveyor/virt-ui/pull/244#issuecomment-731192433
Right now there is no way for the UI to know what network will be used for a host unless it was specified by the user (the default is determined in the backend at migration time), so we just display (default) and no other information. We also can't specify in the list of networks to choose from which is the default.
If there is some way of identifying from vSphere which one is the default network, we should use that to display details in the table before a host is configured, and distinguish the default network in the network selection modal.
Relevant chatter from Slack:
@jortel :
> I wonder if it's simple as always the one named: "Management Network"?
@fdupont-redhat :
> this will be the network of the hostname / IP address of the host in vSphere inventory. So, it's probable that it could be determined by some network calculation. Hacky and not sure how accurate it will be.
| 1.0 | Determine and show the default migration network for hosts before they are configured - Related RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1908034
From the discussion here: https://github.com/konveyor/virt-ui/pull/244#issuecomment-731192433
Right now there is no way for the UI to know what network will be used for a host unless it was specified by the user (the default is determined in the backend at migration time), so we just display (default) and no other information. We also can't specify in the list of networks to choose from which is the default.
If there is some way of identifying from vSphere which one is the default network, we should use that to display details in the table before a host is configured, and distinguish the default network in the network selection modal.
Relevant chatter from Slack:
@jortel :
> I wonder if it's simple as always the one named: "Management Network"?
@fdupont-redhat :
> this will be the network of the hostname / IP address of the host in vSphere inventory. So, it's probable that it could be determined by some network calculation. Hacky and not sure how accurate it will be.
| priority | determine and show the default migration network for hosts before they are configured related rhbz from the discussion here right now there is no way for the ui to know what network will be used for a host unless it was specified by the user the default is determined in the backend at migration time so we just display default and no other information we also can t specify in the list of networks to choose from which is the default if there is some way of identifying from vsphere which one is the default network we should use that to display details in the table before a host is configured and distinguish the default network in the network selection modal relevant chatter from slack jortel i wonder if it s simple as always the one named management network fdupont redhat this will be the network of the hostname ip address of the host in vsphere inventory so it s probable that it could be determined by some network calculation hacky and not sure how accurate it will be | 1 |
728,464 | 25,080,297,848 | IssuesEvent | 2022-11-07 18:42:03 | kubermatic/kubeone | https://api.github.com/repos/kubermatic/kubeone | opened | Spike: integrate Terraform CLI in the KubeOne binary | kind/feature priority/low sig/cluster-management | ### Description of the feature you would like to add / User story
As part of our single binary experience improvements, we might want to consider integrating Terraform CLI in the KubeOne binary. This should improve the getting started experience in a way that users don't have to download and install Terraform themselves, but instead have it as part of KubeOne. As a result, users can create and maintain a cluster just with a single KubeOne binary that has everything (Terraform, Terraform configs, KubeOneCluster manifest template,...). | 1.0 | Spike: integrate Terraform CLI in the KubeOne binary - ### Description of the feature you would like to add / User story
As part of our single binary experience improvements, we might want to consider integrating Terraform CLI in the KubeOne binary. This should improve the getting started experience in a way that users don't have to download and install Terraform themselves, but instead have it as part of KubeOne. As a result, users can create and maintain a cluster just with a single KubeOne binary that has everything (Terraform, Terraform configs, KubeOneCluster manifest template,...). | priority | spike integrate terraform cli in the kubeone binary description of the feature you would like to add user story as part of our single binary experience improvements we might want to consider integrating terraform cli in the kubeone binary this should improve the getting started experience in a way that users don t have to download and install terraform themselves but instead have it as part of kubeone as a result users can create and maintain a cluster just with a single kubeone binary that has everything terraform terraform configs kubeonecluster manifest template | 1 |
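A rough sketch of the wrapping idea behind this spike: prefer a Terraform already on PATH, otherwise fall back to a copy shipped with (and unpacked by) the single binary, then shell out to it. KubeOne would do this in Go with `go:embed` and `os/exec`; the Python below and the bundled path are purely illustrative:

```python
import shutil
import subprocess

# Hypothetical location where the embedded CLI would be unpacked at runtime.
BUNDLED_TERRAFORM = "/tmp/kubeone/bin/terraform"

def find_tool(name, bundled_path=BUNDLED_TERRAFORM):
    """Prefer a tool already installed on PATH; fall back to the bundled copy."""
    return shutil.which(name) or bundled_path

def run_tool(exe, args, workdir="."):
    """Invoke the CLI exactly as the wrapping binary would."""
    return subprocess.run([exe, *args], cwd=workdir).returncode
```

`kubeone apply` could then drive `terraform init` / `terraform apply` in the directory holding the bundled Terraform configs without the user installing anything themselves.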
347,297 | 10,428,195,311 | IssuesEvent | 2019-09-16 21:50:14 | clearlinux/clr-installer | https://api.github.com/repos/clearlinux/clr-installer | closed | Creating ISO can fail silently | bug duplicate low priority | **Describe the bug**
If you attempt to create an ISO image and run out of disk space, the image generation will claim success though no ISO is found.
**To Reproduce**
Steps to reproduce the behavior:
1. Ensure your /tmp is nearly full
2. Create a live desktop image
3. clr-installer -c scripts/live-desktop.yaml
4. See that no ISO is found, but there is an error in the /root/clr-installer.log file
**Expected behavior**
If the ISO creation fails:
1. Do not remove the raw .img file
2. Show the error message to the end-user
3. Exit with a non-zero status
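clr-installer itself is written in Go, so the following is only a language-neutral sketch (in Python) of the expected behavior: surface the failing step's real error to the user and propagate its status instead of logging success. Nothing here is the installer's actual code:

```python
import subprocess
import sys

def run_step(cmd):
    """Run one image-build step; surface failure instead of claiming success."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Show the real error to the end-user (not only a debug log); the
        # caller should keep the raw .img file for inspection and must
        # propagate this status as its own non-zero exit code.
        print(f"ISO step failed: {result.stderr.strip()}", file=sys.stderr)
    return result.returncode

# Simulate mksquashfs dying on a full disk vs. a step that succeeds:
failed = run_step(["sh", "-c", "echo 'No space left on device' >&2; exit 1"])
ok = run_step(["true"])
```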
**Screenshots**
2019/09/12 16:51:14 [INF] Generating ISO image
2019/09/12 16:51:14 [INF] Building ISO image
2019/09/12 16:51:14 [INF] Making temp directories for ISO creation
2019/09/12 16:51:14 [INF] Making squashfs of rootfs
2019/09/12 16:51:14 [DBG] mksquashfs /tmp/install-715971415 /tmp/clrCdroot-465578636/images/rootfs.img -b 131072 -comp gzip -e boot/ -e proc/ -e sys/ -e dev/ -e run/
2019/09/12 16:51:33 [DBG] Parallel mksquashfs: Using 16 processors
2019/09/12 16:51:33 [DBG] Creating 4.0 filesystem on /tmp/clrCdroot-465578636/images/rootfs.img, block size 131072.
2019/09/12 16:51:33 [DBG] ^M[==========\ ] 40500/225813 17%!!(MISSING)
(MISSING)
2019/09/12 16:51:51 [DBG] Write failed because No space left on device
2019/09/12 16:51:51 [DBG] FATAL ERROR:Failed to write to output filesystem
2019/09/12 16:51:51 [INF] Cleaning up from ISO creation
**2019/09/12 16:51:51 [ERR] exit status 1**
2019/09/12 16:51:51 [INF] Installation completed
2019/09/12 16:51:51 [INF] Umounting rootDir: /tmp/install-715971415
2019/09/12 16:51:51 [DBG] Unmounted ok: /tmp/install-715971415/sys
2019/09/12 16:51:51 [DBG] Unmounted ok: /tmp/install-715971415/proc
2019/09/12 16:51:51 [DBG] Unmounted ok: /tmp/install-715971415/dev
2019/09/12 16:51:51 [DBG] Unmounted ok: /tmp/install-715971415/boot
2019/09/12 16:51:55 [DBG] Unmounted ok: /tmp/install-715971415
2019/09/12 16:51:55 [INF] Removing rootDir: /tmp/install-715971415
2019/09/12 16:51:55 [DBG] losetup -d /dev/loop0
**2019/09/12 16:51:55 [DBG] Removing raw image file: dev-clear-live-desktop.img**
**Environment (please complete the following information):**
- Clear Linux OS Version: 2.1.1
| 1.0 | Creating ISO can fail silently - **Describe the bug**
If you attempt to create an ISO image and run out of disk space, the image generation will claim success though no ISO is found.
**To Reproduce**
Steps to reproduce the behavior:
1. Ensure your /tmp is nearly full
2. Create a live desktop image
3. clr-installer -c scripts/live-desktop.yaml
4. See that no ISO is found, but there is an error in the /root/clr-installer.log file
**Expected behavior**
If the ISO creation fails:
1. Do not remove the raw .img file
2. Show the error message to the end-user
3. Exit with a non-zero status
**Screenshots**
2019/09/12 16:51:14 [INF] Generating ISO image
2019/09/12 16:51:14 [INF] Building ISO image
2019/09/12 16:51:14 [INF] Making temp directories for ISO creation
2019/09/12 16:51:14 [INF] Making squashfs of rootfs
2019/09/12 16:51:14 [DBG] mksquashfs /tmp/install-715971415 /tmp/clrCdroot-465578636/images/rootfs.img -b 131072 -comp gzip -e boot/ -e proc/ -e sys/ -e dev/ -e run/
2019/09/12 16:51:33 [DBG] Parallel mksquashfs: Using 16 processors
2019/09/12 16:51:33 [DBG] Creating 4.0 filesystem on /tmp/clrCdroot-465578636/images/rootfs.img, block size 131072.
2019/09/12 16:51:33 [DBG] ^M[==========\ ] 40500/225813 17%!!(MISSING)
(MISSING)
2019/09/12 16:51:51 [DBG] Write failed because No space left on device
2019/09/12 16:51:51 [DBG] FATAL ERROR:Failed to write to output filesystem
2019/09/12 16:51:51 [INF] Cleaning up from ISO creation
**2019/09/12 16:51:51 [ERR] exit status 1**
2019/09/12 16:51:51 [INF] Installation completed
2019/09/12 16:51:51 [INF] Umounting rootDir: /tmp/install-715971415
2019/09/12 16:51:51 [DBG] Unmounted ok: /tmp/install-715971415/sys
2019/09/12 16:51:51 [DBG] Unmounted ok: /tmp/install-715971415/proc
2019/09/12 16:51:51 [DBG] Unmounted ok: /tmp/install-715971415/dev
2019/09/12 16:51:51 [DBG] Unmounted ok: /tmp/install-715971415/boot
2019/09/12 16:51:55 [DBG] Unmounted ok: /tmp/install-715971415
2019/09/12 16:51:55 [INF] Removing rootDir: /tmp/install-715971415
2019/09/12 16:51:55 [DBG] losetup -d /dev/loop0
**2019/09/12 16:51:55 [DBG] Removing raw image file: dev-clear-live-desktop.img**
**Environment (please complete the following information):**
- Clear Linux OS Version: 2.1.1
| priority | creating iso can fail silently describe the bug if you attempt to create an iso image and run out of disk space the image generation will claim success though no iso is found to reproduce steps to reproduce the behavior ensure you tmp is nearly all in use create a live desktop image clr installer c scripts live desktop yaml see that no iso is found but there is an error in the root clr installer log file expected behavior if the iso creation fails do not remove the raw img file show the error message to the end user exist with a non zero status screenshots generating iso image building iso image making temp directories for iso creation making squashfs of rootfs mksquashfs tmp install tmp clrcdroot images rootfs img b comp gzip e boot e proc e sys e dev e run parallel mksquashfs using processors creating filesystem on tmp clrcdroot images rootfs img block size m missing missing write failed because no space left on device fatal error failed to write to output filesystem cleaning up from iso creation exit status installation completed umounting rootdir tmp install unmounted ok tmp install sys unmounted ok tmp install proc unmounted ok tmp install dev unmounted ok tmp install boot unmounted ok tmp install removing rootdir tmp install losetup d dev removing raw image file dev clear live desktop img environment please complete the following information clear linux os version | 1 |
505,713 | 14,644,110,213 | IssuesEvent | 2020-12-25 21:00:40 | SupremeObsidian/ProjectManager | https://api.github.com/repos/SupremeObsidian/ProjectManager | opened | Daily token take config | Low Priority enhancement | Make a config option for configuring the number of tokens taken per day | 1.0 | Daily token take config - Make a config option for configuring the number of tokens taken per day | priority | daily token take config make a config option for configuring the number of tokens taken per day | 1 |
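A minimal sketch of the requested option; the key name `daily-token-take` and the config shape are made up here, since the plugin's real config format isn't shown:

```python
# Hypothetical config dict as parsed from the plugin's config file.
DEFAULT_DAILY_TOKEN_TAKE = 1

def daily_token_take(config: dict) -> int:
    """Number of tokens taken per day, configurable with a safe default."""
    value = int(config.get("daily-token-take", DEFAULT_DAILY_TOKEN_TAKE))
    if value < 0:
        raise ValueError("daily-token-take must be non-negative")
    return value
```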
211,346 | 7,200,491,432 | IssuesEvent | 2018-02-05 19:14:29 | haskell/cabal | https://api.github.com/repos/haskell/cabal | opened | cabal init: "getDirectoryContents:openDirStream: resource exhausted (Too many open files)" | cabal-install: cmd/init priority: low | Ran into this when I ran init accidentally in the wrong directory (with a lot of subdirectories with a lot of contents). The crawl for module contents combined with lazy IO clearly ran into the usual sorts of resource limits.
This is a corner case, but in general it is better to defensively code around these things. | 1.0 | cabal init: "getDirectoryContents:openDirStream: resource exhausted (Too many open files)" - Ran into this when I ran init accidentally in the wrong directory (with a lot of subdirectories with a lot of contents). The crawl for module contents combined with lazy IO clearly ran into the usual sorts of resource limits.
This is a corner case, but in general it is better to defensively code around these things. | priority | cabal init getdirectorycontents opendirstream resource exhausted too many open files ran into this when i ran init accidentally in the wrong directory with a lot of subdirectories with a lot of contents the crawl for module contents combined with lazy io clearly ran into the usual sorts of resource limits this is a corner case but in general it is better to defensively code around these things | 1 |
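Language aside (cabal is Haskell, and the real fix involves avoiding lazy IO over directory streams), the defensive idea — traverse strictly, tolerate per-directory errors, and bound the work — can be sketched like this; the module-file filter is illustrative:

```python
import os

def list_modules(root, max_entries=10_000):
    """Strict, defensive scan for candidate module files under root."""
    found = []
    # os.walk lists each directory eagerly, and onerror lets us skip
    # unreadable directories instead of crashing the whole crawl.
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            if name.endswith(".hs"):
                found.append(os.path.join(dirpath, name))
                if len(found) >= max_entries:
                    return found  # bound the crawl in pathological trees
    return found
```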
258,681 | 8,178,903,103 | IssuesEvent | 2018-08-28 15:00:42 | spacetelescope/webbpsf | https://api.github.com/repos/spacetelescope/webbpsf | closed | Update filter profiles, for NIRISS and others | priority:low | **Issue by [mperrin](https://github.com/mperrin)**
_Thursday Jan 24, 2013 at 20:42 GMT_
_Originally opened as https://github.com/mperrin/webbpsf/issues/7_
----
Need to work with Vicki, Loic, et al. to ensure we have updated filter profiles.
| 1.0 | Update filter profiles, for NIRISS and others - **Issue by [mperrin](https://github.com/mperrin)**
_Thursday Jan 24, 2013 at 20:42 GMT_
_Originally opened as https://github.com/mperrin/webbpsf/issues/7_
----
Need to work with Vicki, Loic, et al. to ensure we have updated filter profiles.
| priority | update filter profiles for niriss and others issue by thursday jan at gmt originally opened as need to work with vicki loic et al to ensure we have updated filter profiles | 1 |
317,967 | 9,672,201,925 | IssuesEvent | 2019-05-22 02:23:41 | ReliefApplications/bms_front | https://api.github.com/repos/ReliefApplications/bms_front | closed | Uniformity of Function - Magnifying Glass - Low | In progress Low Priority | On Distribution Validated page, the magnifying modal should display same data as ‘Household Information Summary’ from Add New Beneficiary step 4. | 1.0 | Uniformity of Function - Magnifying Glass - Low - On Distribution Validated page, the magnifying modal should display same data as ‘Household Information Summary’ from Add New Beneficiary step 4. | priority | uniformity of function magnifying glass low on distribution validated page the magnifying modal should display same data as ‘household information summary’ from add new beneficiary step | 1 |
681,182 | 23,299,829,262 | IssuesEvent | 2022-08-07 06:22:16 | zot4plan/Zot4Plan | https://api.github.com/repos/zot4plan/Zot4Plan | opened | INS-19 About us Page | Priority: low Type: feature request | **Story**
A new page for team members
**Requirement**
Friendly & creative design | 1.0 | INS-19 About us Page - **Story**
A new page for team members
**Requirement**
Friendly & creative design | priority | ins about us page story a new page for team members requirement friendly creative design | 1 |
253,536 | 8,057,437,366 | IssuesEvent | 2018-08-02 15:23:25 | openfaas/faas-cli | https://api.github.com/repos/openfaas/faas-cli | closed | RPi: Kubernetes error function stuck in ImageInspectError state | priority/low support | Hi, I'm trying to deploy a home-made function, but every time I do, the pod that runs that function goes into the ImageInspectError state...
When I deploy a test function like `figlet`, everything works, but when it comes to something I write myself, it fails.
I think I may be missing something in how this works, because I'm pretty new to K8s and OpenFaas.
I'm running OpenFaas with Kubernetes on a 2-node Raspberry Pi 3B+ cluster.
## Expected Behaviour
Pod should run.
## Current Behaviour
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-masternode 1/1 Running 1 1d
kube-system kube-apiserver-masternode 1/1 Running 1 1d
kube-system kube-controller-manager-masternode 1/1 Running 1 1d
kube-system kube-dns-7f9b64f644-x42sr 3/3 Running 3 1d
kube-system kube-proxy-wrp6f 1/1 Running 1 1d
kube-system kube-proxy-x6pvq 1/1 Running 1 1d
kube-system kube-scheduler-masternode 1/1 Running 1 1d
kube-system weave-net-4995q 2/2 Running 3 1d
kube-system weave-net-5g7pd 2/2 Running 3 1d
openfaas-fn figlet-7f556fcd87-wrtf4 1/1 Running 0 4h
openfaas-fn testfaceraspi-7f6fcb5897-rs4cq 0/1 ImageInspectError 0 2h
openfaas alertmanager-66b98dd4d4-kcsq4 1/1 Running 1 1d
openfaas faas-netesd-5b5d6d5648-mqftl 1/1 Running 1 1d
openfaas gateway-846f8b5686-724q8 1/1 Running 2 1d
openfaas nats-86955fb749-7vsbm 1/1 Running 1 1d
openfaas prometheus-6ffc57bb8f-fpk6r 1/1 Running 1 1d
openfaas queue-worker-567bcf4d47-ngsgv 1/1 Running 2 1d
```
The `testfaceraspi` doesn't run.
Logs from the pod :
```
$ kubectl logs testfaceraspi-7f6fcb5897-rs4cq -n openfaas-fn
Error from server (BadRequest): container "testfaceraspi" in pod "testfaceraspi-7f6fcb5897-rs4cq" is waiting to start: ImageInspectError
```
Pod describe :
```
$ kubectl describe pod -n openfaas-fn testfaceraspi-7f6fcb5897-rs4cq
Name: testfaceraspi-7f6fcb5897-rs4cq
Namespace: openfaas-fn
Node: workernode/10.192.79.198
Start Time: Thu, 12 Jul 2018 11:39:05 +0200
Labels: faas_function=testfaceraspi
pod-template-hash=3929761453
Annotations: prometheus.io.scrape=false
Status: Pending
IP: 10.40.0.16
Controlled By: ReplicaSet/testfaceraspi-7f6fcb5897
Containers:
testfaceraspi:
Container ID:
Image: gallouche/testfaceraspi
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImageInspectError
Ready: False
Restart Count: 0
Liveness: exec [cat /tmp/.lock] delay=3s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [cat /tmp/.lock] delay=3s timeout=1s period=10s #success=1 #failure=3
Environment:
fprocess: python3 index.py
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qhnn (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-5qhnn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5qhnn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning DNSConfigForming 2m (x1019 over 3h) kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
```
And the event logs :
```
$ kubectl get events --sort-by=.metadata.creationTimestamp -n openfaas-fn
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
14m 1h 347 testfaceraspi-7f6fcb5897-rs4cq.1540db41e89d4c52 Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
4m 1h 75 figlet-7f556fcd87-wrtf4.1540db421002b49e Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
10m 10m 1 testfaceraspi-7f6fcb5897-d6z78.1540df9ed8b91865 Pod Normal Scheduled default-scheduler Successfully assigned testfaceraspi-7f6fcb5897-d6z78 to workernode
10m 10m 1 testfaceraspi-7f6fcb5897.1540df9ed6eee11f ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: testfaceraspi-7f6fcb5897-d6z78
10m 10m 1 testfaceraspi-7f6fcb5897-d6z78.1540df9eef3ef504 Pod Normal SuccessfulMountVolume kubelet, workernode MountVolume.SetUp succeeded for volume "default-token-5qhnn"
4m 10m 27 testfaceraspi-7f6fcb5897-d6z78.1540df9eef5445c0 Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
8m 9m 8 testfaceraspi-7f6fcb5897-d6z78.1540df9f670d0dad Pod spec.containers{testfaceraspi} Warning InspectFailed kubelet, workernode Failed to inspect image "gallouche/testfaceraspi": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2/l: invalid argument
9m 9m 7 testfaceraspi-7f6fcb5897-d6z78.1540df9f670fcf3e Pod spec.containers{testfaceraspi} Warning Failed kubelet, workernode Error: ImageInspectError
```
## Steps to Reproduce (for bugs)
1. Deploy OpenFaas on a 2 node k8s cluster
2. Create function with `faas new testfaceraspi --lang python3-armhf`
3. Add the following code to `handler.py`:
```python
import json
def handle(req):
    jsonl = json.loads(req)
    return ("Found " + str(jsonl["nbFaces"]) + " faces in OpenFaas Function on raspi !")
```
4. Change gateway and image in the `.yml`
```yaml
provider:
  name: faas
  gateway: http://127.0.0.1:31112

functions:
  testfaceraspi:
    lang: python3-armhf
    handler: ./testfaceraspi
    image: gallouche/testfaceraspi
```
5. Run `faas build -f testfacepi.yml`
6. Log in to Docker Hub with `docker login`
7. Run `faas push -f testfacepi.yml`
8. Run `faas deploy -f testfacepi.yml`
## Your Environment
* FaaS-CLI version ( Full output from: `faas-cli version` ):
```
Commit: 3995a8197f1df1ecdf524844477cffa04e4690ea
Version: 0.6.11
```
* Docker version ( Full output from: `docker version` ):
```
Client:
Version: 18.04.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:25:24 2018
OS/Arch: linux/arm
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.04.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:21:25 2018
OS/Arch: linux/arm
Experimental: false
```
* Are you using Docker Swarm (FaaS-swarm ) or Kubernetes (FaaS-netes)?
Using kubernetes.
* Operating System and version (e.g. Linux, Windows, MacOS):
```
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
```
Thanks in advance!
| 1.0 | RPi: Kubernetes error function stuck in ImageInspectError state - Hi, I'm trying to deploy a home-made function, but every time I do, the pod that runs that function goes into the ImageInspectError state...
When I deploy a test function like `figlet`, everything works, but when it comes to something I write myself, it fails.
I think I may be missing something in how this works, because I'm pretty new to K8s and OpenFaas.
I'm running OpenFaas with Kubernetes on a 2-node Raspberry Pi 3B+ cluster.
## Expected Behaviour
Pod should run.
## Current Behaviour
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-masternode 1/1 Running 1 1d
kube-system kube-apiserver-masternode 1/1 Running 1 1d
kube-system kube-controller-manager-masternode 1/1 Running 1 1d
kube-system kube-dns-7f9b64f644-x42sr 3/3 Running 3 1d
kube-system kube-proxy-wrp6f 1/1 Running 1 1d
kube-system kube-proxy-x6pvq 1/1 Running 1 1d
kube-system kube-scheduler-masternode 1/1 Running 1 1d
kube-system weave-net-4995q 2/2 Running 3 1d
kube-system weave-net-5g7pd 2/2 Running 3 1d
openfaas-fn figlet-7f556fcd87-wrtf4 1/1 Running 0 4h
openfaas-fn testfaceraspi-7f6fcb5897-rs4cq 0/1 ImageInspectError 0 2h
openfaas alertmanager-66b98dd4d4-kcsq4 1/1 Running 1 1d
openfaas faas-netesd-5b5d6d5648-mqftl 1/1 Running 1 1d
openfaas gateway-846f8b5686-724q8 1/1 Running 2 1d
openfaas nats-86955fb749-7vsbm 1/1 Running 1 1d
openfaas prometheus-6ffc57bb8f-fpk6r 1/1 Running 1 1d
openfaas queue-worker-567bcf4d47-ngsgv 1/1 Running 2 1d
```
The `testfaceraspi` doesn't run.
Logs from the pod :
```
$ kubectl logs testfaceraspi-7f6fcb5897-rs4cq -n openfaas-fn
Error from server (BadRequest): container "testfaceraspi" in pod "testfaceraspi-7f6fcb5897-rs4cq" is waiting to start: ImageInspectError
```
Pod describe :
```
$ kubectl describe pod -n openfaas-fn testfaceraspi-7f6fcb5897-rs4cq
Name: testfaceraspi-7f6fcb5897-rs4cq
Namespace: openfaas-fn
Node: workernode/10.192.79.198
Start Time: Thu, 12 Jul 2018 11:39:05 +0200
Labels: faas_function=testfaceraspi
pod-template-hash=3929761453
Annotations: prometheus.io.scrape=false
Status: Pending
IP: 10.40.0.16
Controlled By: ReplicaSet/testfaceraspi-7f6fcb5897
Containers:
testfaceraspi:
Container ID:
Image: gallouche/testfaceraspi
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImageInspectError
Ready: False
Restart Count: 0
Liveness: exec [cat /tmp/.lock] delay=3s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [cat /tmp/.lock] delay=3s timeout=1s period=10s #success=1 #failure=3
Environment:
fprocess: python3 index.py
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qhnn (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-5qhnn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5qhnn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning DNSConfigForming 2m (x1019 over 3h) kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
```
And the event logs :
```
$ kubectl get events --sort-by=.metadata.creationTimestamp -n openfaas-fn
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
14m 1h 347 testfaceraspi-7f6fcb5897-rs4cq.1540db41e89d4c52 Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
4m 1h 75 figlet-7f556fcd87-wrtf4.1540db421002b49e Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
10m 10m 1 testfaceraspi-7f6fcb5897-d6z78.1540df9ed8b91865 Pod Normal Scheduled default-scheduler Successfully assigned testfaceraspi-7f6fcb5897-d6z78 to workernode
10m 10m 1 testfaceraspi-7f6fcb5897.1540df9ed6eee11f ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: testfaceraspi-7f6fcb5897-d6z78
10m 10m 1 testfaceraspi-7f6fcb5897-d6z78.1540df9eef3ef504 Pod Normal SuccessfulMountVolume kubelet, workernode MountVolume.SetUp succeeded for volume "default-token-5qhnn"
4m 10m 27 testfaceraspi-7f6fcb5897-d6z78.1540df9eef5445c0 Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
8m 9m 8 testfaceraspi-7f6fcb5897-d6z78.1540df9f670d0dad Pod spec.containers{testfaceraspi} Warning InspectFailed kubelet, workernode Failed to inspect image "gallouche/testfaceraspi": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2/l: invalid argument
9m 9m 7 testfaceraspi-7f6fcb5897-d6z78.1540df9f670fcf3e Pod spec.containers{testfaceraspi} Warning Failed kubelet, workernode Error: ImageInspectError
```
## Steps to Reproduce (for bugs)
1. Deploy OpenFaas on a 2 node k8s cluster
2. Create function with `faas new testfaceraspi --lang python3-armhf`
3. Add the following code to `handler.py`:
```python
import json
def handle(req):
    jsonl = json.loads(req)
    return ("Found " + str(jsonl["nbFaces"]) + " faces in OpenFaas Function on raspi !")
```
4. Change gateway and image in the `.yml`
```yaml
provider:
  name: faas
  gateway: http://127.0.0.1:31112

functions:
  testfaceraspi:
    lang: python3-armhf
    handler: ./testfaceraspi
    image: gallouche/testfaceraspi
```
5. Run `faas build -f testfacepi.yml`
6. Log in to Docker Hub with `docker login`
7. Run `faas push -f testfacepi.yml`
8. Run `faas deploy -f testfacepi.yml`
## Your Environment
* FaaS-CLI version ( Full output from: `faas-cli version` ):
```
Commit: 3995a8197f1df1ecdf524844477cffa04e4690ea
Version: 0.6.11
```
* Docker version ( Full output from: `docker version` ):
```
Client:
Version: 18.04.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:25:24 2018
OS/Arch: linux/arm
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.04.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:21:25 2018
OS/Arch: linux/arm
Experimental: false
```
* Are you using Docker Swarm (FaaS-swarm ) or Kubernetes (FaaS-netes)?
Using kubernetes.
* Operating System and version (e.g. Linux, Windows, MacOS):
```
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
```
Thanks in advance!
| priority | rpi kubernetes error function stuck in imageinspecterror state hi i trying to deploy a home made function but every time i the pod that run that function goes in imageinspecterror state when i deploy a test function like figlet everything work but when it come to something i write myself it fails i think i m maybe missing something in the functioning because i m pretty new to and openfaas i m running openfaas with kubernetes on a nodes raspberry pi cluster expected behaviour pod should run current behaviour namespace name ready status restarts age kube system etcd masternode running kube system kube apiserver masternode running kube system kube controller manager masternode running kube system kube dns running kube system kube proxy running kube system kube proxy running kube system kube scheduler masternode running kube system weave net running kube system weave net running openfaas fn figlet running openfaas fn testfaceraspi imageinspecterror openfaas alertmanager running openfaas faas netesd mqftl running openfaas gateway running openfaas nats running openfaas prometheus running openfaas queue worker ngsgv running the testfaceraspi doesn t run logs from the pod kubectl logs testfaceraspi n openfaas fn error from server badrequest container testfaceraspi in pod testfaceraspi is waiting to start imageinspecterror pod describe kubectl describe pod n openfaas fn testfaceraspi name testfaceraspi namespace openfaas fn node workernode start time thu jul labels faas function testfaceraspi pod template hash annotations prometheus io scrape false status pending ip controlled by replicaset testfaceraspi containers testfaceraspi container id image gallouche testfaceraspi image id port tcp host port tcp state waiting reason imageinspecterror ready false restart count liveness exec delay timeout period success failure readiness exec delay timeout period success failure environment fprocess index py mounts var run secrets kubernetes io serviceaccount from default 
token ro conditions type status initialized true ready false podscheduled true volumes default token type secret a volume populated by a secret secretname default token optional false qos class besteffort node selectors tolerations node kubernetes io not ready noexecute for node kubernetes io unreachable noexecute for events type reason age from message warning dnsconfigforming over kubelet workernode search line limits were exceeded some search paths have been omitted the applied search line is openfaas fn svc cluster local svc cluster local cluster local heig vd ch einet ad eivd ch web ad eivd ch and the event logs kubectl get events sort by metadata creationtimestamp n openfaas fn last seen first seen count name kind subobject type reason source message testfaceraspi pod warning dnsconfigforming kubelet workernode search line limits were exceeded some search paths have been omitted the applied search line is openfaas fn svc cluster local svc cluster local cluster local heig vd ch einet ad eivd ch web ad eivd ch figlet pod warning dnsconfigforming kubelet workernode search line limits were exceeded some search paths have been omitted the applied search line is openfaas fn svc cluster local svc cluster local cluster local heig vd ch einet ad eivd ch web ad eivd ch testfaceraspi pod normal scheduled default scheduler successfully assigned testfaceraspi to workernode testfaceraspi replicaset normal successfulcreate replicaset controller created pod testfaceraspi testfaceraspi pod normal successfulmountvolume kubelet workernode mountvolume setup succeeded for volume default token testfaceraspi pod warning dnsconfigforming kubelet workernode search line limits were exceeded some search paths have been omitted the applied search line is openfaas fn svc cluster local svc cluster local cluster local heig vd ch einet ad eivd ch web ad eivd ch testfaceraspi pod spec containers testfaceraspi warning inspectfailed kubelet workernode failed to inspect image gallouche 
testfaceraspi rpc error code unknown desc error response from daemon readlink var lib docker l invalid argument testfaceraspi pod spec containers testfaceraspi warning failed kubelet workernode error imageinspecterror steps to reproduce for bugs deploy openfaas on a node cluster create function with faas new testfaceraspi lang armhf add the following code in the handler py python import json def handle req jsonl json loads req return found str jsonl faces in openfaas function on raspi change gateway and image in the yml yaml provider name faas gateway functions testfaceraspi lang armhf handler testfaceraspi image gallouche testfaceraspi run faas build f testfacepi yml login in dockerhub with docker login run faas push f testfacepi yml run faas deploy f testfacepi yml your environment faas cli version full output from faas cli version commit version docker version full output from docker version client version ce api version go version git commit built tue apr os arch linux arm experimental false orchestrator swarm server engine version ce api version minimum version go version git commit built tue apr os arch linux arm experimental false are you using docker swarm faas swarm or kubernetes faas netes using kubernetes operating system and version e g linux windows macos distributor id raspbian description raspbian gnu linux stretch release codename stretch thanks by advance | 1 |
709,318 | 24,373,508,544 | IssuesEvent | 2022-10-03 21:37:53 | raceintospace/raceintospace | https://api.github.com/repos/raceintospace/raceintospace | closed | Job review gets worse while prestige rises? | bug Low Priority | A user wrote me last night to remark that in the game, his prestige was doing very well, but his job rating was getting progressively worse. Anyone know why this might be? He can be emailed directly at shch0003@yandex.ru if you need details.


| 1.0 | Job review gets worse while prestige rises? - A user wrote me last night to remark that in the game, his prestige was doing very well, but his job rating was getting progressively worse. Anyone know why this might be? He can be emailed directly at shch0003@yandex.ru if you need details.


| priority | job review gets worse while prestige rises a user wrote me last night to remark that in the game his prestige was doing very well but his job rating was progressively bad anyone know why this might be he can be emailed directly at yandex ru if you need details | 1 |
121,271 | 4,807,158,464 | IssuesEvent | 2016-11-02 20:37:50 | dealii/dealii | https://api.github.com/repos/dealii/dealii | opened | Static analysis: dealii-git/include/deal.II/base/thread_management.h | Low priority Starter project | ```
dealii-git/include/deal.II/base/thread_management.h 2809 warn V690 The 'TaskDescriptor' class implements a copy constructor, but lacks the '=' operator. It is dangerous to use such a class.
dealii-git/include/deal.II/base/thread_management.h 2944 warn V678 An object is used as an argument to its own method. Consider checking the first actual argument of the 'destroy' function.
dealii-git/include/deal.II/base/thread_management.h 2910 warn V730 Not all members of a class are initialized inside the constructor. Consider inspecting: task_is_done.
dealii-git/include/deal.II/base/thread_management.h 2918 warn V730 Not all members of a class are initialized inside the constructor. Consider inspecting: task_is_done.
```
We should address these warnings and errors from the static analysis tool PVS. In response to #3342. | 1.0 | Static analysis: dealii-git/include/deal.II/base/thread_management.h - ```
dealii-git/include/deal.II/base/thread_management.h 2809 warn V690 The 'TaskDescriptor' class implements a copy constructor, but lacks the '=' operator. It is dangerous to use such a class.
dealii-git/include/deal.II/base/thread_management.h 2944 warn V678 An object is used as an argument to its own method. Consider checking the first actual argument of the 'destroy' function.
dealii-git/include/deal.II/base/thread_management.h 2910 warn V730 Not all members of a class are initialized inside the constructor. Consider inspecting: task_is_done.
dealii-git/include/deal.II/base/thread_management.h 2918 warn V730 Not all members of a class are initialized inside the constructor. Consider inspecting: task_is_done.
```
We should address these warnings and errors from the static analysis tool PVS. In response to #3342. | priority | static analysis dealii git include deal ii base thread management h dealii git include deal ii base thread management h warn the taskdescriptor class implements a copy constructor but lacks the operator it is dangerous to use such a class dealii git include deal ii base thread management h warn an object is used as an argument to its own method consider checking the first actual argument of the destroy function dealii git include deal ii base thread management h warn not all members of a class are initialized inside the constructor consider inspecting task is done dealii git include deal ii base thread management h warn not all members of a class are initialized inside the constructor consider inspecting task is done we should address these warnings and errors from the static analysis tool pvs in response to | 1 |
261,793 | 8,246,160,750 | IssuesEvent | 2018-09-11 12:04:57 | meetalva/alva | https://api.github.com/repos/meetalva/alva | closed | Create Logger concept | priority: low type: feature | For proper debugging when developing and also when debugging incoming issues we need to have the ability to get more information.
We should create a basic concept on how we can manage this without too much overhead. | 1.0 | Create Logger concept - For proper debugging when developing and also when debugging incoming issues we need to have the ability to get more information.
We should create a basic concept on how we can manage this without too much overhead. | priority | create logger concept for proper debugging when developing and also when debugging incoming issues we need to have the ability to get more information we should create a basic concept on how we can manage this without too much overhead | 1
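The issue above only asks for a concept, so the following is a hedged sketch (not Alva's actual design) of the minimal leveled-logger shape such a concept usually converges on: one threshold, one output sink, cheap call sites.

```python
import sys
import time

class Logger:
    """Minimal leveled logger: messages below the threshold are dropped."""

    LEVELS = {"debug": 10, "info": 20, "warn": 30, "error": 40}

    def __init__(self, level="info", stream=sys.stderr):
        self.threshold = self.LEVELS[level]
        self.stream = stream

    def log(self, level, message):
        # Only emit messages at or above the configured threshold.
        if self.LEVELS[level] >= self.threshold:
            stamp = time.strftime("%H:%M:%S")
            self.stream.write(f"{stamp} [{level.upper()}] {message}\n")

    def debug(self, message):
        self.log("debug", message)

    def info(self, message):
        self.log("info", message)
```

A real implementation would add pluggable sinks and structured payloads, but the threshold-plus-sink core keeps debugging overhead low.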
567,366 | 16,856,719,231 | IssuesEvent | 2021-06-21 07:48:19 | eu-digital-green-certificates/dgc-lib | https://api.github.com/repos/eu-digital-green-certificates/dgc-lib | opened | Improve CSCA Validation | enhancement low priority | ## Current Implementation
When trying to validate a CSCA a DSC will be checked against the whole list of downloaded CSCA.
## Suggested Enhancement
To improve performance it would make sense to search in the list of trusted CSCA for a matching CSCA by its Subject and then do the actual Issuer Check on the found certificate.
## Expected Benefits
Improved performance. | 1.0 | Improve CSCA Validation - ## Current Implementation
When trying to validate a CSCA a DSC will be checked against the whole list of downloaded CSCA.
## Suggested Enhancement
To improve performance it would make sense to search in the list of trusted CSCA for a matching CSCA by its Subject and then do the actual Issuer Check on the found certificate.
## Expected Benefits
Improved performance. | priority | improve csca validation current implementation when trying to validate a csca a dsc will be checked against the whole list of downloaded csca suggested enhancement to improve performance it would make sense to search in the list of trusted csca for a matching csca by its subject and then do the actual issuer check on the found certificate expected benefits improved performance | 1 |
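The suggested enhancement above can be sketched as a subject index: only certificates whose subject equals the DSC's issuer need the expensive issuer/signature check. The dict-of-lists layout and field names are illustrative assumptions, not the dgc-lib API.

```python
def build_subject_index(trusted_cscas):
    # Index trusted CSCA certificates by subject for O(1) candidate lookup
    # instead of a linear scan over the whole downloaded list.
    index = {}
    for cert in trusted_cscas:
        # Several CSCAs may legitimately share a subject, so keep a list.
        index.setdefault(cert["subject"], []).append(cert)
    return index

def candidate_issuers(dsc, index):
    # Only these candidates should undergo the actual issuer check.
    return index.get(dsc["issuer"], [])
```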
544,623 | 15,895,080,654 | IssuesEvent | 2021-04-11 12:43:58 | ihhub/fheroes2 | https://api.github.com/repos/ihhub/fheroes2 | opened | Dragon City has wrong number of guardians | bug low priority | In the OG we have 1 Black Dragon, 1 Red Dragon and 3 Green Dragons.

In fheroes2 we have 1 Black, 2 Red and 3 Green Dragons.

Also they should be in separate stacks. | 1.0 | Dragon City has wrong number of guardians - In the OG we have 1 Black Dragon, 1 Red Dragon and 3 Green Dragons.

In fheroes2 we have 1 Black, 2 Red and 3 Green Dragons.

Also they should be in separate stacks. | priority | dragon city has wrong number of guardians in the og we have black dragon red dragon and green dragons in we have red and green dragons also they should be in separate stacks | 1 |
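The discrepancy in the report above reads more clearly as data. The counts come from the report; treating each dragon type as its own stack is an interpretation of "separate stacks", not confirmed game data.

```python
# (creature, count) per stack; layout is an assumption for illustration.
ORIGINAL_GAME = [("Black Dragon", 1), ("Red Dragon", 1), ("Green Dragon", 3)]
FHEROES2 = [("Black Dragon", 1), ("Red Dragon", 2), ("Green Dragon", 3)]

def total(stacks):
    # Total number of guardian creatures across all stacks.
    return sum(count for _, count in stacks)
```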
285,060 | 8,753,778,542 | IssuesEvent | 2018-12-14 09:35:24 | jens-maus/RaspberryMatic | https://api.github.com/repos/jens-maus/RaspberryMatic | closed | Ship current firmware of all Homematic devices with RaspberryMatic | WebUI enhancement low priority | Hi,
currently one has to search the eq3-website regularly for firmware-updates for specific devices. If a new firmware update is found it has to be downloaded to a PC and then uploaded to RaspberryMatic.
This costs a lot of time.
Please, ship the current firmware-updates of all Homematic devices with RaspberryMatic. | 1.0 | Ship current firmware of all Homematic devices with RaspberryMatic - Hi,
currently one has to search the eq3-website regularly for firmware-updates for specific devices. If a new firmware update is found it has to be downloaded to a PC and then uploaded to RaspberryMatic.
This costs a lot of time.
Please, ship the current firmware-updates of all Homematic devices with RaspberryMatic. | priority | ship current firmware of all homematic devices with raspberrymatic hi currently one has to search the website regularly for firmware updates for specific devices if a new firmware update is found it has to be downloaded to a pc and then uploaded to raspberrymatic this costs a lot of time please ship the current firmware updates of all homematic devices with raspberrymatic | 1 |
419,154 | 12,218,286,652 | IssuesEvent | 2020-05-01 19:00:24 | aol/moloch | https://api.github.com/repos/aol/moloch | closed | SPIGraph sort should sort by selected data type | low priority viewer | Sorting will always sort by number of sessions. This is somewhat confusing and should provide the option to sort by whatever data type the graph is showing.
| 1.0 | SPIGraph sort should sort by selected data type - Sorting will always sort by number of sessions. This is somewhat confusing and should provide the option to sort by whatever data type the graph is showing.
| priority | spigraph sort should sort by selected data type sorting will always sort by number of sessions this is somewhat confusing and should provide the option to sort by whatever data type the graph is showing | 1 |
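The requested behavior above can be sketched as a parameterized sort key: order buckets by whichever metric the graph currently displays, defaulting to session count. Field names are illustrative, not Moloch's API.

```python
def sort_buckets(buckets, metric="sessions", descending=True):
    # Sort by the displayed metric; buckets missing the metric sort as 0.
    return sorted(buckets, key=lambda b: b.get(metric, 0), reverse=descending)
```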
788,850 | 27,769,982,694 | IssuesEvent | 2023-03-16 13:55:45 | hybridly/hybridly | https://api.github.com/repos/hybridly/hybridly | closed | Dot-notated paths passed to the `useProperty` become invalid when reaching for a key past a nullable key | priority: low | ### Describe the bug
Dot-notated paths passed to the `useProperty` composable are valid only until a nullable key is reached.
### Reproduction
https://github.com/nhedger/hybridly-useProperty-issue
### Steps to reproduce
### Create the following data objects.
```php
<?php # app/Data/GlobalProperties.php
namespace App\Data;
use Spatie\LaravelData\Data;
class GlobalProperties extends Data
{
public function __construct(public readonly SecurityData $security) {}
}
```
```php
<?php # app/Data/SecurityData.php
namespace App\Data;
use Spatie\LaravelData\Data;
class SecurityData extends Data
{
public function __construct(
public readonly ?UserData $user,
public readonly string $test,
) {}
}
```
```php
<?php # app/Data/UserData.php
namespace App\Data;
use Spatie\LaravelData\Data;
class UserData extends Data
{
public function __construct(
public readonly ?int $id,
public readonly string $name,
public readonly string $email,
) {}
}
```
### Try accessing any key on the `user`
Try accessing any key on the `user` and see that it produces TypeScript error TS2345.
```ts
// TS2345: Argument of type '"security.user.id"' is not assignable to parameter of type 'Path '.
const name = useProperty('security.user.id');
// TS2345: Argument of type '"security.user.name"' is not assignable to parameter of type 'Path '.
const name = useProperty('security.user.name');
// TS2345: Argument of type '"security.user.email"' is not assignable to parameter of type 'Path '.
const name = useProperty('security.user.email');
```
### System information
```bash
System:
OS: macOS 13.1
CPU: (16) x64 Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Memory: 4.99 GB / 32.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 19.3.0 - ~/Library/Caches/fnm_multishells/40931_1677913160299/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 9.2.0 - ~/Library/Caches/fnm_multishells/40931_1677913160299/bin/npm
Browsers:
Firefox: 110.0.1
Safari: 16.2
npmPackages:
hybridly: link:../hybridly/packages/hybridly => 0.1.0-alpha.2
```
### Used package manager
npm
### Logs
_No response_
### Validations
- [X] Read the [docs](https://hybridly.dev).
- [X] Check that there isn't [already an issue](https://github.com/hybridly/hybridly/issues) that reports the same bug to avoid creating a duplicate.
- [X] Make sure this is a Hybridly issue and not an issue related to something else (Vite, Vue...). For example, if it's a Vue SFC related bug, it should likely be reported to [vuejs/core](https://github.com/vuejs/core) instead.
- [X] Check that this is a concrete bug. For Q&A open a [GitHub Discussion](https://github.com/hybridly/hybridly/discussions).
- [X] The provided reproduction is a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) of the bug. | 1.0 | Dot-notated paths passed to the `useProperty` become invalid when reaching for a key past a nullable key - ### Describe the bug
Dot-notated paths passed to the `useProperty` composable are valid only until a nullable key is reached.
### Reproduction
https://github.com/nhedger/hybridly-useProperty-issue
### Steps to reproduce
### Create the following data objects.
```php
<?php # app/Data/GlobalProperties.php
namespace App\Data;
use Spatie\LaravelData\Data;
class GlobalProperties extends Data
{
public function __construct(public readonly SecurityData $security) {}
}
```
```php
<?php # app/Data/SecurityData.php
namespace App\Data;
use Spatie\LaravelData\Data;
class SecurityData extends Data
{
public function __construct(
public readonly ?UserData $user,
public readonly string $test,
) {}
}
```
```php
<?php # app/Data/UserData.php
namespace App\Data;
use Spatie\LaravelData\Data;
class UserData extends Data
{
public function __construct(
public readonly ?int $id,
public readonly string $name,
public readonly string $email,
) {}
}
```
### Try accessing any key on the `user`
Try accessing any key on the `user` and see that it produces TypeScript error TS2345.
```ts
// TS2345: Argument of type '"security.user.id"' is not assignable to parameter of type 'Path '.
const name = useProperty('security.user.id');
// TS2345: Argument of type '"security.user.name"' is not assignable to parameter of type 'Path '.
const name = useProperty('security.user.name');
// TS2345: Argument of type '"security.user.email"' is not assignable to parameter of type 'Path '.
const name = useProperty('security.user.email');
```
### System information
```bash
System:
OS: macOS 13.1
CPU: (16) x64 Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Memory: 4.99 GB / 32.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 19.3.0 - ~/Library/Caches/fnm_multishells/40931_1677913160299/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 9.2.0 - ~/Library/Caches/fnm_multishells/40931_1677913160299/bin/npm
Browsers:
Firefox: 110.0.1
Safari: 16.2
npmPackages:
hybridly: link:../hybridly/packages/hybridly => 0.1.0-alpha.2
```
### Used package manager
npm
### Logs
_No response_
### Validations
- [X] Read the [docs](https://hybridly.dev).
- [X] Check that there isn't [already an issue](https://github.com/hybridly/hybridly/issues) that reports the same bug to avoid creating a duplicate.
- [X] Make sure this is a Hybridly issue and not an issue related to something else (Vite, Vue...). For example, if it's a Vue SFC related bug, it should likely be reported to [vuejs/core](https://github.com/vuejs/core) instead.
- [X] Check that this is a concrete bug. For Q&A open a [GitHub Discussion](https://github.com/hybridly/hybridly/discussions).
- [X] The provided reproduction is a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) of the bug. | priority | dot notated paths passed to the useproperty become invalid when reaching for a key past a nullable key describe the bug dot notated paths passed to the useproperty composable are valid only until a nullable key is reached reproduction steps to reproduce create the following data objects php php app data globalproperties php namespace app data use spatie laraveldata data class globalproperties extends data public function construct public readonly securitydata security php php app data securitydata php namespace app data use spatie laraveldata data class securitydata extends data public function construct public readonly userdata user public readonly string test php php app data userdata php namespace app data use spatie laraveldata data class userdata extends data public function construct public readonly int id public readonly string name public readonly string email try accessing any key on the user try accessing any key on the user and see that it produces typescript error ts argument of type security user id is not assignable to parameter of type path const name useproperty security user id argument of type security user name is not assignable to parameter of type path const name useproperty security user name argument of type security user email is not assignable to parameter of type path const name useproperty security user email system information bash system os macos cpu intel r core tm cpu memory gb gb shell bin zsh binaries node library caches fnm multishells bin node yarn usr local bin yarn npm library caches fnm multishells bin npm browsers firefox safari npmpackages hybridly link hybridly packages hybridly alpha used package manager npm logs no response validations read the check that there isn t that reports the same bug to avoid creating a duplicate make sure this is a hybridly issue and not an 
issue related to something else vite vue for example if it s a vue sfc related bug it should likely be reported to instead check that this is a concrete bug for q a open a the provided reproduction is a of the bug | 1 |
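The actual fix for the record above belongs in Hybridly's TypeScript `Path` type; as a language-neutral illustration, here is a hedged Python analogue of resolving a dot-notated path whose intermediate segment may be null. The point mirrors the bug report: a nullable segment should make the lookup optional, not invalid.

```python
def get_path(obj, path):
    # Walk "a.b.c" through nested dicts; a None segment short-circuits to
    # None instead of raising, the runtime counterpart of keeping
    # "security.user.id" a valid path past the nullable "user" key.
    current = obj
    for key in path.split("."):
        if current is None:
            return None
        current = current.get(key)
    return current
```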
702,509 | 24,124,149,558 | IssuesEvent | 2022-09-20 21:45:38 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Remove/consolidate marketplace tabs on create site dialog | new feature priority: low | ### Duplicates
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Today users have to switch between "collections" of blueprints by selecting a tab in the create site dialog. The options are "Private blueprints" and "public marketplace." This makes it difficult for first-time users to find all available options. We want to simplify this experience.
### Describe the solution you'd like
Current experience:
<img width="1652" alt="Screen Shot 2022-08-11 at 10 32 58 AM" src="https://user-images.githubusercontent.com/169432/184158704-80e9a532-1bcf-4811-a09e-acd4dbb03266.png">
What we would like instead is:
Show the private blueprints and marketplace blueprints at the same time.
- [x] Create from Git should always be first
- [x] Show private (out of the box) blueprints second
- [ ] Consider making "empty"/hello world the only "private blueprint"
- [x] As long as there is connectivity, connect to the marketplace and show available blueprints, listed in the order returned.
<img width="1652" alt="Screen Shot 2022-08-11 at 10 32 58 AM" src="https://user-images.githubusercontent.com/169432/184161503-d3c0818d-26ab-42fa-85e4-0fe34cf4746b.png">
| 1.0 | [studio-ui] Remove/consolidate marketplace tabs on create site dialog - ### Duplicates
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Today users have to switch between "collections" of blueprints by selecting a tab in the create site dialog. The options are "Private blueprints" and "public marketplace." This makes it difficult for first-time users to find all available options. We want to simplify this experience.
### Describe the solution you'd like
Current experience:
<img width="1652" alt="Screen Shot 2022-08-11 at 10 32 58 AM" src="https://user-images.githubusercontent.com/169432/184158704-80e9a532-1bcf-4811-a09e-acd4dbb03266.png">
What we would like instead is:
Show the private blueprints and marketplace blueprints at the same time.
- [x] Create from Git should always be first
- [x] Show private (out of the box) blueprints second
- [ ] Consider making "empty"/hello world the only "private blueprint"
- [x] As long as there is connectivity, connect to the marketplace and show available blueprints, listed in the order returned.
<img width="1652" alt="Screen Shot 2022-08-11 at 10 32 58 AM" src="https://user-images.githubusercontent.com/169432/184161503-d3c0818d-26ab-42fa-85e4-0fe34cf4746b.png">
| priority | remove consolidate marketplace tabs on create site dialog duplicates i have searched the existing issues is your feature request related to a problem please describe today users have to between collections of blueprints by selecting a tab in the create site dialog the options are private blueprints and public marketplace this makes it difficult for first time users to find all available options we want to simplify this experience describe the solution you d like current experience img width alt screen shot at am src what we would like instead is show the private blueprints and marketplace blueprints at the same time create from git should always be first show private out of the box blueprints second consider making empty hello world the only private blueprint as long as there is connectivity connect to the marketplace and show available blueprints listed in the o img width alt screen shot at am src rder returned | 1 |
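The ordering described in the record above is simple to state in code: "create from Git" first, built-in (private) blueprints second, then marketplace blueprints in the order the marketplace returned them. The entry shape and the `create-from-git` id are illustrative assumptions, not CrafterCMS's API.

```python
def merge_blueprints(private_blueprints, marketplace_blueprints):
    # One flat list replaces the two tabs; relative order within each
    # source is preserved.
    return (
        [{"id": "create-from-git"}]
        + list(private_blueprints)
        + list(marketplace_blueprints)
    )
```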
729,852 | 25,148,112,691 | IssuesEvent | 2022-11-10 07:47:32 | AY2223S1-CS2103T-W11-3/tp | https://api.github.com/repos/AY2223S1-CS2103T-W11-3/tp | closed | As an artist who may receive multiples of the same commission, I can duplicate my commissions | type.Story priority.Low :man_shrugging: not in v1.4 | ... so that I do not have to manually enter the same information every time. | 1.0 | As an artist who may receive multiples of the same commission, I can duplicate my commissions - ... so that I do not have to manually enter the same information every time. | priority | as an artist who may receive multiples of the same commission i can duplicate my commissions so that i do not have to manually enter the same information every time | 1 |
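The user story above — duplicating a commission so the same details need not be re-entered — is essentially a deep copy plus a fresh identifier. This is a hedged sketch; the record layout and id source are illustrative, not the project's model.

```python
import copy
import itertools

_new_ids = itertools.count(1000)  # illustrative id source

def duplicate_commission(commission):
    """Deep-copy a commission so the duplicate can be edited independently."""
    clone = copy.deepcopy(commission)
    clone["id"] = next(_new_ids)
    return clone
```

Deep copying matters here: a shallow copy would share nested fields (fees, deadlines) between the original and the duplicate.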
173,920 | 6,534,356,074 | IssuesEvent | 2017-08-31 10:22:57 | status-im/status-react | https://api.github.com/repos/status-im/status-react | opened | All letters are removed when tapping erase button one time in password field after incorrect attempt [develop] | bug ios low-priority | ### Description
*Type*: Bug
*Summary*: When I am trying to edit the password (like correcting the last symbol) in the password field after the previous password was not accepted, I am tapping erase on the keyboard and all previously entered letters are removed
#### Expected behavior
I should be able to remove symbols one by one in password field after previous attempt was not accepted
### Reproduction
- Open Status
- Create an account and remember password
- Click switch accounts
- Tap the account and enter incorrect password
- Tap into the password field
- Tap the erase button on the keyboard
### Additional Information
* Operating System: iOS
| 1.0 | All letters are removed when tapping erase button one time in password field after incorrect attempt [develop] - ### Description
*Type*: Bug
*Summary*: When I am trying to edit the password (like correcting the last symbol) in the password field after the previous password was not accepted, I am tapping erase on the keyboard and all previously entered letters are removed
#### Expected behavior
I should be able to remove symbols one by one in password field after previous attempt was not accepted
### Reproduction
- Open Status
- Create an account and remember password
- Click switch accounts
- Tap the account and enter incorrect password
- Tap into the password field
- Tap the erase button on the keyboard
### Additional Information
* Operating System: iOS
| priority | all letters are removed when tapping erase button one time in password field after incorrect attempt description type bug summary when i am trying to edit the password like correct last symbol in the password field after previous password was not accepted i am taping erase on the keyboard and all previously letters are removed expected behavior i should be able to remove symbols one by one in password field after previous attempt was not accepted reproduction open status create an account and remember password click switch accounts tap the account and enter incorrect password tap into the password field tap the erase button on the keyboard additional information operating system ios | 1 |
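The expected behavior in the record above reduces to a one-step edit. This is a hedged Python model of that contract only; the real fix lives in the app's text-input handling, not in code like this.

```python
def on_backspace(value):
    # One backspace removes exactly one character from the password field,
    # never the whole value; an empty field stays empty.
    return value[:-1]
```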
322,999 | 9,834,904,167 | IssuesEvent | 2019-06-17 10:52:35 | tud-zih-energy/lo2s | https://api.github.com/repos/tud-zih-energy/lo2s | closed | Look into PEBS | low priority | Can we use PEBS through perf_event with recent kernels?
Or maybe manually use PEBS? (https://github.com/andikleen/pmu-tools/tree/master/simple-pebs) | 1.0 | Look into PEBS - Can we use PEBS through perf_event with recent kernels?
Or maybe manually use PEBS? (https://github.com/andikleen/pmu-tools/tree/master/simple-pebs) | priority | look into pebs can we use pebs through perf event with recent kernels or maybe manually use pebs | 1 |
242,763 | 7,846,603,758 | IssuesEvent | 2018-06-19 15:54:58 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | closed | RAMP election | Bug: we allow users to create EPs even if there are no issues to close | Triage bug-high-priority caseflow-intake sierra | There was a situation where an NOD was mistakenly dated one day later than the RAMP opt-in. The CA was able to create the EP, but no VACOLS appeal issues were closed. We should tell users when there are no valid issues on the "finish" step. | 1.0 | RAMP election | Bug: we allow users to create EPs even if there are no issues to close - There was a situation where an NOD was mistakenly dated one day later than the RAMP opt-in. The CA was able to create the EP, but no VACOLS appeal issues were closed. We should tell users when there are no valid issues on the "finish" step. | priority | ramp election bug we allow users to create eps even if there are no issues to close there was a situation where an nod was mistakenly dated one day later than the ramp opt in the ca was able to create the ep but no vacols appeal issues were closed we should tell users when there are no valid issues on the finish step | 1 |
310,832 | 9,524,706,678 | IssuesEvent | 2019-04-28 06:08:13 | jeff-hykin/cpp-textmate-grammar | https://api.github.com/repos/jeff-hykin/cpp-textmate-grammar | opened | partial lambda breaks subsequent macro | Hard low priority 🐛 Bug | example code
```c++
#define TEST(name) _ts->test(name) = [&]() -> bool
#define ASSERT(expected, actual) \
```
image:

The `#define` does have the scope `meta.preprocessor.macro.cpp` | 1.0 | partial lambda breaks subsequent macro - example code
```c++
#define TEST(name) _ts->test(name) = [&]() -> bool
#define ASSERT(expected, actual) \
```
image:

The `#define` does have the scope `meta.preprocessor.macro.cpp` | priority | partial lambda breaks subsequent macro example code c define test name ts test name bool define assert expected actual image the define does have the scope meta preprocessor macro cpp | 1 |
633,411 | 20,254,105,817 | IssuesEvent | 2022-02-14 21:03:46 | coders-camp-2021-best-team/nodejs-project | https://api.github.com/repos/coders-camp-2021-best-team/nodejs-project | opened | feat/docker-compose | priority: low scope: app type: feat | **AC**
- [ ] postgresql
- [ ] pgadmin
- [ ] mysql
- [ ] phpmyadmin
- [ ] redis
- [ ] redis commander
- [ ] dockerize api server
- [ ] nginx load balancer | 1.0 | feat/docker-compose - **AC**
- [ ] postgresql
- [ ] pgadmin
- [ ] mysql
- [ ] phpmyadmin
- [ ] redis
- [ ] redis commander
- [ ] dockerize api server
- [ ] nginx load balancer | priority | feat docker compose ac postgresql pgadmin mysql phpmyadmin redis redis commander dockerize api server nginx load balancer | 1 |
135,958 | 5,267,323,111 | IssuesEvent | 2017-02-04 21:19:55 | LikeMyBread/Saylua | https://api.github.com/repos/LikeMyBread/Saylua | opened | Write Dungeons model serializer | Low Priority | As noted in #40, I didn't think it was immediately very useful for serialization to work both ways.
Leaving this ticket floating around for whenever it becomes necessary to save models back into files.
Most likely this will be when active map development begins. | 1.0 | Write Dungeons model serializer - As noted in #40, I didn't think it was immediately very useful for serialization to work both ways.
Leaving this ticket floating around for whenever it becomes necessary to save models back into files.
Most likely this will be when active map development begins. | priority | write dungeons model serializer as noted in i didn t think it was immediately very useful for serialization to work both ways leaving this ticket floating around for whenever it becomes necessary to save models back into files most likely this will be when active map development begins | 1 |
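The serializer the ticket above defers can be sketched as a JSON round trip: dump a model back to the file it was loaded from, in a stable form, and read it back. This is a hedged sketch, not Saylua's actual model layer.

```python
import json

def serialize_model(model, path):
    # Stable output (sorted keys, fixed indent) keeps file diffs readable
    # when models are saved back during map development.
    with open(path, "w") as f:
        json.dump(model, f, indent=2, sort_keys=True)

def deserialize_model(path):
    with open(path) as f:
        return json.load(f)
```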
664,639 | 22,283,601,561 | IssuesEvent | 2022-06-11 08:54:04 | ngxs/store | https://api.github.com/repos/ngxs/store | closed | 🚀[FEATURE]: Add ActionContext and ActionStatus to the public api | domain: core ready to release priority: low type: feature state: has PR released | ### Relevant Package
This feature request is for @ngxs/store
### Description
Please add ActionContext and ActionStatus to the public API. The import:
```
import { ActionContext, ActionStatus } from '@ngxs/store/src/actions-stream';
```
worked in Angular v8 but not v10.
### Describe the problem you are trying to solve
We have a special way to handle errors for which we observe the Actions stream an filter them manually. Therefore we use ActionContext and ActionStatus.
### Describe the solution you'd like
Add ActionContext and ActionStatus to the public api:
https://github.com/ngxs/store/blob/cae998d1dfcf1ef9e78813aaf6eda9687e5f9a62/packages/store/src/actions-stream.ts#L10-L22
| 1.0 | 🚀[FEATURE]: Add ActionContext and ActionStatus to the public api - ### Relevant Package
This feature request is for @ngxs/store
### Description
Please add ActionContext and ActionStatus to the public API. The import:
```
import { ActionContext, ActionStatus } from '@ngxs/store/src/actions-stream';
```
worked in Angular v8 but not v10.
### Describe the problem you are trying to solve
We have a special way to handle errors for which we observe the Actions stream an filter them manually. Therefore we use ActionContext and ActionStatus.
### Describe the solution you'd like
Add ActionContext and ActionStatus to the public api:
https://github.com/ngxs/store/blob/cae998d1dfcf1ef9e78813aaf6eda9687e5f9a62/packages/store/src/actions-stream.ts#L10-L22
| priority | 🚀 add actioncontext and actionstatus to the public api relevant package this feature request is for ngxs store description please add actioncontext and actionstatus to the public api the import import actioncontext actionstatus from ngxs store src actions stream worked in angular but not describe the problem you are trying to solve we have a special way to handle errors for which we observe the actions stream an filter them manually therefore we use actioncontext and actionstatus describe the solution you d like add actioncontext and actionstatus to the public api | 1 |
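For context on the record above, the reason those types need to be public is that consumers filter the raw action stream by status. Here is a hedged, framework-free Python analogue; the real code would apply RxJS `filter` to the NGXS `Actions` observable, and treating contexts as dicts is purely illustrative.

```python
# Status names mirror NGXS's ActionStatus values.
DISPATCHED, SUCCESSFUL, CANCELED, ERRORED = (
    "DISPATCHED", "SUCCESSFUL", "CANCELED", "ERRORED",
)

def errored_actions(action_contexts):
    # Keep only the contexts whose action errored, e.g. for custom
    # error handling like the one described in the issue.
    return [ctx for ctx in action_contexts if ctx["status"] == ERRORED]
```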
208,658 | 7,157,022,907 | IssuesEvent | 2018-01-26 18:21:51 | pcdshub/pcdsdevices | https://api.github.com/repos/pcdshub/pcdsdevices | closed | Allow tests to run without bluesky | Low Priority bug | <!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Tests should be skipped if bluesky is not installed
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Tries to import `pcdsdevices.make_daq_engine` which returns nothing (no exception). Then later in the test the function is referenced.
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Be more expansive with use of `requires_bluesky` decorator
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Uninstall bluesky
2. Run tests
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| 1.0 | Allow tests to run without bluesky - <!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Tests should be skipped if bluesky is not installed
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Tries to import `pcdsdevices.make_daq_engine` which returns nothing (no exception). Then later in the test the function is referenced.
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Be more expansive with use of `requires_bluesky` decorator
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Uninstall bluesky
2. Run tests
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| priority | allow tests to run without bluesky expected behavior tests should be skipped if bluesky is not installed current behavior tries to import pcdsdevices make daq engine which returns nothing no exception then later in the test the function is referenced possible solution be more expansive with use of requires bluesky decorator steps to reproduce for bugs uninstall bluesky run tests context your environment | 1 |
103,070 | 4,164,391,903 | IssuesEvent | 2016-06-18 19:17:16 | ccrama/Slide | https://api.github.com/repos/ccrama/Slide | closed | Accessing /r/all from search doesn't work | bug priority: low | Try to search for /r/all. Once you select it from the suggestions, it says the sub isn't found. https://reddit.com/r/slideforreddit/comments/4ma0ua/cant_access_rall_as_a_casual_subreddit/ | 1.0 | Accessing /r/all from search doesn't work - Try to search for /r/all. Once you select it from the suggestions, it says the sub isn't found. https://reddit.com/r/slideforreddit/comments/4ma0ua/cant_access_rall_as_a_casual_subreddit/ | priority | accessing r all from search doesn t work try to search for r all once you select it from the suggestions it says the sub isn t found | 1 |
597,960 | 18,217,326,899 | IssuesEvent | 2021-09-30 06:47:11 | rism-digital/verovio | https://api.github.com/repos/rism-digital/verovio | closed | Delayed turn placement enhancements | enhancement low priority | Turns try to position themselves above the top line of a staff, and move upwards if they detected a collision.
Instead, it would be better if: (1) they are allowed to enter the staff if necessary, and (2) position themselves between the two notes of the turn (when the turn is delayed).
Here is an example from Beethoven piano sonata no. 19, mvmt. 1:
<img width="755" alt="Screen Shot 2020-08-19 at 2 41 04 PM" src="https://user-images.githubusercontent.com/3487289/90692843-aef4bd00-e22a-11ea-817c-bff14e8587c9.png">
A better placement would be closer to the level of the two adjacent notes related to the turn:
<img width="738" alt="Screen Shot 2020-08-19 at 2 45 33 PM" src="https://user-images.githubusercontent.com/3487289/90692879-bcaa4280-e22a-11ea-86d1-1f7ec306c012.png">
This would be somewhat similar to the automatic positioning for rests to avoid beams:
<img width="249" alt="Screen Shot 2020-08-19 at 2 47 34 PM" src="https://user-images.githubusercontent.com/3487289/90693039-0004b100-e22b-11ea-94b3-c415308b0fe0.png">
Almost never should the turn be placed further from the staff than a slur between the two adjacent turn notes.
Here is the original typesetting for the two turns:
<img width="341" alt="Screen Shot 2020-08-19 at 2 35 04 PM" src="https://user-images.githubusercontent.com/3487289/90693065-0c890980-e22b-11ea-9731-6bdca0dd512b.png">
<img width="335" alt="Screen Shot 2020-08-19 at 2 35 12 PM" src="https://user-images.githubusercontent.com/3487289/90693069-0e52cd00-e22b-11ea-8845-246d9a776298.png">
Here is an example from the same movement for the beam going in the opposite direction:
<img width="616" alt="Screen Shot 2020-08-19 at 2 36 04 PM" src="https://user-images.githubusercontent.com/3487289/90693242-540f9580-e22b-11ea-9d2d-567cc2e551eb.png">
In this case `@place="above"` would not try to place the turn between the two notes, since that would place the turn below the beam, but `@place="below"` should try to place the turn between the notes inside of the staff.
| 1.0 | Delayed turn placement enhancements - Turns try to position themselves above the top line of a staff, and move upwards if they detected a collision.
Instead, it would be better if: (1) they are allowed to enter the staff if necessary, and (2) position themselves between the two notes of the turn (when the turn is delayed).
Here is an example from Beethoven piano sonata no. 19, mvmt. 1:
<img width="755" alt="Screen Shot 2020-08-19 at 2 41 04 PM" src="https://user-images.githubusercontent.com/3487289/90692843-aef4bd00-e22a-11ea-817c-bff14e8587c9.png">
A better placement would be closer to the level of the two adjacent notes related to the turn:
<img width="738" alt="Screen Shot 2020-08-19 at 2 45 33 PM" src="https://user-images.githubusercontent.com/3487289/90692879-bcaa4280-e22a-11ea-86d1-1f7ec306c012.png">
This would be somewhat similar to the automatic positioning for rests to avoid beams:
<img width="249" alt="Screen Shot 2020-08-19 at 2 47 34 PM" src="https://user-images.githubusercontent.com/3487289/90693039-0004b100-e22b-11ea-94b3-c415308b0fe0.png">
Almost never should the turn be placed further from the staff than a slur between the two adjacent turn notes.
Here is the original typesetting for the two turns:
<img width="341" alt="Screen Shot 2020-08-19 at 2 35 04 PM" src="https://user-images.githubusercontent.com/3487289/90693065-0c890980-e22b-11ea-9731-6bdca0dd512b.png">
<img width="335" alt="Screen Shot 2020-08-19 at 2 35 12 PM" src="https://user-images.githubusercontent.com/3487289/90693069-0e52cd00-e22b-11ea-8845-246d9a776298.png">
Here is an example from the same movement for the beam going in the opposite direction:
<img width="616" alt="Screen Shot 2020-08-19 at 2 36 04 PM" src="https://user-images.githubusercontent.com/3487289/90693242-540f9580-e22b-11ea-9d2d-567cc2e551eb.png">
In this case `@place="above"` would not try to place the turn between the two notes, since that would place the turn below the beam, but `@place="below"` should try to place the turn between the notes inside of the staff.
| priority | delayed turn placement enhancements turns try to position themselves above the top line of a staff and move upwards if they detected a collision instead it would be better if they are allowed to enter the staff if necessary and position themselves between the two notes of the turn when the turn is delayed here is an example from beethoven piano sonata no mvmt img width alt screen shot at pm src a better placement would be closer to the level of the two adjacent notes related to the turn img width alt screen shot at pm src this would be somewhat similar to the automatic positioning for rests to avoid beams img width alt screen shot at pm src almost never should the turn be placed further from the staff than a slur between the two adjacent turn notes here is the original typesetting for the two turns img width alt screen shot at pm src img width alt screen shot at pm src here is an example from the same movement for the beam going in the opposite direction img width alt screen shot at pm src in this case place above would not try to place the turn between the two notes since that would place the turn below the beam but place below should try to place the turn between the notes inside of the staff | 1 |
108,724 | 4,349,597,559 | IssuesEvent | 2016-07-30 17:31:13 | JustArchi/ArchiSteamFarm | https://api.github.com/repos/JustArchi/ArchiSteamFarm | closed | Investigate if DistributeKeys is still needed | Discussion Feedback welcome Low priority | To me it seems redundant now, as ```ForwardKeysToOtherBots``` got significant improvements - https://github.com/JustArchi/ArchiSteamFarm/releases/tag/2.1.3.2
CC @Ryzhehvost @Pandiora - do you think that option still makes sense?
More info - https://github.com/JustArchi/ArchiSteamFarm/commit/a90573e0ea95ab89a632c8d180ddc92535f36106 | 1.0 | Investigate if DistributeKeys is still needed - To me it seems redundant now, as ```ForwardKeysToOtherBots``` got significant improvements - https://github.com/JustArchi/ArchiSteamFarm/releases/tag/2.1.3.2
CC @Ryzhehvost @Pandiora - do you think that option still makes sense?
More info - https://github.com/JustArchi/ArchiSteamFarm/commit/a90573e0ea95ab89a632c8d180ddc92535f36106 | priority | investigate if distributekeys is still needed to me it seems redundant now as forwardkeystootherbots got significant improvements cc ryzhehvost pandiora do you think that option still makes sense more info | 1 |
298,585 | 9,200,554,655 | IssuesEvent | 2019-03-07 17:19:55 | qissue-bot/QGIS | https://api.github.com/repos/qissue-bot/QGIS | closed | Last column of Postgis table is missing | Category: Data Provider Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report | ---
Author Name: **Redmine Admin** (Redmine Admin)
Original Redmine Issue: 153, https://issues.qgis.org/issues/153
Original Assignee: Gavin Macaulay -
---
When opening the attribut table of a postgis layer the last column of the postgis table is empty.
After identifying the same object with the identify tool, the column does not appear.
---
- [example.zip](https://issues.qgis.org/attachments/download/1689/example.zip) (anonymous -) | 1.0 | Last column of Postgis table is missing - ---
Author Name: **Redmine Admin** (Redmine Admin)
Original Redmine Issue: 153, https://issues.qgis.org/issues/153
Original Assignee: Gavin Macaulay -
---
When opening the attribut table of a postgis layer the last column of the postgis table is empty.
After identifying the same object with the identify tool, the column does not appear.
---
- [example.zip](https://issues.qgis.org/attachments/download/1689/example.zip) (anonymous -) | priority | last column of postgis table is missing author name redmine admin redmine admin original redmine issue original assignee gavin macaulay when opening the attribut table of a postgis layer the last column of the postgis table is empty after identifying the same object with the identify tool the column does not appear anonymous | 1 |
508,976 | 14,709,896,704 | IssuesEvent | 2021-01-05 03:37:57 | input-output-hk/cardano-ledger-specs | https://api.github.com/repos/input-output-hk/cardano-ledger-specs | closed | Invariants section at begining | formal-spec :scroll: priority low shelley era | Bring out the invariants up front in a separate section. Make it clear what should be proved.
We could even have side-conditions in various rules where there are key properties, which can be injected into the executable spec | 1.0 | Invariants section at begining - Bring out the invariants up front in a separate section. Make it clear what should be proved.
We could even have side-conditions in various rules where there are key properties, which can be injected into the executable spec | priority | invariants section at begining bring out the invariants up front in a separate section make it clear what should be proved we could even have side conditions in various rules where there are key properties which can be injected into the executable spec | 1 |
144,236 | 5,537,474,856 | IssuesEvent | 2017-03-21 22:11:35 | technomancers/2017SteamWorks | https://api.github.com/repos/technomancers/2017SteamWorks | closed | Add rumble while gear subsystem is deployed | Priority: Low Type: Enhancement | While the gear subsystem is open we should rumble the controller to let Ryan know that it is open and he should close it! | 1.0 | Add rumble while gear subsystem is deployed - While the gear subsystem is open we should rumble the controller to let Ryan know that it is open and he should close it! | priority | add rumble while gear subsystem is deployed while the gear subsystem is open we should rumble the controller to let ryan know that it is open and he should close it | 1 |
285,694 | 8,773,324,056 | IssuesEvent | 2018-12-18 16:34:58 | InfiniteFlightAirportEditing/Airports | https://api.github.com/repos/InfiniteFlightAirportEditing/Airports | opened | RJAH-Hyakuri AirBase/Ibaraki Airport-IBARAKI-JAPAN | Being Redone Low Priority | # Airport Name
< Hyakuri Airbase/ Ibaraki airport>
# Country?
< Japan >
# Improvements that need to be made?
< Redone from scratch >
# Are you working on this airport?
< Yes >
# Airport Priority? (A380, 10000ft+ Runway)
< Low >
| 1.0 | RJAH-Hyakuri AirBase/Ibaraki Airport-IBARAKI-JAPAN - # Airport Name
< Hyakuri Airbase/ Ibaraki airport>
# Country?
< Japan >
# Improvements that need to be made?
< Redone from scratch >
# Are you working on this airport?
< Yes >
# Airport Priority? (A380, 10000ft+ Runway)
< Low >
| priority | rjah hyakuri airbase ibaraki airport ibaraki japan airport name country improvements that need to be made are you working on this airport airport priority runway | 1 |
665,360 | 22,310,382,733 | IssuesEvent | 2022-06-13 16:25:42 | awslabs/smithy-rs | https://api.github.com/repos/awslabs/smithy-rs | opened | Potential bug with `ByteStream`'s implementation of `futures_core::stream::Stream` | low-priority | The implementation of futures::core::stream` for `ByteStream` is potentially bugged. We've never hit this bug because we only ever parameterize `ByteStream`s on `SdkBody` which emits all its data in a single chunk. If we used a different type that emitted data in chunks, `ByteStream` would perform incomplete reads of data. This is because we're only reading one chunk when we should be reading all remaining data.
```rust
impl<B> futures_core::stream::Stream for Inner<B>
where
B: http_body::Body,
{
type Item = Result<Bytes, B::Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
match self.project().body.poll_data(cx) {
Poll::Ready(Some(Ok(mut data))) => {
let len = data.chunk().len();
let bytes = data.copy_to_bytes(len);
Poll::Ready(Some(Ok(bytes)))
}
Poll::Ready(None) => Poll::Ready(None),
Poll::Ready(Some(Err(e))) => Poll::Ready(Some(Err(e))),
Poll::Pending => Poll::Pending,
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let size_hint = http_body::Body::size_hint(&self.body);
(
size_hint.lower() as usize,
size_hint.upper().map(|u| u as usize),
)
}
}
``` | 1.0 | Potential bug with `ByteStream`'s implementation of `futures_core::stream::Stream` - The implementation of futures::core::stream` for `ByteStream` is potentially bugged. We've never hit this bug because we only ever parameterize `ByteStream`s on `SdkBody` which emits all its data in a single chunk. If we used a different type that emitted data in chunks, `ByteStream` would perform incomplete reads of data. This is because we're only reading one chunk when we should be reading all remaining data.
```rust
impl<B> futures_core::stream::Stream for Inner<B>
where
B: http_body::Body,
{
type Item = Result<Bytes, B::Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
match self.project().body.poll_data(cx) {
Poll::Ready(Some(Ok(mut data))) => {
let len = data.chunk().len();
let bytes = data.copy_to_bytes(len);
Poll::Ready(Some(Ok(bytes)))
}
Poll::Ready(None) => Poll::Ready(None),
Poll::Ready(Some(Err(e))) => Poll::Ready(Some(Err(e))),
Poll::Pending => Poll::Pending,
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let size_hint = http_body::Body::size_hint(&self.body);
(
size_hint.lower() as usize,
size_hint.upper().map(|u| u as usize),
)
}
}
``` | priority | potential bug with bytestream s implementation of futures core stream stream the implementation of futures core stream for bytestream is potentially bugged we ve never hit this bug because we only ever parameterize bytestream s on sdkbody which emits all its data in a single chunk if we used a different type that emitted data in chunks bytestream would perform incomplete reads of data this is because we re only reading one chunk when we should be reading all remaining data rust impl futures core stream stream for inner where b http body body type item result fn poll next self pin cx mut context poll match self project body poll data cx poll ready some ok mut data let len data chunk len let bytes data copy to bytes len poll ready some ok bytes poll ready none poll ready none poll ready some err e poll ready some err e poll pending poll pending fn size hint self usize option let size hint http body body size hint self body size hint lower as usize size hint upper map u u as usize | 1 |
440,849 | 12,705,080,948 | IssuesEvent | 2020-06-23 03:26:48 | naphthasl/sakamoto | https://api.github.com/repos/naphthasl/sakamoto | closed | Improve the visual appeal of the static uploads page and tables in general | enhancement low priority | It needs more gradients!
Generally to bland, and with WAY too many solid black borders. Huge room for improvement. | 1.0 | Improve the visual appeal of the static uploads page and tables in general - It needs more gradients!
Generally to bland, and with WAY too many solid black borders. Huge room for improvement. | priority | improve the visual appeal of the static uploads page and tables in general it needs more gradients generally to bland and with way too many solid black borders huge room for improvement | 1 |
664,902 | 22,292,121,243 | IssuesEvent | 2022-06-12 14:16:37 | chaotic-aur/packages | https://api.github.com/repos/chaotic-aur/packages | closed | [Request] memtest86-efi | request:new-pkg priority:low | ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/memtest86-efi
### Utility this package has for you
Memtest For EFI Systems
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
No, but for a few.
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_ | 1.0 | [Request] memtest86-efi - ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/memtest86-efi
### Utility this package has for you
Memtest For EFI Systems
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
No, but for a few.
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_ | priority | efi link to the package s in the aur utility this package has for you memtest for efi systems do you consider the package s to be useful for every chaotic aur user no but for a few do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information no response | 1 |
189,971 | 6,803,348,525 | IssuesEvent | 2017-11-03 00:21:16 | ChromatixAU/phpcs-config-chromatix | https://api.github.com/repos/ChromatixAU/phpcs-config-chromatix | opened | Get Travis build passing with linting | bug low priority | I mightn't be understanding what's required to test the standard on itself - and maybe we can't do this?
https://travis-ci.org/ChromatixAU/phpcs-config-chromatix/jobs/296552117#L516-L518
```
> bash -c 'if [[ $OSTYPE == linux* ]]; then php vendor/bin/phpcs --config-set installed_paths ../../chromatix,../../wp-coding-standards/wpcs; fi'
Directory name must not be empty.
```
From running vendor/bin/phpcs directly locally, this appears to be coming from https://github.com/squizlabs/PHP_CodeSniffer/blob/b5d57ed4ab5162bdc776cdc299952ac6986c0177/src/Util/Standards.php#L93:
```
PHP Fatal error: Uncaught RuntimeException: Directory name must not be empty. in \v
endor\squizlabs\php_codesniffer\src\Util\Standards.php:93
```
Which is happening while 'check[ing] if the installed dir is actually a standard itself'.
So, might need to look into this further. In the meantime, will disable linting in the Travis build. | 1.0 | Get Travis build passing with linting - I mightn't be understanding what's required to test the standard on itself - and maybe we can't do this?
https://travis-ci.org/ChromatixAU/phpcs-config-chromatix/jobs/296552117#L516-L518
```
> bash -c 'if [[ $OSTYPE == linux* ]]; then php vendor/bin/phpcs --config-set installed_paths ../../chromatix,../../wp-coding-standards/wpcs; fi'
Directory name must not be empty.
```
From running vendor/bin/phpcs directly locally, this appears to be coming from https://github.com/squizlabs/PHP_CodeSniffer/blob/b5d57ed4ab5162bdc776cdc299952ac6986c0177/src/Util/Standards.php#L93:
```
PHP Fatal error: Uncaught RuntimeException: Directory name must not be empty. in \v
endor\squizlabs\php_codesniffer\src\Util\Standards.php:93
```
Which is happening while 'check[ing] if the installed dir is actually a standard itself'.
So, might need to look into this further. In the meantime, will disable linting in the Travis build. | priority | get travis build passing with linting i mightn t be understanding what s required to test the standard on itself and maybe we can t do this bash c if then php vendor bin phpcs config set installed paths chromatix wp coding standards wpcs fi directory name must not be empty from running vendor bin phpcs directly locally this appears to be coming from php fatal error uncaught runtimeexception directory name must not be empty in v endor squizlabs php codesniffer src util standards php which is happening while check if the installed dir is actually a standard itself so might need to look into this further in the meantime will disable linting in the travis build | 1 |
355,190 | 10,577,370,383 | IssuesEvent | 2019-10-07 19:57:43 | compodoc/compodoc | https://api.github.com/repos/compodoc/compodoc | closed | [FEATURE] Allow opting in to set all external links to open in new tab | Priority: Low Status: Accepted Type: Enhancement wontfix | Hi,
In my lib's I'm integrating the documentation as an `iframe` inside the angular demo application.
This works great except for links.
Local links **that are part of the documentation app**, are fine.
All other links, even on the same host (github) does not work as intended.
Now, I can come up with a claver wrapper for the wrapper that listens to iframe `location` change and move the top-level url if it's not a local address.
But, is it possible to opt in to a mode where all external links are added a `target="_blank"` attribute?
This will also help when people don't want users out of their docs...
For example see http://shlomiassaf.github.io/ngx-modialog/#/home
External links will not work. | 1.0 | [FEATURE] Allow opting in to set all external links to open in new tab - Hi,
In my lib's I'm integrating the documentation as an `iframe` inside the angular demo application.
This works great except for links.
Local links **that are part of the documentation app**, are fine.
All other links, even on the same host (github) does not work as intended.
Now, I can come up with a claver wrapper for the wrapper that listens to iframe `location` change and move the top-level url if it's not a local address.
But, is it possible to opt in to a mode where all external links are added a `target="_blank"` attribute?
This will also help when people don't want users out of their docs...
For example see http://shlomiassaf.github.io/ngx-modialog/#/home
External links will not work. | priority | allow opting in to set all external links to open in new tab hi in my lib s i m integrating the documentation as an iframe inside the angular demo application this works great except for links local links that are part of the documentation app are fine all other links even on the same host github does not work as intended now i can come up with a claver wrapper for the wrapper that listens to iframe location change and move the top level url if it s not a local address but is it possible to opt in to a mode where all external links are added a target blank attribute this will also help when people don t want users out of their docs for example see external links will not work | 1 |
735,051 | 25,376,557,873 | IssuesEvent | 2022-11-21 14:31:15 | oceanprotocol/ocean-subgraph | https://api.github.com/repos/oceanprotocol/ocean-subgraph | closed | Change type of `lastPriceToken` from string to Token | Type: Enhancement Priority: Low | On Order , `lastPriceToken` should be `Token` | 1.0 | Change type of `lastPriceToken` from string to Token - On Order , `lastPriceToken` should be `Token` | priority | change type of lastpricetoken from string to token on order lastpricetoken should be token | 1 |
695,929 | 23,876,845,969 | IssuesEvent | 2022-09-07 19:58:00 | IDAES/idaes-pse | https://api.github.com/repos/IDAES/idaes-pse | closed | Standardize naming in Heat Exchanger models | Priority:Low unit models IDAES v2.0 | Naming of components in heat exchangers still appears to be inconsistent in places (e.g. naming of sides) - we should resolve any remaining issues. | 1.0 | Standardize naming in Heat Exchanger models - Naming of components in heat exchangers still appears to be inconsistent in places (e.g. naming of sides) - we should resolve any remaining issues. | priority | standardize naming in heat exchanger models naming of components in heat exchangers still appears to be inconsistent in places e g naming of sides we should resolve any remaining issues | 1 |
121,566 | 4,818,138,779 | IssuesEvent | 2016-11-04 15:35:47 | dmwm/PHEDEX | https://api.github.com/repos/dmwm/PHEDEX | opened | Add proper dependency on SSL client library | Category: Command-line Tools Priority 2: Low | The 'phedex' CLI, the Lifecycle and Spacemon (through PHEDEX::CLI::UserAgent and PHEDEX::Testbed::Lifecycle::Datasvc) depend on the Net::SSL module which is distributed with the perl-Crypt-SSLeay rpm
But this rpm is not included as dependency of the PHEDEX rpm so it needs to be installed by hand on the system as root. In addition the tools fail with an obscure error if the rpm is not installed.
We should consider adding a spec file for perl-Crypt-SSLeay in the externals and add it as dependency in the PHEDEX spec file.
| 1.0 | Add proper dependency on SSL client library - The 'phedex' CLI, the Lifecycle and Spacemon (through PHEDEX::CLI::UserAgent and PHEDEX::Testbed::Lifecycle::Datasvc) depend on the Net::SSL module which is distributed with the perl-Crypt-SSLeay rpm
But this rpm is not included as dependency of the PHEDEX rpm so it needs to be installed by hand on the system as root. In addition the tools fail with an obscure error if the rpm is not installed.
We should consider adding a spec file for perl-Crypt-SSLeay in the externals and add it as dependency in the PHEDEX spec file.
| priority | add proper dependency on ssl client library the phedex cli the lifecycle and spacemon through phedex cli useragent and phedex testbed lifecycle datasvc depend on the net ssl module which is distributed with the perl crypt ssleay rpm but this rpm is not included as dependency of the phedex rpm so it needs to be installed by hand on the system as root in addition the tools fail with an obscure error if the rpm is not installed we should consider adding a spec file for perl crypt ssleay in the externals and add it as dependency in the phedex spec file | 1 |